title stringlengths 15 163 | paper_decision stringclasses 4
values | review_1 stringlengths 853 32.6k | rebuttals_1 stringlengths 0 15.1k | review_2 stringlengths 1.03k 35.6k | rebuttals_2 stringlengths 0 15.1k | review_3 stringlengths 807 27.4k ⌀ | rebuttals_3 stringlengths 0 15k ⌀ | review_4 stringlengths 780 22.2k ⌀ | rebuttals_4 stringlengths 0 15.1k ⌀ | review_5 stringclasses 171
values | rebuttals_5 stringclasses 166
values | review_6 stringclasses 25
values | rebuttals_6 stringclasses 24
values | review_7 stringclasses 4
values | rebuttals_7 stringclasses 4
values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors | Accept (poster) | Summary: The paper studies the online metrical task system (MTS) problem in a learning-augmented variant. Given a metric space and an initial state $s_0$ (point), at each time a cost function $c_t$ over points is revealed, and an algorithm needs to select a state $s_t$ and pays cost equal to $d(s_{t-1},s_t) + c_t(s_t)$. The goal is to minimize the total cost.
The present paper studies MTS in a setting with expert advice: an online algorithm has access to $\ell$ heuristics, and the goal is to achieve a good performance compared to the best heuristic in hindsight.
Specifically, an algorithm only has bandit access to the heuristics, which means that it does not see the state of each heuristic at every timestep, but can only query the state of at most one heuristic. Moreover, they assume that an algorithm needs to query a heuristic for $m$ consecutive timestep before the current state will be revealed.
The authors present an algorithm with sublinear regret and prove that it is best-possible. Their algorithm reduces the bandit setting to a black box full feedback setting, with a tradeoff between exploration and exploitation. This gives a fraction solution. The authors then present an online rounding scheme.
Claims And Evidence: All claims are proven formally.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I checked the proofs in the main part of the paper and they seem correct to me.
Experimental Designs Or Analyses: N/A
Supplementary Material: I scanned the technical appendix.
Relation To Broader Scientific Literature: The key contribution of the paper is the presentation and exhaustive study of a new interesting model within the area of multiple predictions for learning-augmented algorithms. I believe that the results are non-trivial and add valuable insights to this area. Also, the contribution of upper and lower regret bound builds a nice package.
Essential References Not Discussed: As far as I see, all relevant literature is being discussed.
Other Strengths And Weaknesses: Strengths
- The paper is well written
- Broadens the discussions on multiple predictions
- Gives tight results, hence answers the research question completely
Other Comments Or Suggestions: - page 1, line 18 right: replace $x_t$ with $s_t$
- references: there are several arxiv references, for which conference versions exist. I suggest to update them.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review and for your suggestions. We will incorporate them in
our manuscript. | Summary: The paper studied the learning-augmented metrical task system (MTS) problem and gave near-tight bounds of $\tilde{\Theta}(\text{OPT}^{2/3})$. The problem is similar to the adversarial bandits with $T$ days; however, the ``switching’’ between bandits would involve a cost, which is measured by the distance between two states in a metric space. Let $s_t$ and $s_{t+1}$ be the states (arms) we commit to on time step $t$ and $t+1$, the cost of this step is defined as $c_t(s_t)+d(s_{t}, s_{t+1})$, where $c_t$ is a cost function only known on time step $t$.
In the learning-augmented setting, there are $\ell$ online ML models (known as the ``heuristics’’), and each ML model $i$ would recommend a prediction $s^i_t$ at time $t$. Importantly, we cannot observe the full cost since we do not know what will be the state $s^i_{t+1}$ for model $i$ to switch to in step $t+1$. The goal is to be competitive with the best ML model in hindsight. As standard in the literature, we define the extra additive error as regret.
The main contributions of the paper are as follows:
- An algorithm that achieves $O(\text{OPT}^{2/3})$ regret for the learning-augmented MTS problem (assuming constant $\ell$ and range of the metric space).
- A near-matching lower bound such that any algorithm has to suffer $\tilde{\Omega}(\text{OPT}^{2/3})$ regret.
- The generalization of the results, including results in a setting where the signal delay is $m\geq 2$ and the setting where the algorithm could make at most $k$ switches.
Claims And Evidence: The theoretical results are with full proofs and/or external references.
Methods And Evaluation Criteria: N/A, no experiments included.
Theoretical Claims: The main techniques used in this paper are heavily connected to the existing work of ADT [ICML’12] and DDKP [STOC’14]. At a high level, the algorithm tosses a coin to decide whether to follow $m$ steps of exploration or take the next step from an MWU-type of algorithm (e.g., the HEDGE algorithm). Running this algorithm would result in a distribution over the actions at each time step, and we can run a standard rounding algorithm to get the actual action sequence. The bounds on regret intuitively follow from the bounded loss argument in the online learning literature, although there are some additional technical steps for the formal argument. The lower bound was built on the foundation of the lower bound instance in DDKP [STOC’14], and the paper did more work to incorporate the metric switching cost into the argument.
The proofs look intuitively correct to me, although I did not check their correctness in detail.
Experimental Designs Or Analyses: N/A, theoretical paper without experiments.
Supplementary Material: The appendix contains the omitted proofs. I skimmed over some arguments but did not check the details due to time constraints.
Relation To Broader Scientific Literature: This paper has significance in both machine learning and theoretical computer science. There could also be many applications in online learning.
Essential References Not Discussed: Nothing essential is missing, although I think the paper could expand the discussion on adversarial bandits and learning-augmented algorithms for graph problems.
Other Strengths And Weaknesses: In general, I believe the problem studied in the paper is well-motivated, and the results are interesting. I have worked in both learning-augmented algorithms and online learning, and it is great to see connections between the areas. The analyses are non-trivial, and the paper is written in a relatively clean and coherent manner such that the intuitions are well-explained. The results also demonstrate a nice comparison with ACEPS [ICML’23] and show the importance of the extra information in that work.
On the flip side, I think that although the paper has already made reasonable effort to clarify the problem and the existing work, I still doubt whether people unfamiliar with the literature would find the paper hard to follow. This is partially attributed to the fact that the paper is a follow-up of a long line of existing work. However, maybe the authors should consider an expanded discussion (maybe in the appendix) to help readers less familiar with the literature to understand the problem and the techniques.
Other Comments Or Suggestions: See above.
Questions For Authors: N/A, I do not have additional questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for your feedback and for highlighting the strengths of our
contribution. We will incorporate your suggestions in the next revision
of our manuscript. | Summary: The paper studies the problem of metrical task system (MTS) under the bandit feedback setting. Given multiple heuristic predictors of what action to take, the algorithm can choose one predictor and receive feedback only if the same predictor is used consecutively across m time steps. A tight regret bound was proven and the authors also studied an extension to the setting that one can switch between heuristics at most k times.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I skimmed the proofs. They look reasonable.
Experimental Designs Or Analyses: No experiments as it is a theoretical paper.
Supplementary Material: I skimmed the appendix
Relation To Broader Scientific Literature: This work would be of interest to the bandit and expert selection communities.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: # Strengths
- Asymptotically matching regret bounds were proven (Theorem 1.1 and 1.3) for the problem studied
# Weaknesses
- I do not believe that this work fits into the learning-augmented algorithms framework. It fits much more closely to the bandit or expert selection framework. Typically, a work in this area would require one to prove robustness guarantees showing the performance of the algorithm when the predictor is of arbitrarily poor quality; in this case, one would expect a result showing that the expected cost of ALG degrades back into predictor-free guarantee of $O(\log^2 n)$ when all heuristic predictors are arbitrarily bad. However, the paper only gives bounds with respect to the performance of the best predictor (which could be arbitrarily bad), which is precisely what the bandit or expert selection literature measures against.
- The problem setting of "m-delayed bandit access to heuristics" feels made up just to introduce additional constraints to distinguish itself from the settings of prior works. For instance, I don't see anything algorithmically or theoretically interesting about this m-delay feedback beyond trivially being forced to repeat the same choice m times (see Algorithm 1) and incurring a corresponding factor m in the analyses.
Other Comments Or Suggestions: - In Proposition 2.2, you should write "There is an online randomized algorithm Round..." so that it makes sense to refer to the algorithm here as Round subsequently.
- Given that your method uses HEDGE and SHARE as black-boxes, you should provide a description and discussion of them either in the main paper or at least in the appendix.
Questions For Authors: - Is there no constraint on number of heuristics $\ell$ and delay length $m$ with respect to the input length? What if we don't even get to execute some heuristic?
- How are the hyperparameters such as $\eta$ in Property 2.3 and $\varepsilon$ in Algorithm 1 determined in practice? How should someone using your algorithm set them? Where does $\eta$ show up in your actual algorithm?
- What are $X$ and $E$ in Algorithm 1? They are undefined and uninitialized, but they should be sets, right?
- Can you point out anything interesting about the algorithms or proof techniques used to obtain your results? Everything looks rather "standard" to me, modulo some minor tweaking to accommodate the m-delay setting (which feels made up to me). I am happy to upgrade my score if I am sufficiently convinced by the rebuttal (or the reviews given by other reviewers) that there is something substantially interesting going on in the paper that I have missed.
Ethical Review Concerns: NIL
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We provide answers to your questions clarifying the framing of our work
in the learning-augmented framework and the difficulties present in our setting
compared to the previous works.
We believe that these answers could also be valuable to other
researchers as suggested by
Reviewer uNsW. We will add a further discussion to our paper.
> I do not believe that this work fits into the learning-augmented algorithms framework... Typically, a work in this area would require one to prove robustness guarantees...
In fact, our result can be used (and is intended) as a tool to robustify any
learning-augmented algorithm (or any heuristic) with a negligible overhead.
Let $A_0$ be a classical online algorithm
and let $A_1$ be a heuristic with no worst-case guarantee.
On any input sequence, our algorithm is never more than $(1+o(1))$ factor worse
than the *best* of $A_0$ and $A_1$.
I.e., if $A_0$ is $R$-competitive for the given MTS variant,
our algorithm is guaranteed to be at most $(1+o(1))R$-competitive.
However, if the cost of $A_1$ on the given input is only
1.01 times the offline optimum, our algorithm's
cost will be only factor $(1+o(1))*1.01$ from offline optimum.
Here, it is crucial that our regret guarantees are in terms
of $OPT$ instead of the time horizon $T$ as common in online learning
literature, because $OPT$ can be much smaller than $T$ in general MTS inputs.
> The problem setting of "m-delayed bandit access to heuristics" feels made up
Our results, in order to be meaningful in MTS setting, require $m\geq 2$
consecutive queries to the same heuristic when performing exploration.
This is necessary to estimate the cost of the
heuristic: imagine a heuristic which, at each time $t$, moves to a state $s_t$
such that $c_t(s_t)$ is 0. Typically, this is a very bad idea in MTS, since such
algorithm would most likely pay very large movement costs.
However, the movement cost can be calculated only if we know its previous state
$s_{t-1}$, i.e., if we have queried the same heuristic in the previous time
step.
Generalization to $m>2$ is inspired by Arora et al.
In their setting, algorithm has to wait $m$ time steps
before seeing the relevant feedback. In our case, we do not even see
the actions taken by the heuristic and we need to find suitable actions on our
own.
> Can you point out anything interesting... Everything looks rather "standard"
Approach from the previous work of Arora et al.
would lead to a regret of order $T^{2/3}$ instead of $OPT^{2/3}$,
see our response to Reviewer syMW, which is not useful for robustification.
In turn, our algorithm is more similar to the classical algorithm for bandit
setting alternating exploration and exploitation.
However, there are three key differences, each of them is necessary to achieve
our performance guarantees:
* our algorithm makes improper steps (i.e., steps not taken by any of the
heuristics)
* we use MTS-style rounding to ensure bounded switching cost instead
of independent random choice at each time step
* exploration steps are not sampled independently since our setting requires $m\geq 2$.
In particular, the last difference leads to much more involving analysis:
We cannot assume that we have an unbiased estimator of the loss vector
and therefore we need to do a lot of conditioning on past events.
Moreover, the cost of only one of the $m>2$ time steps during each exploration can
be directly charged to the expected loss of the internal full-feedback algorithm
and the trivial bound of $2D$ is insufficient to achieve regret sublinear in OPT.
We exploit stability property of the internal full-feedback algorithm
in order to relate the costs incurred during the $m$ steps of each exploration.
## Answers to more specific comments:
> Is there no constraint on number of heuristics and delay length with respect to the input length?
In the usual setting of online learning, the number $\ell$ of experts (or
arms, etc) is fixed while the time horizon $T$ is increasing towards infinity.
In Section 1, we also state our theorems with $\ell, m, D$ constant which
we consider the most natural setting.
However, our bounds hold as far as
$D\ell m\leq o(OPT^{1/3})$, see Theorems 3.9 and 3.10 for the formal statement.
Our analysis extends to higher $m$, giving a regret depending on $m^{3/2}$.
> hyperparameters
Optimal choice of the hyperparameters of our algorithms is established at the end
of the proof of each upper bound (Theorems 3.9 and 3.10).
Note that $\eta = -\log (1-\gamma)$ in our notation.
The hyperparameters are chosen based on $D, \ell, m$ which are all known beforehand,
and OPT, which can be guessed by the standard doubling techniques.
We will include the details of guessing OPT in our revised manuscript.
> Initialization of $X$ and $E$
Algorithm 1 starts with $X$ and $E$ being empty sets.
We will use your comments to improve our manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying my doubts. Your responses are very thoughtful and I am convinced to increase my score. Please add as much of these discussions in a suitable manner into your revision. Thanks! | Summary: This paper considers the problem of sequentially selecting heuristics for Metrical Task Systems (MTS) when multiple heuristics are available. We focus on the bandit feedback setting, in which only the output of the heuristic chosen at each time step is observable. For this problem, we design and analyze algorithms that minimize the difference in total cost compared to the best fixed heuristic in hindsight, a quantity analogous to regret in bandit problems. Furthermore, we establish lower bounds that nearly match the upper bounds achieved by our proposed algorithms.
Claims And Evidence: The main contributions of this paper are theoretical, and all appear to be supported by correct proofs.
Methods And Evaluation Criteria: The evaluation metric used in this paper is natural and reasonable.
Theoretical Claims: I have briefly checked the proofs of Lemma 3.8, Theorem 3.9, Lemma C.2, and Theorem 4.6.
No particular issues were found.
Experimental Designs Or Analyses: N/A
Supplementary Material: I have briefly checked the proofs of Lemma C.2.
Relation To Broader Scientific Literature: This paper considers MTS as an application, while its technical components are primarily based on techniques from online learning and bandit problems. In particular, it appears to be closely related to topics such as bandits with switching costs and non-stationary bandits.
Essential References Not Discussed: For the setting where $ k \geq 1 $, concepts from non-stationary bandits and dynamic regret appear to be relevant, so it would be beneficial to mention them, e.g., Section 8 of [Auer, P., Cesa-Bianchi, N., Freund, Y., & Schapire, R. E. (2002). The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1), 48-77.]
Other Strengths And Weaknesses: For the proposed algorithm, achieving the guarantees stated in Theorem 3.9 and Theorem 3.10 appears to require setting parameters such as the learning rate based on the value of *OPT*. In other words, prior knowledge of *OPT* or at least an approximation of it seems necessary. However, in practice, *OPT* is often unknown beforehand. This is a potential weakness that is not mentioned in the main text.
Conversely, a strength that is not emphasized in the paper is the dependency on the number of heuristics $ \ell $ in the theoretical bounds. While the introduction does not explicitly discuss this aspect, the derived bounds actually show that both the upper and lower bounds scale approximately as $ \ell^{1/3} $. This means that the guarantees on *OPT* have a tight dependency on $ \ell $, which could be highlighted more prominently.
Other Comments Or Suggestions: N/A
Questions For Authors: I have a question regarding the connection to *bandits with switching costs* [Dekel et al. (2013)]. This paper uses their results to establish the lower bound, but is it possible to leverage their results for the upper bound as well?
For example, in the case of $ m = 2 $, if we simply define the loss as $ f_t $ or $ g_t $ in your paper and apply their algorithm, wouldn’t we obtain an upper bound of $ (DT)^{2/3} $ instead of $ OPT^{2/3} $?
Additionally, I wonder if some refinement of their analysis could lead to an upper bound of $ OPT^{2/3} $. I would appreciate hearing the authors' thoughts on this point.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > References Not Discussed
Thank you for the references; we will include them in the revision of our paper.
> In practice, OPT is often unknown beforehand.
We can guess OPT using the doubling technique, getting virtually the same bound, e.g. as in Cesa-Bianchi
and Lugosi (Section 2.3) or Lattimore, Szepesvari (Section 28.5).
We thank you for your question. We will add a detailed explanation in the
revision of our paper.
> Conversely, a strength that is not emphasized is the dependence on the number of heuristics in the theoretical bounds.
Thank you for pointing out this strength of our result. We will make it more
prominent in the revision.
> question regarding the connection to bandits with switching costs [Dekel et al. (2013)]. is it possible to leverage their results for the upper bound as well?
In MTS, the losses are not a priori bounded. Therefore, we cannot blindly follow
the advice of the explored heuristic, since its state can have arbitrary large cost.
However, it is true that if a heuristic suggests a state with a very high
cost, we can always find a different (improper) step as described
in our paper, ensuring that our cost is bounded by $2D$.
Having this, we can apply
the algorithm of Arora, Dekel, Tewari (2021)
for bandits with switching cost and
achieve a regret of order $O(T^{2/3})$ in our setting.
Their result is formulated as a meta algorithm and proving a formal lower
bound is not easy.
However, we do not think that a refinement of their analysis would lead to an
upper bound in terms of $OPT^{2/3}$. Here is a simple argument:
Their algorithm
splits the input sequence into blocks of size $\tau$ and let
some MAB algorithm choose a single arm (or heuristic) for each block which is then played during the whole block.
This is to limit the switching cost to $O(T/\tau)$.
If we want it to be of order $OPT^{2/3}$,
we need to choose the block size $\tau \geq T/OPT^{2/3}$.
However, with blocks so large, already a single exploration of some
very bad heuristic would cause a cost of order $\tau \gg OPT^{2/3}$
if $OPT$ is small. | null | null | null | null | null | null |
On Zero-Initialized Attention: Optimal Prompt and Gating Factor Estimation | Accept (poster) | Summary: This paper provides extensive theoretical analysis for the zero-initialized attention in LLaMA-Adapters and connects it with mixture-of-experts models. Based on this, the author introduces non-linear prompts. Plus, experiments are conducted to demonstrate the performance of non-linear prompts.
Claims And Evidence: The claims of the article are consistent with the content he provided. The structure in attention can indeed be regarded as a mixture-of-experts structure (at least similar in form), and the experiments have confirmed the superiority of the method proposed in the paper.
Methods And Evaluation Criteria: Yes. This article focuses on treating the zero-initialized attention in LLaMA-Adapters as a mixture-of-experts model and optimizing LLaMA-Adapters using nonlinear prompts. In the article, the author provides the connection between mixture-of-experts and attention and gives proof.
Theoretical Claims: I have partially checked the relevant derivations and there should be no problem. However, if there are any more questions, I will add them later.
Experimental Designs Or Analyses: Regarding the experimental part, the experiments on most data sets have verified that zero-init has better results than random-init, and non-linear has greater potential than linear, which to a certain extent can explain the superiority of the method proposed in the paper. However, in the experiments of HellaSwag and LLaMA-7B in TruthfullQA, it seems that non-linear prompts are worse than linear prompts. Can the author explain this part of the problem?
Supplementary Material: None.
Relation To Broader Scientific Literature: The more important finding of the article may be that the introduction of non-linear prompts to LLaMA-Adapters provides better performance, which can inspire new prompt tuning.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: I feel that the author could compare several initialization methods, such as all 1, or other forms of random initialization, and discuss the sparsity of initialization. Intuitively, I feel that sparsity may be a more important point. For example, most of the values are 0 but a small number of values exist.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. We hope that we can address your concerns with the responses below.
### **Q1: Comparison between Non-Linear and Linear prompt on HellaSwag and TruthfullQA with LLaMA-7B setting:**
Thank you for your comments. Our main study in this paper is to **focus on providing detailed theoretical analysis and experiments to understand the benefits of the zero-initialized attention over the random-initialized attention (conventional prefix-tuning approach)** based on their connection to MoE models. Additionally, our analysis also indicates that **non-linear prompts can be also optimally estimated as linear prompt with greater flexibility**, suggesting that non-linear prompts are potential alternative to linear prompts.
To justify the potentials of non-linear prompts for LLaMA-Adapter models, we perform several experiments to compare the performance when using the linear prompts versus when using non-linear prompts. From these experimental results, we observe that non-linear prompts achieve **higher performance in most settings (ranging from 0.5% to 4%)**, specifically across **all results on LLaMA-13B and the first two datasets on LLaMA-7B**. However, there are certain cases where non-linear prompts yield only **comparable results**, with performance differences **ranging from 0.1% to 0.5% lower than the linear prompt settings**.
This variation is expected because no single prompting method universally excels across all tasks and models. The effectiveness of non-linear prompts can depend on several factors, such as the dataset characteristics, model capacity, and the complexity of task-specific adaptations. In some cases, the additional expressivity of non-linear prompts may not be necessary, leading to performance that closely approximates linear prompts.
### **Q2: Comparison among several initialization methods:**
Thank you for your suggestion. Indeed, we would like to clarify that the Random and Zero Initializations in the manuscript do not refer to the initialization of some vector. Let us take this opportunity to explain each of these initializations.
First of all, **Random Initialization (equivalently, Random-Init) follows the conventional prefix-tuning approach**, where traditional attention is applied to all tokens, including prompt vectors and previous output tokens, without incorporating the zero-initialization mechanism introduced in the original paper. The term "Random-Init" is used because, at the initial stage, all prompt vectors are randomly initialized. When combined with traditional attention mechanisms, this randomness affects the model’s convergence robustness. **The terms "Random-Init" and "Zero-Init" are also used in the original LLaMA-Adapter paper**.
On the contrary, **Zero Initialization (in short, Zero-Init) setting introduces a learnable gating factor within the attention layers that use prompt, which is initially set to zero**. Then, tanh activation function is applied to this factor to regulate the scale of this factor into [−1,1]. Additionally, separate softmax operations are applied to the attention scores of prompt tokens and word tokens independently, after which the gating factor is applied to the attention score of prompt tokens. This mechanism, as detailed in Equation (7) of the original LLaMA-Adapter paper, plays a crucial role in controlling the contribution of prompt tokens to the overall attention mechanism. **By initializing this gating factor at zero**, this factor can first eliminate the influence of under-fitted prompts at the early stages of training, allowing the model to gradually adjust its magnitude to incorporate meaningful instruction semantics into LLaMA.
Finally, in our study, we provide a detailed theoretical analysis and experiments to understand the benefits of zero-initialized attention over the random-initialized attention (conventional prefix-tuning approach). We also provide a theoretical analysis and experiments to show the effectiveness and flexibility of the Non-Linear prompt combined with the zero-initialized mechanism.
We hope our response answers your question about the initialization. Otherwise, please feel free to let us know, we are more than happy to address your further concerns. | Summary: The paper provides a rigorous theoretical foundation for zero-initialized attention, which has been successfully used in fine-tuning large language models (LLMs), particularly in LLaMA-Adapter. Establishes a connection between zero-initialized attention and mixture-of-experts (MoE) models. It additionally proves that both linear and non-linear prompts, along with gating functions, can be optimally estimated.
Claims And Evidence: The claims in the submission are largely supported by clear and convincing evidence, as the paper provides both theoretical justifications and empirical validations for its main arguments. The connection between zero-initialized attention and the mixture-of-experts (MoE) framework is rigorously established through mathematical derivations, and the statistical benefits of optimal prompt and gating factor estimation are backed by well-defined regression-based analyses. The experimental results on open LLM benchmarks align with the theoretical findings, demonstrating improved performance of zero-initialized attention over random initialization and highlighting the advantages of non-linear prompts.
Methods And Evaluation Criteria: The paper effectively justifies its focus on zero-initialized attention by connecting it to the mixture-of-experts (MoE) framework and demonstrating its theoretical advantages in prompt estimation. The evaluation is conducted on widely recognized open LLM benchmarks, including AI2 Reasoning Challenge (ARC), HellaSwag, MMLU, and TruthfulQA, which are appropriate for assessing the model’s ability to follow instructions, reason, and generate accurate responses. The choice of LLaMA-7B and LLaMA-13B as base models is also reasonable, as they represent strong open-source LLMs used in real-world applications. Furthermore, the paper compares zero-initialized attention against conventional random-initialized attention and other fine-tuning strategies such as LoRA and full fine-tuning, providing a comprehensive evaluation. However, while the experimental setup is robust, additional analysis on different model scales or alternative PEFT techniques could further strengthen the generalizability of the findings.
Theoretical Claims: The proofs for Theorems 4.2 and 5.2, which demonstrate the optimality of prompt and gating factor estimation, follow a logical structure, using techniques such as Voronoi loss functions and parametric convergence analysis.
Experimental Designs Or Analyses: The performance gain is notable, with a clear highlight in Table 1.
It is strange that LLaMA-7B + zero-init + linear, LLaMA-7B + zero-init + non-linear is worse than LLaMA-7B, Fully Fine-tuning Alpaca. Usually PeFT is much worse than full fine-tuning. Is the full fine-tuned model well-tuned (and overfitted), and what is the trade-off here?
Supplementary Material: No
Relation To Broader Scientific Literature: By establishing a theoretical connection between zero-initialized attention and MoE models, the paper builds upon the foundational principles of MoE, where a gating mechanism dynamically selects a subset of experts (or parameters) for each input, thereby improving computational efficiency and model capacity. This approach resonates with recent advancements in sparsely-gated MoE layers, which have been employed in scaling transformer models efficiently by activating only pertinent subsets of parameters during processing. Also, the exploration of both linear and non-linear prompts in the context of zero-initialized attention extends the current understanding of prompt-based learning and its integration with attention mechanisms, offering a nuanced perspective on optimizing prompt and gating factor estimation in LLM fine-tuning.
Essential References Not Discussed: the paper overlooks the Switch Transformer model, which employs a simplified MoE approach to achieve efficient scaling
Other Strengths And Weaknesses: One strength from the theory side is that the paper rigorously establishes a connection between zero-initialized attention and the mixture-of-experts (MoE) framework, providing a solid mathematical foundation for understanding its benefits in prompt tuning. By leveraging regression-based analysis and deriving optimal estimation rates for both linear and non-linear prompts, the paper offers a theoretical justification for why zero-initialized attention improves sample efficiency and stability in parameter-efficient fine-tuning.
Other Comments Or Suggestions: The paper demonstrates strengths in its theoretical analysis and empirical validation of zero-initialized attention within large language models (LLMs). By establishing a connection between zero-initialized attention and mixture-of-experts (MoE) models, it offers a novel perspective that could inform future research in parameter-efficient fine-tuning. The empirical results, showing improved performance with zero-initialized attention and the effectiveness of non-linear prompts, add practical value to the theoretical insights. However, the paper's originality is somewhat tempered by its reliance on and builds upon PeFT approaches with on-linear prompt structures, which is not entirely new.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback and insightful comments. We would like to address your concerns as follows:
### **Q1: Explanation for results in Table 2.**
Thank you for your comments. **We want to clarify that most PEFT methods (e.g., LLaMA-7B + zero-init + linear, LLaMA-7B + zero-init + non-linear) perform worse than fully fine-tuning of LLaMA-7B, as shown in Table 2 (e.g., ARC dataset)**. While PEFT sometimes matches full fine-tuning, this is likely because the pre-trained model already contains fundamental knowledge for certain downstream tasks. However, in general, PEFT methods lag behind in performance compared to full fine-tuning.
However, it's important to emphasize that fully fine-tuning models like LLaMA is computationally expensive, making it impractical for low-resource settings. PEFT significantly reduces the number of learnable parameters while achieving comparable results, offering a more efficient alternative in such environments.
### **Q2: Additional analysis on other Parameter-Efficient Fine-Tuning (PEFT) techniques.**
Thank you for your suggestion. To further validate our method, we conducted additional experiments comparing it with other PEFT methods, including **Prompt Tuning [1]**, **IA3 [2]**, and **VeRA (r=128 and applied to the same modules as LoRA) [3]**, on LLaMA-7B setting. These comparisons provide a broader perspective on how our approach performs relative to established fine-tuning techniques. Additionally, we would like to clarify that the **name Random-Init prompt in our study follows the traditional prefix-tuning approach**. The results is presented in table below.
|Method|ARC (eas)|ARC (cha)|ARC (aver)|MMLU|Hellaswag|TruthfullQA|Average|
|-|-|-|-|-|-|-|-|
|LLaMA-7B, Fully Fine-tuning Alpaca|67.47|46.25|56.86|37.25|77.09|42.35|53.39|
|LLaMA-7B, LoRA Alpaca|61.91|42.15|52.03|34.87|**77.53**|**46.14**|52.64|
|LLaMA-7B, Prompt Tuning|55.35|37.46|46.41|32.85|75.88|34.76|47.48|
|LLaMA-7B, IA3|52.06|35.92|43.99|31.65|75.73|32.8|46.04|
|LLaMA-7B, VeRA|49.2|35.49|42.35|30.88|75.59|31.95|45.19|
|LLaMA-7B, Prefix-Tune (Random-Init)|60.65|40.7|50.67|35.12|72.62|37.82|49.06|
|LLaMA-7B + zero-init + linear|62.29|43.17|52.73|36.28|76.79|45.53|52.83|
|LLaMA-7B + zero-init + non-linear|**63.51**|**45.39**|**54.45**|**36.95**|76.67|45.04|**53.28**|
From our results, we observe that except Fully Fine-tuning setting, **the Zero-Init approach combined with a non-linear prompt consistently outperforms other PEFT methods overall**. This further point out the effectiveness of our method in achieving stability compared to traditional fine-tuning techniques.
[1] The Power of Scale for Parameter-Efficient Prompt Tuning. EMNLP, 2021
[2] Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning. NeurIPS, 2022
[3] VeRA: Vector-based Random Matrix Adaptation. ICLR, 2024
### **Q3: The paper overlooks the Switch Transformer model.**
Thanks for your comment. There seems to be a misunderstanding about the contributions of the paper. **Our main focus is to study the LLaMA-Adapter for parameter-efficiently fine-tuning the LLaMA models rather than efficiently scaling LLMs with MoE models.**
In particular, we study the LLaMA-Adapter, a PEFT method for LLaMA models. Since the LLaMA models do not replace feed-forward network layers with sparse MoE layers in their Transformer architecture, we do not consider any MoE-based Transformer variants, including the Switch Transformer. Instead, we establish a connection between the zero-initialized attention and MoE models in Section 3. We show that the zero-initialized attention can be represented as an MoE model. From that perspective, we demonstrate that using the zero-initialized attention with either linear prompts or non-linear prompts is more sample efficient than using the random-initialized attention (traditional attention).
### **Q4: The novelty of non-linear prompts.**
Thanks for your comment. We would like to emphasize that our main contribution is to provide a theoretical study for understanding the benefits of the zero-initialized attention over the random-initialized attention based on their connection to MoE models. To the best of our knowledge, there had not been any similar studies in the literature prior to our work.
Furthermore, our analysis indicates that in addition to linear prompts in the original LLaMA-Adapter, non-linear prompts can be also optimally estimated with greater flexibility. Therefore, we perform several experiments to justify the efficacy of the LLaMA-Adapter with non-linear prompts. Although the idea of using non-linear prompts in PEFT methods may not be new, **our paper is the very first work to propose employing non-linear prompts in the LLaMA-Adapter to enhance its performance with both theoretical guarantee and empirical evidence.** | Summary: This paper investigates a specific aspect of LLaMA-Adapter, focusing on zero-initialized attention. The zero-initialized attention mechanism is not only initialized with zero values but also involves a structural modification that replaces the traditional softmax function. Instead, softmax is computed independently over two components: the input tokens $X_l$ and the learnable adaptation prompt $P_l$, i.e., output softmax equals to $S_g=[{softmax}(S_p) \cdot {tanh}(\alpha), {softmax}(S_X)]$. The authors demonstrate that this zero-initialized attention can be interpreted as a specialized form of a mixture of experts (MoE). Building on this insight, they also prove that non-linear prompts can offer advantages over linear prompts in the context of zero-initialized attention. Extensive experiments were conducted, and the results were validated.
## update after rebuttal
Although the rebuttal responses are satisfactory, I acknowledge that the issues raised necessitate significant revisions to the manuscript’s writing and presentation. As such, I will retain my current evaluation.
Claims And Evidence: While the paper presents several claims, I believe the main arguments can be summarized as follows:
**(Theoretical Claim)** Zero-initialized attention can be interpreted as a mixture of experts (MoE), and this interpretation suggests that non-linear prompts are more suitable than linear prompts.
**(Empirical Claim)** Zero-initialized attention outperforms random-initialized attention (that is traditional attention), and non-linear prompts outperform linear prompts for zero-initialized attention.
Both claims are valuable contributions to the field, and I find sufficient supporting evidence for each. As a result, I am inclined to accept the paper; however, I have some minor suggestions.
**Suggestions:**
1) Alignment of Theory and Experiments: Additionally, the proof in the paper does not directly claim that zero-initialized attention is superior to random-initialized attention. As I understand it, the primary argument is that zero-initialized attention resembles MoE. Given this, I am uncertain if the experiments in Table 1, which compare zero-initialized and random-initialized attention, are the most appropriate. Overall, the theoretical argument revolves around the similarity between zero-initialized attention and MoE, and the benefits of non-linear prompts. However, the experimental results seem to focus on comparing zero-initialized attention with random-initialized attention. I believe it would strengthen the paper if the theory and experiments were more closely aligned.
Moreover, regarding Table 1, the results presented are indeed similar to those found in the original paper's "Table 5: Effectiveness of Zero-initialized Attention in our method," which specifically covers the ScienceQA dataset. However, the performance improvement in this paper does not seem to be as significant as in LLaMA-Adapter. I am curious if there is a specific reason for this discrepancy.
2) Fragmentation of Sections: The structure of the paper feels somewhat fragmented. In particular, Sections 4 and 5 appear to share similar content and proofs. A possible reorganization could involve grouping these under a common heading, such as "Linear and Non-linear Prompts."
Methods And Evaluation Criteria: Refer to Claims And Evidence.
Theoretical Claims: Although I haven't examined all the proofs, there don't seem to be any major issues.
Experimental Designs Or Analyses: Refer to Claims And Evidence.
Supplementary Material: I've read the entire supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Refer to Claims And Evidence.
Other Comments Or Suggestions: Refer to Claims And Evidence.
Questions For Authors: Refer to Claims And Evidence.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. We hope that we can address your concerns with the responses below.
### **Q1: Alignment of Theory and Experiments:**
Thanks for your comments. We would like to clarify that the convergence analysis of prompt estimation under the random-initialized attention has been conducted in prior work (see Appendix C in [1] or Appendix A in [2]). Thus, in response to your concern, we will include the following comparison in the revision of our manuscript (below Theorem 4.2), indicating that **using the zero-initialized attention is more sample efficient than using the random-initialized attention**:
*1. Prompt convergence in random-initialized attention [1,2]:* The convergence rates of prompt estimation are significantly slow, standing at the order of $O(1/\log^{\tau}(n))$ for some constant $\tau>0$, where $n$ is the sample size. Thus, to approximate prompts with a given error $\epsilon$, we need exponentially many data points $O(\exp(\epsilon^{-1/\tau}))$, which is not sample efficient.
*2. Prompt convergence in zero-initialized attention (Ours):* As shown in Theorem 4.2 and Theorem 5.2 in our manuscript, the convergence rates of prompt estimation are of polynomial orders, ranging from $O(n^{-1/2})$ to $O(n^{-1/4})$. Therefore, we only need polynomially many data points to approximate the prompts with a given error $\epsilon$.
**Hence, in the experiments, we conduct comparisons between *Zero-init* and *random-initialized attention* to validate the above theoretical analytic as presented in Table 1.** Therefore, we believe that our work consistently aligns with our theoretical results.
[1] P. Akbarian et al. Quadratic gating functions in mixture of experts: A statistical insight. arXiv preprint, 2024.
[2] M. Le et al. Mixture of experts meets prompt-based continual learning. Advances in NeurIPS, 2024.
### **Q2: Discrepancy of LLaMA-Adapter on LLM benchmarks and ScienceQA (Multi-Modal):**
The discrepancy in performance improvement between our paper and the original study on LLaMA-Adapter can be attributed to **the differences in the datasets and task settings**. The original paper only conduct ablation study to compare Zero-initialized Attention to Random-Init setting (without the zero-initialized mechanism) on the ScienceQA dataset, which is a **multi-modal dataset used for Visual Question Answering**. In contrast, we extend more experiments on **language-only tasks**, specifically fine-tuning on the Alpaca dataset **based on the original code of LLaMA-Adapter paper**. The **Reviewer XuJQ** also mention that in these language-only tasks, **the performance gain in Table 1 is notable**. This **fundamental difference in task type** can influence how the zero-initialized mechanism impacts model performance.
Moreover, ScienceQA consists of 21,208 questions, making model convergence slow and unstable when the zero-initialized mechanism is not applied. This issue is **highlighted in Figure 7 of the original paper**, where models **without zero initialization struggle with robustness and efficiency**. The multi-modal nature of ScienceQA, which involves both visual and textual inputs, further complicates the training process, making stabilization techniques like Zero-Init more beneficial.
For language-only tasks, we fine-tuned the Random-Init setting (without the zero-initialized mechanism) on the Alpaca dataset, which contains 52,000 samples. Although the Random-Init setting led to **unrobust convergence in sample efficiency, as shown in Figure 2 and 3 in our paper**, the final convergence values when fine-tune the Random-Init setting on 100% Alpaca dataset remained **sufficiently low compared to the Zero-Init setting**. This indicates that when fine-tuning a random-initialized model on a sufficiently large language-only dataset, the model can still achieve reasonable stability despite the slower convergence.
Overall, the smaller performance gap observed in our study compared to the original paper can be explained by the **difference in dataset size and modality**. While Zero-Init plays a crucial role in stabilizing multi-modal training on ScienceQA, its impact is **less pronounced in large-scale language-only fine-tuning, where instability can be mitigated to some extent by the sheer volume of training data**.
### **Q3: Fragmentation of Sections:**
Thanks for your suggestion. We agree that both Section 4 and Section 5 focus on the convergence analysis of prompt estimation. Therefore, in the revision of our manuscript, we will merge them into one big section by relabeling Subsection 5.1 and Subsection 5.2 as Subsection 4.2 and Subsection 4.3, respectively.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response.
The concern raised in Q2 has been adequately addressed through the discussion. However, given that the prior work in question is closely connected to the core contribution of this paper, I believe the manuscript must explicitly explain why the experimental setups differ, if such differences exist. This context is essential for clarity and completeness.
Regarding Q1, I view the clarification provided in this rebuttal as effectively the main theorem of the paper. Specifically, the statement that "zero-initialized attention is more sample efficient than random-initialized attention" should be highlighted more prominently in the theoretical sections (Sections 3–4). Doing so would help readers more clearly understand the connection between the theoretical analysis and the empirical results. As it currently stands, this connection is not sufficiently emphasized in the flow of the manuscript.
Overall, I find the rebuttal responses satisfactory, and I believe that—if fully incorporated—the paper will be significantly strengthened. However, I also recognize that these points require substantial revisions to the writing and presentation. Therefore, I will maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer oCcc,
We are glad to hear that our response addresses your concerns. In the revision of our manuscript, we will carefully incorporate your suggestions, as well as those provided by the other reviewers. If you have any further concerns, please feel free to let us know.
Thank you,
The Authors | null | null | null | null | null | null | null | null |
The Surprising Effectiveness of Test-Time Training for Few-Shot Learning | Accept (poster) | Summary: This paper presents a comprehensive analysis of the Abstraction and Reasoning Corpus (ARC) and BIG-Bench Hard (BBH) tasks. The authors train LoRA as an adapter. For training, the authors use flips, rotations, and color permutations to augment the original training data. During inference, they adopt intra-transformation voting and global voting strategies. Finally, they achieve improvements of 28% and 7% on ARC and BBH, respectively.
## Update
Thanks for authors' detailed response. I appreciate authors re-state their contribution more clearly. This paper primarily focus on data augmentation and in-context fine-tuning. I believe that with further explanation and clarification, this paper will be much clearer. However, as for the submitted version, there are some missing important points, including augmented data statics, lora baselines comparison, more clearly statement of contribution, and so on (see above).
I will maintain my score. I think this paper is not suitable for publication at this time
Claims And Evidence: There are some over-claims or unclear definitions here that need further clarification:
1. Test-time training: creating a self-supervised learning problem based on this single test samplex, updating θ at test time before making a prediction (quoted from [1]). This paper adopt a lora to train on trianing data, then using several inference scaling methods to test. **The defintion of TTT is different with the main method in this paper.**
2. The Lora is a parameter-efficient fine-tuning methods. Authors use this method as baselines. However, I believe more details should be provided regarding the inference methods used in the baseline approach. Most of the improvements come from the data augmentation and voting strategies presented in the paper.
3. **It should be clearly stated in contribution that this paper employed existing techniques to conduct a comprehensive analysis on ARC and BBH, including data augmentation and voting strategies. This paper does not propose novel methods, theoretical results, or insights.**
---
[1] Sun Y, Wang X, Liu Z, et al. Test-time training with self-supervision for generalization under distribution shifts[C]//International conference on machine learning. PMLR, 2020: 9229-9248.
Methods And Evaluation Criteria: This paper doesn't propose new methods. However, their empirical analysis is thorough and demonstrates significant effectiveness. Their evaluation criteria is reasonable.
Theoretical Claims: This paper does not provide theoretical results for insights.
Experimental Designs Or Analyses: Their empirical analysis is reasonable and comprehensive. My main concern is that the baseline should be compared using the same inference methods.
Supplementary Material: Not applicable. This paper does not provide any supplementary material.
Relation To Broader Scientific Literature: This paper is not related to test-time training. They use LoRA, a parameter-efficient fine-tuning method, rather than conducting training during inference.
Essential References Not Discussed: This published paper proposed training a lora during the inference time, which is related to this paper.
[2] Wang Y, Ma D, Cai D. With greater text comes greater necessity: Inference-time training helps long text generation[J]. arXiv preprint arXiv:2401.11504, 2024.
Other Strengths And Weaknesses: More theoretical analysis is necessary for out-of-distribution tasks, especially for the ARC and BBH tasks studied in this paper.
Other Comments Or Suggestions: The formulation differences between LoRA and test-time training (TTT) should be explicitly clarified.
Questions For Authors: Please provide dataset statistics for LoRA training, such as the dataset sizes obtained after each type of augmentation, to better analyze the sources of actual performance improvements.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your review and the feedback on our paper.
**Q1. Definition and Application of Test-Time Training (TTT)**
We use the definition of TTT provided by ([Sun et al. (2020)](https://arxiv.org/abs/1909.13231)): self-supervised training a model using the unlabeled test sample $d_{\textrm{input}}$. Our work applies this definition in the few-shot learning setting where $d_{\textrm{input}}$ consists of instance-specific few-shot demonstrations and a query. The core idea remains: adapting model parameters at inference time based on information available for the specific test input being processed.
Finally, LoRA is simply the means by which we implement TTT in this paper. it is a simple and efficient fine-tuning method that enables us to do TTT with a 8B-size model. In practice, our TTT framework can be used with other fine-tuning techniques as well.
We can clarify more if the reviewer has more questions.
**Q2. Most of the improvements come from inference strategies**
We kindly disagree. Our experiments on the Big-Bench Hard (BBH) show that the TTT without any task-specific voting mechanisms or data augmentations still yields significant improvements. Moreover, our vanilla method (Fig. 7), which **excludes any augmentations or voting procedures**, still achieves a substantial performance boost—solving 22 tasks compared to just 5 (no TTT), marking a 440% improvement.
**Q3. Can you clarify the contributions and significance?**
Thank you for asking for clarification, and we will update the text to reflect our contributions.
We presented a successful extension of the TTT paradigm to the few-shot learning setting and did a systematic study of design choices for this setting. To the best of our knowledge, we are the first to apply TTT within the in-context learning framework.
We showed substantial improvements (e.g., up to 6x on ARC, 7.3pp on BBH) over strong baselines on 2 popular benchmarks. The magnitude of these improvements highlight the limitations of standard ICL and opens up a new avenue for LM research with TTT.
**Q4. TTT Dataset Statistics**
The size of the dynamically generated $D_\textrm{TTT}$ depends on the number of demonstrations (K) and the number of augmentations/permutations used.
For ARC, we use $K$ demonstrations (typically 2-7), generate $K$ leave-one-out tasks, apply $|T|$ geometric transformations (Section C.1 lists 20 base transformations/compositions), and n=2 permutations (line 1066), leading to roughly $K * |T| * n$ synthetic examples per task, capped at 250 (line 202). For BBH, we use $K=10$ demonstrations and $n=40$ permutations (line 344), resulting in $10 * 40 = 400$ leave-one-out examples per task.
We will add these specific calculations/details to Appendix C.2 (for ARC) and F.1 (for BBH) to clarify the scale of the temporary training data used for TTT.
---
Rebuttal Comment 1.1:
Comment: Thank authors for kindly response. I think there are still three points authors should clarify :
(1) Did author use labels in "test time training" ?
Figure 2 indicated that they use the outputs' loss on augmented data. However, the common test time training methods are training test set without labels.
(2) How many data used in "test time training" ?
Lets ref to recent test time training papers, these work demonstrate test time training is learning during the inference. And the training sample is very scare. However, this paper use the data augmentation as main method as listed in 3.1. And the statics of augmentated data is missing in the submission (authors provide partial information in rebuttal).
(3) there is already Lora-style test time training.
I have already mentioned this paper in my first version of review. Considering this paper using the same method, I think authors should provide a baseline experiments to give credit to similar published paper.
[1] With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation
----
In conclusion, this paper proposed a data augmentation and Lora fine-tuning pipeline for ARC tasks. To the best of my knowledge, all this methods are well-studied, which raised my concerns about this paper's contribution to the commuity. Additionally, this paper also presents no new theoretical results for ICML submissions.
I will keep my score 1 "reject".
---
Reply to Comment 1.1.1:
Comment: **(1) “Did author use labels in "test time training"?”**
We use demonstration labels, not the test labels. This paper is about TTT+ICL in the few-shot setting, so few-shot labels will of course be used.
Note that each ARC task (or instance) **is defined by** the given 2-7 demonstrations — they are collectively **the input** in an ARC task. For example, in Figure 2,$(x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, )$ is the complete input of the task, and $y_4$ is the output. Each $(x_i, y_i)$ pair within this sequence is a single demonstration. The objective for an ARC task is to deduce/learn the underlying transformation rule from the demonstrations and then apply this rule to the final query input ($x_4$) to predict the corresponding query output ($y_4$). Only $y_4$ is **the test label** for the ARC task instance, and it is not accessed or used during our TTT process.
The same goes for the BB-Hard benchmark, comprising 27 distinct tasks. While BB-Hard tasks differ in that they can be solved without demonstrations, providing few-shot examples significantly enhances performance, a standard few-shot learning scenario used in previous research. Treating these few-shot examples as a mini-training set at test time (which we refer to as the 'Direct I/O' baseline) also improves performance. Our TTT method builds upon and combines these ideas, achieving the most substantial performance gains, as evidenced in Figure 8. Again, the actual test output for a BB-Hard query is never used during TTT.
In Figure 2, we show our methodology of constructing *synthetic* tasks based on the given demonstrations given in the test input. We believe this directly falls under the definition of test-time training as we’re updating model parameters based on the test inputs before making a prediction [1].
**(2) “How many data used in "test time training"?”**
We would like to emphasize that our experiments on BB-Hard (Section 5) and ablations on ARC (Figures 5, 7) show that task-specific augmentations are not necessary to achieve major improvements with our method.
Previous work on test-time training [1] also makes use of data-augmentations, and we believe this is quite useful in expanding the TTT dataset and improving generalization.
We provide in-depth details of the data augmentations used on ARC and their applications in augmented inference in appendices C and E, and will include the stats we reported in the rebuttal to the revision. Please let us know if there are any further details you believe we should provide.
**(3) “there is already Lora-style test time training.”**
Thank you for pointing out Wang et al. (2024) paper. This is indeed relevant work showing inference-time LoRA updates. We plan on citing this paper and discussing its connection and distinction. It focuses on long text generation whereas our focus is on few-shot learning and reasoning tasks.
We believe it’s incorrect to say that all methods we used are well-studied. While individual components like TTT or LoRA updates at test-time are known, the main idea of the paper is to combine TTT with ICL, which is fundamentally novel. To the best of our knowledge, the *combination* and *systematic study* of Test-Time Training specifically within the In-Context Learning setting (TTT+ICL) has not previously been explored. The improvements over standard ICL with this method is very significant, and we believe it will be very interesting to many audience at ICML 2025.
**(4) “No new theoretical results for ICML submissions.”**
Our paper is not a theory paper! This paper presents a careful empirical analysis and significant improvements to few-shot learning abilities of language models. Please kindly refer to ICML 2025 Call For Papers document for different categories of acceptable papers.
[1] Sun et al. Test-Time Training with Self-Supervision for Generalization under Distribution Shifts. 2019. | Summary: This paper investigates test-time training (TTT) for improving language models' few-shot learning capabilities, particularly on novel tasks that require reasoning and abstraction.
- TTT significantly improves performance on challenging reasoning tasks, e.g. on ARC-AGI, TTT with in-context examples yields up to 6× higher accuracy (53.0%) compared to fine-tuned baselines, and reaches 61.9% when ensembled with program synthesis methods.
- On BIG-Bench Hard (BBH), TTT improves performance by 7.3 percentage points over standard few-shot prompting (50.5% to 57.8%).
## Update after rebuttal
I maintain my rating as accept.
Claims And Evidence: Nothing to report.
Methods And Evaluation Criteria: Nothing to report.
Theoretical Claims: Nothing to report.
Experimental Designs Or Analyses: Nothing to report.
Supplementary Material: Nothing to report.
Relation To Broader Scientific Literature: Nothing to report.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the positive review! We are happy to answer any new questions you have later during the rebuttal process. | Summary: This paper proposes to use test-time-training as a method for scaling test-time capabilities of large models under the few-shot setting, and tested the model's performance on ARC and BBH bechmarks. The experiment results show positive performance.
Claims And Evidence: Main claim: TTT helps few-shot performance compared to In Context Learning. The claim is very well supported by various ablation studies in BBH and ARC.
Methods And Evaluation Criteria: Methods
(1) Three methods for TTT: leave-one-out TTT, direct train TTT, augmentated data for TTT. These three are ablated in the experiments
(2) For training loss: test loss + all output loss + per-token loss.
Evaluation: ablation on different methods, losses, and metrics come from the benchmark. So it's sound and solid.
Theoretical Claims: No theoretical claim.
Experimental Designs Or Analyses: The authors clearly discussed and ablated the model components.
Supplementary Material: I briefly scanned through it, didn't read into the details.
Relation To Broader Scientific Literature: This would be interesting results to the "general reasoning" audience.
Essential References Not Discussed: No key missing references
Other Strengths And Weaknesses: Very well written paper.
Other Comments Or Suggestions: The paper did use many data-aug or voting mechanisms to make it work better. It'll be cleaner and cooler if there's no need for those.
Questions For Authors: I wonder whether authors have tried other domains such as coding or algebraic / math questions?
Recent findings show RL-trained models generalizes better, do the authors think TTT can still hold its advantage if the base model is strongly RL-trained, or the baseline is not FT but actually RL finetuned?
Also, many benefits disappear when actual scaling happens, not sure what will happen for this TTT finding.
(I think the paper's finding is interesting overall. Just curious about the authors' thoughts on the above questions.)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1. Have authors tried domains such as coding or math?**
This paper produced a SoTA way to apply TTT to LMs in the few-shot learning setting using the challenging ARC-AGI and BB-Hard as our reasoning problem sets. Note that our implementations do not leverage the CoT capabilities of the models. We believe extending this to non-few-shot learning settings and to domains where CoT is crucial is a very exciting research problem and we’re currently exploring these extensions.
**Q2. Recent findings show RL-trained models generalize better. Can TTT still be advantageous if the base model is strongly RL-trained?**
That is a great question! Recently, there has been a lot of interest in using RL to unlock long reasoning abilities in LMs. As mentioned in our previous answer, combining TTT with CoT models (both RL or no RL) is a promising future direction! Similarly, another possible extension is **Test-Time-RL**, where RL is incorporated into the test-time training process ([Simonds and Yoshiyama](https://arxiv.org/abs/2503.00735), 2025).
**Q3. Many benefits disappear when actual scaling happens, how will this affect TTT?**
In Section 4.4, we present scaling results for Llama models of 1B, 3B, and 8B parameters, where TTT improves performance by 480%, 163% and 157% respectively. Thus, TTT scales effectively with increasing model size.
**Q4. Augmentations makes the method a little complicated**
Our experiments on the Big-Bench Hard (BBH) suite of tasks show that the core method of fewshot TTT without any task-specific voting mechanisms or data augmentations still yields significant improvements. Moreover, our vanilla method (Fig. 7), which excludes any augmentations or voting procedures, still achieves a substantial performance boost—solving 22 tasks compared to just 5 (no TTT), marking a 440% improvement. | null | null | null | null | null | null | null | null |
(How) Can Transformers Predict Pseudo-Random Numbers? | Accept (poster) | Summary: The paper studies how Transformers can learn linear congruential generators (LCG), a class of simple pseudo-random generators. These are generators of the form $x_{n+1} = a x_n + c \mod m$ for some choices of $(a,c)$ and $m$. The authors demonstrate that Transformers can learn from data to simulate LCG in two settings: when the modulus is fixed, and when it is varied (which requires generalization to unseen values of $m$). The authors also analyze how the Transformer is able to compute the LCG, both when the modulus is fixed and when it is varied, and give a "pseudo-code" for the algorithm implemented by the Transformer. Additionally, the authors study how the depth and number of heads in the Transformer affect the performance on this task.
Claims And Evidence: The results in the work are interesting, and to my knowledge the setting studied in this paper is novel. The target problem of generating pseudo-random numbers allows the authors to carefully study the algorithm learned by the Transformer in two settings. The paper is well-written and the authors do a good job in introducing their results. The algorithms that are learned by the Transformer are clearly introduced, and the authors show convincing evidence regarding the mechanisms learned by the Transformer model, demonstrating the the Transformer indeed learns to implement these algorithms.
The main concern that I have is regarding the motivation for the results discussed in the paper. What are the findings in the paper telling us about Transformers and/or PRNGs that we did not know before? The paper certainly demonstrates that Transformers are able to learn LCGs in novel settings, which is interesting, but I am not sure if this is particularly surprising given evidence on other similar problems like learning modular arithmetic (e.g., [1]). The algorithm that Transformers learn is certainly clever, but I am not sure why we should care about how Transformers solve this particular problem. Is this a new or interesting algorithm for computing LCG? Are the interpretability methods used in the paper novel compared to other, similar works? It seems to me that there has already been quite a few works on interpretability of Transformers on similar tasks (such as [1] and [2]), so clarifying exactly what are the novel contributions and conclusions of this work compared to previous works would be helpful.
Minor:
- A formal definition of the period $T_m(a,c)$ would be useful.
- In think there is a typo in the last paragraph on page 8 ($m2^16$ should be $m2^{16}$?)
[1] PROGRESS MEASURES FOR GROKKING VIA MECHANISTIC INTERPRETABILITY, ICLR 2023
[2] Feature emergence via margin maximization: case studies in algebraic tasks, ICLR 2024
Methods And Evaluation Criteria: See above.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: No
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful assessment and candid feedback. To address the reviewer's main concern, we clarify the motivation behind and the contributions of our work.
One major goal is to answer the following important question: To what extent can deep neural networks crack various primitives in cryptography? PRNGs are an important and commonly used component in cryptography, making them an ideal starting point for investigation. Moreover, transformers are perhaps the most effective pattern recognition systems ever developed, while PRNGs provide some of the best ways of hiding deterministic patterns, so it is natural to pit them against each other and study the resulting learning dynamics. Among PRNGs, LCGs represent the simplest case to examine, and we demonstrate how neural networks can successfully learn the subtle patterns in these sequences.
This work is expected to be among the first in a series of studies examining neural networks' ability to learn increasingly complicated arithmetic sequences. This line of research may reveal completely unknown properties of widely used cryptographically secure PRNGs (CSPRNGs) like AES-CTR\_DRBG (also suggested by the Reviewer rtPM).
Another important goal is to explore the learning ability of transformers in controlled settings, where the data generation process is fully understood, unlike for real-world data like natural images and language. PRNGs provide a natural setting to study in-context learning ability, sample efficiency, and the role of architectural complexity, and further add to discoveries in interpretability of neural networks.
Next, we highlight our contributions:
- **Unexpected scaling behavior**:
We found how the number of in-context examples needed scales with problem complexity. One could argue that classical ways of breaking PRNGs exist in literature; however, we want to emphasize that our setup is qualitatively different because models are not explicitly informed that the data are LCG sequences. When training a Transformer model with just a collection of numbers without such context, there is no clear prior to determining whether the scaling law would be independent, sublinear, superlinear, or exhibit some other behavior. In other words, the scaling law pertains not merely to discovering $(a, c, m)$ or predicting the next number, but also to inferring the LCG rule.
- **New interpretability results**:
The ability to spontaneously factorize numbers as part of the learned algorithm was new and surprising. This, along with the capacity to estimate moduli, adds to a short but growing list of results in mechanistic interpretability studies.
Specifically, although LCGs utilize modular operations, these tasks are drastically different from classification settings like [1] and [2] and have much stronger long context dependence than (He et al., 2024), where they stack examples from the same modulus in random order. In our setting, for the Fixed-Modulus (FM) case, this difference leads us to a completely different set of features, where the model converts numbers implicitly to the digit-wise (mixed radix or RNS) representation without forming any circular embeddings like those found in existing works. On the other hand, since the modulus keeps varying for the Unseen-Modulus (UM) case, the circular patterns observed in modular arithmetic cannot be used here. In this case, the model has to develop the ability to estimate the modulus and combine it with similar features as FM cases to solve the task. To summarize, despite the surface-level similarity to modular arithmetic, the model learned vastly different underlying algorithms.
Finally, our findings do not necessarily provide novel mathematical insights beyond what human experts in PRNG development already understand. However, the models' autonomous discovery of periodic structures in RNS representations from LCGs without explicit guidance is noteworthy. This particular phenomenon (please check the updated theoretical claim for Reviewer 7WLW) is likely familiar only to specialists in the field. The fact that this discovery occurred without directed instruction has not been previously documented in the literature and represents an exciting contribution. It suggests that sufficiently powerful models can potentially uncover unknown and surprising patterns from cryptographically secure PRNGs that humans are currently unaware of. | Summary: This study analyzes whether a Transformer based on next-token prediction can learn an LCG sequence and, if so, how it models the sequence. Specifically, the study demonstrates that a Transformer can learn an LCG sequence given sufficient architectural capacity and training data. Subsequently, it examines the algorithms learned by the Transformer in both the FM and UM cases. Additionally, the study explores the training recipe for scaling up the modulus in this task.
Claims And Evidence: As shown in Figure 1, it is evident that the Transformer can successfully perform this task. Furthermore, based on the characteristics of the LCG sequence, the analysis of the algorithm learned by the Transformer is also clear.
Methods And Evaluation Criteria: Since this study does not specifically propose a new method, there is not much to cover in this section. However, one point of interest is the use of Abacus Embeddings in the modulus scaling process. It would be beneficial to include an analysis of this aspect.
Theoretical Claims: I have reviewed the theoretical claims and have no points to discuss regarding this aspect.
Experimental Designs Or Analyses: The experimental design of this study is well-balanced and valid, as it addresses LCG sequence learning across various scenarios, avoiding biases in the experimental setup.
Supplementary Material: I have reviewed most of the supplementary material, and it is particularly meaningful to present the learned results while varying different combinations of hyperparameters.
Relation To Broader Scientific Literature: The analysis of how the Transformer learns LCG is a valuable finding for future Transformer architecture design. Additionally, the task proposed in this study can serve as a foundation for further research, encouraging continued exploration of the inner workings of Transformers.
Essential References Not Discussed: I believe there is no missing essential work.
Other Strengths And Weaknesses: Strengths: Selecting an appropriate task to analyze the capabilities of a Transformer is highly meaningful. While some previous studies have proposed interesting tasks, they often involved somewhat contrived scenarios. However, the task explored in this study deals with a random number generator used in actual cryptographic applications, making it a more thought-provoking and relevant choice.
Minor Weakness: It would be beneficial to discuss how the Transformer analysis conducted in this study could be applied to more general tasks, such as those in the NLP domain.
Other Comments Or Suggestions: There seem to be some minor typos in the Discussion section on page 8 that need to be corrected.
Questions For Authors: If the study employs an autoregressive Transformer in the style of GPT, does that mean it uses a decoder-only Transformer operating with causal attention? I am curious whether causal attention is critically related to the study's findings. Additionally, I wonder whether an encoder-decoder or a prefix decoder-only Transformer architecture could also learn the LCG sequence, or perhaps even perform better.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging comments and thoughtful questions.
- **Questions**: Yes, our study employs a decoder-only Transformer architecture with causal masking (autoregressive).
This choice emulates the real-world scenario where observations are obtained sequentially and, at the same time, prevents the model from cheating by checking future information.
We agree that exploring other architectures would certainly be interesting. While a prefix decoder-only model might allow us to use a sequence design similar to our current approach, an encoder-decoder architecture would likely require changing our objective (for data efficiency), formatting the task as a translation from the sequence $x_0, x_1, \cdots, x_n$ to the parameters $(a, c, m)$. This reformulation would likely involve different learned features and scaling laws, and we leave them for future work.
- **Relation to General Tasks**: This is certainly an interesting comment. As we mentioned in the introduction, LCGs can be viewed as a special kind of formal language, the latter of which is deeply intertwined with NLP studies. We believe that starting with LCGs and incrementally increasing the complexity of our datasets would eventually allow us to build a systematic approach to studying formal languages and NLP tasks. | Summary: This paper studies how Transformer can learn linear congruential generators sequence with either fixed or changing modulus. For fixed module, they discover that Transformer learns the radix representation and predicts each radix digit almost independently. Notably, the lower digit are predicted by copying from previous period. However, the model is also capable of predicting non-repeating higher order bit. For changing module, they discovered that the model learns to cluster number based on modulus for small numbers (like 2,3,4, 6) in embedding and first-layer attention head. They also find an attention head that attend to the largest number in the current sequence and use this to estimate the modulus.
Claims And Evidence: Overall, the mechanism pointed out by the authors are well supported by both observation and intervention. However, some of the mechanism remains mostly unexplained:
1. In the fixed module case, the model is capable of predicting non-repeating bit in the modulus 2048 case, this is further shown in the scaling experiment that model only need example with m^{1/4} length to make successful prediction. How are these bit predicted by the model is not explained by the authors (while the authors point out that attention head that attend to 2^{k'} number ago is useful for this prediction.
2. In the changing module case, the authors argue that the mechanism learned by later layer is 'similar' to the unseen module case, but they only showcase that the attention head in higher layer will similar look back over period in the appendix. This evidence is not sufficient enough to support such a claim.
Methods And Evaluation Criteria: The paper train and interpret different models with both fixed and changing modulus. The evaluation is complete and thorough.
Theoretical Claims: There is some theoretical flaws in the main paper about mixed radix setting.
Equation (2) has a typo, there should be a summation from 0.
Equation (3) are incorrect in general, one can easily see this as some small number can easily have two different mixed radius representation under this setting ( 1 * 2 + 1 * 2^2 = 2 * 3). Some of the number are not represented by this representation. This makes the result beyond power of 2 in fixed modulus case hard to understand.
Experimental Designs Or Analyses: The design and analysis is overall detailed and complete. I have raised my concern in the Claims and Evidence part.
Supplementary Material: Yes, I read the supplementary experiments.
Relation To Broader Scientific Literature: This paper is similar to other interpretability work that studies how Transformer performs modulus calculus. One difference is that this paper does not discuss how the modulus calculus is calculated here.
Essential References Not Discussed: I think this paper needs to discuss relationship with in-context linear regression (with LCGs as regression in the modulus setting) and related work in using Transformer to solve LWE (regression in modulus setting with noise). Some of the modulus calculation interpretability work is also missed. An incomplete list includes [1, 2, 3].
[1] https://arxiv.org/abs/2211.15661
[2] https://arxiv.org/abs/2306.17844
[3] https://arxiv.org/abs/2207.04785
Other Strengths And Weaknesses: The scaling experiment is very original and interesting.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How is the mixed radius experiment carried out in detail? Especially, how is the mixed radius calculated for the experiment here?
2. What is the possible mechanism that enables prediction beyond calculating?
3. What implication will this work bring for the PRG community? Is LCG with known modulus actually easier to break with optimal breaking algorithm than the scaling law shown here (as it only requires regressing two numbers)? How is Transformers compared with the current best breaker in terms of complexity in the Unknown modulus setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for giving a detailed feedback and raising important questions. **Note**: New experiment figures at: https://doi.org/10.6084/m9.figshare.28703570.v2
## Theoretical claims and related experiments
The reviewer is correct in pointing out that equation (3) is incorrect in general and incompatible with the experiments in Figures 3 and 6 (i.e. our experiments are accurate, but equation (3) is in error). Below, we present the corrected version of equation (3) and surrounding text:
We used the Residual Number System (RNS) (Garner 1959), where each number is represented by its values modulo pairwise coprime factorizations of $m$. Specifically, consider sequences with a composite modulus $m$, which has a prime factorization $m = p_1^{w_1} p_2^{w_2}\cdots p_q^{w_q}$. In this case, we can *uniquely* represent each number $(x \text{ mod } m)$ as the tuple of residuals $( x \text{ mod } p_1^{w_1}, x \text{ mod } p_2^{w_2}, \dots, x \text{ mod } p_q^{w_q} )$ and similar to equation (2) we can further decompose each residual,
$$x \text{ mod } p_j^{w_q} = \alpha_{j,0} p_j^0 + \alpha_{j,1} p_j^1 + \cdots + \alpha_{j,w_j-1} p_j^{w_j-1}$$
where $\alpha_{j,w} \in \{ 0, 1, \dots, p_j - 1 \}$ are base-$p_j$ digits. We refer to $\{\alpha_{j,w}\}$ as the "RNS representation". When period $\mathcal T_m = m$, we can show that each digit $\alpha_{j,w}$ has a period of $p_j^w$. Then, the rest of the discussion remains the same. For example, the $r$ step iteration still reduces the period of each digit $\alpha_{j,w}$ from $p_j^w$ to $p_j^w / \gcd(r, p_j^w)$.
In experiments for Figures 3 and 6, we calculated the collection of $\{\alpha_{j, w}\}$ for the target and model predictions, then compared which $\alpha_{j,w}$ matching. Since these experiments already implicitly use RNS representations, they remain unchanged.
(We had erroneously thought that the mixed-radix representations are similar to RNS representations -- however, we later realized that this is not the case.)
## UM is similar to FM
The key difference between FM and UM is that the model has to figure out the modulus in-context. Once the model determines $m$, the sequence prediction task for UM is not any different from the FM case. Beyond the attention heads we show in the appendix, the per-digit accuracy in Figure 6 also shows a ladder-like structure similar to FM cases, suggesting the copying nature of the UM case.
Such $r$-step copying behavior is crucial for the model to make correct predictions even when the estimate for the modulus is imprecise. The model achieves this in steps: i) it estimates the modulus $\widetilde m_{\mathrm{est}}$ using the largest number observed in the context so far -- which is usually slightly smaller than the actual modulus $\widetilde m_{test} < m_{test}$. ii) The model then corrects this imprecision by leveraging the $r$-step copying behavior. This effectively reduces the task to modulo $m / r$. Then, the small difference in estimating the modulus gets rounded off like $m/r = \lfloor m_{\mathrm{estimate}} / r \rceil$.
## Mechanism for predicting higher bits
Due to space constraints, we refer the reviewer to the "Learning of higher bits" section of our response to Reviewer rtPM.
## Related works
We thank the reviewer for pointing out relevant references. We will cite these references and add a thorough discussion on similar works in the final version of the paper.
## Questions
We have addressed Questions 1&2 above.
Q3. To the best of our knowledge, all LCG breakers *assume that the underlying sequence is LCG* and focus on estimating the parameters $(m, a, c)$ [1] or predicting the next number [2]. The strong prior on the form of the sequence makes the task much simpler. (termed "open-box" in cryptography).
In contrast, the Transformer has no such prior, thereby making the task significantly harder than simply finding $(m, a, c)$ (termed "closed-box"). During training, it learns to parse and utilize the patterns in sequences of seemingly random numbers, and applies these abilities for out-of-distribution sequences at inference time. Thus, it is difficult to make a fair comparison between the "closed-box" Transformer approach with "open-box" optimal breaking algorithms.
We emphasize that the primary purpose of this paper is not to break LCGs with optimal sample complexity, nor to compete with existing open-box algorithms. Rather, our main objective is to investigate the extent to which a Transformer model can break various cryptographic primitives -- this paper serves as an initial step towards that goal. That the model can identify patterns already known to experts is encouraging, suggesting that Transformer models might also discover previously unknown patterns in more complex PRNGs (see also our response to Reviewer tdrB).
[1] J. B. Plunstead, Inferring a Sequence Generated by a Linear Congruence, 1982
[2] J.Stern, Secret linear congruential generators are not cryptographically secure, 1987 | Summary: Summary:
This paper trains transformers on the task of in-context predicting the next element of a sequence generated with a Linear Congruential Generator (LCG). An LCG has the form:
x_{n+1} = a x_n + b (mod m), where
a, b, m are unknown numbers.
The paper studies two settings: 1) Fixed-Modulus (FM) setting, where m is the same in all contexts, and 2) Unseen-Modulus (UM) setting, where m varies, and the model is tested on held-out values of m.
The paper mechanistically interprets transformers on these two tasks. It argues that the radix representation for (mod m) is critical to how the model internally learns to represent the input. It argues that the embedding of the inputs reflects their structure in the radix representation.
Additionally, it argues that the model learns the lower-order bits by performing a lookup to previous terms in the sequence whose period corresponds to the radix representation.
Claims And Evidence: The experiments were convincing to me -- with ablation and patching studies that confirmed the researchers' interpretations of the internal workings of the model.
1) However, I do not understand how the higher-order bits in mod 2048 sequences, are found by the model. This was mysterious to me, and I felt like it was not addressed by the paper. The paper mentions that if you look at the sequence x_{i}, x_{i+r}, x_{i+2r} for some r dividing 2048, then the higher-order bits have a period of length 2048 / r. However, the model does not have access to enough terms in this sequence to perform a backwards lookup in the same way that it can for the lower-order bits.
So what is the mechanism to learn these higher-order bits? I couldn't find the answer to this question in the paper, although the paper mentions "multiple faint lines" in the attention scores "that are 2^{k'} distance apart" for k' < k-1, and that it "combines information from all these tokens to predict the higher bits". This felt too vague for a paper that has as an objective to fully mechanistically interpret the network.
2) Another related question that I have is: do you find that it is much harder to learn in the fixed-modulus setting when m is a prime? In that case the trick that you have described for learning lower-order digits no longer applies. When the model groks, what is the mechanism by which it learns the LCG in that case?
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims. I checked the Hull-Dobell theorem and they have indeed cited it right.
Experimental Designs Or Analyses: Yes, I looked through the entire paper and it seemed OK. See my comments on "claims and evidence" above, since I think that a full mechanistic understanding is not obtained by this version of the paper.
Supplementary Material: Yes, I looked through it.
Relation To Broader Scientific Literature: This paper fits into work on mechanistic interpretability, which seeks to reverse-engineer trained neural networks. I am familiar with the results on neural networks learning arithmetic, and there are some elements in this work that vaguely resemble that. However, I am not aware of papers that do what this paper is doing, so it seems original to me.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: I think that the setting is original and the results are cleanly described and presented. With the exception of the explanation of the higher-order bits, I think that this paper does a good job understanding how neural nets learn LCGs.
Thus, I think of this paper as resolving the case where m is a product of a few small primes: e.g. m = 2310 = 2*3*5*7*11 seems satisfactorily addressed by this paper, but for m = 2048 I am not fully convinced by the paper yet.
Other Comments Or Suggestions: * Typo lines 105-109. The variable $m$ should be $p$ instead.
* Typo Line 167? Equation (3) should be a product of the sums, instead of a sum of the sums ?
* Typo in Figure 10 caption. eample should be example.
* Typo in Line 436, column 2. 2^16 should be 2^{16}.
---
* The answer in https://security.stackexchange.com/questions/4268/cracking-a-linear-congruential-generator shows that LCGs can be cracked with a simple classical algorithm -- they have notably bad security. It is not as much of a surprise, therefore, that transformers can learn to crack LCGs than if they could crack more complicated schemes. Do you think that you could train a transformer to crack encryption schemes like AES? The scheme does not have strong theoretical backing insofar as I am aware, so it could be an excellent candidate to try.
Questions For Authors: Could you please clarify my confusions about higher-order bits above -- either by pointing to what I am missing in the current paper or by providing a new analysis that would convince me? I would raise my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful assessment and valuable feedback. We have added new experiment Figures S1, S2 at the link: https://doi.org/10.6084/m9.figshare.28703570.v2 -- which we will refer to in our response below.
## Learning of higher bits
Below we present a modified and more accurate account of copying and predicting behavior in the FM setting. The corrections and modified experiments mentioned here will be reflected in the final version of our paper.
Consider the model predicting the number $x_t$. For copying the last $k$ bits (least significant), attending to only one token is required -- located at position $t - 2^k$, where $k = \lfloor \log_2 t \rfloor$. This can be seen from our new experiments in Fig.S1(a). Note that this is a stronger statement than the one in the paper, where we mentioned the need for two bright lines for copying.
Attention to all the other tokens contributes towards predicting higher bits. Specifically the attention to token position $t - 2^{k-1}$ plays an important role. Note that this is a correction to the comment in the paper that the line at $t - 2^{k-1}$ only contributes to copying. This is demonstrated in Fig.S2(b), which is an improved version of Fig.15. (We found a subtle mismatch between selecting the attention mask using top two values and using desired two token positions.)
It shows that even if we mask out attention to all tokens except positions $t-2^k$ and $t - 2^{k-1}$, the model can predict higher bits with remarkable accuracy. Here we present an intuitive explanation of predicting the higher bits via a simple example, using the sequence from Fig.2 in the paper: $(1962, 411, 624, 1593, 838, \dots)$.
Consider the model predicting the number at $t=4$ ($k=2$). In predicting $x_4 = 838 \equiv (01101000110)_2$, the model attends to $x_0 = 1962 \equiv (11110101010)_2$ and $x_2 = 624 \equiv (01001110000)_2$.
We label the numbers by their lowest 3 bits: $x_0 \equiv (\cdots 010)_2$ and $x_2 \equiv (\cdots 000)_2$. Then, by copying the lowest two bits, we have $x_4 \equiv (\cdots ?10)_2$. Now, if we consider $r=2$ iterations and drop the (constant) lowest bit, we obtain the new reduced sequence $x'_0=x_0 \equiv (\cdots 01)_2$, $x'_1=x_2 \equiv (\cdots 00)_2$, and $x'_2=x_4 \equiv (\cdots ?1)_2$. Since the second lowest bit of this reduced sequence has period $8/2=4$, the only possible way to satisfy the period condition for the last two digits in the reduced sequence $x'_t$ is to have $x'_2 \equiv (\cdots 11)_2$ (and $x'_3=x_6 \equiv (\cdots 10)_2$).
In this way, the model calculated higher bits just by using the constraints from the period, along with the knowledge of $x_{t - 2^k}$ and $x_{t - 2^{k-1}}$.
This argument can be extended to even higher bits, by considering $r=2^k$-step iterations and added constraints from digit-wise periods. In practice, this method can be made even more robust with the knowledge of "other faint lines".
## LCG with prime $m$
We find that in the FM setting with prime $m$ the task becomes much harder. Since there are no digit-wise periodic patterns, the model cannot perform the algorithm described in Section 4.2. In Fig.S2, we trained two identical models to learn $m=2039$ and $m=2048$, and observed that $m=2039$ cannot be learned within the same number of training steps. To rule out potential constraints from model capability, we used depth 2 models (compared to depth 1 in the paper).
Note that moduli made up of powers of small primes (e.g. $2^w$) are ubiquitous in practice due to their computational efficiency. This motivated our choice of moduli in the paper.
## Classical algorithms vs Transformers
While LCGs can be cracked using classical algorithms, Transformers face a fundamentally different challenge in our setup -- they have to determine the underlying sequence in an LCG without explicit instruction. In addition, these Transformers have to reverse engineer an algorithm from seemingly random sequences. We think that without explicit training, it is non-trivial to predict if Transformers can learn to predict even these simple LCGs.
Regarding AES and more sophisticated encryption schemes, we agree that these would be interesting targets for future research. Our work on LCGs serves as the first crucial step for predicting these complex encryption schemes using Transformers.
For further discussion, we refer the reviewer to the "Unexpected scaling behavior" paragraph in our response to Reviewer tdrB.
## Corrected Equation 3
Due to space constraints, we refer the reviewer to the "Theoretical claims and related experiments" section of our response to the Reviewer 7WLW.
We hope that we have satisfactorily addressed all of the reviewer's questions, especially concerning learning of higher bits.
---
Rebuttal Comment 1.1:
Comment: Reposted as a rebuttal comment because I realized authors cannot see official comments:
Thank you for your detailed reply. The experiment in S1 is convincing me that your explanation is along the right lines (there is a typo in caption of S1 fyi with t-2^k). However, there is something I’m still missing. In your example above in the reduced sequence why can’t x_2’ end with 10 and x_3’ end with 11?
Why is it necessary that x_2’ ends with 11 and x_3’ with 10? Is this just a fact about possible LCG sequences?
EDIT:
------
Thank you for your response below, I think this clarifies things for me. If you could include an explanation of this in your paper (and the more general case for high-order bits of m = 2^k), I think that it would help readers a lot because the high-order bits were the most mysterious part of this analysis for me.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for carefully following our rebuttal and engaging in follow-up discussion.
The necessity that $x_2'$ has to end with $11$ and $x_3'$ ends with $10$ is indeed related to possible LCG sequences. Specifically, it follows from the digit-wise periods along the seuqence. We had left this fact implicit due to space constraints -- but we elaborate it here.
Since we select the test sequences using the Hull-Dobell theorem, the original LCG sequence has period $m=2048=2^{11}$. In this case, $w^{th}$ lowest digit (bit) has a period of $2^w$ along the sequence. After copying the last digit, the reduced sequences ($r=2$) also maintains this property.
Thus, in the redcued sequence $\\{x_0', x_1', x_3', x_4', \dots \\}$ the lowest digit has period 2: $\\{ (\cdots 1), (\cdots 0), (\cdots 1), (\cdots 0), \dots \\} $.
Similarly, the second lowest digit has period 4: $\\{ (\cdots 0 \cdot), (\cdots 0 \cdot), (\cdots 1 \cdot), (\cdots 1 \cdot), \dots \\} $.
Now if $x_2'$ ends in $10$ and $x_3'$ ends in $11$ then the resulting sequence $\{x_0', x_1', x_3', x_4', \dots \}$ would look like $\\{(\cdots 01), (\cdots 00), (\cdots 10), (\cdots 11)\\}$, which violates the digit-wise period property of the LCG sequence. Consequently, $\\{(\cdots 01), (\cdots 00), (\cdots 11), (\cdots 10)\\}$ is the only viable option.
We are happy to provide further clarifications and answer any other questions the reviewer may have. | null | null | null | null | null | null |
Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time | Accept (poster) | Summary: The paper presents bounds on the capacity of spiking neural networks to approximate functions on static inputs for discrete-time spiking neural networks. The theory includes a universal approximation theorem (which is simple, as noted by the authors), and d bound on the number of regions. The experiments cover the effect of layer width on the capacity of the network, some effects of the decay term, and a counting of the number of regions in very small networks.
Claims And Evidence: I think the claims on the abstract are omitting an important fact: their results are only applicable to static inputs. Even the title refers to time, but the rest of the paper does not really address timing or temporal patterns. The authors should really mention this.
Methods And Evaluation Criteria: It is not entirely clear to me why did the authors chose CIFAR for small, feed-forward networks, where something simpler (MNIST) would be more appropriate given the scale used.
The use of very small networks in 5.3 is ok for illustration, but it would be useful to know how this scales (if possible, if not explain).
Table 1 should be complemented by the growth of number of regions with T. ALso, I would imagine that for T=1 this result is the same as for heavyside networks, so it would be useful to know how much the difference is.
Theoretical Claims: The proofs look reasonable.
However, there are a couple of issues:
- The standard LIF model is not strictly what is used in Eq. 1. There the autors reduce the membrane potential by a fixed amount, whereas the classical LIF model sets the membrane potential to a specific value. The authors should mention this distinction.
- The proof sketch for 3.1 is rather hard to follow for something that is meant to give an intuition. I think the authors should state something along the lines of: partition the input state space in hypercubes of maximum radius r (bounded by the lipschitz constant I assume), this is the first layer. then partition the ouptut space in a similar fashion, every output is one neuron. Connect every input to every output. Although I might have misunderstood the proof.
- The first part of Proposition 3.3 seems to be the same as Thm. 3.2. If that is the case, then the proposition should also mention the lower bound. But then again, is not clear to me when do you used the lower bound. So I don't really know why you need this propositoin.
- The remark on the comparison with RELUS (expressivity growth) could be misleading. If I understand this correctly, the growth might be faster but it would saturate if beta < 1 (periodic case, there is a maximum latency). For beta =1, I am not 100% that the growth will continue (because this might be similar to a quantized network, which is definitelly less expressive than ReLu).
- I think a comparison with the continuous-time LIF should be discussed.
Experimental Designs Or Analyses: The experiments are alright, but there are a couple of problems:
- They only report training accuracies, not testing accuracies. While I get that for the expressivity it does not matter, I think it would be useful to at least know if the trends translate to testing.
- Why did the authors pick a linear separation toy example, when their theory refers to networks that could compute any function? I think it would be useful to try some more complex logic functions (XOR or parity checks, for example). This would require more regions than currently specified by the linear case.
Supplementary Material: I checked the reference to previous works, and briefly looked over some of the proofs.
Relation To Broader Scientific Literature: I think that the authors could discuss the relationship of their work with the expessivity of quantized neural networks, specially in the case of beta=1. The rationale is that if the neurons are simply integrating binary inputs, then it can be considered as a form of counting discrete inputs. It would also help to note that quantized networks might in principle be a better solution for low-power ANNs dealing with static inputs.
Essential References Not Discussed: Quantized neural networks, if the connection is relevant (which I think it is).
Also, whether T=1 recovers the existing bounds with heavyside activation functions on ANNs should be mentionned.
Other Strengths And Weaknesses: I think it's good to have such theorems for SNNs. Also, the relevance of the first layer is an interesting angle.
The main weakness is that the title/abstract does not mention the key limitation (temporal structure is missing).
Other Comments Or Suggestions: - Typo in Proposition 3.3, the infinity should be on the low.
When writing back, please do not be overly polite. I would prefer to have just the answers to the points raised without mentioning how nice of me to do this at every answer.
Questions For Authors: I don't see the point of Proposition 3.3. Looks very similar to the previous theorem, just with a lower bound as an add-on that is not really used anywhere else. Did I miss something?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer MuQ3 for the feedback. Below we individually address the (not discussed) concerns:
## 1. Abstract/title
We agree that the focus on static data should be highlighted in the abstract (instead of only in the introduction as of now). We will acknowledge this in the abstract of the revised version. Concerning the title, with 'time' we actually referred to the temporal aspects in the neuronal dynamics itself, i.e. the importance of time in the propagation of information via binary spikes.
## 2. Mismatch between LIF models
As of today, reset-by-subtraction seems to be the more common choice for the reset mechanism (it is the standard setting in many neuromorphic software platforms such as snntorch or SpikingJelly) in comparison to reset-to-zero (or to a fixed value). Experimental evidence and heuristic reasons (often to avoid information loss) for this choice can be found e.g. in [1], [Eshraghian et al., 2023], [Rueckauer et al., 2017]. Also, we believe that extending our results to the other mechanism is feasible and possibly not too difficult to achieve.
[1]: RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network, Han, Srinivasan, Roy, CVPR 2020
## 3. Proof sketch issue
We believe that the reviewer is referring to Thm 3.2 rather than Prop 3.1. In this case, the main points of the sketch are (1) continuous functions can be approximated by step functions, and (2) step functions can be realized by discrete-time LIF-SNNs. Such an argument is quite common in deep learning literature on expressivity. In the revised version, we will take into account the reviewer's high-level suggestion to improve the clarity of this proof sketch.
## 4. Significance of Prop 3.3
In Prop 3.3, the first statement is indeed a direct consequence of Thm 3.2, while the second statement requires a different construction. Basically, the result answers the question of whether the constructive approximation derived in Thm 3.2 is optimal by showing in the worst-case scenario that the number of required neurons is minimal up to constant factors. In the formulation of Prop 3.3, we repeated the result from Thm 3.2 in the first part to facilitate the comparison between upper and lower bounds. As the reviewer noted, this proposition is not applied anywhere else. It is intended as an extension of Theorem 3.2, showing the optimality of the approximation.
## 5. Possible misleading comparison with ReLU case
In our analysis, we do not distinguish between the leaky and non-leaky case. Thus, in both cases, the maximum number of regions could achieve the quadratic growth with respect to the latency. The mentioned remark compares the scaling of the maximum number of regions with respect to different hyperparameters, highlighting the role of depth and latency in the model complexity.
However, we think that just directly considering the maximum number of regions is not sufficient, especially when comparing the expressivity of SNNs to ReLU-ANNs. The main reason is while SNNs realize piecewise constant functions, ReLU-ANNs realize piecewise linear functions. Furthermore, it is worthwhile to clarify that other aspects besides the maximum number of regions are also relevant to the model expressivity. This is also the motivation for our ablation study in Section 5.2 and 5.3.
## 6. Experiments in Section 5.1
- Lack of test accuracy plots: As the reviewer also pointed out, we found these experimental results not relevant to the theoretical expressivity results and therefore excluded them from the paper. However we may add test accuracies to the revised paper for completeness.
- Choice of datasets: We conducted experiments on MNIST, but our observation was that MNIST is too simple to gain insights into expressivity as the training data is almost interpolated even with very small networks, which makes it difficult to reflect the improvement of the model expressivity along with the subsequent hidden layers' width. Therefore, we think that a more complex dataset like CIFAR10 is more appropriate. We also conducted similar experiments on SVHN, leading to results with the same trend as seen for CIFAR10 and will include these results in the revised version.
## 7. Experiments in Section 5.3
- Table 1: We agree that it would be useful to know how the number of regions scales. However, for now, our naive counting algorithm is based on grid search and may overlook several small regions when $T$ gets larger (given the observations in Section 5.2). In the revised version, we will incorporate the reviewer's suggestion by complementing the case $T=1$ as well as a few more different (small) values for $T$.
- Choice of toy example: Our goal is to show that the model may significantly reduce the number of regions during training. Choosing more complex data may be misaligned with this goal, while requiring larger networks and making region counting prohibitive.
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers. I will update my score. | Summary: The authors have innovatively proposed a discrete-time Leaky Integrate-and-Fire (LIF) neuron model for Spiking Neural Networks (SNNs), which represents a fundamental and cutting-edge contribution with significant implications for the entire field of neuromorphic computing. The authors have dedicated substantial effort to theoretically show how discrete-time models can approximate continuous functions. Furthermore, they have extensively validated the impact of simulation time steps and the number of layers on the performance of the discrete-time LIF model.
Claims And Evidence: Yes, the claims are very clear.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I have checked the correctness of all proofs.
Experimental Designs Or Analyses: The effectiveness of the proposed method has been validated through some experiments. However, the experimental scope of the paper remains relatively limited. For instance, the discrete-time LIF model should be integrated and tested as a replacement for the standard LIF in existing deep SNNs to comprehensively evaluate its impact on performance, inference time, power consumption, and other critical metrics. Such experiments would provide a more thorough and convincing validation of the proposed method's practical applicability and advantages.
Supplementary Material: The short supplementary material provided after the manuscript offers a wealth of theoretical proofs. I greatly appreciate these solid proofs. This addition not only enhances the rigor of the work but also provides valuable insights into the foundational aspects of the proposed method, further solidifying its credibility and significance.
Relation To Broader Scientific Literature: No
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. Designing a novel model is highly significant, as it represents a foundational contribution to the broader fields of SNNs and neuromorphic computing. This work has strong potential for widespread adoption and application.
2. The writing is very clear, easy to understand, and straightforward, making the content accessible to readers.
3. The authors provide rigorous theoretical proofs for the discrete-time LIF model and validate the effectiveness of the proposed neuron model through a series of experiments.
Weaknesses:
1. Limited Experimental Validation. The authors have validated the impact of simulation time steps and the number of layers on the performance of the discrete-time LIF model. However, these experiments are insufficient to fully demonstrate the effectiveness of the proposed novel neuron. It would be more convincing if the authors could replace standard LIF or other neurons with the proposed model in various mainstream deep SNNs and compare key metrics such as accuracy, inference time, and power consumption. Additionally, validating the proposed neuron across multiple tasks (e.g., classification, object detection, and even optical flow estimation) would make the work more comprehensive and solid.
2. Lack of Comparative Experiments. The authors did not compare the proposed neuron with other existing neuron models, which is crucial for highlighting the novelty and contributions of their work. Including such comparisons would help demonstrate the advantages of the proposed neuron over existing alternatives.
3. Feasibility Analysis for Neuromorphic Chips Deployment. The paper lacks an analysis of the feasibility of deploying discrete-time LIF-SNNs on neuromorphic hardware. One of the most significant advantages of SNNs is their potential for low-power operation, and the ultimate goal is to validate their performance on neuromorphic computing chips. It is essential for the authors to provide a theoretical feasibility analysis of whether the proposed neuron can be effectively deployed in neuromorphic systems, as this would greatly enhance the practical relevance of their work.
4. Writing Needs Improvements. The paper could benefit from several writing improvements. For instance, the related work section is too brief and should be expanded to provide a more thorough background. Additionally, there are formatting issues, such as excessive spacing on page 8, which should be corrected to improve the overall readability and professionalism of the manuscript.
Other Comments Or Suggestions: The authors' work is still in its early stages, and with further refinement, it has the potential to become a highly impactful contribution. We look forward to seeing this research evolve into a more comprehensive study, with the possibility of being published in top-tier conferences such as NeurIPS or ICLR in the future. This work holds great promise for advancing the field of neuromorphic computing.
Questions For Authors: Please see the weaknesses and response each comment. Besides, two questions are listed below:
1. Why are the proposed discrete-time LIF-SNNs more advantageous than quantized ANNs' activation functions? Although there are some differences in their mechanisms, what are the distinctions in terms of performance, inference time, and power consumption? Which approach do you think is more likely to represent the future trend?
2. Is the deployment of the proposed discrete-time LIF-SNNs on neuromorphic computing chips feasible? For instance, could they be implemented on platforms like Intel Loihi2?
Based on the above comments, I am currently leaning toward a weak reject, or perhaps borderline. However, I will also consider the other reviewers' opinions and the authors' responses. If the authors provide satisfactory answers, I will likely raise my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank to Reviewer Wwin for the feedback.
We would like to start our response by clarifying—although this is stated multiple times in the paper (e.g. lines 21, 55, 100) as well as discussed explicitly in the related works section (e.g. lines 1466-1469)—that we are not proposing a new SNN or neuronal model. Rather, we are analyzing a well-known model and providing a rigorous mathematical framework for it. The reviewer may have been confused by our use of the term 'our model' on a few occasions. However, we used this term only after clarifying that the model is already widely used and was intended merely as a shorthand for the one analyzed in our paper-a stylistic choice in writing. We believe this is a common practice, but if it caused confusion, we are open to revising it in the final version of the paper.
Our motivation for formalizing the discussed SNN model in a rigorous framework is to ensure that our results are both mathematically sound, general, and understandable. It is important to note that several variants of the studied model exist (which are captured by our framework), differing in aspects such as which parameters are trainable. Comparisons with other neuron models have already been well established in numerous prior works e.g. [Stanojevic et al., 2024], [Fang et al., 2023], etc. In this regard, we consider the comparative experiments and experimental validation (in the sense suggested by the reviewer) to be beyond the scope of our paper.
On the other hand, this model is already widely used; for instance, it is essentially the one implemented in many neuromorphic software and hardware platforms such as [Eshraghian et al., 2023], [Fang et al., 2023], [Gonzalez et al., 2023]. Therefore, we find the concern about the feasibility of deployment on neuromorphic chips not valid.
Concerning the reviewer's comment on writing, the related work section is expanded in Appendix D, and we highlighted only the most relevant literature in the related work of the main part. Additionally, we could not resolve the formatting issues yet due to the fixed ICML template, however, this should not be an issue for a camera-ready version. | Summary: This paper analyzes the representational power of discrete-time leaky integrate-and-fire spiking neural networks (LIF-SNNs) as function approximators. It demonstrates that LIF-SNNs realize piecewise constant functions over polyhedral regions, establishes their universal approximation capabilities, quantifies the required network size, and provides supporting numerical experiments.
Claims And Evidence: I appreciate the theoretical focus of this work, which is relatively rare in the SNN literature. However, given the complexity of the proofs, even after reviewing parts of the appendix, I am not fully certain that I have grasped all the details. I outline a few key concerns below.
1. Distinction from activation quantization studies: In Section 3.1, the paper acknowledges that for $T=1$, LIF-SNNs are equivalent to Heaviside ANNs, effectively performing 0-1 binarization. This raises the question of whether Section 3.2 primarily builds upon existing results on Heaviside ANNs. Similarly, could the theoretical results in Section 4 be interpreted as an extension to $T$-level quantization? From an ANN-to-SNN conversion perspective, an SNN can be viewed as a higher-order quantized ANN, which seems to suggest potential overlaps with prior work.
2. Modeling of multi-step SNNs and its implications: The theoretical framework in Section 4 appears largely independent of Section 3. Moreover, the modeling of $T$-step SNNs raises concerns. In line 271, the paper claims that an LIF-SNN with $T>1$ can be represented as a binary ANN with a block-diagonal weight matrix and varying bias terms. However, this claim is not entirely rigorous, as the bias terms are not arbitrary but rather determined by the input and weight parameters. This simplification significantly reduces the temporal dependencies that characterize multi-step SNN dynamics, potentially underestimating the complexity introduced by temporal correlations.
3. Assumptions on temporal constraints and their impact: Due to the aforementioned simplification, the subsequent analysis in Section 4 seems to model each time step as an independent network. This implicitly assumes a weak constraint: that at step $t−1$, the firing patterns of neurons are arbitrary and can freely modify the hyperplane structure at step $t$. The combinatorial arguments then proceed based on this assumption. However, in reality, there exist strong temporal constraints between steps $t$ and $t−1$ (e.g., for the first layer, each step $i$ contributes a bias term given by $input \times i \mod Threshold$. These constraints could limit the network's expressiveness and might explain certain periodic behaviors observed in Section 5.2 that were not explicitly addressed in the theoretical discussion.
Methods And Evaluation Criteria: Given the theoretical nature of the paper, the use of toy experiments for numerical validation is reasonable.
Theoretical Claims: The theoretical results are complex. While I grasp the general ideas, I have not thoroughly verified the appendix proofs.
Experimental Designs Or Analyses: The experiments are well-structured and systematically explore low-latency effects in SNNs. The findings in Section 5.3 align directly with the theoretical results on quantization, whereas certain observations in Section 5.2 do not appear to have been anticipated in the theory. This might be related to the assumptions discussed in Question 3.
Supplementary Material: I selectively reviewed parts of Proofs 3 and 4.
Relation To Broader Scientific Literature: This paper adopts an ANN-to-SNN conversion perspective, interpreting SNNs as a form of quantized ANNs and extending this view beyond binary quantization. This is an interesting and relatively novel approach in the SNN literature.
Essential References Not Discussed: No additional references need to be suggested.
Other Strengths And Weaknesses: No additional comments.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: My primary questions correspond to the points 1-3 discussed above. Overall, I find this work valuable and insightful, but I would appreciate further clarification on these theoretical assumptions in the rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer 9JUu for the feedback. Below we address each mentioned point:
## 1. Distinction from activation quantization studies
First, we confirm that our analysis in Section 3.2 indeed builds on existing results on Heaviside ANNs, as we point out at line 165 (but should be stated more clearly in a revised version). Moreover, we also discussed the relation to existing results for Heaviside ANNs from [Khalife et al., 2024]. Overall, the proof idea is adopted from the consideration of Heaviside ANNs in this paper, with necessary adjustments due to the considered function class and the specific type of result.
Second, concerning the ANN-SNN conversion and quantization perspective, we think that this is an interesting and valid viewpoint about SNNs. In fact, this perspective has been discussed in several previous works, e.g. [Eshraghian et al., 2023] or [1]. However, such discussions often do not offer a rigorous mathematical verification or are restricted to special cases. For instance, the work [1] shows the equivalence between ANNs with the quantization clip-floor activation and SNNs based on non-leaky IF neurons with rate coding. In this case, one could study the input partitioning of SNNs through the specific quantized ANNs. However, such an equivalence does not hold in more general cases, in particular in the case of LIF neuron model that we study in our paper. In other words, given a discrete-time SNN model, it is, in general, not clear how to get an exactly equivalent quantized ANN, through which one could better understand the original SNN. Nevertheless, we appreciate the reviewer's perspective about the link between quantized ANNs and SNNs and believe that describing this connection in a rigorous mathematical way would indeed provide meaningful insights and is an important future work.
## 2. Modeling of multi-step SNNs and its implications
First, the connection between Section 3 and 4 can be explained as follows. Given that discrete-time LIF-SNNs are universal, even in the 'trivial' case of $T=1$, one natural question is what the different hyperparameters, such as depth, width, and especially latency contribute specifically to the network expressivity (and beyond). This question is addressed in Section 4 by studying the model complexity measured by a reasonable metric, which is the richness of the input partitioning. In this sense, Section 4 can be seen as a natural continuation of Section 3, albeit now the latency by design plays a more crucial role compared to the previous analysis.
Second, we want to clarify the meaning of the last paragraph of Section 3. In general, this paragraph was intended to introduce the similarity (linear partitioning) between multi-step SNNs and Heaviside-ANNs as motivation for the upcoming Section 4. With "varying" or "flexible" biases, we indeed mean that the biases are input- and time-dependent, as pointed out by the reviewer. This connection is rigorously established Equation 13 (in the appendix) that we referred to in the paragraph. Moreover, we would like to emphasize that we do not assume any simplification of the temporal dependencies. In fact, our analysis accounts for such temporal dependencies and involves computing the exact bias in each time step in relation to inputs and weights. Dealing with the mentioned temporal dependencies is the main point in Lemma 4.2 (or repeated as Lemma B.13 in the appendix) and poses the main challenge in analyzing the actual number of linear regions.
Finally, we will try to clarify these issues in a revised version of the paper to avoid such confusion by modifying the mentioned paragraph.
## 3. Assumptions on temporal constraints and their impact
As stressed in the previous point, we do not assume any simplification on temporal constraints, in particular, we do not assume that each time step involves an independent network. In fact, since we took into account the temporal dynamics, the firing pattern of each neuron is not arbitrary (which would lead to $2^T$ different binary patterns from $\set{0,1}^T$ ), but is restricted to only $T^2$ possible binary codes. This shows that the temporal dynamic has a noticeable limiting effect on the number of regions created by the SNN, as discussed by the reviewer.
As a side note, we agree with the reviewer that the temporal constraints are helpful to understand the shift behavior observed in Section 5.2. In fact, we were able to prove (after the submission) that in the non-leaky case, the shift converges to the input current. In the leaky case, it seems that the shift still 'tries' to approximate the input current, but the approximation cannot be accurate. This leads to a periodic output spike pattern and thus the observed periodic shift behavior.
[1]: *Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks*, T. Bu, W. Fang, J. Ding, P. Dai, Z. Yu and T. Huang, ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, and I will keep my score. | Summary: The manuscript investigates the theoretical expressivity of discrete-time leaky integrate-and-fire (LIF) spiking neural networks (SNNs) and compares them to conventional analog neural networks (ANNs). The manuscript establish that LIF-SNNs realize piecewise constant functions defined on polyhedral regions and derive upper and lower bounds on the network size required to approximate continuous functions. Furthermore, the paper explores the influence of latency (number of time steps) and depth (number of layers) on the complexity of input space partitioning.
Claims And Evidence: The manuscript claims that LIF-SNNs are universal approximators of continuous functions, that their input space partitioning grows quadratically with latency, and that deeper layers contribute less to expressivity than latency. These claims are well-supported by theoretical derivations. However, while the theoretical framework is compelling, the practical significance of these findings remains uncertain.
Methods And Evaluation Criteria: The manuscript mathematically formalizes the expressivity of LIF-SNNs. The evaluation criteria focus on input space partitioning complexity and approximation capabilities. While these are relevant to understanding the computational properties of SNNs, the experimental validation is limited to static datasets (CIFAR-10), which do not fully exploit the temporal nature of SNNs. It would strengthen the paper to test the framework on temporal tasks (sequential decision-making or event-driven processing) to assess its broader applicability.
Theoretical Claims: The theoretical claims in the manuscript appear to be correct and well-supported. The proofs establish clear upper and lower bounds on the number of neurons required for function approximation and provide a quantitative analysis of input space partitioning complexity. The results extend prior work on Heaviside ANNs and SNNs by explicitly incorporating latency as a factor in expressivity.
Experimental Designs Or Analyses: The experiments effectively illustrate the theoretical findings but are limited in scope. The choice of static classification benchmarks does not fully demonstrate the implications of latency in real-world neuromorphic computing tasks. Additionally, while the paper provides quantitative comparisons between different architectures (width) and time steps, it lacks a direct comparison to continuous-time SNN models studied in previous works, which would be valuable for positioning discrete-time LIF-SNNs within the broader landscape of neuromorphic computing.
Supplementary Material: The supplementary material was partially reviewed.
Relation To Broader Scientific Literature: The manuscript builds on prior research on ANN expressivity and SNN computational power. It extends results on piecewise constant functions from Heaviside ANNs to discrete-time LIF-SNNs, demonstrating their approximation properties and partitioning behavior. The work aligns with recent studies on neuromorphic computing and the role of time dynamics in SNN learning. However, additional discussion on how these results relate to continuous-time SNN models would further contextualize the findings.
Essential References Not Discussed: The paper does not appear to omit any critical references necessary for understanding its key contributions.
Other Strengths And Weaknesses: One of the strengths of this work is its theoretical analysis, which provides insights into how discrete-time LIF-SNNs process information. However, the practical applicability of these results remains uncertain. The paper would benefit from additional discussion on how these theoretical insights could inform SNN training, architecture design, or real-world applications.
Other Comments Or Suggestions: -
Questions For Authors: 1. Could the authors discuss how these theoretical insights could guide training strategies or architectural choices for practical SNN implementations?
2. How do the expressivity results compare to continuous-time SNN models that employ spike response dynamics rather than discrete-time formulations?
3. Could the authors evaluate the partitioning complexity on tasks with actual temporal dependencies (sequential processing or event-driven tasks)?
4. Are there practical implications of the observed quadratic growth in partition complexity with latency, particularly in terms of computational efficiency or network scalability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer V5so for the feedback. Below, we address the concerns individually:
## 1. Training and architectural design
These features of ANNs may benefit from expressivity insights [1,2, Appendix D.1], while today's SNN architectures and training strategies are typically based on those of ANNs. Given that SNNs function differently, one should reconsider the common practices of architectures and training (e.g., increasing depth and latency, applying convolution, batch normalization, pooling, or applying more complex neuron dynamics). Our analysis questions the possible benefits of certain architectural design techniques and relates them to the increase in computation and complexity expense in training and inference. Suggestions for improving the spatial and temporal components of SNN architectures include e.g. NAS [Kim et al, 2022] or DCLS [3]. Starting with our result, one could study how input partitioning influences the training dynamics/performance, e.g., is it advantageous to initialize networks with a dense and complex baseline input partitioning as in ANNs [2]? Convergence speed and generalization in SNNs could also benefit from this strategy.
[1]: *Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective*, Chen, Gong, Wang, ICLR 2021
[2] *Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training*, Milkert, Hyde, Laine, arxiv 2025
[3] *Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings*, Hammouamri, Khalfaoui-Hassani, Masquelier, ICLR 2024
## 2. Continuous- vs discrete-time SNN expressivity
- [Neuman et al 2024] shows that shallow continuous-time SRM-SNNs with temporal coding and linear response function are universal approximators for compactly supported continuous functions. They restrict to single-spike neurons and does not quantify network size.
- [Singh et al 2024; Stanojevic et al 2024] show the ability of similar models in emulating ReLU networks, possibly requiring larger architectures. Due to known results for ReLU networks, one can deduce expressivity results for these SNNs.
- [Comsa et al 2020] shows universality of similar SNNs but with exponential response function. Notably, target function outputs must be bounded below. For $\epsilon$-accuracy, they require $\Theta(n(\Gamma\sqrt{n}/\epsilon)^n)$ neurons, compared to our $\Theta((\Gamma/\epsilon)^n)$.
- For input partitioning, [Singh et al 2024] show that single-output single-layer continuous-time SNNs with linear response realize piecewise linear functions with at most $2^n-1$ pieces. In our case, this bound becomes $(T^2+T+2)/2$ constant regions, independent of input dimension.
However, such direct comparisons of complexity across models may be not entirely fair, as different models realize distinct function classes.
## 3. Partitioning complexity with temporal data
Extending our result to temporal data is challenging, as defining input partitioning is non-trivial. A natural approach is to stack temporal elements and repeat our analysis. While feasible, it is unclear if this approach is meaningful, as the simple stacking process may overlook temporal dependencies. Also, we need to characterize what functions are realized to get a reasonable complexity proxy. Since the output for dynamic data is not necessarily constant, we must reconsider the notion of constant regions.
For now, we analyze static data as a first step toward that goal. As our experiments aim to validate our theoretical results, which do not cover dynamic data, experiments with dynamic data are beyond this paper’s scope.
## 4. Quadratic growth of partitioning complexity
With growing latency, both expressivity and computational complexity increase. This is reflected by practical observations that increasing $T$ leads to performance gains, but inference and training become more expensive in terms of energy and time consumption. Therefore, investigating the trade-off between computational and model complexity is important. The quadratic growth of the input partitioning complexity in latency suggests that expanding SNNs in temporal domain may increase the model complexity, but not at an exponential pace. Moreover, Section 5.2 suggests that the constant regions might be very thin with growing latency. Hence, one could prefer a smarter way to scale up and improve performance, rather than simply increasing the latency. This also aligns with the current trend of designing low-latency SNNs.
Counting regions is just the first step, as it does not provide sufficient practical insights. In classical DL, follow-up studies have been published on various aspects such as shape, size, and evolution over time, which offer valuable practical implications (see Related Works). Similarly, for spiking neural networks (SNNs), it is important to explore these aspects, along with their temporal dependencies. | Summary: Authors theoretically analyse a specific LIF neuron model in discrete time. They observe that they realize piecewise constant functions and quantify the network size required to approximate continuous functions.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes, seem correct.
Supplementary Material: No.
Relation To Broader Scientific Literature: What is relationship to https://pubmed.ncbi.nlm.nih.gov/16474393/.
Essential References Not Discussed: Overall sufficient.
Other Strengths And Weaknesses: I haven't seen this analysis. I am not surprised by the main observation. I haven't seen but also not checked the detailed theoretical analysis.
Other Comments Or Suggestions: None.
Questions For Authors: How does this generalize to other Neuron models.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer QUDk for the feedback. Below, we address the mentioned concerns individually:
## 1. Relationship to the work [1].
From a high level, this work discusses the capacity of neurons under a specific model with time encodings. The capacity in their work is related to the ability of the neuron to learn time-encoded spatio-temporal patterns. This has several key differences with our work, as we list below:
**(1) Model:** The model considered in [1] is a spike-response model with time encoding, which differs from ours in several key aspects. Their approach primarily focuses on a specific time-based encoding for spatio-temporal patterns. In contrast, in the model we consider, time plays a key role in the neuronal dynamics but not in the encoding scheme. Our framework does not rely on a specific encoding strategy tailored to particular data. Instead, it adopts a more general representation of spike patterns, consistent with the use of SNNs as computational models—a perspective reinforced by the widespread adoption of the discrete LIF model in diverse practical applications. Moreover, the methodology in [1] is limited to a single neuron, and therefore does not incorporate the concept of a neural network.
**(2) Capacity measure:** Their measure of capacity appears to be linked to the neuron's ability to learn specific patterns and is therefore closely tied to the learning rule. In contrast, our work does not theoretically address the training process. Instead, our capacity measures are based on function approximation and the study of linear regions. These notions seem largely unrelated to those developed in [1].
**(3) Motivation:** The primary motivation in [1] is to understand the learning process in biological neurons through a spiking neuron model with biologically plausible learning. In contrast, our work focuses on the discrete LIF model, treating it as a computational framework. Although this model is widely used, many of its theoretical properties remain insufficiently explored. Our work aims to contribute in that direction.
In summary, our work differs from [1] in its motivation, modeling framework, and capacity measures.
## 2. Generalization to other neuron models.
First, it should be mentioned that the elementary LIF neuron model discussed in our paper is covered by many more complex (and often more biologically plausible) neuron models in the literature as a special case. For instance, the current-based model (see e.g. [Eshraghian et al., 2023]) can be reduced to the LIF model by setting the synaptic current decay rate to $1$. In this case, our universality result can be directly extended to these neuron models. Certainly, SNNs may gain some benefit in terms of expressive power when being constructed based on a more complex neuron model, which may lead to better approximation rates for certain function classes.
Second, concerning the input partitioning result, the relevant proofs rely mainly on the affine linearity of the synaptic response (given by the weight matrices and biases) as well as the Heaviside spike activation, which are the common trends in SNNs. Thus, we think that our technique might also apply to other neuron models, yet with several technical modifications. As an example, one may consider the current-based model mentioned above. By taking the sum over time steps, one again obtains a time-dependent affine relation between the membrane potential over the first hidden layer and the input vector, which would lead to an analogous input partitioning as in the LIF case.
We would also like to clarify that our focus on the basic LIF model is not a limitation but rather because of its widespread use, simplicity, and computational efficiency. While extending our results to other neuron models, as discussed above, is realistic and beneficial, a more rigorous exploration would require careful consideration, and we leave this for future work.
Finally, our discussion above relies on the time discretization as an essential high-level aspect in the model comparison. We are not aware if the reviewer counts this into the neuron model (i.e. if you are asking for generalization of our results to continuous-time models). A continuous-time model would likely differ crucially from our discrete-time model (please refer to our related works section and also our response to Reviewer V5so), and hence it is not clear to us how to meaningfully generalize our results to continuous time.
[1]: *The tempotron: a neuron that learns spike timing–based decisions.* R. Güter and H. Sompolinsky, Nature Neuroscience 9, 420–428, 2006 | null | null | null | null |
Discriminative Policy Optimization for Token-Level Reward Models | Accept (poster) | Summary: This paper introduces a method to construct token level reward from pairwise data. It is further shown that such rewards can be applied to methods like PPO or REINFORCE. Experiments show that both PPO and REINFORCE can be benefited from the constructed token level rewards and outperforms their counterparts with other baseline reward or reward models.
Claims And Evidence: 1.The paper claims that incorporating RL algorithms with Q-RM improve the performance of RL algorithm over the baselines
2. A higher accuracy in reward models does not necessarily leads to improved policy performance. A fine-grained credit assignment, even not accuracy enough, can still benefit RL training
Methods And Evaluation Criteria: This paper investigates three tasks, mathematical reasoning, reading comprehensive and instruction following. Widely adopted datasets like GSM8K / MATH, QA-feedback and AlpacaEval are used for evaluation.
Theoretical Claims: The proof looks correct though I did not check every details.
Experimental Designs Or Analyses: The experiment covers a wide range of tasks and datasets. Some concerns are listed below
1. It seems that for both PPO and REINFORCE, the author report the detailed training hyper-parameter but lacking the details about how the hyperparameters are selected. Also the training details are missing for methods like DPO / SimPO
2. The author report pass@N on math reasoning tasks. However, pass@N is not a regular metric for math problem solving (because usually the users of LLM are not expected to verify the answer).
Supplementary Material: I briefly checked the proof and training details and all my concerns are raised in the corresponding parts.
Relation To Broader Scientific Literature: Previous works show that the the resulting policy of DPO can be used as token-level reward construction. This paper instantiate a construction procedure and used it for RL training and also conducted experiments to verify its benefit.
Essential References Not Discussed: I do not see any missing of significant previous related work.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and insightful questions on our paper. Below, we address your concerns in detail.
> Q1: It seems that for both PPO and REINFORCE, the author report the detailed training hyper-parameter but lacking the details about how the hyperparameters are selected. Also the training details are missing for methods like DPO / SimPO
A1:Thank you for your valuable suggestions. We conducted a thorough hyperparameter tuning process to determine the optimal values for our experiments.
- **For REINFORCE-based Methods:** We tuned the learning rate within the range [5e-7, 1e-6, 5e-6] and found that 1e-6 consistently yielded the best performance.
- **For PPO-based Methods:** We explored learning rates in the range [1e-6, 5e-6, 1e-5] and identified 5e-6 as the most effective.
- **For Q-RM:** We searched for the optimal $\beta$ parameter from [0.1, 0.2, 0.5] and the $\gamma$ parameter from [1.0, 2.0, 5.0], concluding that $\beta = 0.2$ and $\gamma = 2.0$ produced the best results.
- **For Baseline Methods (e.g., DPO and SimPO):** We followed the recommended configurations from their original papers as well as the default settings from widely used repositories (e.g., [Hugging Face’s TRL repository](https://github.com/huggingface/trl)).
We will include these details in the revised version of the paper.
> Q2: Pass@N is not a regular metric for math problem solving (because usually the users of LLM are not expected to verify the answer).
A2: Thank you for the thoughtful comment. While we agree that users typically expect a single correct answer, we clarify this concern as follows.
**Pass@N** measures the probability that at least one of N sampled outputs is correct. This metric is particularly useful for assessing a model's best-case capability under stochastic generation. **Pass@N** has become an increasingly popular metric for evaluating LLMs on mathematical tasks (e.g., GSM8K, MATH) [1][2][3], as it reflects the likelihood of generating a correct solution within a limited number of attempts. This provides valuable insights into the model’s reasoning ability and output diversity.
To directly address your concern, our study also reports **Pass@1** alongside **Pass@N**. **Pass@1** corresponds to real-world scenarios where only a single response is returned to the user, making it a practical measure of immediate correctness.
[1] Kimi k1.5: Scaling Reinforcement Learning with LLMs. Arxiv, 2025.
[2] rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking. Arxiv, 2025.
[3] Large language monkeys: Scaling inference compute with repeated sampling. Arxiv, 2024. | Summary: This paper aims to develop a token-level reward mechanism in RLHF. The authors achieve this by decoupling the reward model from language generation and constructing a reward model through the optimization of a discriminative policy. Additionally, they provide a comprehensive theoretical analysis of their method. Experimental results demonstrate the superiority of their approach over existing techniques.
## update after rebuttal
The authors' reply address my concerns, and I decide to keep the score.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense.
Theoretical Claims: Yes, I have checked the theoretical claims. Proposition 3.2. provides bounds for $\mathcal{V}(\tau)$, and the authors provides experimental analysis in Appendix. Proposition 3.3 demonstrates that the optimal Q-function and the logit of optimal policy share a linear relationship.
Experimental Designs Or Analyses: The experimental results provide robust evidence that supports the authors’ claims.
Supplementary Material: I have thoroughly reviewed all of the supplementary materials.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to token-wise RLHF for LLMs.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
1. The paper is well-written and effectively communicates the key ideas.
2. The proposed method is clearly presented and easy to follow, making it accessible to readers.
3. The authors provide a thorough theoretical analysis of their method, adding depth to the study.
4. The authors conduct extensive experiments, evaluating their approach using various models and baseline methods, which strengthens the empirical validation.
Weaknesses [Minor]
1. Lines 78–80 mention “under certain assumptions.” These assumptions should be explicitly highlighted and explained in the main body of the paper to ensure clarity.
2. Including error bars in the experimental results would better demonstrate the robustness of the method and provide more insight into the variability of the outcomes.
3. The paper lacks human evaluation.
4. The paper does not provide sufficient details about the hyperparameters required to reproduce the method, which may hinder reproducibility.
Other Comments Or Suggestions: None
Questions For Authors: Please address all weaknesses noted above.
Ethical Review Concerns: Please address all weaknesses noted above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and insightful questions on our paper. We greatly value your feedback and appreciate your recognition of our work's advancements. Below, we address your concerns in detail.
> Q1: Lines 78–80 mention “under certain assumptions.” These assumptions should be explicitly highlighted and explained in the main body of the paper to ensure clarity.
A1: Thank you for your valuable suggestion.
The assumption in question refers to the trajectory entropy of the optimal policy being close to zero, as stated in Proposition 3.2. This condition constrains $\mathcal{V}(\tau)$ within a narrow range, facilitating the effective training of Q-RM using Eq. 17 in the paper. Furthermore, as demonstrated in Proposition 3.3, computing advantage functions with Q-RM is equivalent to utilizing the optimal Q-functions.
We will ensure that this assumption is explicitly highlighted and more clearly explained in the revised version of the paper.
> Q2: Including error bars in the experimental results would better demonstrate the robustness of the method and provide more insight into the variability of the outcomes.
A2: We appreciate the reviewer’s insightful suggestion and agree that statistical indicators can offer a clearer perspective on method stability.
In our study, we evaluate Q-RM across three task categories: mathematical reasoning (GSM8K, MATH), machine reading comprehension (QA-Feedback), and instruction-following (AlpacaEval 2.0). For GSM8K and MATH, we report **Pass@N** metrics, which inherently account for statistical variation by measuring the probability of obtaining at least one correct sample from N generated completions. These values, computed over multiple sampled responses (e.g., N=8,16), serve as empirical success rates over stochastic rollouts, indirectly capturing performance variance in a manner analogous to confidence intervals.
Following the reviewer’s suggestion, we have now included standard error for QA-Feedback and AlpacaEval 2.0 as follows.
**QA-Feedback:**
| Method | Avg | Std Error |
|-----------|--------|-------|
| REINFORCE + ORM | 0.5813 | 0.06 |
| REINFORCE + DPO-RM | 0.5581 | 0.06 |
| REINFORCE + Q-RM | 0.5917 | 0.07 |
**AlpacaEval 2.0:**
| Method | Win Rate (%) | WR Std Error | LC Win Rate (%) | LC WR Std Error |
|---------|--------------|----|---|---|
| REINFORCE + ORM | 7.72 | 0.86 | 11.00 | 0.47 |
| REINFORCE + DPO-RM | 7.96 | 0.88 | 11.87 | 0.50 |
| REINFORCE + Q-RM | 9.96 | 0.97 | 14.21 | 0.53 |
More results will be included in the revised paper.
> Q3: The paper lacks human evaluation.
A3: Thank you for raising this important point. We would first like to clarify our evaluation strategy.
For instruction-following tasks, we employ AlpacaEval 2.0, which utilizes the LLM-as-a-judge framework. This approach has gained traction as a scalable, cost-effective, and reproducible alternative to traditional human evaluation. Recent studies [1][2][3] have demonstrated that LLM-as-a-judge evaluations strongly correlate with human judgments, particularly when using advanced reference models like GPT-4 as the evaluator. This makes it a practical and reliable substitute for large-scale assessments. Furthermore, AlpacaEval 2.0 incorporates length-controlled win rate (LC win rate) metrics to further reduce bias, enhancing the evaluation’s reliability and interpretability.
We acknowledge the reviewer’s point that human evaluation remains the most direct and interpretable method for assessing LLM performance, particularly for subjective or nuanced outputs. However, due to time constraints during this rebuttal phase, we are unable to complete and present human evaluation results within a few days. We will include these results in the revised version of the paper.
[1] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NIPS 2023.
[2] Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators. COLM 2024.
[3] Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering. ISSTA 2025.
> Q4: The paper does not provide sufficient details about the hyperparameters required to reproduce the method.
A4:Thank you for your valuable comment. We provide detailed hyperparameter settings in Appendix D. For baseline methods, we adhere to the recommended configurations from their original papers and the default settings from widely used repositories (e.g., [Hugging Face’s TRL repository](https://github.com/huggingface/trl)). For example, in DPO, the beta parameter is set to 0.1, while in SimPO, beta is set to 1.0 and gamma to 2.0. Moreover, for GRPO, RLOO, and RFT, we generate four responses per instruction, following the approach in [1].
We will further clarify and supplement these details in the revised version of the paper.
[1] Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs. ACL 2024. | Summary: This paper introduces a discriminative Q-function Reward Model (QRM) for token-level credit assignment in RL. Theoretically, the authors show that QRM logits linearly relate to optimal Q-functions, enabling advantage computation without GAE. Empirically, QRM outperforms prior token/sequence-level RMs in tasks like mathematical reasoning and instruction-following when integrated into RL frameworks (PPO, REINFORCE). The work is technically sound, with elegant theoretical derivations and thorough experiments.
Claims And Evidence: **Claim 1**: Equivalence between $Z^*(s_t,a_t)$ and $Q^*(s_t,a_t)$. (Proposition 3.3)
**Feedback**: The connection between Proposition 3.3 and the equivalence claim requires more detailed derivation. For instance:
- Clarify how the linear relationship implies equivalence in practical RL settings.
- Discuss assumptions (e.g., stationarity, policy invariance) needed for this equivalence.
**Claim 2**: Effectiveness across reasoning/QA tasks (Section 5.1).
**Feedback**: While results on GSM8K/MATH are strong, broader validation would strengthen claims:
- For mathematical reasoning task like GSM8K and MATH, authors firstly train Q-RM on their training set and conduct RL training on the same training instructions. The Q-RM is used on in distribution data during RL phase, I suggest RL training on a different OOD instruction dataset, showing the effectiveness of Q-RM on generalization ability.
- Also suggest testing on more OOD benchmarks (e.g., AIME, LiveCodeBench) to assess generalization.
- Explore smaller QRM backbones (e.g., 7B/13B) paired with policy models of comparable size. Using 70B QRMs limits practical applicability and raises scalability concerns.
Methods And Evaluation Criteria: - **Model Sizes**: The choice of 70B QRM backbones across all experiments lacks justification. Smaller QRMs (e.g., 7B) paired with same-scale policies would better demonstrate efficiency.
- **Policy Ablations**: Standardizing to newer models (e.g., Llama-3-8B-Instruct) compared to 1B choice would be better.
Theoretical Claims: The theoretical framework is compelling but would benefit from:
- A step-by-step proof linking Proposition 3.3 to practical advantage equivalence.
- Discussion of approximation boundaries (e.g., when the linear relationship breaks down).
Experimental Designs Or Analyses: I suggest:
- Include **cross-domain evaluations** (e.g., coding, long-form QA) to test generality.
- Compare QRM with **RewardBench**-calibrated models to isolate reward modeling improvements.
Supplementary Material: I have reviewed the Appendix.
Relation To Broader Scientific Literature: The proposed token-level Q-RM seamlessly integrates into existing reinforcement learning frameworks, beating previous token-level RMs or sequence-level RMs and PRMs.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**:
- Novel theoretical grounding for token-level credit assignment Q-RM.
- Strong empirical results across multiple tasks.
**Weaknesses**:
- Limited scalability analysis (QRM/policy size mismatch).
- Narrow benchmark scope.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Why is KL divergence incorporated into advantage calculation (line 238) rather than adding KL loss on $L^{CLIP}$.
2. Can QRM and policy models share weights (e.g., via a 7B backbone) to reduce memory overhead? Like actor and critic sharing most weights.
3. Have you evaluated QRM on **RewardBench** to quantify its reward modeling accuracy vs. baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable feedback. Below, we address your concerns in detail.
> Q1: Regarding equivalence claim in Proposition 3.3 and assumptions.
A1: We address it from two key aspects:
- According to **Proposition 3.3**, the optimal Q-function and the optimal logits from Q-RM differ only by a constant offset. The left-hand side of Eq. 18 is exactly the definition of the advantage function, using $Z^*(s_t, a_t)$ to estimate the advantage is equivalent to using $Q^*(s_t, a_t)$.
- According to **Proposition 3.2**, we assume that trajectory entropy $\mathcal{H}^*(\tau)$ is close to zero, which constrains $\mathcal{V}(\tau)$ within a narrow range. This allows Q-RM to be effectively trained using Eq. 17.
We will incorporate these clarifications into revised paper.
> Q2: Regarding the effectiveness of Q-RM on generalization ability.
A2: Thank you for your valuable suggestion.
In response, we assess Q-RM's OOD generalization by training ORM, DPO-RM, and Q-RM on the MATH pairwise dataset and using OOD dataset Math10K [1] as RL training instructions. Reward models are based on LLaMA-3-70B-Instruct, while policies are based on LLaMA-3.2-3B-Instruct. As shown in the table below, Q-RM consistently outperforms ORM and DPO-RM, demonstrating its effectiveness in handling OOD scenarios. We will incorporate these OOD results in the updated paper.
| Method | GSM8K Pass@1 | ASDiv Pass@1 | SVAMP Pass@1 |
|-|-|-|-|
| REINFORCE+ORM | 66.19 | 83.05 | 83.06 |
| REINFORCE+DPO-RM | 64.14 | 82.72 | 83.20 |
| REINFORCE+Q-RM | 69.67 | 84.01 | 84.88 |
[1] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. EMNLP 2023.
> Q3: Regarding the model size of Q-RM.
A3: We address it from the following perspectives:
- **Model Size of Q-RM.** It is well established that larger models are more effective for ORM compared to smaller ones. Moreover, this work focuses on training a fine-grained reward model for RL training, which presents a greater challenge than ORM. To ensure broad effectiveness across different policy optimizations, a robust backbone for the reward model is essential. Therefore, we selected the 70B model as the backbone for Q-RM.
- **Additional Experiments.** To further analyze the impact of model size on Q-RM, we use LLaMA-3-8B-instruct as the backbone for both Q-RM and policy, comparing it against ORM and DPO-RM. As shown below, Q-RM remains effective with smaller architectures but excels with the 70B backbone, supporting our choice. Additional results will be added to revised paper.
| Method | GSM8K Pass@1 | MATH Pass@1 |
|-|-|-|
| SFT | 63.68 | 30.37 |
| REINFORCE+ORM (8B) | 71.57 | 34.53 |
| REINFORCE+DPO-RM (8B) | 72.81 | 33.41 |
| REINFORCE+Q-RM (8B) | 74.28 | 38.00 |
| REINFORCE+Q-RM (70B) | 79.98 | 41.23 |
> Q4: Regarding KL loss on $\mathcal{L}^{CLIP}$.
A4: While some works (e.g., [1]) incorporate KL divergence directly into the final training objective $\mathcal{L}^{CLIP}$, it is more common practice [2][3][4] to integrate it into the trajectory rewards. Therefore, in the PPO+Q-RM training framework, we followed this widely adopted approach.
[1] Deepseekmath: Pushing the limits of mathematical reasoning in open language models. ArXiv 2024.
[2] REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models. ArXiv 2025.
[3] Delve into PPO: Implementation matters for stable RLHF. NeurIPS Workshop 2023.
[4] https://github.com/huggingface/trl
> Q5: Can QRM and policy models share weights?
A5: Yes, Q-RM can share a backbone with the policy model to save memory, but this limits generalizability and reduces learnable parameters, potentially weakening reward signal capture. Thus, we chose separate backbones for Q-RM and the policy model.
> Q6: Have you evaluated QRM on RewardBench to quantify its reward modeling accuracy vs. baselines?
A6: Q-RM focuses on finer-grained rewards for RL training, unlike ORM's sequence-level rewards, so we did not originally test it on RewardBench. However, to address this concern, we evaluated Q-RM against ORM and DPO-RM using RewardBench.
As shown in the table below, Q-RM performs worse than ORM and DPO-RM in terms of pairwise accuracy. This result aligns with Q-RM’s objective, which **prioritizes fine-grained reward modeling and improved policy optimization over maximizing pairwise accuracy**.
Furthermore, high pairwise accuracy in reward models does not necessarily translate to better policy optimization [2]. This is further illustrated in Figure 2(b) of our paper, where Q-RM, despite having the lowest pairwise accuracy, achieves the highest policy optimization performance.
| Method | Chat | Chat Hard | Safety | Reasoning |
|-|-|-|-|-|
|ORM|96.63|61.84|78.78|87.21|
|DPO-RM|94.69|59.39|77.84|84.67|
|Q-RM|92.91|56.14|71.41|82.21|
We will include the results and discussion in the revised version of the paper.
[1] What Makes a Reward Model a Good Teacher? ArXiv 2025. | Summary: When training LLMs with RL, it matters to have fine-grained credit assignment to tokens. The current popular algorithms like GRPO and RLOO all assign the same credit to all tokens in a response. This does not seem ideal at all. There have been recent works in improving this. Q-RM suggests that we compute these fine-grained credits based on the logits of another LLM, a discriminative policy, which is basically trained on preference data in case of RLHF tasks or correct vs incorrect responses in mathematical reasoning tasks with DPO. They show that the logits of a discriminative policy can provide credits to generated tokens and therefore providing better training signal for RL training of LLMs.
Claims And Evidence: The main claim of the paper: Using Q-RM to provide token level credit enhanced RL training of LLMs.
Evidence: Increased pass@1 performance in GSM8K and MATH.
However, I have some concerns regarding if the evidence supports the claim.
1-Based on Nvidia's reported numbers for Llama-3.2-3B-Instruct, it seems the model already achieves pass@1 48.0 accuracy on MATH dataset as reported https://build.nvidia.com/meta/llama-3.2-3b-instruct/modelcard. However, in the paper the highest reported achieved accuracy on MATH is reported as near 35%. This is lower than the initial model. What is the reason for this discrepancy? That is my main criticism towards the results as it suggests something fundamental may be missing in evaluation or loading these models. Am I understanding this correctly?
I have also a concern about the claim. The Q-RM is a 70B model that will undergo training with DPO. I don't have a problem with that. However, I am wondering if the 70B model is being somehow distilled to the small model.
Methods And Evaluation Criteria: I think the benchmarks are reasonable. However, I am not much a fan of averaging GSM8K performance and MATH performance in the tables. GSM8K is significantly simpler than MATH and averaging results on these two is not a very sensible thing to do. However, that is a minor criticism. In general, these two are the most important for reasoning. The field is changing as they are getting saturated but they are fine for now.
Theoretical Claims: I have not checked them for correctness but they make sense to me. It makes sense that under an optimal discriminative policy, the logits actually provide good rewards for the tokens as the optimal discriminative policy can basically tell where the tokens are incorrect. However, the paper argues this is kind of a credit assignment. I am a bit confused by this. In my mind, credit assignment mostly refers to correct estimation of advantages. However, in this sense, the Q-RM is providing rewards which mostly indicates whether the reported results are incorrect or correct and it is less about the advantage of a token. I hope the authors can clear this up for me.
Experimental Designs Or Analyses: The only criticism I have is why the authors went for the 70B model as their reward model. This raises questions about distilling the capacities of the 70B model to the smaller model. I understand that other methods that Q-RM is compared to are given the same advantage though. However, I still cannot understand what was wrong with using the same model trained on preference datasets as the Q-RM?
Supplementary Material: I have not read the supplementary material.
Relation To Broader Scientific Literature: There is huge interest in RL for LLMs now and how we can improve them. The literature is adopting methods in RL to improve the current algorithms as what is currently SOTA in RL for LLM is actually quite simple compared to ideas tested in RL setups. Therefore, I think this work is very relevant to the current literature.
Essential References Not Discussed: There is a paper I remember that discusses the importance of credit assignment when doing RL for LLMs which I think is quite related: VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment. This paper shows that improving the credit assignment by estimating the advantage via multiple MC rollouts really enhanced RL for LLM training.
Other Strengths And Weaknesses: I think I listed all the things I have in mind in the above.
Other Comments Or Suggestions: I don't have a comment.
Questions For Authors: I have listed my questions already. However, I will list them again for easy reference:
Q1) Is there any reason to go for the 70B model as the RM to train a 7B model? Can't you just use the same base model for both RM and Policy? I know the 7B is the instruct, but the instruct could be also fine-tuned with DPO. Isn't that correct?
Q2) Is there any good reason for the feac that pass@1 reported on the Nvidia is much higher than what is reported in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable feedback. Below, we address your concerns in detail.
> Q1: Regarding results discrepancy on MATH dataset.
A1: We clarify that the discrepancy in results for Llama-3.2-3B-Instruct on MATH can be attributed to the **differences in evaluation frameworks**. Nvidia's reported results for Llama-3.2-3B-Instruct are based on Meta's official evaluation framework, as detailed in [documentation](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/eval_details.md). While our paper adopts OpenCompass, which is widely used in recent research [1~3]. Below, we outline the key differences between these two frameworks:
- **Use of LLM-as-Judge**: Meta's evaluation employs an extra LLM to act as a judge, using an `equivalence template` to determine whether two expressions are equivalent. This introduces an external dependency into the evaluation process. OpenCompass does not rely on an extra LLM for evaluation.
- **Use of Latex Parser**: Meta's framework utilizes Sympy to parse and verify the correctness of LaTeX equations. OpenCompass does not incorporate such parsing mechanisms, which can lead to lower reported accuracy due to stricter evaluation criteria.
- **Prompting and Sampling Strategy**: Meta's framework employs zero-shot CoT prompting with top-p sampling, while we adopt OpenCompass to use a zero-shot greedy decoding without additional CoT templates, eliminating the influence of different prompting strategies and randomness on the evaluation.
In summary, we selected OpenCompass for its widespread use in recent studies [1-3], ensuring comparability. In response to the reviewer, we are working to reproduce the results using Meta's framework. Since this requires retraining the policy models, it will take some time, and we will provide an update within 48 hours.
[1] Compression Represents Intelligence Linearly. COLM 2024.
[2] Internlm2 technical report. Arxiv 2024.
[3] Internlm-math: Open math large language models toward verifiable reasoning. Arxiv 2024.
> Q2:Regarding Q-RM training with DPO. Is the 70B being distilled into the smaller model? Use same base model for both RM and Policy.
A2: Thank you for your thoughtful comments. We address them from the following perspectives:
- **Q-RM is Not Trained Using DPO.** We would like to clarify that Q-RM is not trained using DPO. In this work, Q-RM is trained using the proposed discriminative policy, as outlined in Equation 17 of the paper. This approach explicitly decouples reward modeling from language generation. Q-RM and DPO both use preference data but differ in how they model token-level rewards.
- **Model Size of Q-RM.** It is well established that larger models are more effective for ORM compared to smaller ones, and training fine-grained reward models for RL is even more challenging. To ensure broad effectiveness across different policy optimizations, a robust backbone for the reward model is essential. Therefore, we selected the 70B model as the backbone for Q-RM.
- **Relationship with Knowledge Distillation.** Our approach follows standard reward modeling in RL, focusing on fine-grained supervision for policy optimization. Unlike knowledge distillation, we don't directly distill the reward model into the policy.
- **Additional Experiments.** To further analyze the impact of model size on Q-RM, we use LLaMA-3-8B-instruct as the backbone for both Q-RM and policy, comparing it against ORM and DPO-RM. As shown below, Q-RM remains effective with smaller architectures but excels with the 70B backbone, supporting our choice. Additional results will be added to revised paper.
| Method | GSM8K Pass@1 | MATH Pass@1 |
|---|---|---|
| SFT | 63.68 | 30.37 |
| REINFORCE+ORM (8B) | 71.57 | 34.53 |
| REINFORCE+DPO-RM (8B) | 72.81 | 33.41 |
| REINFORCE+Q-RM (8B) | 74.28 | 38.00 |
| REINFORCE+Q-RM (70B) | 79.98 | 41.23 |
> Q3: Regarding the averaging results on GSM8K and MATH.
A3: We agree that GSM8K is simpler than MATH, so averaging their results isn't ideal. Our original intent was to report overall results on math domain. To avoid confusion, we will present separate results for GSM8K and MATH in Table 1 without averaging their performance.
> Q4: Regarding the credit assignment in Q-RM.
A4: We acknowledge that credit assignment primarily pertains to the estimation of advantages, and we agree that Q-RM does not directly compute advantages. However, as demonstrated in Proposition 3.3, the rewards generated by Q-RM maintain a linear relationship with the optimal Q-function. This relationship enables the estimation of advantages using Q-RM. We will clarify how Q-RM facilitates credit assignment in the revised paper.
> Q5: Regarding the related paper.
A5: Thank you for referencing VinePPO. We recognize its key insight on accurate credit assignment in RL fine-tuning of LLMs and will cite it while discussing its findings in our revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answers. I understood that I did not understand the method correctly. Given the fact that I really tried to understand it, I think maybe the method section of the paper is not written clearly. I read the paper again and the equation 17, and I still don't think one can understand from the text that that equation is the objective. I think the paper could have be written much more clearly.
I appreciate the effort that you put into your reply and it makes me feel bad not to increase my score. However, I still think the writing of the paper is the main bottleneck:
1-Writing about what the actual method is. It seems very obscured. I remember I really tried to understand it and then my understanding was wrong and I am not that much of a lazy reader.
2-Writing on why going for a bigger model to train the smaller model is OK in experiments. I think it is better to explain it as at its current form it raises concerns.
Thank you again.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We agree that the clarity of the methodology section is crucial. To address your concerns more comprehensively, we provide detailed responses below:
**Regarding Clarifying the Methodology**
We plan to make the following revisions in the final version of the paper:
+ We will restructure Section 3.3 to explicityly state that Equation 17 represents the final training objective for Q-RM.
+ We will add a subsection to outline the overall training objective for Q-RM before diving into implementation details.
**Regarding Using Larger Model to Train Small Model**
Our explanation is as follows:
+ Larger models are more effective for reward modeling compared to smaller ones. And in this work, we focus on training a fine-grained, token-level reward model for RL training, which presents a greater challenge than outcome-based reward modeling. The increased difficulty arises because token-level reward modeling requires evaluating and assigning rewards at a much finer granularity (at the level of individual tokens), rather than assessing the overall outcome of a sequence. This makes the modeling process more complex and demanding. To ensure robust and broad effectiveness, we select 70B model as backbone for all reward models in the experiment.
+ This design also meets the current growing demand for lightweight models deployment. While large models excel in performance, they require high computational costs. By training smaller policies to align with larger reward models, we aim to achieve inference efficiency while approximating the performance ceiling of large models.
We will incorporate this discussion into the revised version.
**Regarding Evaluating Results Using Meta's framework**
Additionally, we provide updated results for Llama-3.2-3B-Instruct on the MATH-500 benchmark using Meta's framework, as shown in the table below. These results exhibit a similar trend to Table 1 in the paper, demonstrating that the improvement achieved by our method is independent of the evaluation framework.
| Method | MATH-500 Pass@1 |
| --- | --- |
| SFT | 47.01 |
| REINFORCE+DPO-RM | 48.54 |
| REINFORCE+Q-RM | 51.94 |
We hope these revisions and clarifications address your concerns effectively. Thank you again for your valuable feedback! | null | null | null | null | null | null |
The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning | Accept (poster) | Summary: This paper reduces communication overhead in FL by enhancing low-rank decomposition techniques. The authors focus on three key issues: what to decompose, how to decompose, and how to aggregate. They propose three techniques respectively: Model Update Decomposition (MUD), Block-wise Kronecker Decomposition (BKD), and Aggregation-Aware Decomposition (AAD). The paper provides both theoretical analysis and empirical studies on benchmark datasets.
## update after rebuttal
My concerns are mainly addressed during rebuttal. Thus, I will keep my positive rating.
Claims And Evidence: The claims made in this paper are well-supported by both theoretical proofs and experimental validation.
Methods And Evaluation Criteria: This paper focuses on improving communication efficiency in FL by proposing three techniques that comprehensively improve low-rank decomposition. The proposed methods have been validated on multiple benchmark datasets.
Theoretical Claims: The claims are well-supported by the proofs in general.
Experimental Designs Or Analyses: The authors conduct experiments on four datasets, comparing the proposed methods against traditional low-rank decomposition techniques and sota pruning and quantization methods. Ablation studies and hyperparameter analysis are also conducted.
Supplementary Material: The code is provided in the supplementary material with a clear structure.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: This paper adequately discusses essential literature related to low-rank decomposition in FL.
Other Strengths And Weaknesses: Strengths:
1. Communication overhead in FL is a critical problem. This paper identifies three key challenges in low-rank decomposition for FL and proposes three novel techniques to address them.
2. The paper provides a rigorous theoretical analysis, proving that FedMUD achieves better convergence than the traditional method.
Weaknesses:
1. The authors claim that BKD achieves $rank(W)\leq\min\{m,n\}$. I suggest including a more detailed explanation.
Other Comments Or Suggestions: I suggest the authors provide a more detailed discussion on the rank upper bound of BKD.
Questions For Authors: In Section 5.5, the authors discuss the significant impact of initialization on low-rank decomposition performance. I am curious whether traditional low-rank decomposition methods in FL also face this issue. If so, how do they typically determine the initialization size?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Hi Reviewer u9mP:**
We sincerely appreciate your valuable feedback. Below, we address each of your comments in detail. For additional experimental results, please refer to the anonymous link: **https://anonymous.4open.science/r/fedmud_rebuttal-962F**.
***Q1: "a more detailed discussion on the upper rank bound of BKD"***
**R1:**
According to your suggestion, we provide a detailed discussion on the upper rank bound of BKD.
1. Theoretical Analysis: We leverage the properties of the Kronecker product ($rank(A \otimes B) = rank(A) \cdot rank(B)$) and matrix concatenation ($rank(A|B) \leq rank(A) + rank(B)$). Suppose $A, B \in \mathbb{R}^{a \times b}$, then $W = A \otimes B \in \mathbb{R}^{a^2 \times b^2}$. By the nature of the Kronecker product, $rank(A \otimes B) = rank(A) \cdot rank(B) \leq \min\\{a^2, b^2\\}$, which corresponds to the full rank of a matrix with dimensions $(a^2, b^2)$. Furthermore, BKD can be viewed as the concatenation of multiple Kronecker product matrices. Given that each submatrix retains full rank, the concatenated BKD recovery matrix also achieves full rank.
2. Experimental Validation: We compare the rank of model updates obtained via matrix multiplication and BKD. The results, presented in **Figure 10** in the above link, demonstrate that BKD produces near full-rank model updates, significantly outperforming standard matrix multiplication.
We will incorporate above disscussion into the paper.
***Q2: "how do other low-rank decomposition methods handle the initialization of low-rank matrices"***
**R2:**
Other low-rank decomposition methods, such as FedLMT and FedPara, also suffer from initialization variance.
Hyeon-Woo et al., in FedPara, employ the default He initialization [1] but acknowledge that "investigating initializations appropriate for our model might improve potential instability in our method."
Similarly, Liu et al., in FedLMT, state that "the performance of low-rank models can be boosted through a customized initialization called spectral initialization [2]," which "uses SVD to initialize low-rank model parameters."
In our experiments, to ensure a fair comparison, we adjust the initialization size across different methods. As shown in Figure 4 in our paper, BKD mitigates the sensitivity of the model to initialization variance to some extent. We attribute this to the fact that $Var(AB)=r\cdot Var(A)\cdot Var(B)$ (where $A\in\mathbb{R}^{a\times r}, B\in\mathbb{R}^{r\times b}$, assuming both are i.i.d.), while $Var(A\otimes B)=Var(A)\cdot Var(B)$. Clearly, the variance of the reconstructed matrix using matrix multiplication is also influenced by $r$, making it more unstable.
In this paper, we did not delve into the impact of low-rank matrix initialization on model performance. This is an interesting and meaningful direction that we will continue to pay attention to and conduct possible explorations.
[1] FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning. ICLR 2022.
[2] FedLMT: Tackling System Heterogeneity of Federated Learning via Low-Rank Model Training with Theoretical Guarantees. ICML 2024.
Below, we address a general issue regarding novelty raised by other reviewers, and we hope this will provide you with new insights.
***Q3: "novelty of MUD and difference with LoRA"***
**R3:**
The core difference between MUD and other model update decompositions is that our model update is for several training rounds, not the entire training process.
Let's take LoRA as an example to explain the difference and relationship between FedMUD and LoRA in the federated fine-tuning scenario. This also applies to other technologies with similar ideas.
1. Assuming the pre-trained weight is $W_p$, LoRA will learn $W_p+AB$ during the entire FL training process, where $W_p$ is frozen. $A$ and $B$ are trainable parameters, which represent the model update during the entire FL training process.
2. In FedMUD, we learn the model update every $S$ rounds. Reset Interval $s$ is a hyperparameter of fedMUD (the default value is 1), which means that every $S$ rounds, we manually add the model update $AB$ to the pre-trained parameter $W_p$ and reinitialize the submatrix. In this way, the parameters of the final model can be expressed as $W=W_p+\sum_i^{T/s}A_SB_S$, where $T$ is the number of training rounds. By training different low-rank increments in different rounds, we can achieve better accuracy with the same amount of communication.
3. When we set $S$ of FedMUD to $\infty$ (or any value larger than the number of communication rounds $T$), which means there is one low-rank increment during the entire training process, FedMUD and FL-LoRA are equivalent. In fact, FL-LoRA is equivalent to FedLMT.
Hopefully the above explanation can clarify for you the difference between how MUD and LoRA and other methods handle model updates. | Summary: The paper focuses on enhancing communication efficiency in FL by improving low-rank decomposition techniques. The authors identify three key issues: what to decompose, how to decompose, and how to aggregate. To address these, they propose three novel techniques: decompose only model updates, block-wise Kronecker decomposition, and ways the matrices are assembled by the server. These techniques improve convergence speed and accuracy compared to existing methods.
Claims And Evidence: They are clear but some of the claims are questionable. I disagree with all of the claims regarding the model update decomposition. The original LoRA paper proposes to min f(W+AB) which means that only model updates get a low rank decomposition. Other FL low-rank algorithms also assume the same setting. Unless I'm missing something, the first contribution is not a contribution. To the contrary, it's common practice.
Methods And Evaluation Criteria: They are okay. Very standard small datasets. It would be great to apply it for find tuning of an LLM.
I wonder why a standard CNN wasn't used (ResNet18 for example) instead of a 'out-of-nowhere' architecture.
Theoretical Claims: I didn't check the proofs. The statement is a typical convergent statement and thus I does I have fairly high confidence in the proof.
The analysis of the algorithm with Kronecker and aggregation would be much more interesting.
In general, convergence analyses are fairly standard. A more interesting question: with the same number of trainable parameters, compare convergence rates of Kronecker vs standard AB decomposition.
Experimental Designs Or Analyses: I did. See Methods And Evaluation Criteria for comments.
Supplementary Material: I did but not in detail.
Relation To Broader Scientific Literature: See my other remarks regarding LoRA and other FL low-rank algorithms.
Essential References Not Discussed: Nothing to report.
Other Strengths And Weaknesses: FedMUD is never formally stated. Based on theory it seems not to include Section 3.2. It's unclear if it includes Section 3.3.
If it includes only Section 3.1, based on my LoRA comment, it questions the contribution of the analytical convergence proof.
I think the idea of using the Kronecker product is a great one. Unfortunately the authors don't do enough with it; it is obfuscated with other aspects such as model update decomposition whose novelty is questionable.
Other Comments Or Suggestions: Lines 84 to 88, left column: maximizing the rank is crucial: I'm not sure where this is coming from. In practice and as demonstrated in many paper, low-rank is enough.
Questions For Authors: 1. how is model decomposition different from LoRA and many offsprings?
2. Why not fine tuning of an LLM in experiments?
3. Why not ResNet18 (or a different standard model)?
4. The assumption of bounded weights is questionable. In practice gradient clipping is used because the weights are not bounded (unless being forced to be bounded).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Hi Reviewer Gugc:**
We sincerely appreciate your valuable feedback. Below, we address each of your comments in detail. For additional experimental results, please refer to the anonymous link: **https://anonymous.4open.science/r/fedmud_rebuttal-962F**.
***Q1: "datasets and models"***
**R1:**
In response to your question, we have added experiments with other model architectures (ResNet18, ViT) and datasets (TinyImageNet). Specifically, we test ResNet18 on CIFAR-10 and TinyImageNet, and ViT (pretrained on ImageNet) on CIFAR-10. We compared only FedAvg, FedLMT/FedHM, and FedMUD+BKD+AAD. The results (**Figure 8** in the above link) further validate the superiority of our method. By the way, due to limited time, we did not test on LLM. But we conducted experiments on a transformer-based model (ViT) and proved that our method should be applicable to LLM. We will include the complete experimental results in the revised paper.
***Q2: "FedMUD's formal statement"***
**R2:**
We have formally defines FedMUD in Section 3.1, line 159: *Federated Learning with Model Update Decomposition (FedMUD)*. Notably, FedMUD refers to using only MUD, excluding the BKD and AAD designs. For better presentation, we will add its formal definition in Section 1 of the revised paper.
***Q3: "difference between model update decomposition (MUD) and LoRA"***
**R3:**
Please kindly refer to **R3** from reviewer **u9mP** for our response.
***Q4: "BKD is obfuscated with other aspects such as MUD"***
**R4:**
Table 1 presents the accuracy results using FedMUD as the base framework, with BKD and AAD as plugins. The role of BKD is indeed obfuscated with that of MUD or AAD. However, Figure 3 provides an ablation study on the reset interval. Notably, when the reset interval equals the total number of rounds, MUD no longer contributes. In this case, FedMUD+BKD reflects the performance of BKD alone. For further details, please refer to Figure 3 in our paper, which intuitively demonstrates the superiority of BKD over the standard decomposition.
***Q5: "convergence rates of BKD"***
**R5:**
The convergence of BKD is indeed a noteworthy problem. Theorem 1 focuses on the convergence of FedMUD (excluding BKD and AAD), demonstrating that FedMUD converges faster than FedLMT. However, comparing the convergence of BKD with methods using matrix multiplication is challenging. In convergence analysis, unifying the number of parameters across different decomposition operators is difficult, making a direct comparison of convergence rates infeasible. We will continue to explore solutions to this problem to provide stronger theoretical support for BKD.
***Q6: "whether low-rank is enough in practice"***
**R6:**
In few cases, low-rank approximations are sufficient, such as in LoRA fine-tuning of LLM on small datasets. However, in most scenarios, low-rank methods are primarily used to reduce computational costs, model size, and communication overhead rather than because they are inherently sufficient. Conversely, many approaches aim to achieve higher matrix ranks using a limited number of parameters. Such as FedPara [1] uses Hadamard product to increase matrix rank, whereas MeLoRA [2] enhances the rank of LoRA by arranging smaller matrices along the diagonal.
Besides, Figure 5 in our paper reveals that as the rank, controlled by the compression ratio, increases, model accuracy improves. This observation confirms that higher ranks contribute to better accuracy. Thus, low-rank approximations are generally insufficient, particularly for communication compression.
[1]FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning, ICLR 2022
[2]MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning, ACL 2024
***Q7: "bounded weights assumption"***
**R7:**
Your concern regarding the bounded weights assumption is valid. However, FedMUD ensures its reasonableness by periodically reinitializing the low-rank matrices:
1. First, we assume that the gradients are bounded, which is a reasonable and commonly adopted assumption in convergence analysis, as discussed in lines 245–247 of the paper.
2. The bounded weights assumption applies to the low-rank matrices $U$ and $V$ during local training. In FedMUD, $U$ and $V$ are updated via gradient descent for a finite number of steps, after which they are manually added into the base model parameters and reinitialized. This process prevents the unbounded growth of $U$ and $V$.
3. To further substantiate the feasibility of this assumption, we provide a theoretical justification. Let $U$ represent a matrix with an initial value $U_0$ of dimension $n$, an upper gradient bound $G$, and $\tau$ gradient descent steps before reinitialization. Then, $U_t = U_0 +\eta\sum_{i=1}^t g_i$, where $t\leq \tau$. Thus, $\Vert U_t\Vert^2\leq (t+1) \left[\Vert U_0 \Vert^2+\eta^2 \sum_{i=1}^t\Vert g_i\Vert^2 \right] \leq (\tau+1)\left[\Vert U_0\Vert^2+\eta^2\tau^2 G^2\right]$.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the answers. I have no further questions and comments.
---
Reply to Comment 1.1.1:
Comment: **Hi, Reviewer Gugc:**
Thank you again for your time in reviewing our paper. We also sincerely appreciate your acknowledgment of our rebuttal responses. As you indicated no remaining questions and comments, we would be truly grateful if you could reconsider your evaluation score. Your generous reconsideration would mean a lot to our research team.
**Follow-up reply (AoE Time: April 6, 2:45 AM):**
Hi, Reviewer Gugc! We understand that you have a busy schedule, but we sincerely hope you can reconsider your score in light of our rebuttal. If you still have any concerns, we would greatly appreciate your feedback so we can further improve our work. Thank you again for your time and effort!
**Follow-up reply (AoE Time: April 7, 1:15 AM):**
As the rebuttal deadline is approaching, we wanted to kindly check if you might have had a chance to reconsider the evaluation score based on our rebuttal response. If there are any remaining concerns we could address, we would be more than happy to provide additional information. Your feedback is invaluable to us, and we deeply respect your perspective.
Thank you again for your support throughout this process. We understand your schedule is demanding, and we sincerely appreciate your reconsideration.
**Final Kind Reminder (AoE Time: April 8, 3:00 PM)**
Dear Reviewer Gugc,
As the rebuttal period is about to close, we wanted to express our sincere gratitude once again for your time and thoughtful feedback throughout this process. Your insights have been invaluable in strengthening our work.
If you might have a moment to revisit your evaluation score based on our responses, we would be truly grateful. We fully respect your expertise, and even a modest adjustment would mean a great deal to us.
Of course, we understand how demanding your schedule must be, but want to kindly highlight this final opportunity. Either way, we deeply appreciate your contributions to improving our paper. | Summary: The authors introduce a novel communication-efficient federated learning algorithm that integrates Model Update Decomposition (MUD), Block-wise Kronecker Decomposition (BKD), and Aggregation-Aware Decomposition (AAD). This approach is particularly well-suited for training large neural networks, which commonly involve high-dimensional tensors. They provide a theoretical analysis of the local method applied to a restricted function $h_i(x)$, where $x$ represents the trainable low-rank updates to the neural network parameters. The proposed algorithm is further validated through extensive computational experiments.
Claims And Evidence: The authors effectively support their claims with extensive computational experiments. However, I find Theorem 1 somewhat detached from the main flow of the paper. Although the caption of Figure 1, the discussion on discrepancies between equations (6) and (7), and the introduction of AAD collectively refute the direct aggregation of updates, the proof of Theorem 1 appears to analyze the opposite. If my understanding is correct, this discrepancy is particularly evident in equation (18), which forms the basis of the reviewer's concern. Moreover, the theoretical analysis seems to focus on MUD with truncated SVD rather than with BKD.
Methods And Evaluation Criteria: Proposed methods and evalution criteteria seem adequate.
Theoretical Claims: No, I did not.
Experimental Designs Or Analyses: The experiments appear to be valid.
There is a minor concern regarding the validity of linearizing the stack of convolutional layers in the CNN model used in the experiments. Intuitively, flattening all convolutional kernels and then reshaping them back may introduce significant approximation errors. However, I do not consider this to be a major issue.
Supplementary Material: I reviewed section B.1.
Relation To Broader Scientific Literature: The paper falls within the broad area of communication-efficient federated learning (CEFL) algorithms. Unlike approaches that employ well-defined compression operators—whether unbiased or biased—which typically first compute an intermediate update based on the iterate and loss before applying the compression operator, this work belongs to a subfield of CEFL where optimization and compression occur simultaneously. A notable example from this subfield is FedLMT, which directly decomposes the neural network layer matrix rather than the update to it.
Essential References Not Discussed: Related work is adequately described.
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: 1. The abbreviation AAD is introduced on line 74 but is only defined later in the text.
2. It is unclear how BKD is applied in equation (8). Specifically, how is the model update $\Delta W$ distributed between the two summations when combining MUD+BKD+AAD?
3. In line 230, the phrase *"L-smooth with a constant $L_s$"* should be revised to *"$L_s$-smooth."*
4. In line 235, the notation $l$ in $\nabla F_{i, l}$ is ambiguous. Does it refer to the layer number?
5. The algorithm description is missing from the main text and appears only in the appendix, in equation (15).
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **Hi Reviewer fwVG:**
We sincerely appreciate your valuable feedback. Below, we address each of your comments in detail. For additional experimental results, please refer to the anonymous link: **https://anonymous.4open.science/r/fedmud_rebuttal-962F**.
***Q1: "The target object of Theorem 1"***
**R1:** Sorry for the confusion caused and hope to clarify this misunderstanding with the following points:
1. In this paper, we propose three modules to enhance low-rank decomposition performance: MUD, BKD, and AAD. MUD serves as the foundational framework, and FedMUD refers to the training scheme using only MUD. BKD and AAD are plugins that further improve the rank of model updates and avoid aggregation errors. When using BKD and AAD, we will add corresponding labels as shown in Table 1 in our paper (e.g., +BKD or +AAD).
2. Theorem 1 analyzes the convergence of FedMUD (without considering BKD and AAD). Its purpose is to highlight the difference between MUD and full-weight decomposition (i.e., FedLMT vs. FedMUD). By introducing the reset interval $S$ (the number of rounds before adding model update to base weights and then reinitializing low-rank matrices), we establish the relationship between FedLMT and FedMUD to compare their convergence rates. Note that FedLMT is equivalent to FedMUD with $S=\infty$.
3. We did not conduct a theoretical analysis of BKD and AAD. Instead, we further added experiments (**Figures 10-11** in the above link) demonstrating their effectiveness in increasing the rank of recovered model updates and avoiding aggregation errors. We will revise the paper to clarify these points and consider integrating Section 4 into Section 3.1 to avoid ambiguity.
***Q2: "concern about reshaping convolutional kernels"***
**R2:** Reshaping the convolutional kernel into a 2D matrix for low-rank decomposition is a method employed by FedLMT and [1], representing a common approach to decomposing convolutional layers. This technique aims to reduce computational complexity from $\mathcal{O}(k^2 c_{in} c_{out})$ to $\mathcal{O}(kr(c_{in} + c_{out}))$. Therefor, for a fair comparison, we follow their processing approach of convolutional kernels.
[1] Initialization and Regularization of Factorized Neural Layers, ICLR 2021.
***Q3: "definition of abbreviation AAD"***
**R3:**
Thank you for raising this point. The definition of AAD is provided in Figure 1(c), though it may not be obvious. We will include a clearer, formal definition of AAD at an appropriate location in Section 1.
***Q4: "how BKD is applied in Eq.(8)"***
**R4:**
Eqs (8) and (9) describe the process of recovering weights and aggregating submatrices using only AAD. When combined with MUD, the local weights of client $i$ are given by:
$W_i = W_g + [U_i(\widetilde{V})^\top + \widetilde{U}(V_i)^\top]$
where $i$ is the client index, $W_g$ is the latest global model parameter, and $U_i(\widetilde{V})^\top + \widetilde{U}(V_i)^\top$ represents the local model update. $\widetilde{V}$ and $\widetilde{U}$ are random noise terms, determined by a unified random seed, which are untrainable and identical across all clients.
When incorporating BKD, the matrix multiplication of $UV^\top$ is replaced by the BKD operator ($\otimes_B$), defined as follows:
$W_i = W_g + [U_i\otimes_B\widetilde{V} + \widetilde{U}\otimes_BV_i]$
After local training, client $i$ only needs to send the trained $U_i$ and $V_i$ to the server for aggregation, regardless of whether standard matrix multiplication or BKD is used.
***Q5: "algorithm description"***
**R5:**
Thank you for your suggestion. We will add a detailed description of our algorithm in the main text, as well as an overall algorithm table for a clearer presentation.
***Q6: "meaning of $l$ in $\nabla F_{i,l}$"***
**R6:**
$l$ denotes the network layer, and $i$ is the client index. We will include a detailed and prominent explanation in the paper.
***Q7: "the expression of $L$-smooth"***
**R7:**
Thank you for the correction. We will revise the description about $L_s$-smooth in the paper accordingly. | Summary: This paper introduces three techniques to enhance low-rank decomposition for communication-efficient federated learning (CEFL):
**Model Updates Decomposition** (MUD), **Block-wise Kronecker Decomposition** (BKD), and **Aggregate-Aware Decomposition** (AAD). Each method addresses specific challenges—what to decompose, how to decompose, and how to aggregate. Theoretical analysis is provided to prove convergence, and experiments across multiple datasets (FMNIST, SVHN, CIFAR-10, and CIFAR-100) are conducted, demonstrating superior accuracy compared to baselines.
Claims And Evidence: - Claim 1: MUD reduces information loss by decomposing only model updates rather than full parameters.
- **Evidence**: Theoretical discussion and experimental results show reduced compression errors and better accuracy.
- **Concern**: MUD is not novel, as similar ideas exist in LoRA[1] techniques, EvoFed[2], SA-LoRA[3], FFA-LoRA[4], and MAPA[5], which also freeze model parameters and only compress updates. The authors fail to highlight these connections.
- Claim 2: BKD enhances the rank of decomposed matrices, improving information preservation during compression.
- **Evidence**: Theoretical rank bounds and experimental validation are provided.
- **Concern**: BKD novelty is limited, as improved rank and parameter efficiency are explored in EvoFed through *reshaping* into single vector and *partitioning*. There is little justification for why this particular approach is better than existing alternatives.
- Claim 3: AAD mitigates implicit aggregation errors introduced by low-rank approximations.
- **Evidence**: Theoretical formulation and ablation studies validate reduced bias in aggregation.
- **Concern**: AAD is novel, but it doubles communication costs compared to freezing one matrix. There is no comparison of whether this additional cost is justified by improved performance. (Accuracy per Communication analysis)
[1] LoRA: Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models. arXiv 2021." arXiv preprint arXiv:2106.09685 (2021).
[2] EvoFed: Rahimi, Mohammad Mahdi, et al. "EvoFed: leveraging evolutionary strategies for communication-efficient federated learning." Advances in Neural Information Processing Systems 36 (2023): 62428-62441.
[3] SA-LoRA: Guo, Pengxin, et al. "Selective Aggregation for Low-Rank Adaptation in Federated Learning." arXiv preprint arXiv:2410.01463 (2024).
[4] FFA-LoRA: Sun, Youbang, et al. "Improving loRA in privacy-preserving federated learning." arXiv preprint arXiv:2403.12313 (2024).
[5] MAPA: Rahimi, Mohammad Mahdi, et al. "Communication-Efficient Federated Learning via Model-Agnostic Projection Adaptation." https://openreview.net/forum?id=rhfOzJzsKN
Methods And Evaluation Criteria: - **Methods**: Introduction of MUD, BKD, and AAD techniques. Theoretical proofs are provided for convergence and decomposition efficiency.
- **Evaluation Criteria**: Accuracy on multiple datasets under non-IID and IID settings. Baselines include FedHM, FedLMT, FedPara, EF21-P, and FedBAT.
Theoretical Claims: 1. **Convergence Guarantees**: Provided under certain and solid assumptions (e.g., L-smoothness, bounded gradients).
2. **Improved Rank and Compression Efficiency**: BKD offers higher rank upper bounds and flexible compression ratios.
3. **Error Reduction**: AAD minimizes bias during aggregation, validated by rigorous mathematical derivation.
**Concern**: While the convergence analysis is provided, additional computational costs are not analyzed, which is critical for understanding scalability.
Experimental Designs Or Analyses: **Datasets:** FMNIST, SVHN, CIFAR-10, CIFAR-100.
**Data Distribution:** Non-IID and IID settings.
**Metrics:** Test accuracy and convergence speed.
**Comparisons:** Against multiple baseline algorithms under controlled hyperparameters.
**Ablation Studies:** Performed to isolate the effects of MUD, BKD, and AAD, as well as the impacts of reset intervals and initialization values.
Supplementary Material: I read details of theoretical analysis and additional results for IID settings, and I skimmed through the code but did not read it thoroughly.
Relation To Broader Scientific Literature: This work fits within the broader theme of communication-efficient FL. Particularly methods that address reducing communication load through the low-rank representation of the model updates, which can extend to LoRA architecture in FL.
Essential References Not Discussed: [1] LoRA: Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models. arXiv 2021." arXiv preprint arXiv:2106.09685 (2021).
[2] EvoFed: Rahimi, Mohammad Mahdi, et al. "EvoFed: leveraging evolutionary strategies for communication-efficient federated learning." Advances in Neural Information Processing Systems 36 (2023): 62428-62441.
Other Strengths And Weaknesses: **Strengths:**
- Introduction of three complementary techniques that tackle distinct challenges in low-rank decomposition.
- Strong theoretical backing with convergence proofs.
- Comprehensive and well-controlled experimental evaluation.
- Detailed ablation studies enhancing the robustness of the claims.
**Weaknesses:**
- **Lack of Novelty in MUD and BKD:** The core concepts behind MUD and BKD closely align with prior work (e.g., LoRA, EvoFed, MAPA). However, the paper does not sufficiently acknowledge these connections or differentiate its contributions.
- **Insufficient Justification for AAD's Overhead:** AAD effectively doubles communication costs, yet the paper lacks a thorough trade-off analysis to justify this overhead.
- **Scalability Not Demonstrated:** Experiments are confined to relatively small models, with no evaluation on larger-scale tasks (e.g., ImageNet) or with more complex FL architectures.
- **Absence of Computational Cost Analysis:** While communication efficiency is discussed, the computational overhead introduced by the proposed methods on both client and server sides remains unaddressed. This is critical for assessing feasibility in large-scale FL deployments.
- **Hyperparameter Sensitivity:** The methods exhibit high sensitivity to initialization values and reset intervals, necessitating extensive fine-tuning. Moreover, the impact of parameters such as *rr* or *kk* (rank or BKD constant) on training dynamics and accuracy-communication trade-offs is not clearly analyzed.
Other Comments Or Suggestions: There are no additional comments or suggestions at this time.
Questions For Authors: **Overhead Analysis:**
- What are the memory and computational overheads of the proposed methods compared to baseline approaches?
**Communication Costs in AAD:**
- Is the additional communication cost of AAD over Freezing justified across all scenarios, or is it contingent on the dataset and FL setting?
**Scalability Considerations:**
- Can the approach scale to larger datasets and model architectures, such as ImageNet with Transformers?
- How does low-rank factorization perform with architectures beyond MLP and CNN?
**Choice of Parameter _s_:**
- The text suggests that FedMUD reduces to FedLMT when _s = 1_, and also implies that _s = 1_ is often the optimal choice.
- What is the rationale for selecting _s > 1_ in certain scenarios?
- Are there specific advantages to higher _s_ values?
**FedMUD vs. FedLMT Performance:**
- If FedMUD reduces to FedLMT with _s = 1_, why does Table 1 indicate that FedMUD outperforms FedLMT?
- Shouldn’t their accuracies converge in this case?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Hi Reviewer W27g:**
We sincerely appreciate your valuable feedback. Below, we address each of your comments in detail. For additional experimental results, please refer to the anonymous link: **https://anonymous.4open.science/r/fedmud_rebuttal-962F**.
***Q1: "novelty of MUD and difference with LoRA"***
**R1:**
Please kindly refer to **R3** from reviewer **u9mP** for our response.
***Q2: "novelty of BKD and difference with EvoFed"***
**R2:**
There may be some misunderstandings, and we offer the following clarifications:
1. After carefully studying Evofed, we found that it does not involve reshaping or partitioning vectors, nor does it use low-rank decomposition. In essence, Evofed substitutes the transmission of model parameters with the use of distance similarity between the trained model parameters and a noise-perturbed model population.
2. The primary focus of BKD is not on reshaping or partitioning vectors, but rather on the innovative application of Kronecker products to enhance the rank of model updates. While the block structure in BKD serves as a partitioning mechanism, its main purpose is to enable dynamic compression strength, rather than being the key factor in improving efficiency.
Therefore, we assert that the innovation of BKD is worthy of recognition.
***Q3: "AAD's doubles communication costs compared to freezing one matrix."***
**R3:**
AAD does not double communication costs. The following explanations clarify this:
1. Compared to standard decomposition ($UV^\top$), AAD ($U\tilde{V}^\top + \tilde{U}V^\top$) does not increase the communication volume. This is because $\tilde{V}$ and $\tilde{U}$ are noise terms determined by random seeds and are not updated. Thus, the communication volume remains #$param(U)$ + #$param(V)$.
2. The comparison in Table 2 is made under identical communication conditions, meaning that with the same communication volume, AAD outperforms the approach of freezing a matrix.
These points will be emphasized in the revised version of the paper.
***Q4: "additional computational costs."***
**R4:**
As you noted, while our focus is communication compression, our approach incurs additional computational and memory overhead. We reported the local training time of different methods. Results (**Table 5** in the avove link) show that our method does not excessively increase local training time. On the contrary, the disadvantages of additional computing time overhead can be offset by the benefits of efficient communication and will not affect the acceleration of federated training.
***Q5: "concern about scalability"***
**R5:**
Please kindly refer to **R3** from reviewer **Gugc** for our response.
***Q6: "hyperparameter sensitivity:"***
**R6:**
Our hyperparameters typically do not require manual tuning. The following clarifications are provided:
1. Our method introduces three additional hyperparameters: compression strength ($r$ or $k$), reset interval ($s$), and initialization size ($a$).
2. Compression strength: The values of $r$ and $k$ are determined by the desired compression strength and do not require further adjustment.
3. Reset interval: Our experiments and analysis indicate that larger values of $s$ lead to lower accuracy. As such, $s$ should be set to 1 by default without modification. To better demonstrate the mechanism of MUD, we include experiments showing the accuracy reduction associated with larger values of $s$. Therefore, $s$ does not need to be treated as a tunable hyperparameter, and there is no sensitivity issue.
4. Initialization size: While the initialization of sub-matrices affects model accuracy, this issue is inherent in all low-rank methods. As shown in Fig. 4 of our paper, BKD effectively reduces the sensitivity of the model to initialization size. For further details, please refer to **R2** from reviewer **u9mP**.
***Q7: "choice of parameter s and question about FedMUD vs. FedLMT Performance:"***
**R7:**
FedMUD reduces to FedLMT when $s\geq T$, where $T$ is the number of training rounds, and $s$ denotes the number of rounds for resetting model updates (i.e., adding a submatrix to the base parameters and reinitializing the submatrix). In MUD, the final model parameters are expressed as: $W=W_0+ \sum_i^{T/s} UV^\top$.
As shown in Figure 3 in our paper, model accuracy generally decreases as $s$ increases, which is consistent with Theorem 1. However, compared to $s=1$, slight improvements in accuracy are occasionally observed for $s=2$ or $s=4$, which can be attributed to data heterogeneity. Specifically, a larger interval allows the low-rank matrices to observe more diverse data before being added to the frozen parameters, thereby mitigating the effects of data heterogeneity. However, as the interval increases further, previously learned knowledge may be forgotten or overwritten, leading to a decline in accuracy.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the answers.
I am satisfied with the answers for **Q1**, **Q2**, **Q4**, **Q6**, and **Q7**.
I still think the **R3** and **R5** did not answer my concerns.
**Q3**:
My concern was not the comparisons of $UV^\top$ to $U\bar{V}^\top + \bar{U}V^\top$, but the comparison of **ADD** to the case where one of the matrices is random like $U\bar{V}^\top$ or $\bar{U}V^\top$.
I believe in this case, the communication cost is either $param(U)$ or $param(V)$, while ADD has a communication cost of $param(U) + param(V)$.
From your answer and the results in Table 2, I understand that ADD has higher convergence accuracy at the end of the training.
However, my question is, given ADD additional communication overhead, compared to $U\bar{V}^\top$ or $\bar{U}V^\top$, does it achieve **higher accuracy per communication**?
It would be convincing if the reviewer could provide plots for accuracy per communication or a table indicating the minimum number of rounds and minimum communication cost to reach certain levels of accuracy in each case, such as total communication at 70% and total communication at 80%.
I think the only case in which ADD can have higher accuracy per communication would be if ADD converged twice as fast as freezing one matrix.
**Q5**:
I think there was a mistake; the answer **R3** from the reviewer **Gugc** is unrelated to my questions and, in fact, refers to another response.
I believe there should be a time and memory complexity analysis that shows the methodology does not demand high computational resources as the number of clients or model parameters increases.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review of our response and your timely feedback. We are glad that our response has addressed some of your questions. However, we apologize for not clearly explaining Q3 and for mistakenly linking Q5 to another question's answer. Below, we provide a more detailed response to these two questions.
***Q3: "AAD's doubles communication costs compared to freezing one matrix."***
**R3-(2):**
To address this concern more intuitively, we compare the performance of three settings:
1. ***AAD*** ($U\tilde{V}^\top+\tilde{U}V^\top$): This setting employs FedMUD+AAD with a 32x compression rate. Here, $\tilde{V}$ and $\tilde{U}$ are randomly initialized and frozen. The communication volume is given by $param(𝑈)+param(𝑉)=param(𝑊)/32$, where $W$ denotes the original weight matrix.
2. ***AAD-Half*** ($\tilde{U}V^\top$): This setting freezes $\tilde{U}$ and trains $V$.
The shapes of $\tilde{U}$ and $V$ remain the same as in *AAD* (setting 1). However, the communication volume is reduced to $param(𝑉)<param(𝑊)/32\approx param(𝑊)/64$.
1. ***Freeze*** ($\tilde{A}B^\top$): Similar to *AAD-Half*, this setting also freezes $\tilde{A}$. However, the matrices $\tilde{A}$ and $𝐵$ are larger in shape compared to $\tilde{U}$ and $V$, ensuring that $para(B)=para(W)/32$ (i.e., 32x compression as *AAD*).
Notably, the relationship between the communication volumes of the three settings follows: ***AAD-Half < AAD = Freeze***.
We compare these three settings based on two criteria: the same number of training rounds and the same communication volume, as illustrated in Figure 13(a-b) of https://anonymous.4open.science/r/fedmud_rebuttal-962F. **Figure 13(a)** shows that under the same number of training rounds, *AAD-Half* achieves the lowest accuracy due to its strongest compression. In contrast, *AAD* not only significantly outperforms *AAD-Half* but also surpasses *Freeze*.
Similarly, under the same communication volume (**Figure 13(b)**), *AAD* remains superior to both *AAD-Half* and *Freeze*. Notably, *AAD-Half* uses more training rounds (i.e., increased computation and updates) under the same communication volume. As a result, its accuracy surpasses that of *Freeze* but still falls short of *AAD*.
By the way, the comparison presented in Table 2 of our paper specifically examines *AAD* and *Freeze*.
***Q5: "concern about scalability"***
**R5-(2):**
Originally, Q5 was an abbreviation of your comment:
*"Scalability Not Demonstrated: Experiments are confined to relatively small models, with no evaluation on larger-scale tasks (e.g., ImageNet) or with more complex FL architectures."*
The correct reply should be **R1** from reviewer **Gugc**. We apologize for the mislabeling as R3 and provide the complete response below:
To verify **scalability**, we have conducted additional experiments with various model architectures (ResNet18, ViT) and datasets (CIFAR-10, TinyImageNet). Specifically, we evaluated ResNet18 on CIFAR-10 and TinyImageNet, and ViT (pretrained on ImageNet) on CIFAR-10. We compared the performance of FedAvg, FedLMT/FedHM, and FedMUD+BKD+AAD. The results (Figure 8 in https://anonymous.4open.science/r/fedmud_rebuttal-962F) further validate the superiority of our method on these models and datasets. The complete experimental results will be included in the revised paper.
Regarding your question on **time and memory complexity** analysis, we previously addressed this in **R4** (Table 5 in the above link). There, we compared the local training time of different methods and found that our approach does not introduce much more computational overhead than other low-rank methods.
To further clarify your concern about **the impact of increasing model parameters on time and memory consumption**, we have now included local training time comparisons for models of different scales in Table 6 in the above link. While Table 5 focused solely on time complexity, Table 6 additionally accounts for memory complexity.
The results in Table 6 are consistent with those in Table 5, both indicating that our method does not introduce excessive additional computations. Notably, compared to ResNet, our method incurs less additional training time on ViT. This is primarily because the Attention mechanism has inherently higher computational complexity than Convolutional Layers, making the relative increase in computation from our method less impactful on ViT’s overall performance. Finally, we note that time and memory complexity are not directly influenced by the number of clients, so we did not consider this factor in our analysis.
**We hope these clarifications have fully addressed your concerns. If so, we would be most grateful if you could reconsider your evaluation score of our paper. Your generous reconsideration would mean a lot to our research team.** | Summary: This paper introduces three novel techniques for Communication Efficient Federated Learning (CEFL) based on low-rank matrix decomposition: Model Updates Decomposition, Block-wise Kronecker Decomposition, and Aggregation-Aware Decomposition, each of which are targetting a specific issue. First, to reduce information loss, a Model Updates Decomposition (FedMUD) approach is developed that involves factorization of model updates rather than entire model parameters. Second, to maximize the rank of the recovered matrix, a Block-wise Kronecker Decomposition (BKD) approach is proposed that partitions the matrix into blocks and uses Kronecker product decomposition and the block structure enables dynamic compression. Finally, to minimize the effect of compression errors during model aggregation stage, the authors propose an Aggregation-Aware Decomposition (AAD) approach that works by decoupling the multiplication of trainable submatrices. These techniques are complementary and can be applied simultaneously to
achieve optimal performance. The authors provide rigorous theoretical analysis of these methods and provide a comprehensive set of experiments to show that the combinations of the above approaches achieve superior accuracy and faster convergence than baseline methods like FedHM (Yao et al., 2021), FedLMT (Liu et al., 2024), FedPara (Hyeon-Woo et al., 2022), EF21-P (Gruntkowska et al., 2023) and FedBAT (Li et al., 2024b).
Claims And Evidence: The authors do a very comprehensive set of experiments that show speed of convergence with communication volume and training rounds of different combinations of FedMUD + BKD + AAD vs baselines as well as quantify the effect of compression ratios, reset intervals and initialization values on the performance of various combinations of FedMUD, BKD and AAD. The main claim in their paper is that their algorithms do a better job of handling information loss, maximizing the rank of the recovered matrices and reducing the effect of compression errors in the model aggregation stage. What the first set of experiments show is that the speed of convergence (Figure 1) is better with FedMUD + BKD + AAD and that the achieved test accuracy is better than the baselines (Table 1). They do not show that their proposed algorithms reduce information loss or maximize the rank of the recovered matrix or reduce compression errors at the aggregation stage *better than the competitors like FedLMT, FedHM and so on*. What the second set of experiments with compression ratios, reset intervals and initialization values show is that combination of FedMUD + BKD + AAD does best in achieving these goals.
Methods And Evaluation Criteria: The choice of benchmark datasets is good and make sense for the problem at hand. Their experiments are comprehensive. It is good to show that speed of convergence of different combinations of FedMUD + BKD + AAD is better than the baselines (Figure 1) and that the achieved test accuracy is better than the baselines (Table 1) across 4 benchmark datasets FMNIST, SVHN, CIFAR-10 and CIFAR-100. They also look at different data distributions (IID, non-IID-1 and non-IID-2) and different compressors: the Rand-K and Top-K compressors. These are the most important experiments. The comprehensive experimental evaluation is why I am inclined towards acceptance of the paper despite the shortcomings I present below.
However, what is missing and really needed are experiments that quantify the effect of compression ratios, reset intervals (where applicable) and initialization values on the performance of various combinations of FedMUD, BKD and AAD *as well as baselines*. In general, we need more additional experiments that backup the claims of the paper: the proposed algorithms FedMUD, BKD and AAD together do a better job of handling information loss, maximizing the rank of the recovered matrices and reducing the effect of compression errors in the model aggregation stage *compared to the baseline approaches*. For instance, what is the experimental evidence that the proposed algorithms maximize the rank of the recovered matrices?
Theoretical Claims: I tried to check the correctness of the proofs up to Eq. 43 in the appendix. I don't see any issues so far.
Experimental Designs Or Analyses: I went through the description of the experimental setup and analysis of the experiments in the paper and examined the plots and tables to judge soundness/validity of the experiment. As the authors state, the initialization of sub-matrices in the decomposition methods can have a significant effect on the model performance and the submatrices were initialized with values drawn from a uniform distribution $U(-a, a)$ with $a \in \{0.01, 0.05, 0.1, 0.5, 1, 5, 10\}$. What would be interesting is to see how varying this initialization value for elements of $U_l$ and $V_l$ affect the performance of FedMUD + BKD + AAD and other baselines. This is key because they are claiming that their methods are more resilient to performance differences due to differences in initialization. Figure 4 shows an ablation study based on this but the curves are not lined up so it is difficult to make a fair comparison between the different methods and the range of values on the x-axis are different from $a \in \{0.01, 0.05, 0.1, 0.5, 1, 5, 10\}$. So it would be helpful to clarify what the numbers on the x-axis of Figure 4 mean.
Supplementary Material: The code looks well-written and I am reasonably sure it runs if the environment is suitable. I tried to run the code and I had some issues. Change the name of the requirement.txt to requirements.txt. You need to add scipy to the requirements.txt. When I run on Ubuntu OS, I had some issues as the file format is not recognized. I tried using dos2unix conversion utility but I get syntatical errors.
Relation To Broader Scientific Literature: Low-rank decomposition (Sainath et al. 2013) is a method for parameter compression in federated learning and works by approximating a matrix by the product of smaller sub-matrices. More specifically, the server sends a low-rank model tot he clients for training and subsequently receives the optimized low-rank model from them. FedHM (Yao et al. 2021) generate a low-rank model by using truncated SVD to the global model but the SVD algorithm introduces approximation errors causing deviations of the client models from the global model over time. To avoid this issue, FedLMT (Liu et al., 2024) directly trains a pre-decomposed global model, eliminating the need for SVD. FedPara (Hyeon-Woo et al., 2022) uses the Hadamard product to increase the rank of the recovered matrices. (Mei et al., 2022) further improves compression by sharing low-rank matrices across multiple layers. The FedMUD approach is an extension of FedLMT in the sense that it also trains a pre-decomposed global model and FedMUD reduces to FedLMT in the case where we have simply low rank matrix factor updates. The BKD approach uses the same concept as FedPara but with block structures to promote dynamic compression.
Essential References Not Discussed: One reference that has not been mentioned is this. This work also employs low-rank matrix factorization to do federated learning. It achieves a $\mathcal{O}(1/T)$ rate for the full-gradient and $\mathcal{O}(1/\sqrt{T})$ rate for stochastic gradients.
Dadras, Ali, Sebastian U. Stich, and Alp Yurtsever. "Personalized Federated Learning via Low-Rank Matrix Factorization." OPT 2024: Optimization for Machine Learning. 2024.
Other Strengths And Weaknesses: The main strength of the paper is the extensive experimental evaluation and three different proposed approaches to mitigate various issues associated with employing matrix factorization to train ML models in a federated setting.
Other Comments Or Suggestions: 1. For Equation 43, you have a missing ^ before the "T" so the subscript doesn't render.
Questions For Authors: 1. In which experiments do you vary $a \in \{0.01, 0.05, 0.1, 0.5, 1, 5, 10\}$?
2. Can you clarify what the numbers on the x-axis of Figure 4 mean. Are they fixed values used to initialize the matrix elements or are they used as hyperparameters for the distribution from which initial values of the elements of $U_l$ and $V_l$ are set?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Hi Reviewer CShD:**
We sincerely appreciate your valuable feedback. Below, we address each of your comments in detail. For additional experimental results, please refer to the anonymous link: **https://anonymous.4open.science/r/fedmud_rebuttal-962F**.
***Q1: "experimental evidence of the functionality of the proposed modules"***
**R1:**
Based on your suggestion, we present experimental evidence supporting the functionality of MUD, BKD, and AAD below:
1. **MUD Reduces Information Loss:** We apply post-training 'model update decomposition' and 'full-weight decomposition' to the locally trained model (of FedAvG) and compare their validation accuracy. Information loss is measured by the loss in validation accuracy. The experimental setup and results are presented in **Figure 9** (see link). As compression strength increases, the accuracy of the full parameter decomposition drops significantly, even falling below the accuracy achieved before local training, which indicates substantial information loss. In contrast, MUD exhibits minimal accuracy loss.
2. **BKD Maximizes the Rank of the Recovered Matrix:** We compare the ranks of the model updates obtained by FedMUD and FedMUD+BKD on FMNIST, at the same communication cost. The results, shown in **Figure 10** (see link), indicate that the rank of FedMUD is significantly smaller than the full-rank level, and is constrained by the parameter $r$ derived from the compression ratio. In contrast, FedMUD+BKD, utilizing the Kronecker product, maintains nearly full-rank performance.
3. **AAD Avoids Aggregation Errors:** We report the errors introduced by aggregating low-rank matrices using methods other than AAD (note that AAD introduces no aggregation error). The specific calculation method and settings are shown in **Figure 11** (see link). The results demonstrate that FedLMT, FedHM, FedMUD, and FedMUD+BKD all exhibit varying degrees of aggregation error. Moreover, it is evident that the aggregation error in the MUD and BKD is considerably smaller than that in the full-weight decomposition model.
***Q2: "which experiments do you vary $a\in\\{0.01,0.05,0.1,0.5,1,5,10\\}$"***
**R2:**
In our experiments, we initialize the submatrix of low-rank methods using a uniform distribution, $U(-a,a)$, where $a$ controls the initialization size. In Section 5.1, we tune $a$ over the values ${0.01, 0.05, 0.1, 0.5, 1, 5, 10}$ for FedLMT, FedPara, and our method. Specifically, FedLMT uses $a=0.05$, FedDMU and FedPara use $a=0.5$, and FedDMU+BKD uses $a=5$. All other experiments follow this configuration, except for the ablation study of $a$ in Section 5.5 (Figure 5).
***Q3: "the curves in Fig. 4 are not lined up and the meaning of the x-axis is unclear"***
**R3:**
In our experiments, we initialized the submatrices of the low-rank methods using a uniform distribution, $U(-a,a)$. The x-axis in Figure 4 represents the parameter $a$, which controls the initialization amplitude.
To investigate the impact of initialization, we conducted detailed ablation studies around the optimal value of $a$ for each method. For FedMUD, the optimal value is $a=0.5$, and the ablation interval is $\\{0.1, 0.3, 0.5, 1.0, 3.0\\}$, and for FedMUD+BKD, the optimal value is $a=5$, and the ablation interval is $\\{1, 3, 5, 10, 30\\}$.
As suggested, we further aligned the curves in Figure 4 (refer to **Figure 12** in the above link). The experimental results still show that BKD requires larger values of $a$, which can be attributed to the differing variance effects of Kronecker products and matrix multiplications. Specifically, for $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{r \times n}$, assuming both are i.i.d., we have $Var(AB) = r \cdot Var(A) \cdot Var(B)$ and $Var(A \otimes B) = Var(A) \cdot Var(B)$. Since $r$ affects and amplifies $Var(AB)$, BKD requires a larger initialization range to match the magnitudes.
Additionally, for the same reason, BKD is more robust to changes in $a$, as the variance is no longer influenced by $r$ ($r$ is determined by the compression ratio).
***Q4: "code reproducibility issues"***
**R4:**
Sorry for the reproducibility issues with our code. Our implementation builds upon the open-source work "FedBAT" (ICML 2024), ensuring replicability. We will update the codebase to include the necessary dependencies and conduct thorough reviews to ensure compatibility across different operating systems and devices.
***Q5: "typo in Eq.(43)"***
**R5:**
Thank you for your meticulous review. We will address the typos in the upcoming version.
***Q6: "One related reference not discussed."***
**R6:**
Although we covered low-rank decomposition in personalized federated learning in the related work section, we acknowledge the oversight of this valuable work. We will include a discussion of it in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for adding the additional experiments, clarifications and updates. I believe that this merits an increase in my score from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for taking the time to review our paper and for your helpful suggestions. We truly appreciate you recognizing our responses and raising your score. Your feedback made a big difference in improving our work! | null | null | null | null |
BCE vs. CE in Deep Feature Learning | Accept (poster) | Summary: The paper compares Binary Cross-Entropy (BCE) and Cross-Entropy (CE) loss functions in deep feature learning, focusing on their ability to enhance intra-class compactness and inter-class distinctiveness. It theoretically proves that BCE, like CE, can lead to Neural Collapse (NC) when minimized, maximizing these properties. BCE is found to explicitly enhance feature compactness and distinctiveness by adjusting decision scores across samples, whereas CE implicitly improves features by classifying samples one-by-one. Experimental results show BCE improves classification performance and leads to faster NC.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: - The experiments mainly focus on CNNs such as ResNet and DenseNet. It would be better to provide more experiments results for Transformer-based models such as ViT.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The proposed method contributes to the broader scientific literature by providing a theoretical analysis of BCE from the perspective of neural collapse.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
- The paper is well-written and organized.
- The paper provides a novel theoretical analysis demonstrating that BCE, like CE, can lead to neural collapse when minimized. This fills a gap in understanding BCE’s behavior.
**Weakness**
- While the paper experimentally validates BCE's advantages on long-tailed datasets, it does not provide a theoretical analysis on imbalanced scenarios.
- The authors rely heavily on "decision scores" to assess compactness and distinctiveness of features, but as acknowledged by the authors themselves, these scores are not direct measurements of feature compactness or distinctiveness.
Other Comments Or Suggestions: - There are some typos. For example, in the abstract, where "the leaned features are compact" should be "the learned features are compact".
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # **Thanks to reviewer RW5r for the comments!**
# Response to “Experimental Designs Or Analyses”: Experiments on Transformer
To further validate the advantages of BCE over CE, we trained ViT [1] and Swin Transformer [2] using BCE and CE on CIFAR10 and CIFAR100. Similar to the experimental setting in Section 4.1 of the paper, we conducted two groups of experiments: 1) with a fixed $\lambda_b=0$, setting the mean of the initialized classifier biases to 0, 1, 2, 3, 4, 5, 6, 8, and 10, respectively; 2) with varying $\lambda_b = 0.5, 0.05, 5\times10^{-3}, 5\times10^{-4}, 5\times10^{-5}$, and $5\times10^{-6}$, respectively, setting the mean of initialized classifier biases to 10.
After the training, we visualized the distributions of unbiased positive and negative decision scores and classifier biases using violin plots for the different ViTs and Swin Transformers. One can find the results through the following anonymous link:
https://anonymous.4open.science/r/BCE-vs-CE-6F45/
From the results, we obtained the conclusions similar to those from CNNs: for models trained with CE, the final classifier biases have no substantial relationship with the unbiased decision scores of the sample features, which primarily depend on their initial values and regularization coefficients $\lambda_b$. In contrast, for models trained with BCE, regardless of the model or optimizer used, there is a clear correlation between the final classifier biases and the unbiased positive and negative decision scores.
These results indicate that the classifier bias of BCE substantially affects the positive and negative decision scores, thereby affecting the compactness and distinctiveness of the sample features, which is consistent with the analysis in the paper.
[1] A. Dosovitskiy, et al, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR2021
[2] Z. Liu, et al, Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, ICCV2021
# Response to “Weaknesses”:
**1. theoretical analysis on imbalanced scenarios**
On imbalanced datasets, the theoretically analyzing for neural collapse of loss functions is a more challenging task. Currently, the most theoretical analyses of loss function with neural collapse are conducted on balanced datasets, such as [1,2,3,4,5]. Some theoretical works [6] on neural collapse even simplify a class of sample features into a single vector. Currently, we only found few papers [7,8] that rigorously analyze the neural collapse for loss functions on imbalanced datasets. We are also investigating the neural collapse of BCE on imbalanced datasets, and we hope to theoretically address this issue in the near future.
[1] Z. Zhu, et al, A Geometric Analysis of Neural Collapse with Unconstrained Features, NeurIPS 2021.
[2] J. Zhou, et al, Are All Losses Created Equal: A Neural Collapse Perspective, NeurIPS 2022.
[3] M. Munn, et al, The Impact of Geometric Complexity on Neural Collapse in Transfer Learning, NeurIPS2024
[4] P. Li, et al, Neural Collapse in Multi-label Learning with Pick-all-label Loss, ICML 2024
[5] J. Jiang, et al, Generalized Neural Collapse for a Large Number of Classes, ICML 2024
[6] J. Lu and S. Steinerberger, Neural Collapse Under Cross-entropy Loss, Applied and Computational Harmonic Analysis, 2022
[7] C. Fang, et al, Exploring Deep Neural Networks via Layer-peeled Model: Minority Collapse in Imbalanced Training, PNAS 2021
[8] H. Dang, et al, Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Features Mode, ICML 2024
**2. decision scores**
Although decision scores do not directly measure the intra-class compactness and inter-class differences of features, they can indirectly reflect these two properties by anchoring to the classifier vectors $[w_k]_{k=1}^K$. In practice, the loss functions based on decision scores, such as CE and focal loss, etc., are prevalent in enhancing the feature properties. Therefore, this paper relies on decision scores to analyze CE and BCE.
Currently, to fill the gap between "decision score" and “feature property”, we are theoretically analyzing CE and BCE, which directly measure the inner product or cosine similarity among sample features in contrastive learning.
# Response to “Other Comments Or Suggestions”: Typos
Thanks to the reviewer for pointing out the typos in the paper. We have conducted a thorough review to avoid such issues in our manuscript. | Summary: This paper shows that binary cross-entropy (BCE) loss, like cross-entropy (CE) loss, can induce neural collapse—maximize intra-class compactness and inter-class distinctiveness in multi-class tasks when the loss reaches its minimum. Through theoretical and empirical analysis, the authors show that models trained with BCE outperform those using CE. This advantage stems from BCE's classifier bias, which explicitly optimizes feature alignment toward class centers, enhancing intra-class compactness and inter-class distinctiveness. This finding reveals that BCE not only promotes beneficial feature distributions but also boosts model performance, outperforming CE in both classification and retrieval tasks due to its ability to learn more compact and separated features.
Claims And Evidence: Yes.
Sections 3 and 4 provide theoretical and empirical analysis respectively.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Section 5 about Transformer and other deep architectures is indeed overly concise, lacking experimental validation. The authors did not personally verify the performance of BCE and CE in Transformer architectures, which diminishes the comprehensiveness and persuasiveness of the study.
Supplementary Material: This paper does not have Supplementary Material.
Relation To Broader Scientific Literature: This paper demonstrates that BCE loss not only fosters a beneficial feature distribution but also boosts model performance. If the conclusion of this study is validated on Transformer-based architectures, it could advance existing research in several critical areas of computer vision, including imbalanced/long-tail learning[1], continual learning[2], noisy label[3] learning, and CV tasks.
[1] J. Shi, et al., Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts, in ICML 2024.
[2] L. Wang, et al., A Comprehensive Survey of Continual Learning: Theory, Method and Application, in TPAMI 2024.
[3] N. Natarajan, et al., Learning with Noisy Labels, in NIPS 2013.
Essential References Not Discussed: No. The related work is sufficient.
Other Strengths And Weaknesses: Strengths:
1. The paper presents a comprehensive theoretical analysis comparing the capabilities of BCE and CE in achieving neural collapse. It demonstrates that BCE can effectively attain high intra-class compactness and inter-class separation, underscoring BCE's robust theoretical potential in feature learning.
2. Beyond its theoretical contributions, the paper delves into the practical distinctions between BCE and CE, particularly in terms of decision score dynamics and boundary updates. The analysis reveals that BCE provides more consistent updates to decision scores, which imposes stronger constraints on feature representations. This practical insight enhances our understanding of how BCE can refine feature learning, offering a clear explanation for its superior performance in improving feature alignment and separation.
Weaknesses:
1. Figure 1 lacks corresponding experimental verification. t-SNE visualizations can be utilized to validate the feature distributions of CE and BCE.
2. The authors focus exclusively on the original softmax formulation in their comparison of CE and BCE, concluding that CE measures the relative values of decision scores, while BCE adjusts the absolute values of positive/negative decision scores uniformly across all samples. What about the cosine similarity used in A-Softmax loss [1] and ArcFace[2]? These methods explicitly optimize the angular margin between classes, potentially leading to different decision scores compared to the standard softmax-based approaches. When employing cosine-based softmax, does BCE still retain its advantages in feature learning?
[1] W. Liu, et al., SphereFace: Deep Hypersphere Embedding for Face Recognition, CVPR 2017.
[2] J. Deng, et al., ArcFace: Additive Angular Margin Loss for Deep Face Recognition, CVPR 2019.
Other Comments Or Suggestions: It is recommended that the authors extend their experimental validation to include other architectures beyond those currently studied. For instance, exploring parameter-efficient fine-tuning (PEFT) methods, such as Visual Prompt Tuning, LoRA, or similar approaches, would significantly enhance the compelling evidence of the study.
Questions For Authors: The abbreviation "LTR" mentioned in Sec. 5 "Transformer and other deep architectures" lacks a clear explanation, and its full form is not provided anywhere in the text.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Thanks to reviewer bxWz for the comments!
# Response to “Experimental Designs Or Analyses”: Experiments on Transformers
As required by Reviewers **bxWz** and **RW5r**, we train ViT and Swin Transformer using BCE and CE on CIFAR10 and CIFAR100, to further validate the advantages of BCE over CE.
The experimental setting and conclusions can be referenced in our response to reviewer **RW5r**.
The results can be found through: https://anonymous.4open.science/r/BCE-vs-CE-6F45/
# Response to “Supplementary Material”:
We submitted a 24-page PDF file as supplementary material in the OpenReview, which includes the metrics used in our experiments, additional experimental results, and detailed proofs of the theorems presented in the paper.
# Response to “Weaknesses”:
**1. t-SNE visualizations for the feature distributions**
We thank the reviewer for suggesting the use of t-SNE visualization to compare the feature distributions of CE and BCE. Using t-SNE, we dynamically demonstrate the feature distributions extracted by ResNet18 trained with BCE and CE on CIFAR10, from the first epoch to the 30th epoch. One can find the results through the following anonymous link:
https://anonymous.4open.science/r/BCE-vs-CE-6F45/
Although both CE and BCE can lead to neural collapse, ultimately maximizing the intra-class compactness and inter-class differences of the features, BCE leads to neural collapse more quickly during the feature learning. We provide a static display of the features from epochs 12 and 13 for both CE and BCE. One can observe that, compared to CE, the features from BCE at these two epochs are already distributed in distinct regions, within a more compact distribution for each class. In contrast, CE shows unclear boundaries between features of different classes, and the feature distribution within the same class is relatively loose.
**2. CE and BCE using cosine similarity and margin**
When using cosine similarity as the unbiased decision score, the CE loss still couples the positive and negative decision scores of every sample within one Softmax, constraining their relative values. In this regard, it is fundamentally no different from the original CE, which uses the inner product as the unbiased decision score.
When an angular margin is added, the Softmax-based CE requires that there be at least a margin between the positive and negative decision scores for each sample. Although this margin is consistent across all samples, only when the marginal intervals between the positive and negative decision scores of all samples overlap, a unified threshold that can distinguish the positive and negative decision scores for all samples exists, to ensure that the positive decision scores of all samples are at a uniformly high level (as we have explained in the response to reviewer **eGz4** regarding the connection between ``uniformly high level’’ and intra-class compactness), while their negative decision scores are at a uniformly low level. In total, introducing the margin can indirectly enhance the intra-class compactness and inter-class disparity of sample features. In contrast, as explained in Eq. 16 of the paper, BCE uses the unified parameters (i.e., classifier biases) to directly constrain the decision scores of samples within the same class to be at uniformly high level or uniformly low level.
In practice, although the gains brought by an appropriate margin to CE are generally higher than the gains of BCE compared to CE, the margin strategies can also be integrated into BCE for higher performance. This is discussed in more depth with experimental validation in SphereFace2 [1] and UniFace [2], both of which take BCE-based loss functions with cosine similarity and margin, and they achieved better face recognition performance than CE-based ArcFace, A-softmax, and CosFace.
[1] Wen, Y., et al. SphereFace2: Binary classification is all you need for deep face recognition. ICLR 2022.
[2] Zhou, J., et al. UniFace: Unified cross-entropy loss for deep face recognition. ICCV2023.
# Response to “Comments Or Suggestions”: experiments with other architectures:
We sincerely appreciate the reviewer for suggesting us to validate the advantages of BCE over CE across more methods and model architectures. We have conducted preliminary explorations of the advantages of BCE over CE using ViT and Swin Transformer. We believe that the benefits of BCE are universal, and in the future, we will consider using BCE in methods including visual prompt tuning and LoRA.
# Response to “Questions For Authors”: LTR
LTR is short for long-tailed recognition, which refers to the recognition tasks on imbalanced long-tailed datasets. We apologize for not including its full name in the paper, and we will add it in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' effort. I apologize for the carelessness. I initially thought the supplementary material was to be included with the main paper, but I later found it in a separate file. After reviewing the supplementary, I found the theoretical proof rigorous and well-structured.
This work makes a valuable contribution by establishing a solid theoretical foundation, which could potentially support more challenging research directions such as imbalanced/long-tailed learning and OOD generalization, to name a few.
Given its contributions, I lean to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your response and increasing the score! | Summary: This paper provides a comparative analysis of Binary Cross-Entropy (BCE) and Cross-Entropy (CE) losses in the context of deep feature learning. The authors investigate whether BCE can lead to neural collapse (NC)—a phenomenon where intra-class variability collapses, class centers form a simplex equiangular tight frame, and classifier vectors align with their class centers. The study offers both theoretical proofs and empirical evidence demonstrating that BCE, like CE, can maximize intra-class compactness and inter-class distinctiveness. Furthermore, the authors argue that BCE explicitly enhances feature properties during training, unlike CE, which implicitly achieves this effect through classification decisions.
Claims And Evidence: Most of claims are well supported by either the theoretical proof or emprical experiments. However, The authors claims that the BCE achieve better feature compactness and distinctiveness, resulting in higher classification performance. However, the better feature compactness and distinctiveness and the relationship with the accuracy is studied on the small dataset, such as CIFAR10 and CIFAR100. For larger dataset like ImageNet, the author only reports the classification, whether this claim is hold on this dataset in unclear.
Methods And Evaluation Criteria: No new proposed methods and evaluation criteria.
Theoretical Claims: I rougly read the proof in the appendix, which use similar proof procedure in the previous work but bot rigorously check the correctness of any proofs for theoretical claims.
Experimental Designs Or Analyses: i check the soundness/validity of any experimental designs or analyses. Most of the metric are adopted from the previous neural collapse study.
Supplementary Material: I review all supplementary material
Relation To Broader Scientific Literature: From my understanding, there is no direct connection with broader scientific literature.
Essential References Not Discussed: There is some Neural collapse papers study different losses functions that are not cited in the papers, like MSE and contrastive loss.
[1] Zhou J, Li X, Ding T, et al. On the optimization landscape of neural collapse under mse loss: Global optimality with unconstrained features[C]//International Conference on Machine Learning. PMLR, 2022: 27179-27202.
[2] Xue Y, Joshi S, Gan E, et al. Which features are learnt by contrastive learning? On the role of simplicity bias in class collapse and feature suppression[C]//International Conference on Machine Learning. PMLR, 2023: 38938-38970.
[3] Jiang R, Nguyen T, Aeron S, et al. On neural and dimensional collapse in supervised and unsupervised contrastive learning with hard negative sampling[J]. arXiv preprint arXiv:2311.05139, 2023.
Other Strengths And Weaknesses: **Strengths**:
1. Theoretical Contribution: The paper provides a rigorous proof that BCE, despite its structural differences from CE, can also induce neural collapse. It offers a mathematical explanation for how classifier biases in BCE constrain decision scores, leading to improved feature compactness and distinctiveness.
2. Empirical Validation: The comparison across different architectures (ResNet18, ResNet50, DenseNet121) ensures the results are not model-specific. The study includes extensive experiments across multiple datasets (MNIST, CIFAR-10, CIFAR-100, ImageNet-1k), showcasing that BCE achieves better intra-class compactness, inter-class distinctiveness, and classification accuracy than CE. Experiments demonstrate that BCE achieves neural collapse faster than CE in the initial training stages.
3. Comprehensive Comparison: The study explains BCE and CE differences geometrically and analytically, providing a deeper understanding of how these loss functions influence feature learning. The authors provide clear visualizations (e.g., decision region comparisons) and numerical metrics (e.g., compactness and distinctiveness scores) to support their claims.
4. Well-written and organization
**Weaknesses**:
1. Limited Discussion on Compactness and Distinctiveness in ImageNet: While the authors demonstrate that BCE improves feature compactness and distinctiveness on smaller datasets such as CIFAR-10 and CIFAR-100, they provide only accuracy comparisons for larger-scale datasets like ImageNet-1k. Although BCE achieves slightly higher classification accuracy than CE on ImageNet, the paper lacks an in-depth analysis of whether BCE also enhances feature compactness and inter-class distinctiveness on large-scale datasets. Given that ImageNet has significantly more classes and higher complexity, it is important to verify whether the same advantages observed on smaller datasets hold in this larger setting.
2. Practical benefits of BCE loss: One of the key arguments in favor of BCE over CE is its ability to explicitly constrain decision scores, leading to bounded feature distributions. However, this advantage is not entirely clear in practical deep learning scenarios. Typically, models trained with cross-entropy (CE) loss already incorporate weight decay as a regularization mechanism, which naturally constrains features within a bounded region. As a result, the explicit bounding effect of BCE may not provide a significant additional benefit beyond what is already achieved through weight decay.
Other Comments Or Suggestions: On page 5, lines 246 and 255, the term "uniformly high level" is unclear, as it lacks a precise definition and an explicit connection to intra-class compactness. It is not immediately evident what aspect of the decision scores is being referred to as "uniformly high" and why this uniformity directly contributes to enhanced compactness within the same class.
To improve clarity, the authors should explicitly define what "uniformly high level" means in the context of decision scores and provide a more detailed explanation of how this uniformity leads to greater intra-class compactness.
Questions For Authors: See above weakness and suggestions
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Thanks to reviewer eGz4 for the comments!
# Response to “Claims and evidence”: feature properties on ImageNet
On the validation set of ImageNet, we calculated the feature properties for ResNet50, ResNet101, and DenseNet161 trained by CE and BCE in Table 3 of the paper, and the results are presented in the table below.
| |R50| R50|R101|R101|D161|D161|
|----|----|----|----|----|----|----|
| |$\mathcal E_{com}$|$\mathcal E_{dis}$|$\mathcal E_{com}$|$\mathcal E_{dis}$|$\mathcal E_{com}$|$\mathcal E_{dis}$|
|CE |82.46|12.21|82.88|**13.14**|78.25|12.04|
|BCE|**82.99**|**12.50**|**83.81**|12.97|**79.09**|**12.05**|
In the table, $\mathcal E_{com}$ and $\mathcal E_{dis}$ stand for the intra-class compactness and inter-class distinctiveness, respectively. From the table, one can find that although the more classes and higher complexity result in a less significant gain in feature properties for BCE, it still enhances the feature properties extracted by the model in most cases compared to CE. Only the inter-class distinctiveness of the ResNet101 trained by BCE has decreased, while the compactness and distinctiveness of other models trained by BCE have improved.
In addition to the above results on ImageNet, we also compared BCE and CE using ViT and Swin Transformer. Please refer to our responses to reviewers **bxWz** and **RW5r**.
# Response to “Essential References Not Discussed”:
We thank reviewer **eGz4** for providing three new references. The first paper analyzes the neural collapse (NC) of the MSE loss, while the latter two investigate NC in contrastive learning. These references are indeed relevant to our research and will help advance our theoretical analysis of the BCE in contrastive learning. Once again, we appreciate the reviewer’s suggestions, and we will cite these papers.
# Response to “Weaknesses”:
**1. Feature properties on ImageNet**
In the response to “Claims and evidence”, we have presented the feature properties of ResNet50, ResNet101, and DenseNet161 trained with CE and BCE on ImageNet. Although ImageNet has more classes and higher complexity, the results shows that the features extracted by the models trained with BCE exhibit better intra-class compactness and inter-class distinctiveness.
**2. BCE and CE with weight decay**
The core argument of our paper is that the classifier bias in BCE plays a significant role in the deep feature learning, while the bias in CE does not, which helps BCE learn features with better properties. In Section 3.2, we theoretically demonstrate this viewpoint through the neural collapse of BCE and CE. In Section 3.3, to provide a simple and intuitive understanding of the above argument, we did not use the weight decay; however, in both the theoretical proof (line 162) in Section 3.2 and the experimental validation (line 258, line 305, and line 322) in Section 4, we did employ weight decay.
Under the same theoretical framework and experimental setting, the theoretical analysis and experimental results in the paper both indicate that when using weight decay, BCE achieves better feature properties and classification results than CE in most cases.
# Response to “Other Comments Or Suggestions”: uniformly high level
"Uniformly high level" refers to the situation where the values expressed by the positive (unbiased) decision scores for different samples are consistently at a high level. For the class $k$, it means that the inner products between the different sample features $[h_i^{(k)}]_{i=1}^{n_k}$ and their corresponding classifier vector $w_k$ are all at high level. We believe that, in this case, the sample features within the class exhibit good intra-class compactness.
To understand this, in first, we incorporated the weight decay in the theoretical analysis of the paper, ensuring that the L2 norms of the sample features do not increase indefinitely during the training. The theorems in the paper have also indicated that the L2 norms of different feature vectors will converge to a fixed value. Secondly, as
$$‖w_k-h^{(k)}‖_2^2 = ‖w_k ‖_2^2 + ‖h^{(k)}‖_2^2 - 2w_k^T h^{(k)},$$
for different sample features of class $k$, when they have the same norm, the large unbiased positive decision scores $\big[w_k^T h_i^{(k)}\big]_{i=1}^{n_k}$ implies that they are close to the classifier vector $w_k$. Therefore, if all the unbiased positive decision scores of different sample features of class $k$ are at a uniformly high level, it means all of these features are all close to $w_k$, resulting in small distances between them and thus high intra-class compactness. Conversely, if there is an obviously difference in the distances of the different sample features from $w_k$, it can be reasonably inferred that there will also be considerable differences in the distances between these features, leading to lower intra-class compactness.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal and additional experiments, my concern is mainly addressed. Therefore, I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for taking the time to review our manuscript and rebuttal. We are glad to have addressed the reviewer’s concerns. | null | null | null | null | null | null | null | null |
SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation | Accept (poster) | Summary: This paper addresses the problem of robotic manipulation with memory. To that end, the authors combine the RVT-2 manipulation framework with the SAM2 memory-enabled segmentation model. I'll only adjust the "coarse" module which predicts the rough spatial region of interest for manipulation; the memory is only incorporated into the coarse module, not the "fine" grip-prediction module. The high-level architectural combination works as follows:
1. RVT-2 takes a bunch of random camera views and synthesizes a point cloud, from which three standard orthogonal camera angles are re-rendered (XY / YZ / XZ planes). There is then a learned component which takes in these RGB images and a language instruction and outputs are three action heatmaps from the same viewpoints. The spatial intersection of the extrusions is the region of interest that is then passed to the downstream "fine" module.
2. The authors want to ultimately enable the integration of SAM2's memory / object tracking capability into this architecture. So they introduce a "SAM2Act" module which produces the action heatmaps. There is some fancy upsampling going on, but the key idea is that they pass the instruction, original RGB images, and the SAM2 embeddings of those images all into a multi-view transformer ("MVT"), which then outputs a latent vector that is upsampled into the heatmaps. This new architecture alone shows some decent performance improvements over RVT-2 without even considering the memory.
3. Now memory is incorporated as follows, independently for each view. The basic idea is that past input image embeddings and output heatmap embeddings are stored pairwise in a FIFO queue. They take the heatmap image embedding output of the MVT and do a few layers of cross-attention with the memory, and then output a new heatmap image. This allows the module to condition on the observation history.
The authors show that the SAM2Act module alone results in a ~5% success rate bump for existing benchmarks, and with memory enabled reduces the failure rate by ~80% on some memory-requiring benchmarks.
Claims And Evidence: I think all the claims made are well-supported by the experiments. Namely:
1. SAM2Act+ enables memory-dependent manipulation where baselines fail.
2. SAM2Act is more robust across different environmental perturbations.
3. SAM2Act outperforms existing policies on established baselines which don't involve memory.
Methods And Evaluation Criteria: The authors compare on both existing benchmarks and propose their own benchmark which relies on memory. The included baseline methods are appropriate (if somewhat limited) and the results are compelling.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Overall I liked found the experiments to be quite thorough, especially the ablations between SAM2Act and SAM2Act+. One additional ablation I would be curious about is the performance of incororporating memory without the SAM2 image encoder step. My intuition here is that these are actually orthogonal, and we could simply have the MVT take in an instruction + RGB images, output an embedding, perform cross-attention with old images & heatmaps (any embedding module could be used here), and then upsample the resulting final embedding. But I don't think this experiment is necessary for the paper to be valuable.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: Vision-Language-Action models are a promising research direction right now in robotics (Octo, $\pi_0$, OpenVLA). These models have generally made markovian assumptions; i.e. the optimal action is independent of past observations given the current observation. This is of course not realistic for real-world autonomy. If a robot turns around, it shouldn't forget what's behind it. To the best of my knowledge, this is the first serious attempt to incorporate memory into these VLA models and comprises an important step for the field.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weaknesses:
1. It's a bit hard to understand the paper as-is without referencing the RVT-2 paper. The figures especially could maybe use some more labels and clarity. For instance, the three action heatmaps in Figure 3 are not labelled and I actually only saw a solid blue square until I looked closely.
2. The baselines are a little lacking (e.g. Octo and related follow-up work).
Other Comments Or Suggestions: 1. I think some of the result summaries are a little editorialized. For example, line 436 states that SAM2Act+ outperforms SAM2Act by a "huge" margin of 37.6%. I find this to be stylistically a little off-putting and would prefer to let the (impressive) numbers speak for themselves. Or perhaps add a sentence emphasizing that the failure rate was "~81% lower than that of SAM2Act and ~83% lower than the RVT-2 baseline," or "less than one-fifth that of SAM2Act and the RVT-2 baseline." This is just my preference, and I think it's also fine if the authors want to keep the language as-is.
2. Conversely, the abstract is dramatically understated: "SAM2Act+ achieves competitive performance on MemoryBench." I'd suggest giving some concrete numbers here to spark reader interest.
3. There's a point of confusion for me about how the higher-resolution embeddings from the SAM2 encoder are passed to the multi-resolution upsampling method in Stage 2. Namely, are the original embeddings of the RGB image passed through, ignoring the memory module? Or is the memory somehow involved here? Specifically, in Figure 3 I'm expecting an arrow from the SAM2 Image Encoder directly to the Multi-Resolution Upsampling block containing the higher-resolution embeddings, bypassing the memory attention and the MVT. I guess these were just left out for simplicity of the diagram?
4. I'd suggest using bold font a little more judiciously in Section 5.
Questions For Authors: 1. What are the cases where the SAM2Act+ still fails on MemoryBench? Is it due to failures in memory (opening the wrong drawer), or failure in execution (trying and failing to open the drawer)?
2. I'm trying to understand the implications of the SAM2Act-fine module not having a memory component. Does that mean that a memory-enabled instruction like "move the doll by grabbing the limb closest to the robot, then grab the teddy bear by the same limb" would not be possible? I don't think this is a major issue if true but I want to check my understanding of the limitations here.
3. Why make the views independent in the memory architecture? It seems that the memory attention module should have the capacity to use or ignore other views as appropriate.
4. Can you speculate about why the robustness is so improved in Section 5.3? Is it the pre-trained knowledge from the SAM2 image encoder that helps distinguish semantic meaning despite visual perturbations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank **Reviewer auqZ** for recognizing the contributions of our work. We are pleased the reviewer found novelty in our method, noting its ~5% improvement in success rate and state-of-the-art performance on benchmarks, as well as our memory components reducing failures by ~80% on memory-intensive tasks. We also appreciate that reviewer acknowledged our comprehensive experiments and thorough evaluation.
Despite this, the reviewer raised several concerns and suggestions, which we address below:
**The baselines are a little lacking.**
We appreciate the reviewer's feedback on the baselines. Our evaluation spans three benchmarks: RLBench (18 tasks), The Colosseum, and MemoryBench. We achieve state-of-the-art results on RLBench and The Colosseum. For MemoryBench, we tested OpenVLA, which obtained a 0% success rate across all tasks (All baselines on our [anonymous website](https://sam2act-icml.github.io/)). OpenVLA demonstrated basic interactions (e.g., approaching objects) but failed key actions like closing drawers or grasping, due to visual ambiguities—identical scenes requiring different actions (e.g., opening vs. closing). Conventional VLAs struggle as they rely solely on visual and language cues without temporal context. Our method integrates timestep information into proprioceptive inputs, resolving these ambiguities effectively. Consequently, even RVT-2 and SAM2Act (without explicit memory) succeed in such tasks, though occasionally erring on subtasks requiring actual memory. Extensive baseline comparisons and ablations confirm our method’s robustness on MemoryBench, highlighting areas for VLA improvement. Full results tables will appear in the final manuscript.
**Clarification on how the higher-resolution embeddings from the SAM2 encoder are passed to the multi-resolution resampling methods in Stage 2?**
We apologize for the confusion and clarify briefly: Multi-resolution embeddings from the SAM2 encoder integrate into Stage 2 as in Stage 1 (see simplified Figure 4). Lower-resolution (res 16) embeddings pass through MVT and Memory Attention (conditioned on Memory Bank embeddings) before multi-resolution upsampling. Higher-resolution embeddings skip MVT and Memory Attention, entering upsampling directly. The resulting action heatmap is encoded into memory, indirectly involving all resolutions in memory processing.
**What are the cases where SAM2Act+ still fails on MemoryBench?**
Our analysis of the reopen_drawer task indicates that failures primarily stem from memory issues rather than execution errors. Specifically, SAM2Act+ often reopens the wrong drawer, showing reliable command execution but difficulty recalling the correct drawer. This aligns with our benchmark's focus on isolating memory-related errors. We found that shadows in the simulation occasionally caused ambiguous visual cues, complicating memory recall. To test this, we regenerated the dataset without shadows and, after retraining, SAM2Act+ achieved 100% success. This confirms that visual ambiguity from shadows was the primary cause of memory failures and demonstrates our approach effectively resolves these memory challenges.
**Can you speculate about why the robustness is so improved in Section 5.3? Is it the pre-trained knowledge from the SAM2 image encoder that helps distinguish semantic meaning despite visual perturbations?**
Similar response to Question 2 from **Reviewer 24ak**; please kindly refer to that for more details.
**For SAM2Act+ why does the fine branch not have the memory component?**
Our design separates coarse and fine branches based on information density. The coarse branch uses scene-level point clouds to generate virtual images and employs a memory module for spatially consistent heatmaps, critical for memory-dependent tasks. The fine branch processes localized, rapidly changing views focused on immediate interactions, where incorporating memory would disrupt spatial consistency.
**Suggestion to refine our figures and clarify our writing and highlighting key results and impacts**
We thank the reviewer for their detailed suggestions and fully agree that their proposed improvements to the figures and manuscript will strengthen our paper. Due to ICML’s no-revision policy during rebuttal, we will incorporate these enhancements in future versions.
**Why make the views independent in the memory architecture?**
We treat views independently in the memory architecture for two reasons. First, the MVT effectively integrates multi-view cues by processing each view both independently and dependently through attention layers. Second, consistent with SAM2’s design, avoiding cross-view dependencies prevents memory complications due to significant camera angle changes. This approach simplifies memory integration and enhances robustness and reliability.
We hope our responses have adequately addressed all of the reviewers' concerns. If so, we would greatly appreciate your consideration in raising your ratings. | Summary: This submission introduces SAM2Act, a transformer-based coarse-to-fine behavior cloning policy for language-conditioned robot manipulation. SAM2Act exploits the multi-resolution features of SAM2 vision foundation model, through a novel up-sampling scheme which enables high-precision manipulation behavior. Additionally, the SAM2Act+ employs a spatial memory module, enabling it to solve more complex memory-based (non-Markovian) tasks. To evaluate the latter, a novel robot manipulation benchmark (extension to RLBench) is proposed, featuring 3 carefully designed tasks that specifically evaluate the model's spatial memory capabilities.
Claims And Evidence: In my opinion, the claims made in this submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: In my opinion, the proposed methods and evaluation criteria are meaningful for the task at hand.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: In my opinion, the experimental design and analysis is convincing.
Supplementary Material: I have fully read the supplementary material and considered it in my review, as if it was part of the main manuscript.
Relation To Broader Scientific Literature: This submission integrates an existing vision foundation model (SAM2) with novel components (multi-view policy and memory architecture) to enhance it with language-guided robot manipulation capabilities. The proposed model formulation is novel and pushes the limits of SOTA in high-precision and memory-dependent manipulation tasks. Furthermore, the proposed spatial memory-based benchmark tasks can be used by the community to provide a more target evaluation of robot manipulation policies on memory-aware tasks.
Essential References Not Discussed: In my opinion, the related works that are essential to understand the key contributions of the paper have been cited.
A more enhanced discussion of the positioning of the proposed methodology to approaches from different manipulation related literature may enhance the understanding of the reader (please see question below for more specific suggestions).
Other Strengths And Weaknesses: Strengths:
- The manuscript studies a very interesting and timely problem.
- The proposed solution is novel and insightful and successfully takes advantage of the latest advances in vision foundation models (offering tremendous generalisation capabilities) integrating them with novel elements that enhance their capabilities on high-precision and memory-aware robot manipulation.
- The proposed benchmark (extension to RLBench) can also facilitate the targeted evaluation of manipulation policies specifically on memory-aware tasks.
Comments (related to questions below):
- World models (e.g. the latest DINO-WM), although serving a different purpose, also typically feature a combination of a visual encoder, with a memory component and policy (although many works tend to adhere to the Markovian assumption). It is unclear how the proposed approach relates to such approaches, or if the proposed model architecture can potentially be adopted for world modelling.
- The choice to build on top of SAM2, although proven effective due to the multi-resolution features provided and ablated against previous versions of SAM, could have been further ablated by the use of different visual encoders (e.g. DINOv2 or DepthAnything features) to indicate the generality of the proposed approach, and offer more insights on which Vision Foundation Models are most suitable for robot manipulation.
- It is unclear whether reopen_drawer and put_block_back tasks of the proposed benchmark effectively do evaluate different aspects of spatial memory (namely 3D vs 2D spatial information) as claimed. This is because the z-axis information of which drawer to reopen can potentially be encoded in the 2D pixel-space, unless if the position of the chest of drawers all together is randomized between the 2 stages of the experiment.
Other Comments Or Suggestions: Not Applicable
Questions For Authors: 1. How does the proposed architecture differ from the architecture that are used to build world models (vision model + memory component) and how can the proposed architecture (due to its similarity) be adapted as a backbone architecture for training world models. It may be beneficial to add this discussion on the manuscript.
2. Is SAM2 the best choice of a vision foundation model to be used as a backbone for the integration of the proposed robot manipulation components (and why) ? How do other vision foundation models compare to SAM2 as visual encoders for the task at hand?
3. Is there any evidence or insights that the proposed benchmark tasks of reopen_drawer and put_block_back evaluate different aspects of spatial memory?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the **Reviewer 24ak** for recognizing the novelty of our approach, specifically in leveraging multi-resolution features from SAM2, and our innovative up-sampling scheme designed to facilitate high-precision manipulation policy learning. Additionally, we are grateful that the reviewer acknowledged our effective use of SAM2's memory mechanism to address complex, memory-based tasks, as well as our carefully designed MemoryBench, explicitly created for evaluating spatial memory capabilities. We are pleased that the reviewer found our claims well-supported by clear and convincing evidence (agreed by Reviewer *yA68* and *auqZ*). Moreover, we thank the reviewer for emphasizing the timeliness and relevance of our research within robotic manipulation(similar to Reviewer *yA68* and *auqZ*), and for highlighting the novel insights our study provides towards training generalist robotics agents(agreed by Reviewer *auqZ*).
Despite this, the reviewer raised several concerns and suggestions, which we address below:
**How does the proposed architecture differ from world models? Can SAM2Act+ be adapted to serve as a backbone for training world models?**
World models (e.g., DINO-WM, TD-MPC) differ significantly from SAM2Act in both objectives and training paradigms. While world models are trained to predict future observations and latent dynamics using various objective functions, SAM2Act is trained via behavior cloning. Although both incorporate memory and visual features, SAM2Act targets more realistic scenarios that violate the Markov assumption, unlike world models as mentioned by the reviewer. Moreover, world models typically decouple visual encoding and memory during training, whereas SAM2Act+ tightly integrates visual embeddings with spatial memory in an end-to-end fashion. While SAM2Act+ could potentially serve as a backbone for world modeling—by replacing the behavior cloning head with dynamics prediction and incorporating self-supervised objectives—that direction is beyond the scope of this work. Due to ICML's no-revision policy during rebuttal, we'll discuss this in a future version.
**Is SAM2 the best choice of a vision foundation model used as backbone for robotic manipulation as compared to other visual encoders?**
Our choice of SAM2 as the vision backbone is strongly justified by both prior work and our own ablation studies. In the SAM-E paper, a thorough evaluation compared various visual encoders—including CLIP, DINO, and the robotics-specific R3M—and found that the SAM image encoder outperformed the others when paired with an RVT-based action sequence prediction module. Since SAM2 builds on SAM by introducing multi-resolution features, it is expected—and indeed confirmed—to deliver superior performance and also supported from in-depth studies from the original SAM2 paper and SAM2-Adapter. To further validate this, we replaced SAM2 with alternative encoders suggested by the reviewer. Our experiments show that when using original upsampling (a fair comparison, as only SAM2 provides multi-resolution embeddings), **DINOv2 achieves 82.2 ± 0.5** and **DepthAnythingV2 achieves 81.1 ± 1.2** on the RLBench 18 tasks. Even the ablated version of SAM2Act, which uses the SAM2 encoder without multi-resolution output, already outperforms both alternatives with a success rate of **84.2 ± 0.9**. Moreover, when leveraging SAM2’s multi-resolution outputs, we anticipate an even larger performance gap in its favor. Thus, the evidence strongly supports SAM2 as the best choice for our application, as its multi-resolution capabilities are uniquely advantageous for the robot manipulation tasks addressed in our work.
**Clarification on whether the reopen-drawer and put_block_back tasks effectively evaluate different aspects of spatial memory (namely, 3D versus 2D spatial information) as claimed?**
We realize our description of spatial memory evaluation may have caused confusion. Our intention with the two tasks is not to distinguish 3D from 2D spatial memory, but rather to isolate distinct components of spatial memory. Specifically:
- **Reopen_Drawer:** This task is designed to assess whether the agent can retain and recall information along the vertical (z-axis) dimension. It tests the agent’s ability to remember which specific drawer was interacted with previously.
- **Put_Block_Back:** This task evaluates the agent’s memory of the horizontal layout (x and y axes), ensuring it can accurately reposition objects within the overall spatial configuration.
Both tasks involve processing 2D pixel input but focus on different spatial dimensions within a 3D environment. By decoupling these aspects, our benchmark evaluates how effectively an agent integrates spatial cues across different axes. We hope this clarifies the confusion regarding the spatial memory tasks.
We hope our responses have addressed all of the reviewers' concerns. If so, we would appreciate your consideration in raising your ratings.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses which clarify my raised comments. I believe that the proposed methodology makes a notable contribution to the community and maintain my Accept score.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for acknowledging and recognizing the notable contribution through this paper. | Summary: The paper introduces SAM2Act, a multi-view, language-conditioned behavior cloning policy for 6-DoF 3D robotic manipulation, which integrates a visual foundation model (SAM2) with a memory architecture to enhance feature representation and task-specific reasoning. SAM2Act leverages multi-resolution upsampling and visual embeddings from SAM2 to achieve high-precision manipulations and robust generalization across environmental and object-level variations. Additionally, the authors propose SAM2Act+, an extension of SAM2Act that incorporates memory-based components (Memory Bank, Memory Encoder, and Memory Attention) to enable episodic recall and solve spatial memory-dependent tasks. The paper also introduces MemoryBench, a benchmark for assessing spatial memory in behavior cloning models. Empirical results demonstrate that SAM2Act and SAM2Act+ achieve state-of-the-art performance across multiple benchmarks, highlighting their effectiveness in complex manipulation tasks and their ability to generalize to unseen perturbations.
**Post Rebuttal:**
Thanks for additional results provided in the rebuttal, most of my concerns are addressed. Overall, the overall technical novelty of this paper is somehow limited. However it's unique contributions (such as the memorybench and analysis on memory-related tasks) could be interesting to some group of the community. I've decided to raise my score to Weak Accept.
Claims And Evidence: All the claims made in the paper are supported by corresponding evidence.
Methods And Evaluation Criteria: The proposed pipeline, including the utilization of SAM2 for improved generalizability and the proposed memory module for tackling memory-specific tasks, is intuitive and makes sense.
The evaluation benchmarks are comprehensive, and supported by further real-world evaluations. The new benchmark proposed, MemoryBench, also addresses important research questions.
Theoretical Claims: No theoretical claims are made in the paper.
Experimental Designs Or Analyses: It would be better for the authors to highlight the difference between *long-horizon manipulation tasks* and tasks that require the trained policy to have both semantic and spatial memory as introduced in the MemoryBench. More detailed discussion would be helpful in highling contributions in this paper.
Supplementary Material: I've watched all videos provided in the supplementary material.
Relation To Broader Scientific Literature: The utilization of SAM2 feature for various downstream tasks is adopted in prior literature [1].
[1] Chen, Tianrun, et al. "Sam2-adapter: Evaluating & adapting segment anything 2 in downstream tasks: Camouflage, shadow, medical image segmentation, and more." arXiv preprint arXiv:2408.04579 (2024).
Essential References Not Discussed: Many designs proposed in the paper are largely inherited from RVT-2[1], except for the memory module. I believe it would be better for the authors to highlight unique designs that distinguished this paper from RVT-2. Otherwise, the overall technical novelty seems to be limited.
[1] Goyal, Ankit, et al. "RVT-2: Learning precise manipulation from few demonstrations." RSS 2024.
Other Strengths And Weaknesses: **Strength:** The proposed MemoryBench addresses an important aspect of the robot policies: the spatial memory capabilities. The evaluation suite is also thoroughly discussed in the paper.
**Weaknesses:** The originality of model architecture design is constrained. Given the SAM2Act module and the coarse-to-fine pipeline is largely borrowed from RVT-2, and the memory module in SAM2Act+ is similar to the tacking module in SAM2.
Other Comments Or Suggestions: None
Questions For Authors: 1. As shown in Table 2 and Table 7, replacing SAM2 with SAM results in a significant performance decline. The generalizability evaluated in Colosseum and the average success rate in RLBench for the SAM-based variant are even lower than those of the RVT-2 baseline. Does this outcome suggest that the performance of the proposed method is largely attributed to the SAM2 encoder? Further clarifications can help better understand technical contribution and novelty of the paper.
2. Could the authors clarify the distinction between long-horizon tasks in general and the tasks in MemoryBench, as well as the unique challenges posed by tasks that necessitate the use of a memory module? Additionally, would the memory module also bring improvements to long-horizon tasks more broadly, such as those in the LIBERO-Long benchmark?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank **Reviewer yA68** for the detailed and insightful feedback. We appreciate the recognition of our innovative use of SAM2 for generalization, the intuitive memory module design, and our comprehensive benchmarking. We're especially grateful for the positive remarks on our real-world evaluations, also echoed by Reviewer *24ak* and *auqZ*. We also value the reviewer’s recognition of MemoryBench as a meaningful contribution to addressing critical research questions, as well as the acknowledgment that our claims are well-supported by the evidence—an assessment shared across reviewers.
Despite this, the reviewer raised several concerns and suggestions, which we address below:
**The model architecture appears to have limited originality, as the SAM2Act module and coarse-to-fine pipeline closely follow RVT-2, and the memory module in SAM2Act+ strongly resembles SAM2’s tracking module.**
We acknowledge that our work naturally builds upon prior research, embracing the principle of 'standing on the shoulders of giants,' much like RVT-2 evolved from RVT through targeted architectural changes, or SAM-E integrated elements from RVT with SAM's image encoder and action-chunking module. Our approach similarly leverages existing architectures but introduces critical, innovative adaptations that significantly enhance performance. In particular, our novel contribution is in effectively addressing the challenging problem of memory, representing one of the first systematic attempts in this area. Our key adaptations that make SAM2Act and SAM2Act+ effective are: (1) Multi-Resolution Upsampling, which leverages SAM2’s multi-scale image embeddings to boost RLBench performance and generalization (Tables 2 & 7), and (2) Memory Task Adaptation, where we extend SAM2’s tracking module to multi-view, multi-step settings by treating action heatmaps as object masks and integrating MVT embeddings into memory. This extension, which required careful design and extensive experimentation, is novel and essential—naively using the base models without these changes fails, as shown in our ablations, Table 7.
**Given the performance drop when replacing SAM2 with SAM—falling below even the RVT-2 baseline on Colosseum and RLBench (Tables 2 & 7)—to what extent does the method's effectiveness rely on SAM2 rather than the proposed contributions?**
We appreciate the reviewer’s observation and agree that the SAM2 encoder contributes significantly to performance. The drop observed when replacing SAM2 with SAM aligns with findings from the SAM2 and SAM2-Adapter papers (to be cited in future version), which show that SAM2’s lightweight Hiera encoder yields stronger embeddings for segmentation and downstream tasks. However, our ablations also demonstrate that the strong performance of SAM2Act arises from the combination of SAM2 with the other novel contributions from our proposed method and not solely from the encoder.
Our primary contributions include innovations like **multi-resolution upsampling**, which adeptly leverages multi-resolution embeddings. As shown in Table 2, the improved generalization on Colosseum is primarily driven by multi-resolution upsampling. Without multi-resolution embeddings, performance matches the SAM-based variant, highlighting our architectural contributions as the main driver of generalization. Moreover, when comparing the SAM-based variant to RVT-2, their overlapping performance intervals (80.8 ± 1.9 vs. 81.4 ± 3.1) indicate no statistically significant differences. In summary, our approach combines several novel contributions that together enhance both performance and practicality.
**Could the authors clarify the distinction between general long-horizon tasks and the specific tasks in MemoryBench, particularly highlighting the unique challenges that make a dedicated memory module necessary?**
Robotic manipulation tasks typically follow the Markov assumption, where the optimal action depends solely on the current observation. Even in long-horizon tasks key information is directly observable and might not need memory. In contrast, MemoryBench is explicitly designed to violate this assumption—tasks are ambiguous, and visually identical states may require different actions based on prior interactions. We appreciate the reviewer’s suggestion of LIBERO, it emphasizes action-based models, which differ from our keyframe-based approach in SAM2Act. To investigate the relation between long-horizon and memory-based tasks, we curated four cube-stacking tasks with increasing complexity and keyframe length. We observed consistent performance degradation for both SAM2Act and SAM2Act+ as horizon length increased, suggesting that memory-based challenges extend beyond task length alone. (Results on our [anonymous website](https://sam2act-icml.github.io/)).
We sincerely hope our responses have adequately addressed all of the reviewers' concerns. If so, we would greatly appreciate your consideration in raising your ratings. | null | null | null | null | null | null | null | null |
Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale | Accept (poster) | Summary: This work proposes a new method to clean data for LM pre-training. Their method is based on a small model, which can use/combine/create functions/programs to clear and transform documents (getting rid of noise and unnecessary things). Empirically, they observe a performance improvement in downstream tasks.
Claims And Evidence: The claims made are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The method makes sense and is novel: using a small LM to clean/modify data, as far as I know, is a novel approach. The authors evaluate their method quite extensively.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experiments and analyses are sound.
Supplementary Material: I have skimmed their supplementary material but have not taken a very close look.
Relation To Broader Scientific Literature: The relation seems clear from their related work section and, from what I get, their method offers novelty in how they use a LM to clean/modify lower quality documents.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: - In Figure 7, what is the main reason why documents get lengthier after cleaning? I would expect the opposite (maybe shorter documents are getting deleted?).
- In the first plot of Figure 7, shouldn't the two areas under the curves be equal, one? One of them seems much greater than the other... maybe you're plotting the frequency and not density?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for recognizing our extensive experiments, method soundness and novelty! We are happy to answer all your questions.
$\textrm{\color{blue}Question 1}$
```
What is the main reason why documents get lengthier after cleaning? I would expect the opposite (maybe shorter documents are getting deleted?)
```
Yes, as you correctly pointed out, the primary reason for the longer average document length in ProX-refined data is that general text corpora—**even after extensive rule-based filtering**—still contain many **short and noisy** documents. These low-quality documents are often discarded during the document-level refining stage, leading to a curated dataset with naturally longer documents.
At the chunk-level refining stage, we observe a 5–10% reduction in total tokens depending on the corpus, and document lengths correspondingly decrease. However, they **still remain consistently longer than those in the original corpus**, indicating that ProX effectively removes noise while retaining substantial and meaningful content.
$\textrm{\color{blue}Question 2}$
```
Maybe you're plotting the frequency and not density?
```
Thank you for your careful reading! After checking it, we do not perform normalization on the Y-axis, thus showing the frequency instead of density. We will correct this typo in the updated version.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your responses! For now, I will keep my overall recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you! If you need any further explanations, please do not hesitate to let us know. | Summary: The paper introduces the ProX framework which trains a model that generates short program instructions to clean pre-training documents.
The authors prompt large-language models to score the quality and format of documents and to remove various website artifacts (header, footers, URLs, and navigation elements). The generated programs unify both document filtering and chunk rewriting.
The paper includes experiments which show that the proposed method boosts the performance of language models when applied to the pre-training data. The method performs well across three model scales (up to 1B) and different pre-training datasets, and outperforms various baselines.
Claims And Evidence: The central claim of superior pre-training performance is well supported by many experiments, even though the paper should compare to stronger baselines (see below). The authors also show that at larger model scale the gains in performance can justify the increased inference cost of running their ProX model on the pre-training corpus.
Another contentious issue is the comparison to methods which directly rewrite documents using LMs. The authors argue that these methods are less scalable, but it is not clear whether a strong 0.7B model might not be sufficient to rewrite data directly and the experiments do not include this relevant baseline.
Methods And Evaluation Criteria: The proposed method is good solution to the practical problem of selecting and rewriting pre-training data. The evaluation of the method follows standard practices from previous papers.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The choice of model sizes, data scales and evaluation tasks is appropriate for an academic paper on pre-training data.
However, I believe that document filtering and chunk-level rewriting are distinct contributions and would benefit from better individual experiments:
1. Table 2 shows how well ProX curates data from a raw corpus. However, it misses the current best curation baselines, FineWeb-Edu and DCLM-baseline. FineWeb-Edu is especially important, since document-level ProX is re-uses the FineWeb-Edu prompt (Appendix A), but using a different threshold and augmented with a format prompt.
The comparison in Table 7 in the appendix would be important to show in Table 2, and the "kept ratio" would also be important for the other methods in Table 2.
2. Table 3 shows how well ProX performs in curating a pre-processed corpus (C4). Here all baselines only perform document filtering, so showing separate rows for either document filtering, chunk rewriting, or both would be important to understand the source of the gain. Similarly, Figure 6 shows how much ProX improves the performance of strong existing corpora (in my view, the most compelling evidence for ProX). Here it would also be very useful to understand whether the performance edge is mostly due to the chunk rewriting or additional filtering.
3. The chunk-level rewriting could be compared to other approaches of rewriting text. A good baseline would training the same ProX base model to generate the cleaned document (with headers, footers, URLs, navigation removed) directly.
Supplementary Material: I reviewed the prompts used to obtain the ProX training data in the appendix and also looked at the examples and error analysis.
Relation To Broader Scientific Literature: Both prompt-based selection and chunk rewriting are established techniques for curating training data.
In fact, the selection strategy is very similar to FineWeb-Edu (Penedo et al., 2024a).
The major novelity is to use structured commands (string replacement and line removal) to modify the pre-training data, which has previously been done by using LMs to rewrite the entire document. While the authors that their method is more scalable (which I am not sure about) and may be more reliable (I am tempted to agree), there is no evidence to compare between these sophisticated methods. The authors also argue that they unify chunk rewriting and document filtering in a single programming framework, but the practical value of this is not clear, especially as multiple models are still used.
Besides novelty, the released models are undoubedly useful resources to the community.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: It would be useful to discuss deduplication as a standard part of the data curation pipeline and its relation to ProX.
Questions For Authors: Can you explain what seed data the fasttext baseline is trained on?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for recognizing the practicality of our method and its strong empirical results across model scales and datasets. For your questions and suggestions:
$\textrm{\color{blue}Question 1}$
```
The comparison in Table7 in the appendix would be important to show in Table2, and the "kept ratio" would also be important in Table 2.
```
Thank you for your careful reading and for recognizing that ProX achieves a higher kept ratio than FineWeb-Edu. We will update the presentation accordingly in our revised version.
Regarding the kept ratio of rule-based filtering methods on **RedPajama-V2**, we summarize the results as follows. Notice that **aggressively ensembling these filtering methods does not show very obvious improvement (+0.2%) but only keeps 50% of the data**.
| Filtering Method | Kept Ratio | Avg. Performance Gain |
|-----------------------|------------|-----------------------|
| C4 | 73.60% | 0.50% |
| Gopher | 70.50% | 0.20% |
| FineWeb | 84.50% | 0.70% |
| C4 + Gopher + FineWeb | 50.00% | 0.20% |
| **ProX-D** | 30.00% | **1.20%** |
| **ProX-D+C** | 25.60% | **2.50%** |
On FineWeb, we observe the following:
| Method | Kept Ratio | Avg. Performance |
|-------------|------------|------------------|
| FineWeb-Edu | 8.60% | 45.2 |
| **ProX** | **28.00%** | 45.2 |
Despite keeping more than **3×** the data, ProX achieves **comparable downstream performance** to FineWeb-Edu, highlighting its effectiveness in retaining high-quality content.
$\textrm{\color{blue}Question 2}$
```
Here all baselines only perform document filtering, so showing separate rows for either document filtering, chunk rewriting, or both would be important to understand the source of the gain.
```
In Table 3, we use only prox's doc refining performance to compare with other baseline data selection methods (please see our experiment setting from line 294 to line 296) This is mainly to show that using a very small LM on quality refinement task is indeed very simple and effective compared to other data selection baselines.
$\textrm{\color{blue}Question 3}$
```
The chunk-level rewriting could be compared to other approaches of rewriting text. A good baseline would training the same ProX base model to generate the cleaned document (with headers, footers, URLs, navigation removed) directly.
```
Thank you for the insightful suggestion. While rewriting is a valid approach to improve pre-training data quality, it incurs high inference costs due to full-document generation and relies heavily on large models. Recent work like WRAP and Nemotron-CC uses models over 7B (e.g., Mistral NeMo 12B). In contrast, ProX improves data quality with much lower compute by using a small 0.3B model and concise outputs. We believe efficient rewriting would require a strong base model and leave this as future work.
$\textrm{\color{blue}Suggestion}$
```
Include deduplication as a standard part of the data curation pipeline and its relation to ProX.
```
Thank you for the suggestion! We have discussed in the related work section that deduplication plays an important role in data preprocessing—it reduces redundancy, improves training stability, and enhances efficiency. ProX is orthogonal to deduplication, as it focuses on the quality of individual samples rather than redundancy. In practice, ProX can be applied after deduplication to further improve data quality while saving compute. We will include this discussion in the revised related work section.
$\textrm{\color{blue}Question 4}$
```
Can you explain what seed data the fasttext baseline is trained on?
```
To ensure a fair comparison, we re-trained the fastText classifier on the exact same training data used for our ProX doc-level refining models. All documents labeled with `drop_doc()` are treated as negative samples, while those labeled with `keep_doc()` are treated as high-quality samples. We trained the language model on FastText filtered data from scratch using all the same configuration as the other experiments in Table 2.
---
Rebuttal Comment 1.1:
Comment: Question 1:
Thank you! This representation of Table 1 is much better in my view. However, why not add FineWeb-Edu to it (which should also achieve a 1.2% avg performance gain if I'm not mistaken)? I also believe it is important to mention the relationship to the FineWeb-Edu prompt in the main text.
Question 2:
Thank you for clarifying these results for me. I had missed the note in line 294-296.
Question 3:
The rewriting operation in WRAP seems quite challenging (converting to different formats etc.). However, ProX performs fairly simple operations and I would not be surprised that a fine-tuned 0.3B model could regenerate the source document with headers / footers / URLs removed. Fine-tuning a model to generate commands will still be a more efficient option, but it would be good to provide evidence for the claims that the ProX framework is more robust than regenerating the document.
Suggestion:
Great!
Question 4:
This sounds like it is exactly the right baseline. Thanks!
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that our responses helped clarify your concerns, and we truly appreciate your timely follow-up and positive acknowledgment. Thank you again for the thoughtful feedback. It's been very helpful in strengthening the paper.
$\textrm{\color{blue}Further Response to Question 1}$
```
However, why not add FineWeb-Edu to it (which should also achieve a 1.2% avg performance gain if I'm not mistaken)?
```
Thank you for the thoughtful suggestion! We believe you are referring to Table 2 (as Table 1 focuses solely on the function design of ProX). Currently, however, all experiments reported in Table 2 are conducted on the RedPajama-V2 dataset, rather than on FineWeb. In Section 3.2 and Table 2, our primary goal is to evaluate the effectiveness of ProX in comparison to traditional rule-based filtering methods.
That said, we agree that FineWeb-Edu represents a strong baseline, and we believe its classifier could also be applied to RedPajama-V2. To highlight the compatibility and added value of ProX, we include additional experiments in Figure 6, where we apply ProX refining on FineWeb-Edu data. These experiments, conducted on 1.7B models, further demonstrate that ProX can enhance data quality even when starting from a carefully filtered dataset like FineWeb-Edu.
```
I also believe it is important to mention the relationship to the FineWeb-Edu prompt in the main text.
```
Thank you for pointing it out. We will add this detail (currently more in Appendix A.1) in Section 2 in the next version.
---
$\textrm{\color{blue}Further Response to Question 3}$
```
I would not be surprised that a fine-tuned 0.3B model could regenerate the source document with headers / footers / URLs removed.
```
Thank you for raising this insightful point. We agree that, in principle, a fine-tuned 0.3B model could potentially regenerate the source document with noise (e.g., headers, footers, URLs) removed.
In our preliminary experiments, we explored a simple document refining approach: training the model to directly generate high-quality fragments from raw documents, which is very similar to your proposal. Through this exploration, we observed the following, which leads to our current design:
1. **Limited capacity of small models**: Small models (e.g., 0.3B) struggled to perform fine-grained and document-specific tailoring, especially in the presence of diverse and noisy web content, while it is relatively easier for them to locate thses noises by line number or keywords.
2. **Efficiency concerns**: Generating clean fragments directly is often much less efficient in terms of both quality and cost. Specifically, this strategy leads to a larger number of output tokens, increasing computation costs. In contrast, identifying and removing noisy segments (e.g., headers, boilerplate, low-content sections) via code execution is more lightweight and cost-effective.
3. **Limited gain during experiments**: We also experimented with training ~1B models on both raw and fragment-refined data. Interestingly, the downstream performance of LM-refined data was similar to, or sometimes even slightly lower than, that of models trained on raw data, again demonstrating the first point.
These observations motivated our decision to frame ProX as a modular refining framework that operates by identifying and filtering low-quality content, rather than regenerating full cleaned documents based on a small language model. | Summary: Data curation for LLMs typically relies on rule-based filtering to discard documents or refine them. However, these rules are inflexible and cannot adapt to the unique characteristics of each sample, but it would be laborious for a practitioner to determine how to refine/discard at the sample level. This paper proposes ProX, a procedure where a small LM is trained to produce a set of functions that discard and refine each sample. These functions are executed to refine the dataset, and then a LLM is trained on this curated dataset. This approach allows for per-sample flexibility by using the small LM, resulting in better performance than rule-based filtering approaches. Moreover, ProX's curation process is fairly efficient due to the size of the small LM and the programmatic use of the LM.
Claims And Evidence: - Claims around ProX's performance and efficiency are supported by clear empirical evidence.
- The paper seems to claim that ProX's key advantage over prior heuristic-based approaches is its flexibility to refine each sample individually, in contrast to low-coverage, hard-coded rules. However, when comparing ProX with other filtering rules like FineWeb, Gopher, or C4, it's unclear whether ProX's improvements are due solely to its per-sample, LLM-based programming approach, or also because of the nature of the rules it uses. For example, ProX uses soft rules like: 1) discarding documents with low educational content or poor formatting scores 2) within chunks, removing elements such as navigation bars, URLs, and footers. In contrast, FineWeb/Gopher/C4 use different rules; for instance, C4 removes documents where the fraction of lines ending in punctuation is low. To better understand where ProX's performance gains come from, it would be insightful to run a baseline where you "convert" the traditional heuristic rules from FineWeb/Gopher/C4 into flexible, LM-driven programs that can adapt per sample. This would help isolate whether the improvement is due to better rules or the adaptive, per-sample application of those rules.
Methods And Evaluation Criteria: Proposed method and evaluation makes sense.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Experimental design and analyses appear sound.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: This paper strikes a middle ground between two existing types of approaches for data curation: 1) manual filtering rules that can cheaply be run across many samples, but are inflexible and have low coverage, and 2) expensive, per-sample LLM-based approaches, such as getting each sample to be scored/edited/synthesized by a powerful LLM. The paper shows that we can combine strengths from both of these paradigms to produce data curation methods that are relatively efficient yet of higher quality than heuristic approaches.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
- Remarkable results showing that the this approach can be used on a variety of model sizes and datasets, outperforming hard-coded rules as well as model-based data selection algorithms.
- ProX is an efficient curation approach (figure 7)---the total FLOPS used by ProX (including training) is less than standard training (especially for larger models). This is in contrast to other curation strategies, which are oftentimes equally or more expensive than the training itself.
Weaknesses:
- The applicability of ProX could be enhanced if the paper included more concrete information on how one (i.e., a domain expert) could specify new functions---I wonder if there are limitations of using Llama-70B to annotate the seed data for domain-specific functions. It would also be nice if approaches like Rephrasing the Web/WRAP (Maini et. al.) could be expressed in this framework and combined with the functions in this paper.
Other Comments Or Suggestions: Typos:
- In table 1, "orignal" -> "original"
Questions For Authors: 1. Can the exact rules for C4, Gopher, and FineWeb be made available in the appendix?
2. Could experiments/analyses be conducted to isolate ProX's performance improvement due to per-sample application versus its particular rules?
3. Could the paper provide an example of how a practitioner/domain expert would add more complex functions to ProX, and which parts of the procedure they would need to adjust?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for recognizing our work! We are truly delighted to see your appreciation of ProX's novelty, efficiency, and effectiveness in improving data quality. Regarding your question:
$\textrm{\color{blue}Question 1}$
```
Can the exact rules for C4, Gopher, and FineWeb be made available in the appendix?
```
Thank you for the suggestion. For these rule-based filtering methods, we primarily refer to the implementations provided in the FineWeb codebase. We are happy to summarize them below and will include the full details in the appendix in our revised version.
- C4: We re-implement the C4 filtering rules, including:
1. Filtering lines with overly long words or too few words
2. Removing citations, placeholder text (e.g., lorem ipsum), JavaScript, bracket content, and web policy mentions
3. Discarding documents with too few sentences
- Gopher: We adopt Gopher’s rules, including:
1. Minimum and maximum word count per document
2. Symbol-to-word ratio
3. Average word length per document
4. Heuristics such as bullet point filtering
- FineWeb: We use FineWeb’s official implementation, which includes:
1. Punctuation ratio checks
2. Duplicate line removal
3. Filtering based on newline character frequency
4. Additional combined filters integrating the above rules
We hope this clarifies our reference to these rule-based methods, and we’ll ensure their presentation in the appendix for reproduction.
$\textrm{\color{blue}Question 2}$
```
Could experiments/ analyses be conducted to isolate ProX's performance improvement due to per-sample application versus its particular rules.
```
Regarding the performance of ProX in isolation, we would like to clarify two points:
1. ProX inherently and functionally covers other rule-based filtering methods, as it leverages a language model to learn and apply data selection criteria, effectively capturing patterns encoded by handcrafted rules.
2. During our preliminary study, we explored training the refining model using data filtered by FineWeb rules as negative samples. Compared to using the original raw data(a subset of Red-Pajama-V2), ProX achieved approximately **+1.0%** improvement in downstream performance, indicating its ability to identify lower-quality documents.
Furthermore, we observed that more aggressive filtering tends to yield better performance. The use of quantifiable scores such as Edu Score and Format Score offers us the flexibility to adjust filtering thresholds and control the trade-off between data quality and quantity.
$\textrm{\color{blue}Question 3}$
```
Could the paper provide an example of how a practitioner/ domain expert would add more complex functions to ProX and which parts of the procedure they would need to adjust?
```
Regarding the addition of specific rules or filtering strategies, we believe it is important to distinguish between document-level and chunk-level refining:
- At the document level, if a domain expert aims to remove biased content (e.g., safety issues or toxic data), they can simply annotate relevant keywords and mark the corresponding documents with a drop_doc label. This allows for efficient and targeted removal based on expert knowledge.
- At the chunk level, domain experts may need to:
1. Identify specific types of noise with clear patterns — such as frequently repeated dates in forum pages, pervasive advertisement slogans, or unnatural line breaks that disrupt the reading flow.
2. Abstract the noise removal strategy into a reusable function — for example, leveraging regular expression (regex) syntax for flexible and customizable pattern-based filtering.
3. Use model or human annotation to create seed data, which can then be used to train a refining model tailored to the noise patterns of interest.
We hope this explanation clarifies our approach.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I still am not completely satisfied regarding Question 2 though.
> ProX inherently and functionally covers other rule-based filtering methods, as it leverages a language model to learn and apply data selection criteria, effectively capturing patterns encoded by handcrafted rules.
Yes, but based on the appendix, the rules that underly ProX (e.g. education score, removing navigation bars, URLs, and footers) seem to still differ from C4/Gopher/FineWeb rules.
So I am wondering if we could look at what happens if the underlying rules of ProX are the same as the other rules. Here's an example of how I would imagine doing this; take a Gopher rule:
Gopher rule: "Minimum and maximum word count per document" --> this traditionally requires some hardcoded number (i.e., if # words < 10 then discard)
To do a ProX version of this rule, you could create the seed data by prompting the LLama-70b model to flag documents if they appear too long or too short, but the key thing is that you don't enforce a hard rule, like 10 words.
This is still the thing I am most curious about in the performance of ProX.
---
Reply to Comment 1.1.1:
Comment: We are glad to see our response address most of your questions.
For your follow-up questions:
```
So I am wondering if we could look at what happens if the underlying rules of ProX are the same as the other rules. Here's an example of how I would imagine doing this; take a Gopher rule: Gopher rule: "Minimum and maximum word count per document" --> this traditionally requires some hardcoded number (i.e., if # words < 10 then discard) To do a ProX version of this rule, you could create the seed data by prompting the LLama-70b model to flag documents if they appear too long or too short, but the key thing is that you don't enforce a hard rule, like 10 words.
```
Thank you very much. We think that aligning the underlying rules and comparing hard-coded vs. model-based filtering is a valuable angle. Also, as we posted in our last response, in our preliminary study, we already trained ProX using data filtered by FineWeb rules as negative examples. This setup mimics applying the same rules but lets the model learn a softer boundary instead of enforcing hard thresholds. ProX achieved around **+1.0%** improvement in downstream performance over using the rule-filtered data directly.
We really appreciate your suggestion, and fully agree that a broader study aligning with other rule sets is a promising direction for future work. As you suggested, to further investigate this, we additionally conducted an experiment using FineWeb-style prompts to generate seed data for ProX training.
We chose FineWeb for consistency with our preliminary experiments and manageable compute cost. Similarly, we use Llama-3-70B-Instruct to construct these seed data, and our prompt focuses mainly on: 1. Punctuation ratio checks 2. Duplicate line detection 3. newline character frequency. Any flag document will be thus dropped.
We present the results below for your reference:
| Methods | Avg Performance |
|----------------------------------------------------|-----------------|
| Raw data | 42.1 |
| FineWeb Rules | 42.8 |
| ProX (train on hard rule generated seed data) | 43.1 |
| ProX (train on llama generated seed data) | 43.0 |
| ProX-D (trained on Edu & Format focused seed data) | 43.5 |
We believe these results suggest that:
1. the choice of underlying rules matters (none of them outperforms what is used in ProX-D and ProX-D+C)
2. meanwhile, even a small model trained with modest fine-tuning demonstrates slightly better document quality assessment than purely rule-based approaches. (which is also one of the main observations we have in ProX)
We hope these updated results can answer your questions and clarify our main focus in ProX. | null | null | null | null | null | null | null | null |
ROME is Forged in Adversity: Robust Distilled Datasets via Information Bottleneck | Accept (poster) | Summary: Dataset Distillation (DD) compresses large datasets into smaller synthetic subsets but remains vulnerable to adversarial attacks. To address this, the paper proposes ROME, a method leveraging the Information Bottleneck principle to enhance adversarial robustness by aligning feature distributions, demonstrating significant improvements in robustness metrics on CIFAR-10 and CIFAR-100.
Claims And Evidence: Yes, the claims are well supported by evidence. The authors use information bottleneck to enhance the adversarial robustness of the distilled datasets.
Methods And Evaluation Criteria: Yes, it mostly makes sense. The problem I got is: why not generate adversarial examples on the original dataset first, then use DD methods to distill the dataset? In this way, is ROME still outperforming others?
Theoretical Claims: Yes, I checked all of them. I have questions on Theorem 3.6 and 3.7.
- Theorem 3.6: The expression of equation (111) is confusing, what is this "||" means for CE loss? Also, how to derive from equation (111) to (112) is not very clear to me.
- Theorem 3.7: It seems that the authors use the embedding to estimate the total variation distances. Why does this make sense in practice?
Experimental Designs Or Analyses: Yes, the experimental design is generally sound and valid. However, I am not sure whether it is fully fair to compare ROME with DD methods without any adversarial example-generating procedures. I wonder whether it would be to compare ROME with adversarial distilled datasets of other DD methods for a fairer comparison.
Supplementary Material: Yes. The code is provided. The results should be reproducible.
Relation To Broader Scientific Literature: The key contribution of this paper would be improving the adversarial robustness of distilled datasets in an efficient way.
Essential References Not Discussed: References are discussed comprehensively.
Other Strengths And Weaknesses: - The paper is well-written, easy to follow, and well-motivated.
- The results are comprehensive and show significant improvements.
Other Comments Or Suggestions: - What is IPC, which seems to not be defined in the paper?
- Are there any real-world applications of ROME?
- The RR metric sometimes has a value over 100%, which seems to be weird to me. A good metric should usually fall into [0,100]% range.
Questions For Authors: - Could the authors do the comparison with DD baselines by using an adversarial version of the original dataset?
- Could the authors explain more on the theorems 3.6 and 3.7? (See above)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1: Theorem 3.6: The expression of equation (111) is confusing, what is this "||" means for CE loss? Also, how to derive from equation (111) to (112) is not very clear to me.**
- **R1:**
Thank you for your feedback. We have updated the derivation in Theorem 3.6 to remove the "||" symbol in Equation (111) for clarity. The revised version provides a clearer derivation, leading to the cross-entropy loss.
$$
\begin{align}
\mathbb{E} _ {p(x,\hat{x},y)p(z|x,\hat{x},y)} \left[\log q(y|z)\right]
&= \mathbb{E} _ {p(x,\hat{x}, y)p(z|x)} \left[ \log q(y|z)\right]
\\\\
&= \mathbb{E} _ {p(x,\hat{x}, y)}\mathbb{E} _ {p(z|x)}\left[ \log q(y|z)\right]\\\\
&= \mathbb{E} _ {p(x,\hat{x},y)} \left[\log q(y|e(x))\right]
\\\\
&= - \mathbb{E} _ {p(x,\hat{x},y)} \left[- y^t \log q(y^t|e(x))\right]
\\\\
&= - \mathbb{E} _ {p(x,\hat{x},y)} \left[\mathbb{CE}[y^t, f(x)]\right].
\end{align}
$$
The derivation starts with the joint distribution $p(x, \hat{x}, y)$ and $p(z | x, \hat{x}, y)$, simplifying to an expectation over $p(x, \hat{x}, y)$ and $p(z | x)$, as $z$ depends only on $x$. The term $p(z | x)$ is replaced by $q(y | z(x))$ since $z(x)$ is deterministically determined by $x$. The term $e(x)$ represents the latent embeddings from the model's embedding layer, which is equivalent to $z(x)$. The classifier $f(x)$ models $q(y^t | e(x))$, and $y^t$ is the one-hot encoding of the true label. The final result is the cross-entropy loss $\mathbb{CE}[y^t, f(x)]$, which minimizes prediction error.
---
- **Q2: Theorem 3.7: It seems that the authors use the embedding to estimate the total variation distances. Why does this make sense in practice?**
- **R2:**
Thank you for your question. The use of embeddings to estimate total variation distances in Theorem 3.7 aligns with feature alignment strategies in dataset distillation, helping capture robust features for improved adversarial robustness. This approach is **consistent with prior works like DM, IDM, and BACON**. The transition from $p(z|x)$ and $q(z|\hat{x})$ to $e(x)$ and $e(\hat{x})$ in Equations 121-122 reflects the use of embeddings to ensure robustness. This has been clarified in the revised manuscript.
---
- **Q3: What is IPC, which seems to not be defined in the paper?**
- **R3:**
Thank you for your suggestion. We have added a definition of **Images Per Class (IPC)** in the revised manuscript to ensure clarity.
---
- **Q4: Are there any real-world applications of ROME?**
- **R4:**
Thank you for your question. We are exploring ROME's potential in **real-world applications** like **autonomous driving**, **facial recognition**, and **edge computing**, where **robustness** and **efficiency** are crucial. As mentioned in *Lines 57-59 of the Introduction*, these fields need robust dataset distillation techniques. ROME can speed up training on compact, robust datasets, improving security without costly adversarial retraining. While still in early stages, we believe ROME has great potential to enhance safety and efficiency in these areas.
---
- **Q5: The RR metric sometimes has a value over 100%, which seems to be weird to me. A good metric should usually fall into [0,100]% range.**
- **R5:**
Thank you for your observation. While I-RR values typically range from 0 to 100, ROME exhibits an **"Over-Robustness Phenomenon"**, where models trained with ROME achieve **higher accuracy under attack than in a clean setting**, causing I-RR values to exceed 100%. We propose that ROME’s information bottleneck framework enhances robustness by **amplifying non-robust features** under specific adversarial conditions like PGD. This is discussed in *Line 374 of the Adversarial Robustness Evaluation* section. A similar effect is observed in BEARD’s RR metric for CIFAR-100 with targeted white-box attacks. We plan to explore more effective metrics to better capture this phenomenon in future work.
---
- **Q6: Could the authors do the comparison with DD baselines by using an adversarial version of the original dataset?**
- **R6:**
Thank you for your suggestion. We compared **ROME** with **adversarially distilled datasets** using the **BEARD benchmark** and applied **CREI** for a fair comparison. As shown in **Table 1**, ROME outperforms adversarially distilled methods in both **adversarial robustness** and **efficiency** under targeted and untargeted attacks. ROME achieves comparable robustness **without the need for retraining**, significantly **reducing computational costs**. We have clarified this and provided additional experimental details in the revised manuscript.
**Table 1: Comparison of Adversarial Robustness of ROME and Adversarially Distilled Datasets Using DD Methods under CIFAR-10 IPC-50 with CREI (%).**
|Attack Type|Full-size|DC|DSA|MTT|DM|IDM|BACON|ROME|
|-|-|-|-|-|-|-|-|-|
|Targeted Attack|50.54|52.30|55.56|50.96|57.21|54.67|56.18|**63.32**|
|Untargeted Attack|41.33|37.39|37.59|33.13|37.34|39.05|37.25|**43.62**|
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments and clarification, which address my concerns. I recommend the acceptance of this work.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our responses. We're pleased to hear that our clarifications addressed your concerns, and we truly appreciate your recommendation for acceptance. | Summary: The authors proposed an adversarially robust distillation optimization framework for dataset distillation. They also provided the optimization method for this framework.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: The theoretical derivation has been checked, and no issues were found. However, there may be some minor notation misuse, such as the use of CE[p(z|x)||q(y|z)] in the proof of Theorem 3.6.
Moreover, Theorem 3.7 is a lower-bound objective obtained by scaling an inequality, but this is not stated in the main text.
Experimental Designs Or Analyses: Under the dataset distillation framework, the attacker appears to be somewhat disadvantaged. The defender can carefully select features that are robust. Perhaps an adaptive attack should be considered during evaluation, assuming that the attacker is aware that the defender employs dataset distillation methods.
Supplementary Material: Proof of theorems.
Relation To Broader Scientific Literature: 1. Conditional Entropy Bottleneck (CEB).
2. Adversarial examples are not bugs, they are features.
Essential References Not Discussed: no
Other Strengths And Weaknesses: The authors claim that the proposed method avoids the high computational cost of retraining but do not provide a detailed explanation. The proposed framework does not discuss the sampling of \hat{x}. Intuitively, the iterative training of dataset distillation and the sampling of \hat{x} resemble the process of training a model and sampling adversarial examples in standard adversarial training.
Other Comments Or Suggestions: Some typos. Line 153 misses I. Line 197 misdescribe joint distribution. Line 940 misses Expectation notation.
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1: The theoretical derivation has been checked, and no issues were found. However, there may be some minor notation misuse, such as the use of $\mathbb{CE}[p(z|x)||q(y|z)]$ in the proof of Theorem 3.6.**
- **R1:**
Thank you for your valuable feedback. We have carefully reviewed and revised the proof of Theorem 3.6 to correct the notation and ensure consistency. Additionally, we have provided more details in the proof to enhance clarity and readability.
---
- **Q2: Moreover, Theorem 3.7 is a lower-bound objective obtained by scaling an inequality, but this is not stated in the main text.**
- **R2:**
Thank you for your suggestion. We have updated the main text to explicitly state that Theorem 3.7 is a lower-bound objective derived by scaling an inequality. This clarification have been included in the revised manuscript to ensure better readability and understanding of the theoretical derivation.
---
- **Q3: Under the dataset distillation framework, the attacker appears to be somewhat disadvantaged. The defender can carefully select features that are robust. Perhaps an adaptive attack should be considered during evaluation, assuming that the attacker is aware that the defender employs dataset distillation methods.**
- **R3:**
Thank you for your suggestion. Our method is designed without prior knowledge of specific attacks, with the robust prior derived solely from **PGD ($\frac{8}{255}$ perturbation budget)**. As mentioned in *Evaluation Attack (L278)*, we assess robustness using PGD and **six additional attack methods**: **FGSM**, **C&W**, **DeepFool**, **AutoAttack**, **Square**, and **SPSA**, none of which were used to construct ROME. This ensures a balanced information setting between the attacker and the defender. Notably, **AutoAttack**, a **representative adaptive attack**, ensures a rigorous evaluation. Our experiments follow a **white-box attack setting**, where attackers have full knowledge of the model parameters, giving them an advantage. As shown in Table 1, ROME consistently outperforms other methods in all robustness metrics. Finally, developing more effective adaptive attacks for dataset distillation remains an open challenge and a promising direction for future work.
---
- **Q4: The authors claim that the proposed method avoids the high computational cost of retraining but do not provide a detailed explanation.**
- **R4:**
Thank you for your suggestion. Models trained on **ROME-distilled datasets** inherently exhibit adversarial robustness, unlike methods such as DC, DSA, MTT, DM, IDM, and BACON, which require **additional adversarial training**, effectively **doubling the training cost**. ROME mitigates this overhead by **embedding robust priors directly into the distillation process**, thus eliminating the need for retraining. The experimental results quantifying this difference are shown in **Table 1**. While ROME requires slightly more initial training time, its overall computational cost remains significantly lower than methods that involve adversarial retraining.
**Table 1: Comparison of Training Time for ROME and Adversarially Distilled Datasets Using DD Methods under CIFAR-10 IPC-50.**
| Method | Training Time (hrs) | +Adversarial Training (hrs) | Total (hrs) |
|-|-|-|-|
| DC | 1.009 | 2.019 | 3.028 |
| DSA | 0.898 | 1.796 | 2.694 |
| MTT | 0.882 | 1.764 | 2.647 |
| DM | 0.963 | 1.925 | 2.888 |
| IDM | 0.895 | 1.790 | 2.685 |
| BACON| 0.874 | 1.748 | 2.622|
| **ROME** | **1.014** | **1.014** | **2.027** |
---
- **Q5: The proposed framework does not discuss the sampling of $\hat{x}$. Intuitively, the iterative training of dataset distillation and the sampling of $\hat{x}$ resemble the process of training a model and sampling adversarial examples in standard adversarial training.**
- **R5:**
Thank you for your comment. The sampling of $\hat{x}$ in ROME is similar to generating adversarial examples in standard adversarial training, but there are two key differences. First, unlike adversarial training, where adversarial examples are generated in each iteration to update model weights, ROME **integrates adversarial perturbations directly into the dataset distillation process**, producing a distilled dataset that inherently possesses adversarial robustness, eliminating the need for repeated adversarial training and reducing computational overhead. Second, while adversarial training results in robust model weights that require retraining for new tasks, ROME does not have this requirement. This distinction is discussed in the revised manuscript.
---
- **Q6: Some typos. Line 153 misses I. Line 197 misdescribe joint distribution. Line 940 misses Expectation notation.**
- **R6:**
Thank you for your review. We have made the following corrections: added the missing "I" in Line 153, revised the joint distribution in Line 197, and added the Expectation notation in Line 940. These changes have been included in the revised manuscript. | Summary: This paper proposes a new method -- ROME for dataset distillation that uses the Information Bottleneck principle to create small, synthetic datasets with improved resistance to adversarial attacks. Traditional adversarial training is slow and often reduces accuracy. ROME incorporates the Conditional Entropy Bottleneck into the distillation process. It optimizes two parts: one that preserves accuracy by strengthening the link between latent features and true labels, and another that boosts robustness by weakening the link between the input and latent features when adversarial noise is added. Tests on CIFAR-10 and CIFAR-100 show that ROME outperforms previous methods under various attack scenarios.
Claims And Evidence: Claim: ROME significantly improves adversarial robustness of distilled datasets without needing costly adversarial retraining.
Evidence: The experimental results (Tables 1–3) consistently show improvements in I-RR and related metrics under both targeted and untargeted attacks. For example, on CIFAR-10, ROME achieves up to a 40% improvement in I-RR compared to baselines such as DC, DSA, and BACON.
Claim: Incorporating the IB (and specifically the Conditional Entropy Bottleneck) into the distillation process yields a favorable balance between accuracy and robustness.
Evidence: The paper presents both theoretical derivations (Theorems 3.2–3.7, with proofs in the appendix) and ablation studies that validate the effectiveness of the two loss components—the performance-aligned and robustness-aligned terms.
Methods And Evaluation Criteria: Method: ROME reframes dataset distillation using the IB principle by introducing a robust prior via adversarial perturbations. The method defines an objective function that combines: 1). A performance-aligned term ensuring that distilled data retain sufficient label-related information. 2).A robustness-aligned term that minimizes the discrepancy between the synthetic dataset and its adversarially perturbed counterpart, thereby pushing the model’s decision boundary away from potential adversarial examples.
Evaluation: The authors evaluate robustness using both white-box and black-box attack scenarios (e.g., FGSM, PGD, C&W, Autoattack for white-box; transfer-based and query-based for black-box). They introduce the I-RR metric as a refined measure of robustness. Experiments are conducted on CIFAR-10 and CIFAR-100 using a ConvNet architecture, and ablation studies explore the impact of robust pretraining, adversarial perturbations.
Theoretical Claims: The theoretical claims are supported by detailed proofs provided in the appendix. While the derivations are mathematically involved, they offer a solid foundation for the proposed objective and justify the design choices in ROME.
Experimental Designs Or Analyses: The experimental setup is comprehensive. The authors compare ROME with several leading dataset distillation methods (DC, DSA, MTT, DM, IDM, BACON) under various attack settings on standard benchmarks (CIFAR-10 and CIFAR-100).
Ablation studies show the contributions of key components, such as the robust pretrained model (RPM) and adversarial perturbations (AP).
Supplementary Material: I have reviewed the appendix (theory, experiments, etc...).
Relation To Broader Scientific Literature: By introducing an IB-based formulation into dataset distillation, ROME offers a new perspective on how to achieve both accuracy and robustness in compact datasets.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The visualization is clear and understandable.
2. The experiments are solid and comprehensive.
3.The theoretical proof is solid.
Weaknesses:
1. Need more discussion about dataset scaling.
2. Need more discussion about different model architectures.
3. Lack of experiments about computation cost.
Other Comments Or Suggestions: See questions.
Questions For Authors: 1. Curious about the performance of the proposed method on larger dataset.
2. Can the IB-based robust distillation framework be extended to other network architectures, such as transformers or hybrid models?
3. Do you have any quantitative comparisons (e.g., specific training time reductions) but rather emphasizes the conceptual benefit of lower computational demands?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1: Curious about the performance of the proposed method on larger dataset.**
- **R1:**
Thank you for your question. Scaling ROME to larger datasets like ImageNet requires significantly **more computational resources and time**, which was not feasible within the **limited rebuttal period** and **available resources**. Additionally, prominent dataset distillation baselines (e.g., DC, DSA, MTT, DM, IDM, BACON) have not been trained on larger datasets due to the computational challenges posed by **feature alignment strategies**. As the dataset size increases, the search space for representative features expands, requiring considerably **more computational power**. For instance, generating ROME on CIFAR-100 with IPC-50 settings takes 3-5 days in our lab, and ImageNet or its subsets, which are hundreds of times larger, would require significantly more time. Furthermore, methods like DM, IDM, and BACON have not publicly released results on large datasets, so a fair comparison would require re-running baselines, which is not feasible within the rebuttal period. Nevertheless, we believe ROME can perform well on larger datasets. As shown in *Table 1* of our paper, ROME demonstrates better performance on CIFAR-100 compared to CIFAR-10, suggesting that its robustness benefits could **extend to larger datasets**, though further investigation is needed. As discussed in the *Limitations and Future Work* section, we acknowledge this limitation and plan to explore more scalable approaches to enhance **adversarial robustness** on **larger datasets** in future work.
---
- **Q2: Can the IB-based robust distillation framework be extended to other network architectures, such as transformers or hybrid models?**
- **R2:**
Thank you for your question. We believe the **IB-based robust distillation (ROME)** framework can extend beyond ConvNet-based architectures to models such as **transformers**. Currently, dataset distillation methods like **DC**, **DSA**, **MTT**, **DM**, **IDM**, and **BACON** are primarily designed for **ConvNets**. Since ROME enhances the adversarial robustness of distilled datasets, it has the potential to generalize to other architectures as dataset distillation evolves beyond ConvNets. Regarding **generalization**, ROME has demonstrated strong robustness against both **transfer-based** and **query-based black-box attacks**, as shown in *Table 2* and the *Figure 3 in the paper*. As discussed in the *Limitations and Future Work* section, we plan to explore its applicability to **more complex models**, such as transformer-based architectures, particularly for tasks like **adversarially robust vision-language learning**. Additionally, we aim to assess adversarial robustness across **diverse architectures** to validate ROME's broader effectiveness.
---
- **Q3: Do you have any quantitative comparisons (e.g., specific training time reductions) but rather emphasizes the conceptual benefit of lower computational demands?**
- **R3:**
Thank you for your question. While we emphasize the conceptual advantage of ROME, which inherently provides adversarial robustness without the need for **additional adversarial training**, we also present quantitative comparisons. Methods like DC, DSA, MTT, DM, IDM, and BACON require **adversarial retraining** to achieve similar robustness, which typically doubles the training cost due to the additional retraining on adversarial examples. In contrast, ROME eliminates the need for repeated adversarial training by **integrating robust priors directly into the distillation process**, thus significantly reducing computational overhead. Although ROME requires more training time, its total time is lower than methods that rely on adversarial retraining. We have included these quantitative comparisons in Table 1 and provided further experimental results in the revised version to highlight the training time differences.
**Table 1: Comparison of Training Time for ROME and Adversarially Distilled Datasets Using DD Methods under CIFAR-10 IPC-50.**
| Method | Training Time (hrs) | +Adversarial Training (hrs) | Total (hrs) |
|--------|---------------------|-----------------------------|-------------|
| DC | 1.009 | 2.019 | 3.028 |
| DSA | 0.898 | 1.796 | 2.694 |
| MTT | 0.882 | 1.764 | 2.647 |
| DM | 0.963 | 1.925 | 2.888 |
| IDM | 0.895 | 1.790 | 2.685 |
| BACON | 0.874 | 1.748 | 2.622 |
| **ROME** | **1.014** | **1.014** | **2.027** | | Summary: This paper aims to improve adversarial robustness in dataset distillation. Inspired by the Information Bottleneck principle, this paper proposes a novel framework which is able to balance model performance and robustness. Various kind of experiments demonstrate the effectiveness of the proposed framework.
Claims And Evidence: In this work, authors' claim that they proposed a new method to enhance the adversarial robustness of data distillation. The experimental results demonstrate the superiority of the proposed method, which support the authors' claim sufficiently.
Methods And Evaluation Criteria: Authors utilize most common benchmark datasets and evaluation metrics in their experiments, hence the evaluation criteria makes sense for the problem studied in this paper.
Theoretical Claims: After going through theoretical proofs provided in supplementary material, I don't find any issues.
Experimental Designs Or Analyses: Yes, from my point of view, the experimental design is reasonable and complete.
Supplementary Material: Yes, I go through the first two parts and mainly check part C, i.e. experiments part.
Relation To Broader Scientific Literature: This work utilize the Information Bottleneck principle to improve the adversarial robustness in dataset distillation, which further illustrate that the Information Bottleneck principle could be an effective way to help defense adversarial attack.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Pros: 1. Authors proposed a novel idea to improve the adversarial robustness in dataset distillation; although the backbone technique Information Bottleneck principle has been widely utilized in similar settings, adopting it into dataset distillation is still very innovative;2. A series of experiments involving both targeted attack and untargeted attack, show that the proposed method is able to improve the adversarial robustness by a large margin, comparing with baseline methods.
Cons: 1. It would be better if authors can further discuss the main purpose of using RR, CREI, I-RR and I-CREI in evaluating adversarial robustness separately, so that evaluation results could be more easier to understand.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Does the proposed method require a longer time for executing, comparing with baseline methods? If so, which part consumes the most resources?
2. As shown in Figure 4, looks like the robustness-aligned term has a higher impact on model performance, comparing with performance-aligned term. Any thoughts on it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1: It would be better if authors can further discuss the main purpose of using RR, CREI, I-RR and I-CREI in evaluating adversarial robustness separately, so that evaluation results could be more easier to understand.**
- **R2:**
Thank you for your suggestion. We have clarified the purpose of using RR, CREI, I-RR, and I-CREI in evaluating adversarial robustness. RR, originally introduced in BEARD, measures robustness by considering both **the average and worst-case** attack success rates (ASR), aiming to keep both values low. However, when a single attack method is exceptionally strong, RR can become disproportionately high, leading to an **overestimation of robustness**. To address this issue, we propose I-RR, which refines RR by incorporating model accuracy (ACC) to **ensure that both the average and worst-case ASR remain low while maintaining high ACC**, making it a more balanced robustness metric. CREI is a comprehensive index that combines RR with Attack Efficiency (AE), and in ROME, we introduce I-CREI by replacing RR with I-RR, further improving robustness evaluation. The primary motivation for using I-RR and I-CREI in ROME is to overcome the limitations of RR and CREI, ensuring a more reliable and fair assessment of adversarial robustness, especially in cases where a single attack method dominates. These distinctions have been added to the manuscript for better clarity.
---
- **Q2: Does the proposed method require a longer time for executing, comparing with baseline methods? If so, which part consumes the most resources?**
- **R2:**
Thank you for your question. ROME does introduce **additional computational overhead** compared to standard dataset distillation methods, mainly due to the robust prior generation. As shown in **Table 1**, ROME requires more training time than other methods. However, it does not necessitate full **adversarial retraining**, which is typically much more computationally expensive. The primary computational cost stems from the **adversarial perturbation** step used to generate the **robust priors**. Despite ROME having higher training time, its total time is lower compared to methods requiring adversarial retraining. We have included a detailed analysis of the computational costs compared to baseline methods in the supplementary material.
**Table 1: Comparison of Training Time for ROME and Adversarially Distilled Datasets Using DD Methods under CIFAR-10 IPC-50.**
| Method | Training Time (hrs) | +Adversarial Training (hrs) | Total (hrs) |
|--------|---------------------|-----------------------------|-------------|
| DC | 1.009 | 2.019 | 3.028 |
| DSA | 0.898 | 1.796 | 2.694 |
| MTT | 0.882 | 1.764 | 2.647 |
| DM | 0.963 | 1.925 | 2.888 |
| IDM | 0.895 | 1.790 | 2.685 |
| BACON | 0.874 | 1.748 | 2.622 |
| **ROME** | **1.014** | **1.014** | **2.027** |
---
- **Q3: As shown in Figure 4, looks like the robustness-aligned term has a higher impact on model performance, comparing with performance-aligned term. Any thoughts on it?**
- **R3:**
Thank you for your question. The observed difference in impact between the **robustness-aligned** and **performance-aligned terms** can be attributed to their distinct roles in model optimization. According to *Eq. 12*, $\alpha$ controls the trade-off between these two terms, where a higher $\alpha$ increases the weight of the robustness-aligned term, thereby enhancing the model's ability to generalize under adversarial perturbations. However, as shown in *Figure 4*, while emphasizing robustness improves performance under adversarial attacks, an excessive focus on robustness (i.e., a high $\alpha$) can suppress **non-robust features** critical for classification, potentially **reducing overall accuracy**. This trade-off is further emphasized in Table 3, where the robustness-aligned term (AP) contributes more significantly to **improving robustness** than the performance-aligned term (RPM). We have updated the discussion to clarify this trade-off and its impact on model performance. | null | null | null | null | null | null |
Lightweight-Mark: Rethinking Deep Learning-Based Watermarking | Accept (poster) | Summary: The paper proposes a deep watermarking framework that achieves state-of-the-art performance with significantly fewer parameters. The authors identify a mismatch between commonly used decoding losses and the decoding goal, which leads to parameter redundancy. To address this, they propose a Detachable Projection Head (PH) and a new loss function.
Claims And Evidence: Yes
Methods And Evaluation Criteria: This scheme should add perceptual loss or adversarial training to enhance perceptual quality.
Theoretical Claims: No
Experimental Designs Or Analyses: No
Supplementary Material: No
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
There is a significant improvement in computation and efficiency. The proposed lightweight model utilizes less parameters compared to state-of-the-art frameworks, while achieving comparable or better performance in terms of invisibility and robustness.
Weaknesses:
(1) This expansion of mean squared error seems to have been complicated. Can an effective analysis still be provided without considering the positivity or negativity of the decoding output?
(2) This scheme should add perceptual loss or adversarial training to enhance perceptual quality.
Other Comments Or Suggestions: NO
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Weakness 1: Complexity of MSE Expansion and the Role of Positivity/Negativity**
We appreciate the your insightful feedback. We understand your concern regarding the complexity.
In watermark decoding tasks, positivity or negativity is a key criterion for determining whether decoding is correct. This not only serves as the foundation for decoding results but also plays an important role in our analysis, helping to express optimization behaviors such as **inflation** or **deflation**. Therefore, to ensure the rigor and completeness of the proof, considering positivity or negativity is indeed indispensable.
However, your suggestion is very reasonable, and we plan to improve the paper in the following ways to enhance readability and clarity:
**1. Simplification of Symbols and Formulae**:
To improve the readability of the formulas, we will simplify certain symbols and expressions. For example, we will rewrite $L_R^{+ and < \epsilon}$ as $L_R^{+} \cap L_R^{< \epsilon}$, making the expression more concise.
**2. A More Intuitive Explanation of Equation 3**:
The goal of decoding is to ensure that the decoded bits are correctly separated from the boundary value (0), without considering the magnitude of each bit (e.g., 0.1 or 1). However, the objective of the MSE loss is not only to ensure that each bit is correctly separated from the boundary value but also to push the bits closer to ±1. Clearly, MSE loss is a stricter loss function, but this stricter requirement does not directly improve decoding accuracy. To highlight the gap between MSE loss and decoding accuracy, we have decomposed it into three parts:
- **L_deflation** mainly optimizes the portion of the decoding error and directly affects decoding accuracy.
- **L_inflation** and **L_regularization** optimize the portion of correct decoding but do not directly influence decoding accuracy.
Additionally, based on the analysis of Equation 3, we propose two methods, **PH** and **DO**, to alleviate the redundancy introduced by **L_inflation** and **L_regularization**.
---
**Weakness 2: Adding perceptual loss or adversarial training for perceptual quality**
**1. Adding perceptual loss:**
Following your suggestion, we expanded our experiments to include perceptual loss in our training process. To improve perceptual quality, we incorporated SSIM and LPIPS losses and present the results in Table 1.
As shown in the Table 1, adding SSIM and LPIPS individually improves SSIM and LPIPS scores, particularly LPIPS, which **is reduced by at least half**.
Although a slight drop in PSNR and average accuracy is observed, these trade-offs are reasonable given the enhanced perceptual quality.
**Table 1. Benchmark comparisons on visual quality and robustness against combined noise**
| **Method** | PSNR | SSIM | LPIPS | Ave |
|--------------------|----------|-------|---------|---------|
| PH | 41.67 | 0.97 | 0.00184 | 99.28 |
| PH + SSIM | 41.14 | 0.98 | 0.00173 | 99.13 |
| PH + LPIPS | 40.14 | 0.97 | 0.00058 | 99.23 |
| PH + SSIM + LPIPS | 40.06 | 0.98 | 0.00050 | 99.03 |
| DO | 41.70 | 0.97 | 0.00097 | 99.36 |
| DO + SSIM | 41.20 | 0.99 | 0.00158 | 99.30 |
| DO + LPIPS | 41.15 | 0.97 | 0.00047 | 99.31 |
| DO + SSIM + LPIPS | 41.05 | 0.99 | 0.00044 | 99.06 |
---
**2. Adding Adversarial Training:**
To further investigate adversarial training, we analyzed the impact of adding a discriminator (DIS) to our proposed lightweight model. Detailed results are provided in Appendix E.8, and the key findings are summarized in Table 2.
**Table 2. The Effect of the Discriminator (DIS) on the Proposed Lightweight Model**
| **Method** | **PSNR** | **SSIM** | **LPIPS** | **Ave** |
|--------------|--------------|--------------|--------------|------------|
| **PH w/o DIS** | 41.67 | 0.97 | 0.00184 | 99.28 |
| **PH w DIS** | 41.07 | 0.97 | 0.00157 | 99.31 |
| **DO w/o DIS** | 41.70 | 0.97 | 0.00097 | 99.36 |
| **DO w DIS** | 41.21 | 0.97 | 0.00062 | 99.34 |
As observed, the discriminator does indeed lead to some improvement in perceptual quality (LPIPS). Additionally, **efficiency is also a key metric for our model**. As shown in Appendix E.8 Table 13, the discriminator requires **6.8×** more parameters and **8.5×** more FLOPs than our entire lightweight model. Given the small perceptual gain and the substantial increase in computational cost, we prioritize **perceptual loss** as a more efficient approach to enhancing perceptual quality.
---
We sincerely appreciate your insightful suggestion regarding perceptual quality. Your recommendation to incorporate **perceptual loss** has proven to be effective and efficient, and we will include this experimental analysis in the final version of our paper. | Summary: This manuscript presents a novel approach to deep learning-based watermarking, aiming to balance efficiency, invisibility, and robustness while reducing computational cost. The key contributions include: 1. Decoding-Oriented Surrogate Loss (DO): The authors identify a mismatch between commonly used decoding losses (e.g., MSE and BCE) and the actual decoding objective. They propose a surrogate loss that mitigates the influence of optimization directions unrelated to decoding accuracy. 2. Detachable Projection Head (PH): A temporary module introduced during training to handle decoding-irrelevant optimization directions, which is removed during inference to reduce model complexity. 3. Lightweight Model: The proposed watermarking model achieves state-of-the-art robustness and invisibility while using only 2.2% of the parameters of previous frameworks. 4. Comprehensive Experiments: The authors validate their approach across multiple distortions (Gaussian blur, noise, JPEG compression, etc.) and show robustness against diffusion-based watermark removal attacks.
Claims And Evidence: The manuscript claims that existing decoding losses introduce parameter inefficiency by optimizing directions that do not directly contribute to decoding accuracy. To address this, it proposes the Decoding-Oriented Surrogate Loss (DO), which aims to eliminate irrelevant optimization directions, and the Detachable Projection Head (PH), a temporary module used during training to handle decoding-irrelevant tasks. The paper further claims that these methods enhance both robustness and invisibility while significantly reducing model complexity. Experimental results are presented to support these claims, demonstrating that the proposed methods achieve improved decoding accuracy and visual quality across multiple distortions while using only a small fraction of the parameters.
Methods And Evaluation Criteria: - Baseline Comparisons: The paper benchmarks against multiple SOTA watermarking models (HiDDeN, MBRS, CIN, FIN).
- Distortion Types: Assesses performance under Gaussian blur, noise, JPEG compression, dropout, salt & pepper noise, and diffusion-based attacks.
- Metrics: Uses PSNR, SSIM, LPIPS (visual quality), accuracy (robustness), and FLOPs/parameter numbers (efficiency).
Theoretical Claims: The manuscript provides a mathematically decomposition of MSE loss (Equation 3) and BCE loss (Equation 21, Appendix A.2), clearly identifying components that do not directly contribute to decoding accuracy. These insights form the foundation for the proposed Decoding-Oriented Surrogate Loss (DO), which offers an innovative approach to improving efficiency by focusing optimization on decoding-relevant directions. This finding is interesting to the deep learning based watermarking community.
Experimental Designs Or Analyses: 1. Soundness: The experiments are well-structured, testing both traditional and diffusion-based watermark removal techniques. Additionally, the authors compare their method with knowledge distillation techniques, demonstrating the effectiveness of their approach.
2. Ablation Studies: The paper provides detailed analyses of different loss components (Table 5), verifying the necessity of each proposed module. The five modules in the proposed watermarking framework are also thoroughly tested in Table 7 under different distortions.
However, as mentioned in the limitations, the choice of some important hyperparameters is not sufficiently clear. For example, the safe distance (\epsilon) in the DO loss is manually set. Could this parameter be improved with automated tuning to enhance performance and ease of use?
Supplementary Material: Supplementary Material: The algorithm steps in the supplementary material provide comprehensive and thorough supplementary experiments, as well as analysis of other common decoding losses (BCE loss), which are relatively clear and helpful in understanding the paper's methods and experiments. However, some sections of the supplementary material should be included in the main text, such as Figure 3 in Appendix B and Figure 4 in Appendix C. These visual representations are crucial for the paper's presentation and should be incorporated into the main body of the manuscript.
Relation To Broader Scientific Literature: The main problem addressed by this paper is still the visual quality and robustness of deep learning-based watermarking models (e.g., HiDDeN, CIN, FIN, MBRS). Beyond this, the authors also consider model lightweighting, setting their work apart from knowledge distillation methods (e.g., Hinton et al., 2015; Zhao et al., 2022), as their approach does not require a teacher model. Instead, the focus is primarily on improving the shared issues present in the widely used decoding losses (MSE and BCE loss) within watermarking tasks.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
- The approach is interesting to the digital watermarking community, offering a simple and easy-to-deploy method. The experimental validation is comprehensive, covering multiple distortions and diffusion-based attacks.
- The proposed separable projection head (PH) and decoding-oriented alternative loss (DO) are effective in mitigating the negative impact of irrelevant optimization directions, enabling the lightweight model to achieve state-of-the-art performance.
- The novel lightweight watermarking framework outperforms existing models in terms of invisibility, robustness, and efficiency, making it a valuable solution for applications with limited computational resources.
Weakness:
- As mentioned in the limitations section, the safe distance (\epsilon) in the DO loss is manually set. Why hasn't an automated tuning approach been considered for this parameter? Wouldn't this enhance both performance and ease of use, making the method more robust and adaptable?
- I understand that due to space limitations, the authors have placed a lot of content in the Appendix. However, some important content should be in the main text. For example, Figure 3 in Appendix B and Figure 4 in Appendix C are essential visual representations that are crucial for the paper's presentation and should have been incorporated into the main body of the manuscript.
Other Comments Or Suggestions: 1. Clarification on Hyperparameters:
The manual setting of certain hyperparameters, such as the safe distance (ϵ) in the DO loss, needs further clarification. How is a reasonable value for ϵ chosen in practice? Also, could you explore ways to automate the optimization of ϵ during training to improve adaptability and ease of use?
2. Figures in Appendix:
Figures like Figure 3 in Appendix B and Figure 4 in Appendix C are crucial for understanding the methods. I suggest moving these figures to the main body of the manuscript to improve accessibility and clarity.
Questions For Authors: 1. What motivates the design of the lightweight deep watermarking model?
2. The manuscript uses several evaluation metrics like PSNR, SSIM, and LPIPS to evaluate visual quality. Can you provide more detailed insights on how these metrics compare in terms of their correlation with human perception of image quality, and whether the proposed method could be further improved in subjective visual quality?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and the constructive comments on our manuscript. Below, we address your comments to clarify and improve our manuscript:
**Weakness 1 & Suggestion 1: Can ($\epsilon$) in DO be Automated Tuning and How to Choose a Reasonable ($\epsilon$)?**
**1. Can Safe Distance ($\epsilon$) in DO be Automated Tuning?**
Following your suggestion, we attempted to make $\epsilon$ a learnable parameter integrated into the optimization process. We initially set $\epsilon$ to 0.1 and left other training settings unchanged. During training, we monitored both decoding accuracy (ACC) and the value of $\epsilon$. We found that $\epsilon$ decreased continuously as training progressed and eventually reached 0, leading to instability.
At this point, the ACC dropped to around 50%, and the model was unable to decode correctly. We analyzed this behavior and identified that in the DO method, the role of $\epsilon$ in the DO method is to penalize values close to the boundary (0). A larger $\epsilon$ applies a broader penalty, while a smaller $\epsilon$ shrinks the penalty range. When $\epsilon$ reaches 0, no penalty is applied, and the model effectively minimizes $L_{inflation}$ by setting $\epsilon = 0$, reducing the DO method (i.e., $L_{deflation} + L_{inflation}$) to $L_{deflation}$. As shown in Section 2.2 ("The Gap between Two Objectives") of our original paper, this causes model instability, which we further validated and reported in Table 5 of Section 4.4 (Ablation Study).
**How to Choose a Reasonable ($\epsilon$)?**
We acknowledge that directly incorporating $\epsilon$ as a learnable parameter was not feasible. However, your suggestion was valuable, and we conducted extensive experiments on the model's performance under varying $\epsilon$ values. The results of these experiments are presented in Appendix E.11, offering guidance on selecting an appropriate $\epsilon$.
For now, we empirically selected $\epsilon$ based on these experiments, balancing model stability and optimal performance. We also plan to explore more automated and convenient methods in future research. We hope this explanation clarifies the challenges we faced and addresses your concerns.
---
**Weakness 2 & Suggestion 2: More Visual Results in Main Text.**
Thank you for your constructive comment! Indeed, due to the current length constraints, we are unable to include more images in the main text now. However, in the final version, we will incorporate Figures 3 and 4 into the main body of the paper.
---
**Question 1: The Motivation of the lightweight deep watermarking model.**
The motivation behind the lightweight deep watermarking architecture stems from the practical demands of deploying models in resource-constrained environments. The motivations are as follows:
**1. Practical Need for Lightweight Models in Real-World Applications**
Digital watermarking plays a crucial role in protecting intellectual property across various domains, such as images, videos, and 3D content. However, high-performance models are often impractical for deployment due to their:
- **Large parameter sizes**, which increase storage requirements.
- **High computational demands**, which exceed the capacity of many real-world systems.
This challenge is particularly acute in scenarios like **video streaming** and **online education**, where devices must operate efficiently within strict computational and energy constraints. High-performance lightweight models are essential to enable such deployment while remaining effective for copyright protection.
**2. Importance of Lightweight Design in SoC Architectures**
In SoC-based environments, lightweight design is not just beneficial but imperative:
- **Storage Constraints**: The storage capacity in embedded devices is limited, making parameter-efficient models essential.
- **Computational Efficiency**: SoCs lack significant computational power. Lightweight models reduce energy consumption, enabling practical deployment on edge devices.
---
**Question 2: Further Improving Subjective Visual Quality.**
- **PSNR** primarily measures pixel-level differences but does not reflect visual quality well in terms of image structure and texture details.
- **SSIM** considers luminance, contrast, and structure, offering a better alignment with human perception, but it still does not fully capture complex visual features.
- **LPIPS** compares feature representations from deep neural networks, providing a closer approximation to human perception, especially in terms of high-level perceptual quality.
For improving subjective visual quality, more detailed experiments can be found in weakness 2 of Reviewer QY8P's feedback. For your convenience, we summarize the results here. We experimented with two approaches: directly adding SSIM and LPIPS losses, and using a discriminator. Both methods effectively improved subjective visual quality (LPIPS), but adding SSIM and LPIPS losses was computationally more efficient. | Summary: The authors address the challenges of computational efficiency and accuracy in steganography. By identifying shortcomings in commonly used decoding loss functions, such as MSELoss and BCELoss, they introduce two techniques to mitigate these issues.
The first method, Detectable Projection Head (PH), scales the decoding output to maintain values close to 1 (or -1), improving training stability. The second method, Decoding-Oriented Surrogate Loss (DO), selectively filters out loss contributions from correctly decoded bits, focusing only on incorrectly decoded bits and those near the decision boundary. This targeted approach stabilizes training and enhances decoding accuracy.
Furthermore, the authors demonstrate that their approach achieves similar or slightly better performance than the previous state-of-the-art method, FIN, while enabling the use of a smaller model, thereby reducing computational overhead without sacrificing effectiveness.
### Update after Rebuttal
I would like to revise my score from 2 to 3, as the extended experimental results show that PH (referring to Table 1 in the second-round response) and DO (referring to Table 2 in the second-round response) offer valuable contributions to the development of future watermarking frameworks. Additionally, all questions and suggestions raised during the two-round discussions have been adequately addressed. This work demonstrates the potential for producing robust watermarking techniques while maintaining a lightweight model.
Claims And Evidence: Yes. The authors provide the convincing experimental result that support their claims.
The authors compare their method's performance with prior works, demonstrating its effectiveness in terms of lightweight design and decoding accuracy. They also validate the soundness of their loss decomposition and grouping through an ablation study.
Methods And Evaluation Criteria: The Decoding-Oriented Surrogate Loss (DO) method makes sense for the problem.
For evaluation, the authors use an image dataset with a resolution of 3 × 128 × 128, embedding watermark messages (default L = 64) and assessing extraction accuracy across various noisy channels. The evaluation metrics, including the visual preservation score (PSNR) and decoding accuracy, are standard metrics commonly used in the field. Note that they also evaluate robustness against two watermark removal attacks, PRGAI and DiffPure, demonstrating the effectiveness of the watermark under adversarial cases.
Theoretical Claims: I reviewed Appendix A and E.2, where the authors demonstrate how the loss can be decomposed and grouped based on decoding correctness (Right/Wrong) and decoded value (+/-). The mathematical formulation appears sound, and no major issues were identified in the proofs.
Experimental Designs Or Analyses: The authors compare message decoding performance against four baselines: HiDDeN, MBRS, CIN, and FIN. The proposed method (lightweight model + DO) achieves performance comparable to the state-of-the-art (SOTA) method FIN, with slight improvements in visual preservation and decoding accuracy.
Supplementary Material: The authors provide the source code as supplementary material. Due to time constraints, I have not been able to review the source code in depth.
Relation To Broader Scientific Literature: I am not familiar with the broader scientific literature in this area, so I am unable to provide a detailed answer to this question.
Essential References Not Discussed: I did not identify any relevant works that are not referenced in the paper.
Other Strengths And Weaknesses: **Strengths**
- The paper is well-written and easy to follow overall.
**Limitations**
- The authors address limitations in the appendix: there is still room for improvement in the robustness of the watermarking against geometric distortion channels.
**Weakness**
- Some details remain unclear. Please refer to the Questions section.
Other Comments Or Suggestions: I believe that the subscript should be written in normal text—for instance, $L_{\text{deflation}}$ rather than $L_{deflation}$—when it represents a word. Moreover, using $\log$ instead of $log$ is much more preferred. To express the condition "and" in the formula more clearly, it would be better to use the intersection operator—such as $L^{−}_R \cap L^{>−ε}_R$— rather than $L^{−\text{and}>−ε}_R$.
Some typos were found in the paper:
1. L159, second column: The phrase "... first three terms of Equation (3) ..." seems incorrect, as this paragraph discusses the loss contributed by wrongly decoded bits. It should likely read "the last three terms."
2. L359, second column: The citation is missing the year.
Questions For Authors: 1. Is it possible to bound the output value of the decoder to the range [0, 1] using the $\text{tanh}$ function when using MSELoss? Currently, without the projection head, the output ranges from [-100, 100]. However, for BCELoss, a sigmoid function is typically added to restrict the co-domain of the decoder. Would applying a $\text{tanh}$ function with MSELoss produce a similar effect, or is there a specific reason for using the current approach?
2. I am confused about the term "Combined noise," as the performance of these perturbations is evaluated and reported separately. Please explain this term and the experiment design in more detail.
3. Referring to Section 4.3, the authors use Gaussian noise and a median filter as surrogates for diffusion-based watermark removal attacks for some reasons. Have you tried using a few-step diffusion model or Tweedie Estimation to avoid the multi-step sampling?
4. Referring to Table 4, the authors only compared the robustness of the baseline (MSE) and the proposed methods (PH and DO). What about prior works (HiDDeN, MBRS, CIN and FIN)?
5. Referring to Figure 5, it would be better to also show the watermarked image without an attack.
6. Referring to Table 10, the authors show the robustness when adjusting the noise layer with *less* perturbation. What about when the noise layer is adjusted with *more* perturbation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Notation and Typographical Corrections**
Thank you for your detailed review. We will ensure proper formatting of subscripts and expressions in the final version.
Regarding the typos, you are right that in L159 it should be "the last three terms," and the citation in L359 should be "2024". We will address these issues in the final version.
---
**To provide a comprehensive response, we have included some results in the anonymous link:**
https://anonymous.4open.science/r/Rebuttal-ICML-8393/Rebuttal_ICML2025_%208393.pdf
**Q1:** Thank you for your constructive suggestion! If we remove the projection head, we could indeed add a Tanh function after the decoder to constrain the output to the range (-1,1), as shown in Figure 3 (linked). However, we do not impose such a restriction on the decoder output because the success of decoding depends on the relationship between the decoded bits and the boundary value (0 for MSELoss, 0.5 for BCELoss), rather than on the magnitude of the output itself. For example, an output of 0.1 and 2 both indicate the same decoded bit '1' under MSELoss.
Adding a Tanh function would not alter the relative relationship between the decoded bits and the boundary (i.e., values greater than 0 remain positive, and values less than 0 remain negative). Consequently, as shown in Table 1 (linked), the final decoding accuracy remains unaffected, so we opt not to introduce an additional function.
For BCELoss, the sigmoid function is necessary because BCELoss is designed for probabilistic outputs. The sigmoid ensures that the outputs lie within the [0,1] range, representing valid probability distributions. In contrast, MSELoss does not require such a transformation, as it directly optimizes the squared error.
**Q2:** Thank you for your question. The term "Combined noise" was originally introduced by HiDDeN, and its purpose is to address the scenario where watermark images may encounter different types of distortions in real-world applications. Since it is difficult to predict which specific distortion may occur, the watermarking model needs to be trained to handle multiple types of noise simultaneously. In practice, during training, we employ a Combined Noise technique, where the model is exposed to a random noise layer in each mini-batch. This enables the model to learn robustness to multiple types of distortions at the same time. Additionally, during evaluation, we assess the model's robustness under different noise layers to demonstrate its general robustness. This approach is also widely used in watermarking models such as MBRS, CIN, and FIN.
**Q3:** Thank you for your insightful suggestion. Following your advice, we experimented with directly using a few-step diffusion model as a noise layer for training. The results are presented in Table 2 (linked).
As the number of steps (t) in the noise layer diffusion model increases, the robustness of the watermarking model against diffusion-based attacks also improves. However, with smaller t, the model’s performance remains inferior to surrogate method. We believe this is due to the difficulty of simulating large t attacks with smaller t. Additionally, using t=0.03 already demands substantial GPU memory and time, which limits our ability to explore larger values of t with our current resources.
Despite these challenges, we believe your approach holds significant potential. Developing efficient few-shot diffusion models to simulate larger t attacks is an important direction for future research. However, this goes beyond the scope of our current paper. Our main contribution lies in providing a lightweight watermarking model that demonstrates state-of-the-art performance across various distortions. We are confident that our model can soon be integrated with future diffusion noise layers, and we believe it can as an efficient and effective framework for the watermarking community.
**Q4:** Thank you for your valuable suggestion. We have included the results in Table 3 (linked). For your convenience, we summarize the findings below. Our method consistently performs best in most cases for the PRGAI attack, and it consistently ranks in the top two for the DiffPure attack.
**Q5:** Thank you for your insightful suggestion. We have included the watermarked image without an attack in Figure 1 (linked) and will update Figure 5 in the final version accordingly.
**Q6:** Thank you for your insightful suggestion. We conducted extensive experiments to evaluate the model's performance under stronger perturbations, and the results are provided in Table 4 (linked). For your convenience, we summarize the key findings below: Under stronger perturbations, both PH and DO methods remain competitive. Notably, DO outperforms other models across all distortions, except for Median Blur. This confirms the effectiveness of our methods and demonstrates that the robustness acquired from training with less perturbations generalizes well to more perturbations.
---
Rebuttal Comment 1.1:
Comment: After reviewing the authors' response, I suggest the paper's contribution valuable to the development of image watermarking overall. PH and DO provide comparable performance to prior works while using a lighter model design, which has potential to help future work improving performance.
There are still several question I am considering:
- Regarding the PH approach versus $\text{tanh}$: Would the model perform better or worse if **trained** without PH but with a $\text{tanh}$ function clipping the output to the range $[-1, 1]$, with MSELoss directly applied to the output of the $\text{tanh}$ layer, compared to the PH approach?
- The paper mentions **Combined Noise** during testing time, which could be misinterpreted as cascading several noisy layers as a noisy channel. Readers would benefit from a clearer specification of the evaluation configuration.
For future work, I suggest that:
1. Testing DO with complex watermarking models such as FIN would strengthen the research, since DO is applicable to other watermarking models trained with MSELoss or BCELoss. Note that some distortions (style-transfer based, such as crayon, heavy color, etc.) are not evaluated in this paper; demonstrating performance improvement with FIN and its resilience to heavier distortions would provide valuable insights.
2. Improving the robustness against diffusion-based watermark removal attacks is needed. Currently, Gaussian Noise + Median Filter works better than directly training with such attacks, as shown in the experimental results provided by the authors. However, it could be possible to address the challenge of incorporating a diffusion model with large edit strength ($t$) through a carefully designed training recipe (appropriate diffusion model, method to avoid multi-step denoising). Including discussion on adversarial attacks is crucial for robust watermarking.
---
Reply to Comment 1.1.1:
Comment: **We have included some results in the anonymous link:**
https://anonymous.4open.science/r/Rebuttal-ICML-8393/Round2_Rebuttal_ICML2025_%208393.pdf
**Question 1:**
As shown in Table 1(linked), applying a `tanh` function to constrain the decoder output indeed improves performance compared to using MSELoss alone. However, the performance remains inferior to our PH approach.
Our interpretation is as follows: while restricting the decoder’s output domain (e.g., via `tanh`) does help mitigate the impact of extreme values, it does **not** alleviate the influence of $L_{\text{inflation}}$
on already correctly decoded bits—similar to the use of `sigmoid` under BCE loss (see Appendix A.2). Without the PH block, the backbone alone must directly bear the burden of $L_{\text{inflation}}$, which consumes model capacity without contributing to decoding accuracy. In contrast, PH effectively offloads this burden, leading to better overall performance.
**Question 2:**
Thank you for your advice. In the final version, we will revise Section 4 (Experiments) to provide a clearer and more detailed description of the noise layers used during testing.
**Suggestion 1:**
Thank you for the valuable suggestion. We provide additional results in Table 2(linked) to evaluate the generalization of DO to complex watermarking models such as FIN.
We first applied DO to the original FIN architecture (8 INN blocks with 32-channel layers) and trained it using the same Combined Noise setting as our main paper. During testing, we evaluated its robustness against 6 white-box distortions and 4 black-box distortions mentioned in the FIN paper.
As shown in Table 2, using DO or MSE on the original FIN leads to comparable results across all distortions. We attribute this to the fact that the original FIN already over-parameterizes for MSELoss, and its INN block design is near-optimal. Thus, even when DO relieves parameter pressure, performance gain is limited.
To further test DO’s strength, we reduced the model size while preserving the INN block structure. Specifically, we built a lightweight "small FIN" by using 2 INN blocks and reducing the intermediate channel size from 32 to 2. We then trained small FIN using both MSE and DO losses.
As shown in the table, small FIN trained with MSE suffers significant degradation under both white-box and black-box distortions. In contrast, DO remarkably preserves performance, achieving results close to the original FIN. Notably, under the Sketch distortion, DO outperforms MSE by more than **3%**, demonstrating DO's effectiveness in enabling robust performance under reduced capacity and challenging distortions.
**Suggestion 2:**
Thank you for the valuable and forward-looking suggestions.
Diffusion-based Attacks:
Our contribution to defending against diffusion-based watermark removal lies in the proposal of using **Gaussian Noise + Median Filter**, which has proven to be an **efficient and effective** strategy. While we agree that a carefully designed training recipe for diffusion models (e.g., appropriate model choice and avoiding multi-step denoising) is promising, this direction goes beyond the core scope of this work.
Our main focus is to bridge the gap between commonly used decoding losses and actual decoding goals. We propose two generally applicable methods (DO and PH) that enable **lightweight models** to achieve **top-tier robustness and accuracy** across a wide range of distortions. The technical design of **lightweight AIGC-editing simulation layers** for robust training under diffusion-based attacks is an emerging area. To our knowledge, the most relevant recent work is the CVPR 2025 paper **OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking**.
Adversarial Attacks:
Regarding adversarial attacks, we explored adversarial training using **PGD attacks**. During training, we adopted PGD with parameters: `eps = 8/255`, `alpha = 2/255`, `iters = 5`, `norm = linf`. For evaluation, we used a stronger PGD with `iters = 1000`. As shown in Table 3(linked), while adversarial training slightly reduces robustness under the 6 white-box distortions, it **significantly enhances robustness** against adversarial attacks.
This reveals a **robustness trade-off** between traditional distortions and adversarial attacks. In practice, users can balance this by tuning the weight between adversarial loss and decoding loss during training. Designing more comprehensive adversarial training strategies to mitigate this trade-off will be a promising future direction. To achieve provable robustness, one could explore random smoothing methods (**e.g. Certified adversarial robustness via randomized smoothing**).
---
Thank you for the multiple rounds of comments. They have been helpful in improving the quality of our work. We have made every effort to add the necessary experiments and provide clear explanations, and we hope our responses address your concerns. | null | null | null | null | null | null | null | null |
Non-stationary Diffusion For Probabilistic Time Series Forecasting | Accept (spotlight poster) | Summary: This paper introduces NsDiff, a novel diffusion-based framework for probabilistic time series forecasting that explicitly addresses non-stationary uncertainty. Recognizing that conventional DDPMs rely on a fixed variance assumption from the additive noise model (ANM), the authors propose the integration of a Location-Scale Noise Model (LSNM) to allow the variance to vary with the input data. NsDiff combines a pre-trained conditional mean and variance estimator with an uncertainty-aware noise schedule that dynamically adapts noise levels at each diffusion step. Extensive experiments on nine real-world and synthetic datasets demonstrate that NsDiff significantly outperforms existing methods, such as TimeGrad and TMDM, especially in capturing changing uncertainty patterns. Although the paper provides thorough theoretical derivations and promising empirical results, some aspects—such as the integration of the pre-trained estimators and the robustness of the noise schedule—could benefit from further clarification.
1. Why does NsDiff not adopt a fully end-to-end joint optimization approach? Is it necessary to pre-train the two networks separately?
2. Figure 2 contains numerous curved lines. While I understand that this may reflect the authors’ intentional design style, it appears rather unappealing and should be modified.
3. Although the task focuses on probabilistic MTS forecasting, I recommend additionally reporting metrics such as MSE and MAE, since the mean is an important characteristic of the distribution.
Claims And Evidence: See the summary.
Methods And Evaluation Criteria: See the summary.
Theoretical Claims: See the summary.
Experimental Designs Or Analyses: See the summary.
Supplementary Material: See the summary.
Relation To Broader Scientific Literature: See the summary.
Essential References Not Discussed: See the summary.
Other Strengths And Weaknesses: See the summary.
Other Comments Or Suggestions: See the summary.
Questions For Authors: See the summary.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: Why does NsDiff not adopt a fully end-to-end joint optimization approach? Is it necessary to pre-train the two networks separately?
**A1**: yes, the networks can be trained jointly without large performance loss. Below is an example to train on ETTh1 with and without pretraining, where we report CRPS metric.
|epoch|pretrain|jointtrain|
|-------|----------|----------|
|1|0.4181|0.4407|
|2|0.4041|0.4227|
|3|0.3977|0.4045|
|4|0.3926|0.4004|
|5|0.3889|**0.3868**|
|6|**0.3795**|0.3873|
|MSE earlystop|||
As can be seen, although joint train experiences a slight performance degradation (1.86%), it still outperforms the previous state-of-the-art TMDM (0.452). However, compared to pretraining, co-training is slightly harder to converge. We will clarify this in the updated version.
**Q2**: Figure 2 should be modified, e.g. remove curved lines.
**A2**: Thanks for this suggestion, we will modify Figure 2 according to your advice in the lastest version.
**Q3**: Report additional metrics such as MSE/MAE.
**A3**: Thanks for your insight. We give the results on real and synthetic datasets at the following tables with an addtional baseline CSBI (as required by Reviewer gSNF), the settings are consistent with the main context. The repo is updated accordingly to include CSBI code.
We present the results of MSE/MAE of synthetic datasets at follows:
|Variance|Linear||Quadratic||
|-----------|------|---------|---------|---------|
|Models|MSE|MAE|MSE|MAE|
|TimeGrad|1.546|3.726|1.626|4.173|
|CSDI|1.516|3.641|1.546|3.768|
|TimeDiff|1.537|3.776|1.559|3.762|
|DiffusionTS|1.738|4.766|1.689|4.823|
|TMDM|1.514|3.639|1.493|3.568|
|NsDiff|**1.512**|**3.616**|**1.479**|**3.448**|
As seen in this Table, NsDiff still achieves SOTA in uncertainty variation conditions.
We present the results of MSE/MAE of real datasets at follows:
|Models|Datasets|ETTh1|ETTh2|ETTm1|ETTm2|ECL|EXG|ILI|Solar|Traffic|
|-------------|----------|--------|--------|--------|--------|--------|--------|--------|--------|---------|
|TimeGrad|MSE|0.813|1.496|0.831|0.967|0.504|1.058|1.414|0.446|0.535|
|(2021)|MAE|1.062|3.462|1.218|1.690|0.505|1.567|4.197|0.475|0.983|
|CSDI|MSE|0.708|0.900|0.752|1.069|0.822|1.081|1.481|0.675|0.925|
|(2022)|MAE|0.949|1.226|1.002|1.723|1.007|1.701|4.515|0.763|1.731|
|CSBI|MSE|0.634|0.820|0.757|0.636|0.783|0.897|1.438|0.651|0.848|
|(2023)|MAE|0.762|0.659|0.526|0.841|0.923|0.746|4.344|0.748|1.527|
|TimeDiff|MSE|**0.479**|**0.485**|0.477|**0.333**|0.764|0.446|1.169|0.713|0.784|
|(2023)|MAE|**0.517**|**0.456**|0.537|**0.268**|0.879|0.402|3.958|0.821|1.350|
|DiffusionTS|MSE|0.774|1.411|0.744|1.232|0.856|1.564|1.788|0.740|0.815|
|(2024)|MAE|1.089|3.273|1.030|2.372|1.072|3.628|6.053|0.749|1.473|
|TMDM|MSE|0.607|0.490|**0.455**|0.395|0.359|0.430|1.175|0.316|0.425|
|(2024)|MAE|0.696|0.512|0.494|0.315|0.257|0.334|3.636|0.250|0.679|
|NsDiff|MSE|0.523|0.490|**0.455**|0.352|**0.306**|**0.412**|**0.985**|**0.307**|**0.373**|
|(ours)|MAE|0.594|0.514|**0.488**|0.281|**0.209**|**0.300**|**2.846**|**0.242**|**0.637**|
Note: TimeDiff is a model speciffically designed for long-term point forecasting.
As in this Table, in datasets with high non-stationarity, NsDiff still achieves SOTA, attributed to the dynamic mean and variance endpoint and the uncertainty-aware noise schedule.
Here we give the results of an addtional baseline CSBI, where NsDiff still remains SOTA.
|Models|Datasets|ETTh1|ETTh2|ETTm1|ETTm2|ECL|EXG|ILI|Solar|Traffic|
|-------------|----------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
|CSBI|CRPS|0.552|0.571|0.502|0.491|0.585|0.659|1.109|0.498|0.875|
|(2023)|QICE|6.141|5.230|3.471|8.918|7.982|6.870|7.175|10.830|11.382| | Summary: This paper introduces a new probabilistic time series forecasting method based on non-stationary diffusion by estimation the step-wise means and variances. The proposed method is validated on different real-world datasets.
Claims And Evidence: No. Below are some of my concerns.
1)Estimating the variance of time series is numerically tricky. Please clarify how to predict the variances in a numerically stable way using pretraining in Sec.4.3. And what if the variance predicted via MLE is too large (when some spiky data points appear)? Using sliding windows might not address this issue perfectly.
2)From Algorithm 1, it seems that the proposed training method is in fact based on fine-tuning, which requires path sampling during training. Is it too computationally expensive for diffusion-based time series forecasting models?
Methods And Evaluation Criteria: No.
Apart from the results reported in Table 3, it would be more convincing if the authors could report the relevant RMSEs and MAEs to show the proposed method is an unbiased point-wise predictor. In my opinion, PICP and QICE may be more suitable for evaluating uncertainty quantification tasks than forecasting tasks.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
1)The CRPS and QICE may not be sufficient to evaluate the model’s forecasting performance.
2)The authors may also need to report the time efficiency and memory costs of the proposed training and inference procedures.
Supplementary Material: Yes, all the parts.
Relation To Broader Scientific Literature: The proposed method is largely built on the following paper.
Reference:
[1] Li, Y., Chen, W., Hu, X., Chen, B., Zhou, M., et al. Transformer-modulated diffusion models for probabilistic multivariate time series forecasting. In The Twelfth International Conference on Learning Representations, 2024.
The derivations in Appendix A in this paper (which is the core of the proposed method) is directly adapted from Appendix D in [1].
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1) The writing is clear and easy to follow.
2) Including uncertainty estimation for time series diffusion model is reasonable.
3) The relevant arithmetic prof regarding the computation of the step-wise variances is given.
Weaknesses:
1) The novelty of the proposed method is limited as it heavily builds upon the existing method [1].
2) The numerically stability of the proposed method is somewhat questionable.
3) The computational cost of proposed method is high compared to existing diffusion methods, e.g., CSDI.
4)Some important evaluation metrics for forecasting are missing, e.g., MAE and RMSE.
Other Comments Or Suggestions: Please include a detailed account of how the proposed method builds upon and differs from prior works, e.g., [1]., with appropriate citations; in the Appendix and main text.
Questions For Authors: Given the substantial computational cost of training this proposed method compared to other diffusion-based models, is the performance improvement significant enough to justify the expense?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1**: The differences from TMDM.
**A1**: We believe there are some misunderstandings regarding the relationship between our method and TMDM, particularly in the contributions and derivation aspects. We believe the differences from TMDM are clearly presented throughout the paper. To aid clarity, we list some key differences between NsDiff and TMDM along with parts of where they are discussed in the paper. We hope the following table helps the reviewer quickly identify and understand these distinctions with TMDM.
| differences| mentioned in|
| -- | -- |
| TMDM uses ANM assumption, while NsDiff uses LSNM assumption. | Figure 1, right to line 23-27.|
| TMDM uses $\mathcal{N}(f_\phi(x),\mathbf{I})$ endpoint while NsDiff uses $\mathcal{N}(f_\phi(x),g_\psi(x))$ endpoint | Figure 1, left to line 57-74. |
| TMDM uses traditional noise schedule, while NsDiff introduces uncertainty-aware noise schedule.| Section 4.6 gives how NsDiff can degenerated to TMDM. Table 5 further gives exp. results of different noise schedule.|
| TMDM does not optimize variance in loss function | Eq. 13 where TMDM has only the left term. |
| TMDM does not estimate reverse variance. | line 284 in Algorithms 2. |
| TMDM could not handle non-stationary variance | Table 3/Table 4 gives the exp. results on real/synthetic dataset, Figure 3/4 gives a visualized illustration.|
We believe that in the derivation presented in Appendix A, all the relevant parts listed in the table above are different from those in TMDM. For example, NsDiff infer the reverse distribution variance by solving Eq. 38, while TMDM does not introduce this step.
**Q2**: the method is based on fine-tuning, which requires path sampling during training. Is this computationally expensive for DDPM in time series?
**A2**: The fact is **no path sampling is required during training**. The training and inference procedures are the same as those of a standard DDPM, except for using a different endpoint and noise schedule. We only introduce some basic operations during training and inference, so the overall computational complexity remains. See **Q5** for experimental results.
**Q3**: What if the variance predicted via MLE is too large (when some spiky data points appear)? Using sliding windows might not address this issue perfectly.
**A3**: We do introduce a design to address this issue by using an uncertainty-aware noise schedule, which incorporates the true variance into the diffusion training process. This design reduces reliance on the variance predicted by the estimator (e.g., via MLE), which can be overly large in the presence of spiky or noisy data. See Table 5 for the experiments results, where it can be seen that by explicitly learning from the true variance, NsDiff becomes more robust to such cases, rather than relying solely on sliding window heuristics or estimator outputs.
**Q4**: The numerically stability of NsDiff.
**A4**: In NsDiff, the only potential numerical issue is the solvability of the equation in Eq. 15. We have already provided the conditions for solvability in Eq. 17. As stated in the paper (lines 269-272), this equation always has a solution. Therefore, theoretically, **NsDiff does not introduce additional numerical instability issues. In practice, no numerical instability has been observed in our experiments.** We kindly invite reviewers to check and run our code to verify this.
**Q5**: The efficiency and memory costs in training and inference phases vs. performance improvements.
**A5**: As shown in the following table, compared to TMDM, NsDiff achieves SOTA and has smaller memory costs and higher efficiency. This is because NsDiff does not introduce additional hidden variables and only adds a small number of basic operations.
|Model|Mem.Train(MB)|Mem.Inference(MB)|Tim.Train(ms)|Tim.Inference(ms)|CRPS|QICE|
|----|---|--|---|--|--|--|
|TimeGrad|27.47|8.61|47.89|8319.29|0.606|6.731|
|CSDI|109.81|22.61|60.50|446.70|0.492|3.107|
|TimeDiff|**15.66**|**3.40**|33.93|238.78|0.465|14.931|
|DiffusionTS|65.03|79.23|94.51|8214.53|0.603|6.423|
|TMDM|221.58|213.46|33.26|237.37|0.452|2.821|
|NsDiff|68.20|57.75|**32.13**|**208.07**|**0.392**|**1.470**|
The results are tested on ETTh1, with 100 diffusion steps.
**Q6**: Some metrics are missing, e.g., MAE.
**A6**: NsDiff still achieves SOTA on these metrics, see **Reviewer ZzLi Q3**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. My recommendation score has been updated. | Summary: In this paper, the authors considered modeling the uncertainty quantification when applying diffusion models to time-series forecasting tasks. In the beginning, the authors first demonstrated a toy case study that the DDPM may not perform well on uncertainty prediction tasks due to the traditional Additive Noise Model (ANM) scheme. After that, the authors designed the location scale noise model to alleviate this issue, and proposed the Non-stationary Diffusion Model (NsDiff) framework. In the NsDiff, the authors redesigned the forward noise process and reformulated the backward generation process rigorously. Finally, the authors conducted various experiments to demonstrate the efficacy of the proposed approach.
Claims And Evidence: The claims made in this submission are supported by rigorous and convincing evidence. However the reviewer has the following two issues:
1. On page 3, Eq. 1. Given that diffusion models' inference is solving the SDE/ODE, should this part be given as the integral form?
2. In Figure 2, the proposed paper, to the reviewer's understanding mainly focuses on the uncertainty prediction of time series. However, the picture during the inference stage, does not include the uncertainty interval.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for demonstrating the derivation of convergence analysis. Nevertheless, the reviewer still has the following concerns:
1. To the best of the reviewer's knowledge, the authors attempt to transform some predicted value (distribution 1) to another similar distribution, which includes the uncertainty information. Based on this, to the reviewer's understanding, this problem can be treated as a kind of Schrodinger bridge problem. In addition, related works have applied the Schrodinger bridge [1] in the imputation procedure, which is similar to the CSDI model.
2. Regarding the evaluation criteria, to the best of the reviewer's knowledge, time-series forecasting mainly focuses on prediction accuracy. The authors have not included related evaluation metrics like mean square error or mean absolute error [2]. It would be better to clarify this issue.
---
References:
[1]. Provably Convergent Schrodinger Bridge with Applications to Probabilistic Time Series Imputation, ICML-2023
[2]. Transformers in Time Series: A Survey
Theoretical Claims: The reviewer attempts to check the derivation of the theorem and it seems that nearly all the derivations look good to the best of the reviewer's knowledge.
Experimental Designs Or Analyses: 1. In the supplementary material, Figure 5, the TMDM model can perform well under ETT datasets compared to the proposed NsDiff, what causes this result?
2. The computational time has not been proposed.
3. Is it possible to apply the diffusion model solvers like DPM solver during the model inference stage?
4. As mentioned above, the baseline comparison lacks related baseline models like Schrodinger bridge imputation models.
---
References:
[1]. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, NeurIPS 2022
Supplementary Material: The reviewer reviews the supplementary material. It would be better add an illustration during the derivation of Eq. 21.
Relation To Broader Scientific Literature: The diffusion model is of great importance and applying the diffusion model to model the uncertainty information in the context of time-series forecasting is of great necessity. Thus, the key contributions of the paper related to the broader scientific literature are of great important.
Essential References Not Discussed: As mentioned above, there remain the related works from the following two aspects have not been discussed:
1. Bridge-based Models: The diffusion models designed by diffusion bridges [1,2] have not been well discussed. The DDPM can be treated by a special kind of Ornstein–Uhlenbeck bridge [3].
2. The noise editing: It seems that the noise was modified to delineate the predicted time-series result. Thus, related works on noise selection [4] could be considered to some extent.
---
References:
[1]. Provably Convergent Schrodinger Bridge with Applications to Probabilistic Time Series Imputation, ICML-2023
[2]. Flow Matching for Generative Modeling, ICLR 2023
[3]. Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge, ICML 2024
[4] Xiefan Guo etal, Initno: Boosting text-to-image diffusion models via initial noise optimization. CVPR, 2024
Other Strengths And Weaknesses: ### Strengths
1. The topic is related to the ICML conferences.
2. The proposed approach is interesting.
3. The derivation is rigorous.
### Weaknesses
1. Major weaknesses have been listed in the abovementioned items.
2. To the reviewer's knowledge, the initial value is of great importance for solving ODE. What would happen if the pre-trained model does not predict well (in the context of ODE, we call it stiffness [1])? It would be better to demonstrate the results under various random seeds to demonstrate the robustness of the proposed approach.
3. The convergence of the proposed approach has not been discussed.
---
References:
[1]. Numerical Methods for Ordinary Differential Equations
Other Comments Or Suggestions: See the abovementioned chat window.
Questions For Authors: See the abovementioned chat window.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: should Eq. 1. be given as the ODE/SDE integral form?
**A1**: Thanks for the insight. However, we believe the reviewer may be referring to Eq. 7 instead of Eq. 1: Eq. 1 describes the LSNM and is not a stochastic process, so it cannot be written as SDE; Eq. 7 defines the data perturbation process and can be written as SDE. We give a theoretical discussion as follows.
First, the Euler-Maruyama discrete form of NsDiff is
$$
\mathbf{Y}(t+\triangle t) = Y(t) -\frac{1}{2}\beta(t)(\mathbf{Y}(t) - f_\phi(\mathbf{X}))\triangle t + \sqrt{\sigma_{Y_0} - \beta(t)(\sigma_{Y_0} -g_\psi(\mathbf{X}))\triangle t} \sqrt{\beta(t)\triangle t}\mathbf{z}(t)
$$
The diffusion coefficient is related to the $\triangle t$, which is non-Itô-integrable, making it intractable to define a clean continuous-time reverse SDE for the process. To give more theoretical insight, we resort to the perfect estimator assumption (i.e., $\sigma_{Y_0}=g_\psi(\mathbf{X})$) and give the forward/reverse SDE as follows:
$$
d\mathbf{Y} = -\frac{1}{2}\beta(t)(\mathbf{Y} - f_\phi(\mathbf{X}))dt + \sqrt{g_\psi(\mathbf{X})\beta(t)}d\mathbf{w}
$$
$$
d\mathbf{Y} = [-\frac{1}{2}\beta(t)(\mathbf{Y} - f_\phi(\mathbf{X}))-g_\psi(\mathbf{X})\beta(t)\nabla_\mathbf{Y}\log p_t(\mathbf{Y})]dt + \sqrt{g_\psi(\mathbf{X})\beta(t)}d\mathbf{\bar w}
$$
We will include more detail in the latest version.
**Q2**: improve Figure 2 and the description of Eq. 21.
**A2**: Thanks for this suggestion, we will revise the paper according to your advice.
**Q3**: SB problem and include SB-based baselines.
**A3**: Indeed, the problem can be viewed as a SB problem. Taking CSBI as an example, we can introduce an uncertainty-aware prior for it by replacing its $p_\text{prior}$ with the prior of NsDiff $\mathcal{N}(f_\phi(X),g_\psi(X))$, to combine the advantages of NsDiff. Compared to CSBI, NsDiff can estimate the true variance $\sigma_{Y_0}$ by $\sigma_\theta$ (see App. A.4) to learn uncertainty more accurately (as reflected in the results of Sec. 5.4). We include a SB-based baseline CSBI in Table 3 and 4, see **Revewer ZzLi Q3**.
**Q4**: MSE/MAE, computational time.
**Q4**: NsDiff still achieves SOTA on MSE/MAE. Furthermore, compared to previous SOTA, NsDiff has smaller memory costs and higher efficiency. This is because NsDiff does not introduce additional hidden variables and only adds a small number of basic operations.
**Q5**: Figure 5, the TMDM model seems to perform better, why?
**A5**: In Figure 5, **TMDM actually performs worse than NsDiff**. Of course, the difference is less significant compared to Traffic as ETT datasets have relatively small uncertainty variation as in Table 2 (e.g. 1.2 ETTm1, 1.3 ETTm2). Specifically, Figure 5 shows TMDM produces a less accurate mean prediction, leading to larger MAE and MSE than Nsdiff; and, TMDM's predictions do not sufficiently cover the true values, resulting in poorer CRPS and QICE scores.
**Q6**: Is it possible to apply the diffusion model solvers during inference stage?
**A6**: Yes, following A1, NsDiff has the following ODE form:
$$
d\mathbf{Y} = [-\frac{1}{2}\beta(t)(\mathbf{Y} - f_\phi(\mathbf{X}))- \frac{1}{2}g_\psi(\mathbf{X})\beta(t)\nabla_\mathbf{Y}\log p_t(\mathbf{Y})]dt
$$
where the ODE follows a semi-linear structure, remaining compatible with DPM-Solver. However, since time series tasks typically require only a few steps (less than 100) for effective performance, the inference efficiency is already sufficient. Therefore, we do not recommend applying DPM-Solver at the cost of prediction accuracy, given an additional assumption in A1.
**Q7**: Bridge-based Models [1-3] and noise editing [4] should be discussed.
**A7**: Thanks for these references. we have discussed SB-based methods in Q3. The noising editing method is interesting and relevant. While NsDiff gives a distribution-wise improvement on the initial noise, the given paper consider a sample-wise perspective to optimize the initial noise to a more reasonable space. We will provide more discussion in the latest version.
**Q8**: What would happen if the pre-trained model does not predict well (stiffness). Include various random seeds to demonstrate the robustness.
**A8**: NsDiff incorporates the true variance $\sigma_{Y_0}$ into the learning process to alleviate this stiffness problem. Although the pre-trained model $g_\psi(X)$ may not predict well, NsDiff use Eq. 18 to estimate the true variance. As shown in Table 5 of the ablation experiments (we report results mean and std on various seeds), incorporating this variance estimation improves both the performance and the robustness of our method.
**Q9**: The convergence of NsDiff has not been discussed.
**A9**: We acknowledge that the convergence analysis is a critical problem, which is missing in basically all previous works, e.g. TimeGrad, TMDM etc. We will explore this more in future work.
---
Rebuttal Comment 1.1:
Comment: The reviewer appreciates the authors' detailed and thoughtful rebuttal. However, the reviewer would like to suggest a few additional revisions to further strengthen the rigor and clarity of the work:
1. It would enhance the rigor of the manuscript to include a demonstration of convergence results across epochs.
2. Based on the descriptions of Algorithm 1 and Algorithm 2, it seems beneficial to summarize them into a combined Algorithm 3. This would help rectify and streamline the workflow of the proposed approach.
3. Since Table 5 presents experiments conducted with various seeds, it would be more robust to include a paired-sample $t$-test to statistically validate the results.
---
Reply to Comment 1.1.1:
Comment: **Q10**: a demonstration of convergence results across epochs.
**A10**: According to your advice, we provide a figure visual illustration with train loss/test result across epochs of $f_\phi(x)$, $g_\psi(x)$, NsDiff at https://1drv.ms/i/c/f104f0574e8cb377/EQz0OZHb9ahKtv2ZU9tA5HIBVklJYSOJVnRlFwQjq42HLw?e=MUC2Rn.
As can be seen, NsDiff could help improving the uncertainty prediction by combining two mean/variance estimators. we would include the figure in the updated version to demonstrate the convergence.
**Q11**: summarize Algorithm 1,2 into a combined Algorithm 3.
**A11**: Thanks for this advice, we would provide a comprehensive Algorithm 3 in the updated version.
**Q12**: paired-sample t-tests for ablation exp. in Table 5.
**A12**: We thank the reviewer for this valuable suggestion. The results evident and well-supported by visualized results (Figures 3 and 4), which is why we didn’t initially consider statistical testing. To ensure reliability across experiments, we used identical seeds (1, 2, 3) throughout the paper. To address your concern, we support the statistical analysis with addtional seeds [1, 2, 3, 4, 5, 6] for the paired-sample t-tests. The results of these tests are summarized in the table below:
| **Comparison** | **t-statistic** | **p-value** |
|------------------------------------|-------------|----------|
| **CRPS: NsDiff vs w/o LSNM** | -3.4549 | 0.0181 |
| **CRPS: NsDiff vs w/o UANS** | -3.9949 | 0.0104 |
| **QICE: NsDiff vs w/o LSNM** | -3.0978 | 0.0269 |
| **QICE: NsDiff vs w/o UANS** | -4.2117 | 0.0084 |
Both CRPS and QICE comparisons for NsDiff against the ablation variants (w/o LSNM and w/o UANS) yielded statistically significant results, indicating a consistent performance advantage of the full NsDiff model. We will include these updated results in the revised version of the paper. | Summary: The paper introduces a novel diffusion-based probabilistic forecasting framework, called NsDiff, which is designed to address the non-stationary nature of uncertainty in time series data. Traditional Denoising Diffusion Probabilistic Models (DDPMs) typically rely on an Additive Noise Model (ANM) with fixed variance, limiting their ability to capture the dynamic uncertainty observed in many real-world applications. To overcome this limitation, the authors propose incorporating a Location-Scale Noise Model (LSNM) that allows the noise variance to vary with the input context. The authors also try to validate their results in a wide range of datasets with comparisons to some existing methods.
Claims And Evidence: I think the claims made in the submission are clear and well-supported by the evidence detailed in the main text and the supplementary material. It is nice that the authors open-sourced their code.
Methods And Evaluation Criteria: I think the proposed methods make sense for the applications of probabilistic forecasting of non-stationary systems.
Theoretical Claims: I looked at the proof in Appendix A, which is straightforward and easy to follow.
Experimental Designs Or Analyses: I think the experiments done by the authors are impressive and comprehensive as they test the proposed method on many real-world datasets of different nature and dimensionalities.
Supplementary Material: I reviewed the proof and the experimental details in the supplementary material.
Relation To Broader Scientific Literature: I think the proposed method has a potential of being applied to many scientific applications, including weather forecasting, forecasting of physical systems (e.g. fluid dynamics), and epidemics.
Essential References Not Discussed: I believe the paper would benefit from incorporating several additional essential references that provide valuable context and complementary perspectives. For example:
[1] Chen, Y., Goldstein, M., Hua, M., Albergo, M. S., Boffi, N. M., & Vanden-Eijnden, E. (2024). Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes. arXiv preprint arXiv:2403.13724.
This work is relevant because it also leverages diffusion models for probabilistic predictions, proposes innovative loss functions for optimizing noise schedules, and offers flexible alternatives for the base measure within diffusion models.
[2] Jiang, R., Lu, P. Y., Orlova, E., & Willett, R. (2023). Training Neural Operators to Preserve Invariant Measures of Chaotic Attractors. Advances in Neural Information Processing Systems, 36, 27645–27669.
This reference is pertinent as it addresses similar applications where noise induces non-stationarity in observations and presents methods to enhance long-term prediction accuracy.
Incorporating these references would strengthen the discussion by situating the proposed work within the broader context of recent advances in diffusion models and long-term forecasting under non-stationary conditions.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I don't have any questions for authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: additional key references [1-2].
**A1**: Thanks for your recognition of our work. We agree that the references [1–2] provide meaningful context and will help strengthen our discussion. In particular, we find several aspects of these works especially relevant to our setting:
The work by Chen et al. [1] proposes a novel probabilistic forecasting framework based on Föllmer processes and stochastic interpolants. Their approach to learning noise schedules through tailored loss functions resonates with our motivation to adapt diffusion endpoints for better uncertainty modeling, especially under non-stationary conditions.
Jiang et al. [2] address the challenge of long-term forecasting in chaotic systems by preserving invariant measures. Their use of contrastive learning to stabilize dynamics over time without requiring domain-specific priors is an inspiring direction that aligns with our interest in modeling non-stationary behavior robustly.
We will cite and briefly discuss these works in the revised version to better contextualize our contributions within the broader landscape of diffusion-based and non-stationary forecasting techniques.
[1] Chen, Y., Goldstein, M., Hua, M., Albergo, M. S., Boffi, N. M., & Vanden-Eijnden, E. (2024). Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes. arXiv preprint arXiv:2403.13724.
[2] Jiang, R., Lu, P. Y., Orlova, E., & Willett, R. (2023). Training Neural Operators to Preserve Invariant Measures of Chaotic Attractors. Advances in Neural Information Processing Systems, 36, 27645–27669. | null | null | null | null | null | null |
Enhancing the Influence of Labels on Unlabeled Nodes in Graph Convolutional Networks | Accept (poster) | Summary: The paper focuses on the impact of labels on unlabeled nodes, the authors propose that the label information is not always effectively utilized in the traditional GCN framework. Thus, this paper proposes the ELU-GCN to solve this issue. First, the paper proposed a new objective function to ensure the graph structure effectively propagates label information under the GCN framework. Second, the paper proposes new contrastive learning to capture consistent and mutually exclusive information between the two graphs.
Claims And Evidence: - The experimental results demonstrate that ELU-GCN outperforms existing methods on multiple datasets, particularly on heterophilic graphs like Chameleon.
- The motivation is reasonable.
- Theoretical proof is basically reasonable.
Methods And Evaluation Criteria: - The proposed two-stage method is clear and promising.
- The Woodbury identity trick effectively reduces the computational complexity of ELU-graph, making the framework more efficient.
- The paper uses 11 public datasets, covering both homophilic and heterophilic graphs, making the evaluation fairly comprehensive.
Theoretical Claims: - The theoretical analysis appears mathematically sound.
Experimental Designs Or Analyses: Strength:
- The experimental setup and dataset selection are reasonable.
Weakness:
- Lack of baselines. Since the authors designed a new contrastive learning method, it is necessary to compare it with the contrastive learning method.
Supplementary Material: - The supplementary material is complete and high quality, providing detailed experimental data, mathematical derivations, and algorithmic details.
Relation To Broader Scientific Literature: Strength:
- The paper provides a well-structured overview of LU-GCN and the differences and advantages over previous methods are also explained.
Weakness:
- The discussion on contrastive learning in graphs is somewhat lacking.
Essential References Not Discussed: - The essential references are discussed.
Other Strengths And Weaknesses: Other Strengths:
- The issue the paper focuses on is interesting i.e., Enhancing the use of label information by unlabeled samples.
- Overall, the paper is complete and inspiring.
Other Weaknesses:
- The proposed Eq. (12) is a paradigm of contrastive learning. The author should further discuss its difference and connection with contrastive learning loss such as InfoNCE.
- The proposed method seems a bit complicated, therefore I am concerned about its reproducibility.
Other Comments Or Suggestions: - There are some minor errors that need to be carefully checked, such as whether the comma or period at the end of the formula is correct.
- The paper directly mentions LPA and GCN, but there is no introduction to them. The author should briefly introduce them in the notation or appendix.
Questions For Authors: See weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the positive comments. We are so encouraged and will try our best to address the concerns one by one. All changes here will be found in the final version.
>Q1: Lack of baselines. Since the authors designed a new contrastive learning method, it is necessary to compare it with the contrastive learning method.
A1: We acknowledge the importance of comparing our method with existing contrastive learning methods. To address this, we compared the typical graph contrastive learning GRANCE[1] and the recent graph contrastive learning method (SGCL[2]). We can observe that our method consistently outperforms the two contrastive learning baselines across all datasets.
[1] Deep graph contrastive representation learning. ICML'20
[2] Rethinking and Simplifying Bootstrapped Graph Latents. WSDM'24
| Datasets | Cora | Citeseer | pubmed | Computers | Photo | Chameleon | squirrel |
|----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| GRACE | 83.30±0.40 | 72.10±0.50 | 79.86±0.12 | 81.86±3.86 | 88.72±2.07 | 46.75±2.47 | 38.16±3.14 |
| SGCL | 83.54±0.40 | 72.58±0.25 | 80.09±0.53 | 81.67±2.59 | 89.07±2.45 | 55.36±1.28 | 42.34±1.47 |
| ELU-GCN | **84.29±0.39** | **74.23±0.62** | **80.51±0.21** | **83.73±2.31** | **90.81±1.33** | **70.90±1.76** | **56.91±1.81** |
>Q2: The discussion on contrastive learning in graphs is somewhat lacking. The proposed Eq. (12) is a paradigm of contrastive learning. The author should further discuss its difference and connection with contrastive learning loss, such as InfoNCE.
A2: A key difference is that in InfoNCE, the same node in different graph views is treated as a positive sample, while different nodes are treated as negative samples. In contrast, Eq. (12) distinguishes ELU nodes and NELU nodes, ensuring that ELU nodes maintain consistency across graphs while NELU nodes are pushed apart. This means that Eq. (12) incorporates structural information specific to label influence in GCNs, rather than relying solely on node (instance) discrimination as in InfoNCE. We will clarify this point in the revised version.
>Q3: The proposed method seems a bit complicated, therefore I am concerned about its reproducibility.
A3: In fact, our method is simple to implement in practice. The computation of the ELU-graph is a parameter-free process, and the Woodbury trick significantly accelerates matrix operations. Moreover, we have already provided the code link in the supplementary material and commit to open-sourcing it to ensure full reproducibility.
>Comment: There are some minor errors that need to be carefully checked, such as whether the comma or period at the end of the formula is correct.
Thank you for your careful review. We will thoroughly check and correct any minor errors, including punctuation issues in the formulas.
>Comment: The paper directly mentions LPA and GCN, but there is no introduction to them. The author should briefly introduce them in the notation or appendix.
Thank you for your suggestion. We will add brief introductions to LPA and GCN in the notation section or appendix to improve clarity. | Summary: This paper introduces ELU-GCN which enhances label utilization in GCNs. First, it constructs an ELU graph to optimize label influence on unlabeled nodes. Then, a contrastive loss is designed to enhance representation learning by integrating information from both the ELU graph and the original graph. The experiments on multiple datasets show its superiority over existing approaches.
Claims And Evidence: The claims in the paper are supported by both experimental and theoretical evidence. The authors have validated the effectiveness of the proposed method on multiple datasets. However, some of the assumptions in the paper may be problematic. Specifically, the core idea is to ensure that the prediction of GCN is consistent with the output of LPA. However, this may be problematic—Why not constrain the GCN output to align with the LPA during training? Moreover, this approach does not seem to be effective.
Methods And Evaluation Criteria: The proposed method is feasible, and its effectiveness has been validated on a total of 11 datasets.
Theoretical Claims: The proof of the theory seems to be sound.
Experimental Designs Or Analyses: The authors verified the effectiveness of the method on multiple benchmark datasets and conducted detailed comparative experiments and statistical analysis. However, some details of the experimental settings are missing, such as learning rate, weight decay, number of hidden units, etc.
Supplementary Material: The supplementary material is relatively complete and includes reproducible code.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the existing literature, such as GCN-LPA and graph contrastive learning. Building on a thorough citation of relevant research, the authors propose a new method that addresses the issue of label information not being effectively utilized by GCNs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Besides the strengths and weaknesses mentioned above, the paper has the following additional strengths and weaknesses.
Other Strengths:
- The paper is well-written.
- The experiment is quite sufficient.
Other weaknesses:
- The subscript notation of $\mathbf{Y}$ is confusing. In Notation, $\mathbf{Y}_{l}$ is defined as the training set label, and then becomes $\mathbf{Y}$ in Eq. (13) and Theorem 2.3. Please check and unify.
- The S* calculated by Eq. (11) looks like a dense matrix (almost no zeros), so it consumes a lot of memory and time for subsequent matrix multiplication.
- The authors mention heterophily graphs but do not give any definition.
Other Comments Or Suggestions: Please check the citation format and distinguish the use of \cite and \citet. For example, in the last sentence of the second paragraph of the Introduction, Bi et al should use \citet instead of \cite.
Questions For Authors: I would like to ask whether this framework can be used for GAT.
Ethical Review Concerns: NO
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the positive comments. We are so encouraged and will try our best to address the concerns one by one. All changes here will be found in the final version.
>Q1: The core idea is to ensure that the prediction of GCN is consistent with the output of LPA. However, this may be problematic—Why not constrain the GCN output to align with the LPA during training? Moreover, this approach does not seem to be effective.
A1: First, we would like to clarify that our method does not simply force GCN predictions to align with LPA; rather, LPA outputs should also align with GCN predictions. Therefore, directly constraining the GCN output to match LPA is inappropriate. Instead, our approach aims to find a new graph structure that naturally ensures consistency between GCN and LPA outputs, allowing the prediction of GCN to utilize label information effectively (see Section 2.1).
>Q2: Some details of the experimental settings are missing, such as learning rate, weight decay, number of hidden units, etc.
A2: We set the weight decay to
5e-4. The learning rate was selected from [0.01, 0.02] for all datasets, and the number of hidden units was chosen from [4, 8, 64, 128]. We will include these details in the revised version of the paper to ensure clarity and completeness.
>Q3: The subscript notation of $\mathbf{Y}$ is confusing. In Notation, $\mathbf{Y}_{l}$ is defined as the training set label, and then $\mathbf{Y}$ becomes in Eq. (13) and Theorem 2.3. Please check and unify.
A3: Thank you for your careful observation. We acknowledge the inconsistency in the subscript notation of
$\mathbf{Y}$. In the notation section,
$\mathbf{Y} _{l}$ refers to the training set labels, and the same meaning should be maintained in Eq. (13) and Theorem 2.3. We will revise the notation to ensure consistency and clarity in the next version.
>Q4: The $\mathbf{S}^{*}$ calculated by Eq. (11) looks like a dense matrix (almost no zeros), so it consumes a lot of memory and time for subsequent matrix multiplication.
A4: You are correct that
$\mathbf{S}^*$ computed from Eq. (11) is a dense matrix, which could lead to high memory and computational costs for subsequent matrix multiplications. However, we apply a sparsification process to $\mathbf{S}^*$ to mitigate this issue. Specifically, $ \forall i,j$ where $ |\mathbf{S}^* _{i,j} | < \eta$, we set $|\mathbf{S}^* _{i,j} | = 0$, while elements with $ |\mathbf{S}^{*} _{i,j} | > \eta$ remain unchanged, where $\eta$ is a non-negative parameter that we usually set to correspond to the top 10 percent of element values. The details of this process are already provided in the appendix.
>Q5: The authors mention heterophily graphs but do not give any definition.
A5: We acknowledge the omission and will provide a clear definition of heterophily graphs in the revised version. Generally, heterophily graphs refer to graphs where connected nodes tend to have different labels, in contrast to homophily graphs where connected nodes are more likely to share the same label.
>Q6: I would like to ask whether this framework can be used for GAT.
A6: Although our method is specifically designed for GCN, we have added experiments applying the ELU graph to GAT.
| Datasets | Cora | Citeseer | Pubmed | Computers | Photo | Chameleon | Squirrel |
|----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| GAT | 83.03±0.71 | 71.54±1.12 | 79.17±0.38 | 78.01±19.1 | 85.71±20.3 | 40.72±1.55 | 30.26±2.50 |
| ELU-GAT | **84.89±0.39** | **74.53±0.49** | **80.23±0.41** | **80.36±5.36** | **88.43±1.85** | **60.17±2.14** | **50.47±1.04** |
The results show that the ELU graph also positively impacts GAT, demonstrating its effectiveness for GAT as well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's rebuttal, which addressed most of my concerns. I am happy to raise my rating from 2 to 3. | Summary: This paper proposed a new GCN framework called ELU-GCN, which aims to better propagate the label information to unlabeled nodes. First, by analyzing which situation can achieve effective label utilization for unlabeled nodes, the authors proposed an objective function that can guide the GCN to effective label utilization. Then, a graph regularization is designed to capture the consistency and mutually exclusive information between the original and the ELU graphs.
Claims And Evidence: The majority of the claims are well-supported by empirical results and theoretical analysis.
Methods And Evaluation Criteria: The proposed ELU-GCN makes sense for the GNN field. The selected datasets include both homophilic and heterophilic graphs, making the evaluation relatively comprehensive.
Theoretical Claims: This paper provides additional theoretical analysis, and the theoretical proof is complete and reasonable.
Experimental Designs Or Analyses: I have reviewed the experimental design and analyses, and while they are generally sound, there are some areas that could be improved. For example, the real SOTA GNN mode such as GCNII[1] needs to be compared.
[1] Chen M, Wei Z, Huang Z, et al. Simple and deep graph convolutional networks.ICML.
Supplementary Material: I reviewed the Supplementary Material, the theoretical proof and experimental setup are complete.
Relation To Broader Scientific Literature: GCN-LPA also studies the impact of labels, but the author claims the differences and Strengths of GCN-LPA in the paper.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
1. This paper is well-written and clearly structured.
2. The proposed ELU-GCN is novel.
Weaknesses:
1. Figure 2 gives the experimental conclusion very abruptly. The specific experimental details of Figure 2 need to be supplemented.
2. There are many hyperparameters in this paper. Although the author has analyzed the sensitivity of some hyperparameters, some important parameters have not been analyzed, such as the number of iterations used to calculate the ELU graph.
3. In Eq. 12, it is necessary to explain what specific distance function is used and
why it is chosen.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: From Theorem 2.3, can this also be understood as: if an adjacency matrix A makes LPA (AY) perform better, then it would also be better for GCN? Would this inspire the use of a more lightweight and parameter-free LPA instead of GCN to find a better graph structure?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the positive comments. We are so encouraged and will try our best to address the concerns one by one. All
changes here will be found in the final version.
>Q1: The real SOTA GNN mode such as GCNII[1] needs to be compared. [1] Chen M, Wei Z, Huang Z, et al. Simple and deep graph convolutional networks.ICML.
A1: We have added a comparison with GCNII in the updated experiments as following table. The results show that, on the Cora dataset, GCNII performs a little bit better, but on the other six datasets, our method outperforms GCNII. We will include these results in the next version for further clarification.
| Datasets | Cora | Citeseer | pubmed | Computers | Photo | Chameleon | squirrel |
|----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| GCNII | **85.49±0.52** | 73.41±0.63 | 80.28±0.41 | 82.53±4.02 | 87.48±2.14 | 62.48±2.54 | 48.17±2.04 |
| ELU-GCN | 84.29±0.39 | **74.23±0.62** | **80.51±0.21** | **83.73±2.31** | **90.81±1.33** | **70.90±1.76** | **56.91±1.81** |
>Q2: Figure 2 gives the experimental conclusion very abruptly. The specific experimental details of Figure 2 need to be supplemented.
A2: Thank you for your suggestion. The experimental setup for Figure 2 follows the same configuration outlined in Section 3. The specific details of the experimental setup can be found in the appendix. We will enhance the explanation of this section in the next version to provide more clarity.
>Q3: In Eq. 12, it is necessary to explain what specific distance function is used and why it is chosen.
A3: In Eq. 12, various distance functions, such as Euclidean distance and inner product, can be considered. Based on experimental results, we found that Euclidean distance performed better in our case.
>Q4: From Theorem 2.3, can this also be understood as: if an adjacency matrix A makes LPA (AY) perform better, then it would also be better for GCN? Would this inspire the use of a more lightweight and parameter-free LPA instead of GCN to find a better graph structure?
A4: Yes, I agree with your opinion. Based on Theorem 2.3, if a graph structure $\mathbf{A}$ enables LPA to achieve better performance, it means that GCN can achieve better generalization ability on this graph structure $\mathbf{A}$. This suggests that a lightweight and parameter-free LPA can serve as a criterion for evaluating graph structures or even as an objective function in graph structure learning. In fact, our method can also be seen as optimizing the graph structure $\mathbf{A}$ to improve LPA performance as much as possible. | Summary: The paper proposes ELU-GCN, a two-stage method. First, it constructs the ELU graph, which enables the message passing in GCN to utilize label information more effectively. After that, a contrastive loss is designed to fuse information between the original graph and the ELU graph.
Claims And Evidence: The paper claims that ELU-GCN enhances label utilization in GCNs through adaptive graph construction and contrastive learning, with experimental results and the paper’s analysis generally supporting this claim.
Methods And Evaluation Criteria: The proposed method is based on mathematical derivations and effectively addresses complexity issues encountered in the process, making it a reasonable approach. The evaluation criteria are appropriate for studies in this field.
Theoretical Claims: The paper provides interesting theoretical claims and the proofs appear sound, but there are still some limitations. For example, the $Y_{true}$ is actually unknown to us, thus this may limit the applicability of the theory.
Experimental Designs Or Analyses: The experimental design is reasonable, validating the proposed model from the perspectives of effectiveness, ablation study, runtime analysis, etc. Notably, the visualization of the key ELU graph helps provide a clearer understanding of the ELU graph.
Supplementary Material: The supplementary material is sufficient, including code links.
Relation To Broader Scientific Literature: The paper builds upon graph structure learning, further considering and enhancing the influence of label information on unlabeled nodes. Additionally, it introduces the new contrastive learning paradigm to improve the integration of learned graph structures with the original graph information.
Essential References Not Discussed: As far as I know, no essential references have been overlooked.
Other Strengths And Weaknesses: Strengths:
1. The paper studies an interesting problem: promoting the positive impact of label information on unlabeled nodes.
2. The proposed method is relatively novel and reasonable.
3. Theoretical analysis enriches the foundation of this work.
Weaknesses:
1. When computing the ELU graph, the paper employs a variant of GCN in the form of Eq.4. However, it remains unclear whether the same form of GCN is also utilized during the contrastive learning phase. This aspect necessitates further clarification.
2. To validate the effectiveness of the ELU graph, the authors should compare it with alternative graph construction methods, such as the KNN-graph, by replacing the ELU graph in the ELU-GCN and reporting the corresponding results.
3. The improvement of NELU nodes after using ELU-GCN should be reported.
Other Comments Or Suggestions: Please reply or modify according to the Weaknesses mentioned above.
Questions For Authors: Why can't the two-stage framework proposed in the paper be designed as an end-to-end framework?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the positive comments. We are so encouraged and will try our best to address the concerns one by one. All changes here will be found in the final version.
>Q1: Limitation on theoretical part: the $\mathbf{Y} _{true}$ is actually unknown to us, thus this may limit the applicability of the theory.
A1: It is true that $\mathbf{Y} _{true}$ is unknown to us. However, we can approximate $\mathbf{Y} _{true}$ using the pseudo-labels predicted by GCN, a common practice in many existing works [1]. This approximation allows us to learn an adjacency matrix
$\mathbf{A}$ that ensures strong generalization by Theorem 2.3. By then applying GCN to this learned $\mathbf{A}$, we can obtain higher-quality pseudo-labels that more closely approximate $\mathbf{Y} _{true}$. Through this iterative process, the learned ELU graph progressively refines $\mathbf{A}$ and pseudo-labels, approaching the optimal structure for generalization.
[1] Calibrating graph neural networks from a data-centric perspective. WWW'24
>Q2: When computing the ELU graph, the paper employs a variant of GCN in the form of Eq.4. However, it remains unclear whether the same form of GCN is also utilized during the contrastive learning phase. This aspect necessitates further clarification.
A2: We ultimately use the standard GCN formulation (Kipf \& Welling, 2017) in the contrastive learning phase. The variant of GCN in Eq. 4 is introduced solely to facilitate the optimization of the objective function in Eq. 5.
>Q3: To validate the effectiveness of the ELU graph, the authors should compare it with alternative graph construction methods, such as the KNN-graph, by replacing the ELU graph in the ELU-GCN and reporting the corresponding results.
A3: The effectiveness of the ELU graph is demonstrated by its ability to make unlabeled nodes use the label information more effectively while enhancing generalization. However, the KNN graph only considers the feature information, which ignores the impact of labels. To further validate this, we have added an additional experiment where the ELU graph in ELU-GCN is replaced with the KNN graph for further validation. The results are shown in the table below.
| Datasets | Cora | Citeseer | pubmed | Computers | Photo | Chameleon | squirrel |
|----------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|
| KNN-GCN | 82.73±0.64 | 72.15±0.34 | 79.57±0.51 | 82.16±3.52 | 90.75±1.54 | 48.16±3.42 | 36.49±3.56 |
| ELU-GCN | ***84.29±0.39*** | ***74.23±0.62*** | ***80.51±0.21*** | ***83.73±2.31*** | ***90.81±1.33*** | ***70.90±1.76*** | ***56.91±1.81*** |
It is evident that replacing the ELU graph with the KNN graph leads to a significant performance drop. This confirms that the ELU graph effectively facilitates the utilization of label information by unlabeled nodes, whereas the KNN graph, which relies solely on feature similarity, fails to capture this crucial aspect.
>Q4: The improvement of NELU nodes after using ELU-GCN should be reported.
A4: We have already conducted this experiment, and the results are presented in Figure 7 of the appendix. This figure illustrates the improvement of NELU nodes after using ELU-GCN.
>Q5: Why can't the two-stage framework proposed in the paper be designed as an end-to-end framework?
A5: While our framework follows a two-stage design, the first stage—constructing the ELU graph—is a parameter-free process. Since this stage does not involve learnable parameters, it cannot be seamlessly integrated into an end-to-end framework. Overall, the ELU graph is firstly precomputed and then used to enhance GCN training in the second stage. This parameter-free approach ensures stability and efficiency in graph construction while keeping the model focused on learning meaningful representations during the second stage. In our future work, we plan to make our framework be an end-to-end one. | null | null | null | null | null | null |
$\infty$-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation | Accept (poster) | Summary: This paper proposes a training-free approach for long-form video understanding tasks. The method introduces a memory bank that integrates modality projectors (Q-formers), which combine short-term and long-term memory for more efficient video processing. By leveraging this architecture, the framework can handle videos without requiring additional training. Experiments conducted on two different video benchmarks, Video-LLama and VideoChat2, show that the method provides clear improvements in long-form video understanding tasks, particularly in video question-answering.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence from the experimental results. The authors demonstrate that their approach, which integrates a memory bank and modality projectors, effectively improves the performance of video understanding models on standard benchmarks.
Methods And Evaluation Criteria: Yes, the proposed training-free method has the impact on application side, addressing the challenge of long-form video understanding without requiring additional training.
Theoretical Claims: There is quite a little theoretical part in this paper. The authors primarily focus on the empirical performance of their method, with limited theoretical exploration of how the continuous-time memory mechanism interacts with the video understanding tasks.
Experimental Designs Or Analyses: The experimental settings are clear. The paper presents well-defined experiments with appropriate baselines and evaluation metrics. However, one concern is that the authors seem to use relatively old models as the base for their comparisons.
Supplementary Material: Yes, the author lists the hyper-parameter and evaluation details in the supplementary material. This additional information is helpful for understanding the experimental setup and allows for better reproducibility of the results.
Relation To Broader Scientific Literature: The proposed method further extends the memory-based long-form video understanding framework by incorporating more dynamic resource allocation. By augmenting the memory model with a continuous-time mechanism, the approach provides a way to process long-form videos more effectively.
Essential References Not Discussed: The paper ignored some recent state-of-the-art video-language models, such as Apollo[1], Video-LLaMA[2], and Llava-OneVision[3]. These models have made significant contributions to the field and could provide useful comparisons for the proposed method.
[1] Zohar, Orr, et al. "Apollo: An exploration of video understanding in large multimodal models." arXiv preprint arXiv:2412.10360 (2024).
[2] Zhang, Hang, Xin Li, and Lidong Bing. "Video-llama: An instruction-tuned audio-visual language model for video understanding." arXiv preprint arXiv:2306.02858 (2023).
[3] Li, Bo, et al. "Llava-onevision: Easy visual task transfer." arXiv preprint arXiv:2408.03326 (2024).
Other Strengths And Weaknesses: Strengths:
1. The proposed method uses a memory bank to address long context has been proven both efficient and effective for long-form video understanding.
2. The paper extended the framework from MovieChat by introducing Continuous-Time Memory Consolidation, which is better than uniformed sampling. The continuous-time strategy helps capture more relevant video segments and improves overall efficiency.
3. The experiments are solid, utilizing multiple widely-used datasets, which makes the results more robust and generalizable to various tasks in video understanding.
Weaknesses: See Question Section.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Is there any efficiency analysis provided in terms of computational resources, such as FLOPs or GPU memory cost for the proposed method? For example, a visualization like the figure 1 in MovieChat paper would be better.
2. In Table 3, why do video-llama-based models perform best in sticky settings while VideoChat2 are in no short-term memory sticky?
3. Is it possible to set alpha as a dynamic parameter instead of keeping it fixed at 0.9/1.0 statically? A dynamic alpha might allow more flexible adaptation based on varying video contexts.
4. The baseline models used in the paper (Video-LLama and VideoChat2) are quite old (from spring and summer 2023). Given the rapid advancements in video understanding, there are newer and more powerful models available. It would be better to consider including some of these models, such as Llava-OneVision[1], Qwen2-VL[2], or DeepSeek-VL[3], in future experiments?
[1] Li, Bo, et al. "Llava-onevision: Easy visual task transfer." arXiv preprint arXiv:2408.03326 (2024).
[2] Wang, Peng, et al. "Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution." arXiv preprint arXiv:2409.12191 (2024).
[3] Lu, Haoyu, et al. "Deepseek-vl: towards real-world vision-language understanding." arXiv preprint arXiv:2403.05525 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review and suggestions. We are happy that you found our method to be both efficient and effective for long-form video understanding, and our experiments solid. We address your concerns about our paper below.
> “Is there any efficiency analysis provided in terms of computational resources, such as FLOPs or GPU memory cost for the proposed method? For example, a visualization like the figure 1 in MovieChat paper would be better.”
We appreciate the suggestion and refer to our response to reviewer iEVz, where we present additional experiments that analyze the computational overhead. These experiments include the time consumption of the LTM module and the impact of sampling more frames. Our results demonstrate that our method sustains a constant memory footprint regardless of the number of frames, with only a slight increase in inference time. Additionally, we observe that increasing the number of basis functions leads to a much smaller rise in memory usage compared to the baseline's growth with additional frames. We will add these experiments to the appendix.
> “In Table 3, why do video-llama-based models perform best in sticky settings while VideoChat2 are in no short-term memory sticky?”
This is indeed intriguing and we asked ourselves the same question. We speculate that VideoChat2-based models tend to prioritize global features, whereas Video-LLaMA tends to focus on local features. Both benefit from sticky memories in open-ended generation tasks, as it helps maintain long-term coherence and consistency across outputs, regardless of whether the model emphasizes global or local context.
> “Is it possible to set alpha as a dynamic parameter instead of keeping it fixed at 0.9/1.0 statically? A dynamic alpha might allow more flexible adaptation based on varying video contexts.”
Thank you for the thoughtful suggestion. We agree that using a fixed $\alpha$ may not always be ideal, as video content with different temporal dynamics might benefit from varying weightings. While we chose a fixed $\alpha$ to simplify the current experiments and establish a baseline, we recognize the potential advantages of making $\alpha$ dynamic. Adapting $\alpha$ based on video characteristics (as mentioned by reviewer **aCqS**), such as shot transitions or object persistence, could indeed improve performance and generalization. We plan to explore this dynamic approach in future work, and we will mention it in the discussion section.
> “The baseline models used in the paper (Video-LLama and VideoChat2) are quite old (from spring and summer 2023). Given the rapid advancements in video understanding, there are newer and more powerful models available. It would be better to consider including some of these models, such as Llava-OneVision[1], Qwen2-VL[2], or DeepSeek-VL[3], in future experiments?”
While we recognize the rapid advancements in video understanding and the availability of more powerful models, our primary goal in this work was not to develop a SOTA model but to demonstrate the effectiveness of incorporating a biologically inspired LTM in a fair, training-free setting. The baseline models we used (Video-LLama and VideoChat2) were chosen for their relevance to our approach given that their Q-former based architecture enables a training-free approach, but we agree that including newer models like Llava-OneVision, Qwen2-VL, or DeepSeek-VL in future experiments (which would require additional training) may provide valuable comparisons. Our method is highly general and can be integrated into stronger VLMs moving forward. We will highlight this in the discussion section and suggest future work to explore the incorporation of our LTM approach into state-of-the-art models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. While the response does clarify several technical points, I remain concerned about the limited empirical validation of the proposed training-free method on stronger or more recent base models. As the method claims to be training-free, it should be feasible to demonstrate broader applicability without significant computational overhead. The lack of such evidence weakens the generality and practical value of the approach. Therefore, I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank for your response. We are happy that our answer clarified the technical points you were concerned about in your review. Regarding the empirical validation of our proposed training-free method: we understand you would like to see experiments on stronger or more recent base models beyond the two models we already experimented in: Video-LLaMA and VideoChat2. However, we maintain that doing so **while keeping the model training-free** is not straightforward. Our continuous attention framework requires spatio-temporal modules based on transformers to map the embeddings from the ViT to tokens understandable by LLMs, which models Video-LLaMA and VideoChat2 satisfy with the video Q-formers. Unfortunately this is not possible with other models such as the ones you mention in your review, Apollo, Llava-OneVision, Qwen2-VL, or DeepSeek-VL as these use projection or pooling layers to do this mapping . Integration of our continuous memories with those models would require expensive fine-tuning with substantial computational overhead, which goes beyond the scope of our paper and surpasses our limited computational budget. We believe the experiments with Video-LLaMA and VideoChat2 on 3 different benchmarks serve the main purpose of our paper, which is to show that our memory consolidation mechanism using continuous attention is a simple add-on component that improves model capabilities with minimal computational effort. | Summary: The paper presents a method for long video understanding through a continuous long-term memory (LTM) consolidation mechanism. In their approach, the authors propose a continuous-time attention mechanism that leverages the Gibbs density function to obtain a continuous-time query-key similarity function. Using this similarity function, they update the video Q-former's attention mechanism to make it utilize continuous-time LTM, with particular attention on the most relevant parts of input videos given the input prompts, for efficient understanding of long videos. The authors perform relevant experiments to evaluate their approach on multiple video understanding datasets and compare against baseline methods.
Claims And Evidence: The central claim of chain-of-shot prompting improving video understanding performance is backed up by experimental results.
Methods And Evaluation Criteria: The proposed ideas of processing long-term memory in a continuous fashion to improve the attention mechanism of video Q-formers for long videos and sampling relevant parts of the memory at higher granularity to improve Q&A performance make sense as an approach for making long video understanding feasible and efficient.
Theoretical Claims: Not applicable - the paper presents experimental findings to justify the proposed approach.
Experimental Designs Or Analyses: 1. As the authors mention, one key benefit of using basis functions over processing individual video frames is that fewer basis functions are needed to represent the information in the raw frames. This leads to more compressed representations (Lines 145-147), which is particularly useful for processing long videos. However, in their experiments, the authors use 256 frames in each chunk and 1024 basis functions (Line 238, col 2) for VideoLLaMA and 16-frame chunks and 256 basis functions for VideoChat2 (Line 241, col 2). How do these experimental designs reconcile with the assumption of needing fewer basis functions than video frames? Or are the authors implying that the number of basis functions is fewer than the *total* number of frames across all the chunks (e.g., 8 $\times$ 256 = 2048 > 1024)? But even with that assumption, for VideoChat2, we get 8 $\times$ 16 = 128 < 256. Could the authors please explain this discrepancy?
2. For the choice of the weighting factor $\alpha$ between short- and long-term memories (Eqn. 16), the authors report experiments on the MovieChat dataset and choose the best value of $\alpha$ from those experiments. However, it is unclear how generalizable that value is to other videos or datasets. Also, is it fair to assume that $\alpha$ should be constant for all kinds of videos, or is there scope to make $\alpha$ context- and category-aware, e.g., depending on how often shots are cut in the video, how often background scenes and foreground objects appear and reappear in the video, etc.?
3. Have the authors considered the effect of any type of noise when determining relevant locations for the sticky memory? For example, if the signal is slightly perturbed (additive noise), how would the probability function (Eqn. 15) be affected? Understanding the noise characteristics is not essential to determine the utility of the sticky memory procedure, and my rating would not depend on the authors' response to this question. However, the noise characteristics become relevant when considering that video signals can easily be corrupted by transmission, compression, or even adversarial noises.
Supplementary Material: I have read all the supplementary material in conjunction with the main paper.
Relation To Broader Scientific Literature: With the development of language models for video understanding, the paper's contribution is relevant and timely, and it establishes new baselines for future video understanding models. It will likely interest the broader scientific communities working on language models, video understanding, and their intersections.
Essential References Not Discussed: While not a domain expert, I did not find any major missing references in my search.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: In Eqn. 2, $Q \in \mathbb{R}^{R \times d}, K \in \mathbb{R}^{L \times d}, V \in \mathbb{R}^{L \times d} \Rightarrow QK^\top \in \mathbb{R}^{R \times L} \Rightarrow QK^{\top}V \in \mathbb{R}^{R \times d}$. This implies that $Z \in \mathbb{R}^{R \times d}$, but the authors have stated $Z \in \mathbb{R}^{L \times d}$ in both Lines 116 and 121. Is this a typo?
Questions For Authors: Please refer to the comments in previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review and suggestions. We address your concerns about our paper below.
> “As the authors mention, one key benefit of using basis functions over processing individual video frames is that fewer basis functions are needed to represent the information in the raw frames. This leads to more compressed representations (Lines 145-147), which is particularly useful for processing long videos. However, in their experiments, the authors use 256 frames in each chunk and 1024 basis functions (Line 238, col 2) for VideoLLaMA and 16-frame chunks and 256 basis functions for VideoChat2 (Line 241, col 2). How do these experimental designs reconcile with the assumption of needing fewer basis functions than video frames? Or are the authors implying that the number of basis functions is fewer than the total number of frames across all the chunks (e.g., 8x256 = 2048 > 1024)? But even with that assumption, for VideoChat2, we get 8x16 = 128 < 256. Could the authors please explain this discrepancy?”
For Video LLaMA, where the number of total frames is large (2048), the basis functions were set to half this total number of frames, ensuring a compressed yet effective representation. In the case of VideoChat2, we used 256 basis functions with $L=128$ frames since the number of frames supported by VideoChat2 is much smaller. Increasing the number of basis functions is computationally lighter than increasing chunk size (as shown in the first two Tables in **iEVz** response) and it improves the multivariate step regression fit. Larger chunk sizes caused memory issues for VideoChat2, but increasing basis functions did not, allowing for better fitting. This strategy balances memory efficiency and model accuracy without overloading computational resources. This tradeoff will be further discussed in the appendix.
> “For the choice of the weighting factor alpha between short- and long-term memories (Eqn. 16), the authors report experiments on the MovieChat dataset and choose the best value of α from those experiments. However, it is unclear how generalizable that value is to other videos or datasets. Also, is it fair to assume that α should be constant for all kinds of videos, or is there scope to make α context- and category-aware, e.g., depending on how often shots are cut in the video, how often background scenes and foreground objects appear and reappear in the video, etc.?”
Thank you for the insightful suggestion. While we intentionally used a fixed $\alpha$ to establish a baseline and simplify experiments, we agree that an adaptive approach—adjusting $\alpha$ based on video characteristics like shot changes or object permanence—might be an interesting idea to improve generalization in future work. We will mention it in the discussion section.
> “Have the authors considered the effect of any type of noise when determining relevant locations for the sticky memory? For example, if the signal is slightly perturbed (additive noise), how would the probability function (Eqn. 15) be affected? Understanding the noise characteristics is not essential to determine the utility of the sticky memory procedure, and my rating would not depend on the authors' response to this question. However, the noise characteristics become relevant when considering that video signals can easily be corrupted by transmission, compression, or even adversarial noises.”
This is an interesting question. The impact will depend on the type of noise and on its smoothness in the time dimension. If the signal is perturbed with additive noise, the Gibbs transformation in Eq. 10 will lead to a modified density, which in turn will change the locations for the sticky memory as for Eq. 15. Since the Gibbs transformation is continuous and well-behaved, the effect in the memory representations should be smooth. However, to better understand the practical effect of noise in the memory a careful empirical analysis needs to be conducted.
> “In Eqn. 2, $Q \in \mathbb{R}^{R \times d}$, $K \in \mathbb{R}^{L \times d}$, $V \in \mathbb{R}^{L \times d}$ (...) implies that $Z \in \mathbb{R}^{R \times d}$, but the authors have stated $Z \in \mathbb{R}^{L \times d}$ in both Lines 116 and 121. Is this a typo?”
Yes, this is indeed a typographical error. We appreciate your careful review and will correct it accordingly. Thank you for bringing this to our attention. | Summary: This paper introduces a long-term memory (LTM) consolidation mechanism from $\infty$-Former, and a long-video LLM (Language Model) that requires no additional training based on existing short-video LLMs. Experimental results show that this approach significantly improves performance on long-video benchmarks. Furthermore, the use of Gibbs density in LTM for improved PDF sampling, the establishment of continuous-time memory and the use of stick memory sampling strategy are also introduced. The ablation study shows the effects of the proposed sticky memory and LTM. The contribution to understanding brain memory is also highlighted.
Claims And Evidence: Strengths:
1. The introduction of LTM has effectively improved the performance of long video understanding, which has been strongly validated in the experiment section.
2. The claim of the generalizability from short video to long video understanding is well proved in methods and experiments.
Weaknesses:
1. The authors claim that the proposed PDF based on Gibbs density is much powerful than the Gaussian model in L96-99, which lacks neither theoretical proof nor experimental results.
Methods And Evaluation Criteria: Strengths:
1. The idea of introducing the brain's working mechanism into video LLMs through structural design rather than training another model is novel and reasonable.
2. The use of representing L signals through N basis functions significantly reduces context length, which contributes to longer video understanding.
Weaknesses:
1. The proposed method heavily relies on the video Q-Former. However, recent models seldom use Q-Former, as previous research [Tong'24] has demonstrated the flaws in Q-Former. Even the latest works of Video-LLaMA series [Zhang'25] and VideoChat series [Li'25] mentioned in the paper, though as the concurrent works, do not use Q-Former. Over-reliance on Q-Former may affect the method's generalizability and limit the possibilities of applying the method to stronger baselines.
\[Tong'24\]: Tong et. al, Cambrian-1: A Fully Open,Vision-Centric Exploration of Multimodal LLMs, NeurIPS 2024
\[Li'25\]: Li et. al, VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling
\[Zhang'25\]: Zhang et. al, VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding
Theoretical Claims: The theoretical introduction of continuous LTM is well presented.
Experimental Designs Or Analyses: Strengths:
1. The ablation study on different setting is well conducted, proving the effects of each proposed module.
2. The visualization of the LTM attention density clearly shows the importance of sticky memory.
Weaknesses:
1. Though the method is training-free, there is still computational overhead should be considered. That includes the time consumption of the proposed LTM module, the extra time brought by sampling more frames from the video and encoding them with ViT encoder.
2. The experimental results shown in Tab. 1 and Tab. 2 demonstrate limited improvement for VideoChat2, and the authors attribute this to the optimization of the original model. Recent models have significantly outperformed VideoChat2 on various video understanding benchmarks, raising concerns about the applicability of the proposed method on stronger models.
3. The baseline models used for comparison are too outdated. Most of these models were produced in early 2024 or in 2023, and their performance is far inferior to current models.
Supplementary Material: The author provides more detail of implementation in App. A and some detailed ablation and analysis in App. B, which helps better understand the paper.
Relation To Broader Scientific Literature: This paper utilizes the continuous attention [Martins'20] and $\infty$-former [Martins'22] inspired from cognitive and mechanistic theories of memory [Hardt'09, Weilbächer&Gluth'13, Ma'14] to enhance video models based on VideoChat2 [Li'23] and Video-LLaMA [Zhang'23]. The paper connects the field of neuroscience and video LLM and enables new understanding of neuroscience through the analysis on the mechanism of video LLM memory.
\[Martins'20\]: Martins et. al, Sparse Continuous Distributions and Fenchel-Young Losses, JMLR 2022
\[Martins'22\]: Martins et. al, $\infin$-former: Infinite Memory Transformer, ACL 2022
\[Hardt'09\]: Hardt et. al, A Bridge Over Troubled Water: Reconsolidation as a Link Between Cognitive and Neuroscientific Memory Research Traditions, Annual Review of Psychology
\[Weilbächer&Gluth'13\]: Weilbächer and Gluth, The Interplay of Hippocampus and Ventromedial Prefrontal Cortex in Memory-Based Decision Making, Current Biology
\[Ma'14\]: Ma et. al, Changing concepts of working memory, Nature Neuroscience
\[Li'23\]: Li et. al, MVBench: A Comprehensive Multi-modal Video Understanding Benchmark, CVPR 2024
\[Zhang'23\]: Zhang et. al, Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding, EMNLP 2023 Demo
Essential References Not Discussed: Most related works are well cited.
Other Strengths And Weaknesses: Strengths:
1. The creative combination and analysis of video understanding and brain memory may contribute to neuroscience
Weaknesses:
1. The limited performance of the model might be a severe bottleneck for real-world application of the method.
Other Comments Or Suggestions: 1. The last 'a' of Video-LLaMA in L288 should be capitalized.
2. The reference format should be unified and refined. Most names that should be capitalized are not capitalized properly. Some of the references are cited with url while others are not. Some arxiv papers are cited with arxiv IDs, some are cited with url and others are cited without additional information. Some conference names are annotated with abbreviations, while others are not.
Questions For Authors: 1. Could the authors give proof of the advantage of using the complex Gibbs density for the PDF instead of the original Gaussian model?
2. Considering the shortcomings of Q-Former, could the authors talk about the generalizability to other models without video Q-Former?
3. To better prove the effectiveness of the proposed method, could the authors provide a detailed analysis of time consumption for inferring, including the time used for pre-calculating the ridge regression $F^T(F F^T+\lambda I)^{-1}$, the extra time brought by the use of basis functions $\psi$, the extra time of using LTM, and most importantly the extra time brought by sampling much more frames and encoding them with ViT encoder? It's recommended to compare the time consumption with LLM forward time.
4. Could the authors give more detailed analysis on the reason of limited improvement on the strong baseline VideoChat2?
5. Could the authors conduct the experiments based on stronger baselines to prove the generalizability of the proposed method and provide more comparisons to recent stronger models?
6. Could the author fix the typos and reference format issues mentioned above?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review and suggestions. We are happy that you found our idea novel, the generalizability from short video to long video understanding well proved in both methods and experiments, and the ablation studies well conducted. We understand your main concerns about our paper and address them below. We hope that our answers clarify and alleviate your concerns.
> “The authors claim that the proposed PDF based on Gibbs density is much powerful than the Gaussian model in L96-99, which lacks neither theoretical proof nor experimental results.”
We agree that our claim will benefit from concrete evidence. In fact, at an initial stage of our project, we experimented with the Gaussian model but we abandoned it since its unimodal nature limits its ability to capture complex distributions, leading to poor results. Our method extends Video Q-Former's cross-attention using the same learned projections trained with softmax, making Gibbs density a more natural fit. Nevertheless, we conducted additional experiments comparing both approaches, confirming that Gibbs improves performance. We also observed that most of the responses in open-ended MovieChat-1K degraded significantly under the Gaussian model. We will add this to the appendix.
|Density|Method|Acc|Score|CI|DO|CU|
|-|-|-|-|-|-|-|
|Gibbs|Ours (no LTM)|68.0|3.76|3.72|3.33|3.71|
|Gibbs|Ours (unif.)|66.5|3.69|3.60|3.31|3.58|
|Gibbs|Ours (sticky)|**72.2**|**3.88**|**3.89**|**3.47**|**3.79**|
|Gibbs|Ours (no STM unif.)|62.4|3.75|3.36|3.38|3.52|
|Gibbs|Ours (no STM sticky)|59.2|3.68|3.30|3.30|3.44|
|Gaussian|Ours (unif.)|46.3|2.98|3.60|2.31|3.16|
|Gaussian|Ours (sticky)|38.4|2.76|3.51|2.11|3.12|
|Gaussian |Ours (no STM unif.)|46.9|3.06|3.54|2.40|3.13|
|Gaussian |Ours (no STM sticky)|38.6|2.77|3.43|2.15|3.13|
> “Though the method is training-free, there is still computational overhead should be considered.”
We refer to our response to reviewer iEVz, where we present additional experiments analyzing the computational overhead, including the time consumption of the LTM module, the impact of sampling more frames, and the encoding time with the ViT encoder. Our results demonstrate that our method sustains a constant memory footprint regardless of the number of frames, with only a slight increase in inference time. Additionally, we observe that increasing the number of basis functions leads to a much smaller rise in memory usage compared to the baseline's growth with additional frames. We hope that these experiments alleviate your concern.
> “The proposed method heavily relies on the video Q-Former. However, recent models seldom use Q-Former (...)”; “The baseline models used for comparison are too outdated (...)”
We chose Q-Former because it allows for seamless, training-free integration of our biologically inspired continuous attention component. This is possible because the video Q-Former is designed to process sequences of frame representations using a cross-attention module, which enables our method to augment it with a LTM based on continuous cross-attention. However, our LTM, if we want to go beyond training-free, is not inherently tied to Q-Former and could be used with other architectures. Our primary goal is to provide a proof of concept incorporating a biologically inspired LTM in a fair, training-free setting. While SOTA models continue to evolve rapidly, our focus was on conducting an apples-to-apples comparison to validate the benefits of our approach. Our method is highly general and can be integrated into stronger VLMs in the future. Unlike our training-free approach, this would require additional learning. We will provide insights in the discussion section. Future work can then build on our findings by incorporating the LTM into the strongest VLMs available.
> “Tab. 1 and Tab. 2 demonstrate limited improvement for VideoChat2 (...). Recent models have significantly outperformed VideoChat2 (...) give more detailed analysis on the reason of limited improvement (...)”
This is an interesting question. The results in Tab. 1-2 are for multiple-choice datasets, where VideoChat2 models directly incorporate answer options into the prompt, making responses more constrained. In contrast, models like Video-LLaMA generate open-ended responses before selecting the closest option match with LangChain, as explained in the Appendix, creating more room for the LTM to improve reasoning. Nevertheless, in open-ended generation tasks (Tab. 3), our method already provides a significant accuracy boost for VideoChat2 models.
We are fixing the typos, thank you for pointing them out! | Summary: This paper proposes a method called “\infty-VIDEO” to enable large multimodal language models (LLMs), originally designed for short video contexts to process arbitrarily long videos. The approach builds on top of existing “video Q-former” architectures by equipping them with a new continuous-time long-term memory (LTM) mechanism. Specifically, the authors use a “continuous attention” strategy that compresses and consolidates past chunks into a fixed dimension memory representation. Experiments on multiple video QA benchmarks, some of which include minutes-long and even hour-long videos, indicate the method can improve long-video comprehension in a training-free manner.
Claims And Evidence: - Claim: the proposed continuous-time attention can dynamically allocate higher granularity to crucial frames and effectively compress less relevant parts.
Evidence: qualitative heatmaps of the attention densities show peaks at visually distinctive or narrative-significant frames. Quantitative results indicate improvements over uniform sampling baselines.
- Claim: sticky memories (adaptive sampling guided by prior attention) outperform simple uniform sampling.
Evidence: in multiple-choice QA tasks, sticky memories yield consistently better accuracy. In open-ended QA (MovieChat-1K), sticky memories often outperform pure uniform sampling.
Methods And Evaluation Criteria: Yes. The evaluation is sound, including:
- Multiple-Choice Question Answering
- Long-Term Open-Ended Question Answering
Theoretical Claims: No theoretical claims are made.
Experimental Designs Or Analyses: Yes. The authors run experiments on NeXT-QA, EgoSchema, Video-MME and MovieChat-1K. Key analyses include accuracy gains from ablating the “sticky” memory vs. “uniform” memory. The paper also includes qualitative visualizations.
Supplementary Material: Yes. I have read all the supplementary material.
Relation To Broader Scientific Literature: I think the key contribution is the exploration of a training-free approach for long video understanding.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
- The paper proposes a training-free method which is novel and barely explored in this field.
- Sticky memory sampling effectively focuses on critical frames, as demonstrated by meaningful attention heatmaps.
- The motivation of the paper is strong and intuitive.
**Weaknesses**
- The paper misses analysis about memory usage and runtime overhead for very large videos. While the paper highlights the advantage of not storing all frames in memory, the scaling behavior and trade-offs could be more explicitly benchmarked.
- The conclusion from Table 4 is not consistent. It is better to test other Q-former video models and give a deeper analysis.
Other Comments Or Suggestions: - The authors are recommended to also compare with recent VLM models such as [*1, *2, *3].
- Even the paper focuses on training-free setting, it is also recommended to compare with some trained based long-video understanding paper, such as longvila and Video-xl.
- A breakdown of computational costs would be helpful—for example, how the inference time scales with the number of chunks or basis functions.
[*1] Wang, Peng, et al. "Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution." arXiv preprint arXiv:2409.12191 (2024).
[*2] Zhang, Yuanhan, et al. "Video instruction tuning with synthetic data." arXiv preprint arXiv:2410.02713 (2024).
[*3] Cheng, Zesen, et al. "Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms." arXiv preprint arXiv:2406.07476 (2024).
Questions For Authors: - Would segmenting the video by scene change (rather than fixed-length chunks) further boost performance?
- How are the hyperparameters chosen? For example, is the method sensitive to memory contraction factor?
- How sensitive are results to the choice of chunk size vs. number of chunks? Is there an optimal chunk length for typical tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive review and suggestions. We are glad that you found our method novel, our evaluation sound, and the motivation of our work strong and intuitive. We address your main concerns below.
> “The paper misses analysis about memory usage and runtime overhead (...) the scaling behavior and trade-offs could be more explicitly benchmarked.”
This is a good suggestion. We agree that it is important to analyze the memory usage and runtime overhead of our method for large videos. Our experiments focus on a fixed-dimensional memory representation that scales independently of video length. To address this concern, we have conducted additional experiments that show the trade-offs in inference time and memory usage as the number of video chunks and basis functions increases, including the impact of the LTM. We will add these results to the appendix.
### Inference Time vs L
| **L**|**Video LLaMA (s)**|**Ours, No LTM (s)**|**Ours, LTM (s)**|
|-|-|-|-|
|**256**|31.25|31.25|33.10|
|**512**|36.68|35.55|37.42|
|**1024**|38.72|40.03|41.41|
|**2048**|OOM|51.30|55.79|
### Memory Usage vs L
|**L**|**Video LLaMA (Gb)**|**Ours, No LTM (Gb)**|**Ours, LTM (Gb)**|
|-|-|-|-|
|**256**|18.99|18.99|18.99|
|**512**|22.67|18.99|19.17|
|**1024**|30.04|19.46|19.47|
|**2048**| OOM| 20.05|20.96|
### Inference Time vs N
|**L**|**Video LLaMA (s)**|**N**|**Ours (s)**|
|-|-|-|-|
|**64**|32.27|**64**|42.94|
|**128**|33.49|**128**|40.51|
|**256**|31.25|**256**|41.41|
|**512**|36.88|**512**|41.68|
|**1024**|38.72|**1024**|42.18|
### Memory Usage vs N
|**L**|**Video LLaMA (Gb)**|**N** |**Ours (Gb)**|
|-|-|-|-|
|**64**|16.23|**64**|19.47|
|**128**|17.15|**128**| 19.47|
|**256**|18.99|**256**|19.47|
|**512**|22.67|**512**|19.48|
|**1024**|30.04|**1024**|20.64|
In the first two tables, for $\infty$-Video LLaMA, we use 256 frames per chunk and set $N=256$, while for the baseline (Video LLaMA), we use the total number of frames without chunking. In the last 2 tables for “Ours” we use 4 chunks of 256 frames. We use the Bohemian Rhapsody movie. OOM denotes an Out-of-Memory error on an A6000 GPU. Results show that our method maintains a constant memory footprint regardless of the number of frames, with only a small increase in inference time. We also observe that the increase in memory usage with the number of basis functions is smaller compared to the increase with the number of frames in the baseline. We hope this alleviates your concern.
> “The conclusion from Table 4 is not consistent. It is better to test other Q-former video models and give a deeper analysis (...) compare with recent VLM models such as [*1, *2, *3].”
Please note that the main goal of our paper is to validate the benefits of integrating a LTM with continuous attention in a fair, training-free setting, for which the Q-former video models are a suitable choice. We focused on an apples-to-apples comparison to isolate the impact of our approach, paving the way for future integration into stronger VLMs, which keep evolving very rapidly. For more details, see our response to Reviewer SJwR (second question).
> “Would segmenting the video by scene change (rather than fixed-length chunks) further boost performance?”
In our early experiments, we tried scene segmentation, but for fast-moving videos, this created small chunks that did not improve performance. Testing different scene-dividing granularities resulted in some chunks containing multiple scenes, which yielded worse results than fixed-length chunks. In any case, we find fixed-length chunks more interesting, since the ability of our method to generate sticky memories without the need for any scene segmentation suggests it is able to identify the most relevant information in the different scenes, bypassing this additional step.
> “How are the hyperparameters chosen? For example, is the method sensitive to memory contraction factor? (...) How sensitive are results to the choice of chunk size vs. number of chunks? Is there an optimal chunk length for typical tasks?”
The number of frames per chunk was determined by balancing GPU memory constraints and alignment with the training setup of the base models. For VideoChat2-based models, we selected 16 frames per chunk, as this both fits within memory limits and also matches the number of frames used during the training of VideoChat2. For Video LLaMA, although the model was originally trained with 32 frames, [1] has demonstrated that increasing this to 256 frames does not degrade performance. We therefore used 8 chunks of 256 frames, which aligns with the total number of frames in MovieChat [1]. Finally, the $\tau$ parameter was chosen to be the same as in the $\infty$-former paper [2].
[1] Moviechat+: Question-aware sparse memory for long video question answering (Song et al., 2024)
[2] $\infty$-former: infinite memory transformer (Martins et al., 2022) | null | null | null | null | null | null |
MoE-SVD: Structured Mixture-of-Experts LLMs Compression via Singular Value Decomposition | Accept (poster) | Summary: This paper decomposes expert layers into low-rank matrices to reduce parameter counts and memory demands in MoE LLMs. The key innovations include a selective decomposition strategy based on sensitivity metrics and a low-rank matrix sharing and trimming scheme. The authors claim MoE-SVD achieves significant compression and faster inference on models like Mixtral, Phi-3.5, DeepSeek, and Qwen2.
Claims And Evidence: The claims seem generally supported by the experimental results, and ablation studies further validate the contributions of the strategy.
Methods And Evaluation Criteria: This paper presents the experiments and comparisons on multiple language task benchmarks following the previous LLM Compression studies such as SVD-LLM and Wanda etc. The evaluation uses standard datasets and metrics, and it would be nice to see some evaluations on more complex tasks.
Theoretical Claims: MoE-SVD mainly illustrates some of the properties of MoE in SVD decomposition via empirical observations, which are quite interesting to me. However, it lacks theoretical analysis to back up the empirical results.
Experimental Designs Or Analyses: The experimental designs appear sound. The paper compares MoE-SVD with several existing compression methods and conducts ablation studies to evaluate the impact of different components.
Supplementary Material: I reviewed the Section A and B in the supplementary material.
Relation To Broader Scientific Literature: The paper adequately discusses related work, including general LLM compressors and MoE-specific compression methods. The introduction part is especially very interesting and carefully introduces the potential challenges in previous work.
Essential References Not Discussed: To my knowledge, most critical references have been discussed within the paper. I suggest the authors cite the published version of previous work rather than the arxiv version.
Other Strengths And Weaknesses: Overall, the paper presents an interesting empirical finding regarding the sensitivity and similarity of MoE LLMs decomposition, introduces a method that is easy to implement and reproduce while achieving good inference acceleration, and provides comprehensive experiments on multiple MoE models and benchmarks.
However, I have several concerns:
- The similarity of decomposition matrices is an important aspect. It would be great if the authors could provide more solid examples across multiple MoE LLMs or include a theoretical discussion to strengthen their findings.
- The trimming of the U matrix appears to have a substantial effect on performance. I encourage the authors to explore more nuanced approaches.
- There are minor typos (e.g., the caption in Figure 1). Additionally, the method comparison in Table 1 is overly complex, making it difficult for readers to follow. A clearer presentation would enhance readability.
Other Comments Or Suggestions: I recommend the authors double-check the line space across the paper.
Questions For Authors: See the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback and recognition. We have tried our best to address all concerns in the last few days. Please see our below responses to your concerns and questions one by one.
---
**Q1: More Examples of Matrix Similarity Across MoE Models**
**A1:**
**(1)** We've conducted additional analysis on **Phi-3.5-MoE and DeepSeekMoE** (within inherent expert group) that shows similar patterns of V-matrix redundancy across architectures. **These models exhibit 0.92 and 0. 81 average CKA similarity among V-matrices individuals**, confirming that this property generalizes across different MoE architectures.
**(2)** Table 3 provides empirical validation that our V-matrix sharing approach works effectively across multiple architectures, confirming that matrix similarity is a general property of MoE models rather than specific to Mixtral.
**(3)** The theoretical basis for V-matrix redundancy relates to how MoE models are trained. Since all experts process similar input distributions but specialize in different aspects, their output projections (captured by V-matrices) share substantial structure while their internal representations (captured by U-matrices) diverge more significantly.
**Q2: Refining U-Matrix Trimming Strategy**
**A2:**
**(1)** Table 6 shows perplexity trade-offs on Mixtral-8×7B: 4.86 (no trimming) → 5.98 → 7.11 → 10.25 as k increases. k=2 balances performance and speed at high compression.
**(2)** This value aligns with the standard top-k routing mechanism in MoE architectures, which typically activates 2 experts per token. By retaining 2 U-matrices, we maintain consistency with the model's inherent routing structure.
**(3)** We further propose **Adaptive k Selection Framework** to improve U-matrix trimming:
$$k_l^* = \arg\min_k {\mathcal{L}(k) + \lambda \cdot \mathcal{Q}(k)}$$
where $\mathcal{L}(k)$ represents the estimated performance loss from trimming to k matrices, $\mathcal{Q}(k)$ denotes the parameter count, and $\lambda$ controls the tradeoff. The performance loss is approximated using the information coverage of retained matrices:
$$\mathcal{L}(k) \approx 1 - \frac{\sum_{i=1}^k f_i \cdot \sigma_i}{\sum_{i=1}^N f_i \cdot \sigma_i}$$
where $f_i$ is the expert sampling frequency and $\sigma_i$ is the sum of singular values, jointly capturing the expert's contribution to model performance.
Using Automatic k Determination, k is computed automatically during compression based on calibration data statistics, without manual tuning. For each layer, the algorithm calculates the marginal utility of increasing k and stops when additional U-matrices provide diminishing returns relative to their parameter cost. Our experiments with Mixtral-8×7B demonstrate the effectiveness of this approach:
| Compression | Method | WikiText-2 PPL | Runtime (Tokens/sec) |
| ------------- | ---------------- | -------------- | -------------------- |
| 0% (Original) | - | 3.98 | 87.7 |
| 40% | Fixed k=2 | 6.74 | 109.8 |
| 40% | Adaptive k | 6.53 | 107.5 |
| 40% | U-matrix merging | 6.83 | 112.3 |
| 60% | Fixed k=2 | 13.52 | 156.1 |
| 60% | Adaptive k | 12.91 | 143.5 |
| 60% | U-matrix merging | 13.30 | 158.4 |
- Layer-specific k Distribution: Analysis of the automatically determined k values reveals an intuitive pattern: early and late layers (which our sensitivity metric identifies as more critical) receive higher k values (typically 2-3), while middle layers receive lower values (typically 1-2). This aligns with our understanding of information flow in transformer architectures and confirms that the algorithm captures meaningful layer-wise differences.
These results show that adaptive k selection consistently outperforms fixed k=2, with Adaptive k providing the best performance by preserving more information from the original expert space, particularly at higher compression ratios but U-matrix merging can find the best tradeoff between Runtime and performance.
**Q3: Minor Typos and Readability of Table 1**
**A3:**
**(1)** We'll correct the typo "TThese" in Figure 1's caption and conduct a thorough proofreading of the entire manuscript to ensure clarity and consistency in revision.
**(2)** Table 1 will be redesigned for improved readability. We'll use a clearer layout with better spacing and more consistent formatting to highlight the key differences between methods.
**(3)** We'll ensure that method descriptions are concise yet precise, using a consistent terminology throughout the table and the rest of the paper.
---
**Finally, we hope our response could address the concerns, and we thank the reviewer again for the helpful comments.** | Summary: This paper introduces MoE-SVD, a new compression framework specifically designed for MoE LLMs. Specifically, they first decompose experts into low-rank matrices via SVD. In particular, they selectively decompose the expert layers based on sensitivity metrics.
Thanks for the authors' detailed responses. All my concerns have been satisfactorily addressed, and I lean to vote for acceptance.
Claims And Evidence: The claims made in this paper are generally supported by experimental evidence.
Methods And Evaluation Criteria: The evaluation across language modeling (perplexity on WikiText-2, PTB, C4) and reasoning tasks (accuracy on seven common sense benchmarks) provides a comprehensive picture of model performance across different capabilities.
Theoretical Claims: The paper doesn't contain formal proofs but does provide theoretical justifications for its approaches.
Experimental Designs Or Analyses: The experimental designs appear sound and comprehensive. The authors evaluate their method across multiple model architectures and compression ratios. The ablation studies also analyze the contribution of each component.
Supplementary Material: I reviewed all the supplementary material, which contains additional experimental results and implementation details.
Relation To Broader Scientific Literature: The paper positions its contributions well within the broader literature on LLM compression and MoE optimization.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1.The paper identifies and addresses the failure of standard SVD methods when applied to MoE architectures. Their sensitivity metric provides an automated and principled way to identify which expert layers to decompose.
2.The matrix sharing and trimming approach effectively balances parameter reduction with performance preservation.
Weaknesses:
1. The absolute performance degradation at higher compression ratios (e.g., 60%) is still significant.
2. The theoretical justification for the combined sensitivity metric could be strengthened with more analysis of how the three components interact.
3. The methodology for selecting the number of U-matrices to trim could be more principled rather than using a fixed value of k=2
Other Comments Or Suggestions: Size and layout of some tables and figures need minor refinement.
Questions For Authors: 1.The paper proposes sharing a single V-matrix across all experts. Have you explored more flexible sharing schemes, such as clustering similar experts and sharing V-matrices within clusters rather than globally?
2.For the U-matrix trimming, you use a fixed value of k=2. What is the justification for this particular value?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for constructive comments, and recognition. Please see our below responses:
-----
**Q1: Performance Degradation at Higher Compression Ratios**
**A1:**
(1) MoE-SVD outperforms alternatives at all compression levels. At 60% compression: MoE-SVD=13.52 perplexity; ASVD/SVD-LLM>10,000 perplexity on WikiText-2.
(2) Lightweight LoRA fine-tuning (MoE-SVD†) mitigates degradation at 1% of full training cost. At 50% compression: improves accuracy from 0.43 to 0.49, recovering 9.5% original performance.
(3) Following the general phenomenon of performance loss in almost high-ratio compression, MoE-SVD significantly improves trade-off curve vs. existing approaches.
(4) Task-specific analysis shows reasoning tasks like ARC-e maintain performance at 60% compression, while MathQA shows higher sensitivity.
**Q2: Theoretical Justification for the Sensitivity Metric**
**A2:**
**(1)** Our sensitivity metric integrates three complementary components with theoretical foundations:
(a) Sampling frequency ($f_i$): expert utilization rate determined by router.
(b) Principal rank ($p_i$): effective dimensionality of expert's weight matrix from matrix approximation theory.
(b) Activation outliers ($a_i$): functional distinctiveness of experts.
**(2)** **Information-Theoretic Interpretation of Sensitivity Metric**: We have derived our sensitivity metric ( $S_L = Σ f_i · p_i · a_i$) from an information-theoretic perspective please see **A2** in responsing to Reviewer Cmem.
We show that under certain assumptions, our metric approximates the expected information loss from decomposition:
$E[I(W_i; Y|X)] - E[I(W̃_i; Y|X)] ∝ f_i · p_i · a_i$
where $I(W_i; Y|X)$ represents the mutual information between expert weights $W_i$ and model output $Y$ given input $X$.
**(3)** Table 5 validates that combining these components ($f_i · p_i · a_i$) achieves optimal performance (8.67 perplexity) vs. individual components (9.27-12.65 perplexity).
**Q3: Fixed `k=2` in U-Matrix Trimming and more principled methodology for K selection**
**A3:**
**(1)** Table 6 shows perplexity trade-offs on Mixtral-8×7B: 4.86 (no trimming) → 5.98 → 7.11 → 10.25 as k increases. k=2 balances performance and speed at high compression.
**(2)** This value aligns with the standard top-k routing mechanism in MoE architectures, which typically activates 2 experts per token. By retaining 2 U-matrices, we maintain consistency with the model's inherent routing structure.
**(3)** We further propose **Adaptive k Selection Framework** to improve U-matrix trimming:
$$k_l^* = \arg\min_k {\mathcal{L}(k) + \lambda \cdot \mathcal{Q}(k)}$$
where $\mathcal{L}(k)$ represents the estimated performance loss from trimming to k matrices, $\mathcal{Q}(k)$ denotes the parameter count, and $\lambda$ controls the tradeoff. The performance loss is approximated using the information coverage of retained matrices:
$$\mathcal{L}(k) \approx 1 - \frac{\sum_{i=1}^k f_i \cdot \sigma_i}{\sum_{i=1}^N f_i \cdot \sigma_i}$$
where $f_i$ is the expert sampling frequency and $\sigma_i$ is the sum of singular values, jointly capturing the expert's contribution to model performance. Using Automatic k Determination, k is computed automatically during compression based on calibration data statistics, without manual tuning. For each layer, the algorithm calculates the marginal utility of increasing k and stops when additional U-matrices provide diminishing returns relative to their parameter cost. Our experiments with Mixtral-8×7B demonstrate the effectiveness of this approach:
| Compression | Method | WikiText-2 PPL | Runtime (Tokens/sec) |
| ------------- | ---------------- | -------------- | -------------------- |
| 0% (Original) | - | 3.98 | 87.7 |
| 40% | Fixed k=2 | 6.74 | 109.8 |
| 40% | Adaptive k | 6.53 | 107.5 |
| 40% | U-matrix merging | 6.83 | 112.3 |
| 60% | Fixed k=2 | 13.52 | 156.1 |
| 60% | Adaptive k | 12.91 | 143.5 |
| 60% | U-matrix merging | 13.30 | 158.4 |
**Q4: Alternative V-Matrix Sharing Strategies**
**A4:**
Yes. For DeepSeek-MoE and Qwen2 (with inherented organized expert groups), we actually employ clustering strategy following architecture setting. We will emphasize this point and add more analysis in the revision.
**Q5: About size and layout**
**A5:**
We will carefully revise all size and layout of some tables and figures in the revision. Thanks.
---
**Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. | Summary: The paper presents a new compression method (MoE-SVD) for Mixture-of-Experts. The framework introduces a selective decomposition strategy and employs low-rank matrix sharing and trimming. Comprehensive experiments on models like Mixtral, Phi-3.5, DeepSeek, and Qwen2 demonstrate that MoE-SVD achieves faster inference, outperforming other compression methods.
Claims And Evidence: The central claims are supported by experiments across multiple models and tasks. Table 2 shows MoE-SVD’s superior perplexity and accuracy over baselines, and Figure 4 validates speedup. The ablation studies (Tables 4–6) further substantiate design choices.
However, the term "minimal performance loss" is ambiguous for higher compression ratios (e.g., 60% compression reduces average accuracy from 0.63 to 0.37 on Mixtral).
Methods And Evaluation Criteria: The evaluation criteria, including perplexity on language modeling datasets and accuracy on reasoning tasks, are appropriate and align with standard benchmarks in the field.
Theoretical Claims: The paper does not present theoretical claims.
Experimental Designs Or Analyses: The experimental design is sound and thorough. The authors evaluate MoE-SVD on multiple models and datasets, including Mixtral, Phi-3.5, and DeepSeek, and provide detailed ablation studies to isolate the contributions of individual components.
Supplementary Material: I have reviewed the supplementary material. The appendix includes code, implementation details, and extended analyses.
Relation To Broader Scientific Literature: This work already contains detailed comparisons of the literature on SVD, pruning, and MoE compression.
Essential References Not Discussed: None
Other Strengths And Weaknesses: **Strengths**
1. The paper presents a novel and effective approach to compressing MoE LLMs, addressing key challenges like decomposition collapse and matrix redundancy.
2. The experimental results are comprehensive and demonstrate clear improvements over existing methods.
3. The framework is practical, requiring no additional training and being hardware-independent.
**Weaknesses**
1. The theoretical underpinnings of the method are not explored in depth, which could limit understanding of its generalizability.
2. Clarify "minimal performance loss" in the abstract relative to baselines vs. absolute metrics.
3. While MoE and SVD are widely adopted in existing approaches, the novelty of this paper remains limited and requires further enhancement.
Other Comments Or Suggestions: Typos: "TThese" in Figure 1 caption;
Questions For Authors: Please refer to the above weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your detailed and constructive feedback. We have tried our best to address all concerns in the last few days. Please see our responses below one by one:
----
**Q1: About Theoretical Justification**
**A1:**
**(1)** **Theoretical Foundation of Decomposition Sensitivity**: We have formalized the relationship between expert sensitivity and model performance by analyzing how SVD decomposition affects information flow in MoE architectures. In Section 3.2, we present a detailed theoretical analysis that demonstrates why initial and final layers are particularly sensitive to decomposition due to their role in translating between embedding space and MoE-specific representations. This explains the empirical observations in Figure 1 (left), where decomposing these layers leads to disproportionate performance degradation.
**(2)** **Information-Theoretic Interpretation of Sensitivity Metric**: We have derived our sensitivity metric ( $S_L = Σ f_i · p_i · a_i$) from an information-theoretic perspective please see **A2** in responsing to Reviewer Cmem.
We show that under certain assumptions, our metric approximates the expected information loss from decomposition:
$E[I(W_i; Y|X)] - E[I(W̃_i; Y|X)] ∝ f_i · p_i · a_i$
where $I(W_i; Y|X)$ represents the mutual information between expert weights $W_i$ and model output $Y$ given input $X$.
**(3)** **Matrix Redundancy Theory**: In Section 3.3, we have added theoretical analysis explaining why V-matrices exhibit higher redundancy than U-matrices in MoE architectures. This analysis builds on prior work in Similarity of Neural Network Representations Revisited (Kornblith et al., 2019), showing that the shared output space constraints in MoE models naturally lead to similar output transformations (V-matrices) while maintaining diverse input transformations (U-matrices) for specialization.
**(4)** **Error Bounds for Matrix Sharing**: We have derived error bounds for our V-matrix sharing approach based on matrix approximation theory. For an expert weight $W_i = U_iΣ_iV_i^T$ with selection rank-r approximation $W_i^{r} = U_i^{r}Σ_i^{r}(V_i^{r})^T$, sharing the V-matrix introduces additional error bounded by: $||W_i - U_iΣ_iV_s^T||_F^2 ≤ ||W_i - W_i^r||_F^2 + ||Σ_i||_F^2 · ||V_i^T - V_s^T||_F^2$ . $V_s$ is the shared V-matrix selected based on router sampling frequency as defined in Equation 5 in the paper.This bound becomes tighter as the similarity between V-matrices increases, aligning with our empirical observations of high CKA similarity (average >0.8) between expert V-matrices.
**(5)** **Compression-Performance Tradeoff Analysis**: We have added a theoretical framework that characterizes the relationship between compression ratio, parameter efficiency, and model performance. This analysis explains why selective decomposition based on sensitivity metrics achieves a better Pareto frontier than uniform compression approaches. We derive performance bounds that predict deterioration patterns at different compression ratios, validating our experimental findings in Table 2.
**Q2: About minimal performance loss**
**A3:**
**(1)** The term "minimal" is comparative rather than absolute. We will revise our abstract: " For 40-60% compression, MoE-SVD achieves .... while maintaining relative performance superior to other compression methods." in the revision.
**(1)** Our claim of "minimal performance loss" is relative to other compression methods at the same compression ratio. As shown in Table 2, at 20% compression, MoE-SVD maintains 92% of the original model's average accuracy (0.58 vs. 0.63), while competing methods maintain less than 85%.
**Q3: Novelty Relative to Prior Work**
**A3:**
**(1)** In Table 1 and Appendix D, we have already compared and analyzed of MoE-SVD to provide insights into why our approach succeeds where standard SVD methods (ASVD, SVD-LLM) fail on MoE architectures.
**(2)** Our V-matrix sharing and U-matrix trimming strategies represent a novel approach to exploiting the unique redundancy patterns in MoE models. **We emphasize that these observations and designs not present in existing MoE and SVD methods.**
**(3)** Compared to expert pruning methods (MoE-Compression, MoE-I²), our approach maintains the sparse activation mechanism while making each expert more efficient. This avoids the significant performance drops associated with expert elimination (e.g., 23% accuracy drop when pruning 25% of experts in Mixtral-8×7B).
**(4)** MoE-SVD requires no retraining and is hardware-independent, making it more practical for real-world deployment than methods requiring extensive fine-tuning or specialized hardware.
----
**Q4: Typos**
**A4:** Thanks for the suggestion. We will fix "TThese" and double check and revise all typos in the revision.
------
**Finally, we genuinely hope that our explanations and efforts can improve the overall evaluation of our work.** We thank the reviewer again for the helpful comments. | Summary: This paper introduces MoE-SVD, a decomposition-based compression approach specifically designed for Mixture of Experts (MoE) Large Language Models (LLMs). Leveraging Singular Value Decomposition (SVD), the method reduces parameter redundancy and memory requirements without requiring additional training. The authors propose selective decomposition using sensitivity metrics, employing a shared V-matrix across experts and trimming U-matrices through top-k selection. Experiments conducted on various MoE models such as Mixtral, Phi-3.5, DeepSeek, and Qwen2 demonstrate a 60% compression ratio and 1.5× faster inference speed with minimal performance degradation.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Figure 3 lacks clear explanations regarding the depicted increases and baselines. Clarifying the baseline used and explicitly describing the nature of the improvements observed would significantly enhance reader comprehension.
Theoretical Claims: Yes. The ai in equation 4 is still unclear to me even after checking the supplementaries.
Experimental Designs Or Analyses: The improvements shown in evaluation results are not particularly significant. While the proposed approach successfully reduces memory and parameters, the extent of performance gain or maintenance could be more convincingly demonstrated.
Supplementary Material: Yes. I checked all supplementary material.
Relation To Broader Scientific Literature: This paper provides an interesting and practical solution for compressing MoE architectures in LLMs.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
Quantitative Parameter Reduction Analysis: The paper presents a thorough quantitative analysis in Section 3.3, particularly highlighting the efficiency of parameter reduction via V-matrix sharing and U-matrix trimming.
Comprehensive Evaluation: The experimental results are extensive, covering essential aspects such as generalizability, scalability, and multiple ablation studies. This robust evaluation contributes significantly to understanding the efficacy of the proposed method.
Clarity and Readability: The paper is well-organized, clearly written, and accessible, which helps readers easily grasp the concepts and methodology.
Weaknesses:
Ambiguity Regarding the Shared Matrix V: It remains unclear whether the shared V-matrix is universal across different models and tasks or sensitive to specific models and tasks. Addressing the generalizability or specificity of this shared matrix explicitly would clarify the practical implications of the method.
Other Comments Or Suggestions: See Other Strengths And Weaknesses
Questions For Authors: See Other Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would really appreciate it if the reviewer considers raising the score.** Please see our responses below one by one:
-----
**Q1: Explanation of Figure 3**
**A1: We clarify that we already provided detailed explanations in Section 3.2 [Lines 220-239] Decomposable Expert Layer Analysis.**
Figure 3 shows our decomposition strategy results across different compression ratios, visualising which expert layers are selected for decomposition at each ratio is the best strategy based on formulation 4, revealing how compression targets different layers as the ratio increases. We will revise the figure caption and add clarifications in Section 3.3 in the revision.
**Q2: About `a_i` in Equation 4**
**A2:** **As detailed in Section 3.2 [Line 207-211], Appendix D.1 [Line 850-859] and pyhon pseudo-code Algorithm 1 [Line 980-985],** the $a_i$ signifies the activation outliers for the $i$-th expert, defined as the outlier ratio within the expert's weight matrix. Outliers are identified as values exceeding a predefined threshold, which is set as a multiple of the mean value of the matrix. Mathematically, the outlier ratio is calculated as:
$$
a_i = \frac{\sum_{a \in A_i} \mathbb{I}(|a| > \tau \cdot \operatorname{Mean}(|A_i|))}{|A_i|}
$$
where $|A_i|$ denotes the total number of activations for the $i$-th expert, $\operatorname{Mean}(|A_i|)$ is the mean absolute value of these activations, and $\tau$ is a user-defined threshold. This metric highlights the presence of outlier activations indicative of the expert's contribution to the model's capacity.
**Algorithm: Pyhon pseudo-code for $a_i$.**
```pseudocode
# Compute activation outliers (a_i)
a_i = torch.zeros(num_experts, device=device)
for i, activation in enumerate(activations):
mean_abs_act = torch.mean(torch.abs(activation))
outliers = torch.sum(torch.abs(activation) > tau * mean_abs_act)
a_i[i] = outliers / activation.numel()
# Compute final sensitivity metric S_L
S_L = torch.sum(f_i * r_i * a_i)
```
**Q3: About Performance Improvements**
**A3:**
**(1)** As shown in Table 2. At 20% compression ratio, MoE-SVD achieves an average accuracy of 0.58 across reasoning tasks compared to 0.51 for ASVD, 0.44 for SVD-LLM and 0.33 for MC-SMoE. This represents a 23% reduction in performance degradation relative to the original model (0.63). At higher compression ratios (50-60%), where other methods experience catastrophic collapse (e.g., ASVD and SVD-LLM reach perplexities >10,000 on WikiText-2), MoE-SVD maintains reasonable performance with a perplexity of 13.52 on WikiText-2.
**(2)** The real-world utility is validated by significant efficiency gains: 1.5-1.8× inference speedup on Mixtral-8×7B at 60% compression making deployment feasible on resource-constrained devices.
**(3)** Ablation studies in Tables 4-6 confirm that each component of our approach contributes meaningfully to its success, with selective decomposition, V-matrix sharing, and U-matrix trimming all playing crucial roles in balancing efficiency and performance.
**Q4: Generalizability of the Shared V-Matrix**
**A4:**
The shared V-matrix is not universal but dynamically determined for each model architecture. To demonstrate this, we conducted comprehensive layer-wise CKA similarity analysis across different MoE architectures:
**Layer-wise Analysis Within Models**: this table shows the layer-wise CKA similarity patterns for U and V matrices within three diverse MoE architectures:
| Model | Layer Position | U-matrix Similarity | V-matrix Similarity |
| --------------- | -------------- | ------------------- | ------------------- |
| Phi-3.5-MoE | Early (1-10) | 0.29 | 0.78 |
| | Middle (11-20) | 0.36 | 0.86 |
| | Final (21-32) | 0.31 | 0.74 |
| DeepSeekMoE-16B | Early (1-8) | 0.21 | 0.75 |
| | Middle (9-18) | 0.26 | 0.82 |
| | Final (19-28) | 0.25 | 0.71 |
| Qwen2-57B-A14B | Early (1-8) | 0.18 | 0.77 |
| | Middle (9-18) | 0.23 | 0.83 |
| | Final (19-28) | 0.16 | 0.73 |
These findings provide strong empirical evidence that our approach of dynamically selecting V-matrices based on layer-specific router statistics is well-justified.
-----
**Finally,** **we hope our response could address the concerns, and we thank the reviewer again for the helpful comments.** **We genuinely hope that our explanations and efforts can improve the overall evaluation of our work.** | null | null | null | null | null | null |
TabFSBench: Tabular Benchmark for Feature Shifts in Open Environments | Accept (poster) | Summary: The paper studies the problem of feature shifts in tabular learning. It introduces TabFSBench, a benchmark for evaluating four types of feature-shift scenarios, assessing the robustness of four categories of tabular models. The study shows among others that most tabular models have limitations in handling feature-shifts and the feature importance has linear relationship with performance degradation.
Claims And Evidence: The paper evaluates four types of feature shifts and their impact on model performance: (a) single shift, where a single feature with different correlations is shifted; (b) most-relevant shift, where features with the highest correlations are shifted; (c) least-relevant shift, where features with the lowest correlations are shifted; and (d) random shift, where a random subset of features are shifted.
The main claims, including that current state-of-the-art methods fall short in handling the four types of shifts in the 'feature-decrement' scenario and that the degree of performance degradation is closely related to feature correlation, are well supported by the experimental results. However, the scope of the evaluation is somewhat limited. For example, the paper only considers the 'feature-decrement' case, where some of the features are missing values, for all test example cases. The evaluation would be more comprehensive if the shifted features varied across different test examples and if the 'feature-increment' scenario were also considered, allowing an assessment of how much the model's performance degrades compared to when additional features are available.
Methods And Evaluation Criteria: TabFSBench is specifically designed to evaluate the types of feature-shift scenarios and tabular models considered in this paper, and so the benchmark is appropriate for the evaluation. However, it could have included more tabular tasks, e.g., those with substantially larger feature sets, to strengthen the evaluation. Moreover, the different types of tabular models considered are evaluated 'in isolation'. Instead, some of the approaches could be combined to potentially better handle the feature-shift scenarios. For example, traditional tree-based models that tend to do better when large training data is available could benefit from features imputed using LLMs as feature generators. Such new proposals for handling feature shifts and demonstrating those on the benchmark would have strengthened the evaluation substantially.
Theoretical Claims: No substantial theoretical claims are made in the paper.
Experimental Designs Or Analyses: The overall design of TabFSBench to include 12 tabular tasks, with the four types of feature shifts is sound. The mean-value imputation strategy used is a common approach and therefore is a valid choice. Again, the experiments could have been extended in several ways, such as considering cases where different test examples experience different types of feature shifts or exploring other imputation strategies, such as using LLMs as feature generators.
Supplementary Material: No supplementary material has been reviewed.
Relation To Broader Scientific Literature: While prior works tend to focus more on distribution shifts, this paper examines feature shifts in tabular learning, introducing a benchmark for assessing four types of feature shifts. In addition, the paper evaluates different types of tabular models on this benchmark, including both traditional tree-based models as well as more recent LLM-based approaches.
Essential References Not Discussed: While the paper discusses general-purpose and tabular LLMs and evaluates them on the benchmark, LLMs have also been used as feature generators in tabular learning, and discussion on those use cases are missing. Some relevant works in this regard include OpenFE [1], CAAFE [2], and OCTree [3].
---
[1] Zhang et al., OpenFE: Automated Feature Generation with Expert-level Performance.\
[2] Hollmann et al., Large Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering.\
[3] Nam et al., Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: Typos:
- $C^{train}$ near line 192.
Questions For Authors: Q. Observation 3 states that models with superior performance in closed environments also perform better in feature-shift scenarios. This seems somewhat counterintuitive, as one might expect that models achieving high performance in closed environments rely more heavily on specific features, making them more vulnerable to feature shifts. Do you have any intuition about this? Or, does this suggest potential improvements to the benchmark to strengthen the evaluation?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer B7Cf:
Thank you very much for your valuable feedback on our paper. We will take your reviews seriously and make the necessary revisions or additions to the final version.
**Q1: Shifted features vary across different examples.**
We appreciate your feedback on our experimental design. In our random shift experiments, to evaluate model performance under n/N% feature shift (where n is the number of shifted features and N is the total feature count), we follow this protocol:
1. Randomly sample min(10,000, C(N,n)) distinct feature combinations for each shift magnitude.
2. Remove each selected combination during testing to assess performance.
3. The performance for a given n/N% shift is the mean across all combinations.
**Q2: Considering cases with different types of feature shifts**
Regarding this question, we would like to clarify two potential interpretations:
- Different types of (feature shifts): Please refer to our response to Reviewer Q7p8 Q1. The first leaderboard provides a comprehensive evaluation of model robustness across various types of feature shifts.
- Different types of (feature) shifts: We conducted evaluations of CatBoost performance on a single shift, categorized by: Categorical, Numerical and Boolean features. Results demonstrate that model performance exhibits the highest sensitivity to Categorical features, followed by Boolean features while showing the lowest susceptibility to Numerical features. The complete experimental results are available in the **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/feature%20types.pdf)**.
|Metric|Raw|Categorical|Numerical|Boolean|
|-|-|-|-|-|
|Acc|0.792|0.769|0.787|0.778|
**Q3: Feature-increment scenario.**
Please refer to our response to Reviewer Q7p8 Q2.
**Q4: More datasets.**
Due to time and computational constraints in the rebuttal phase, we couldn't complete evaluations on larger benchmarks. However, following OcTree, we conducted random shift experiments on two high-dimensional datasets (madelon and nomao) and present model performance under 10%, 20%, ..., and 100% feature shift degrees, limited by space. This **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/high%20dimension.pdf)** has detailed results.
|Dataset|Metric|0%|10%|20%|30%|40%|50%|60%|70%|80%|90%|100%|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Nomao|Acc|0.931|0.926|0.918|0.907|0.880|0.871|0.845|0.822|0.831|0.765|0.747|
|Madelon|Acc|0.590|0.581|0.572|0.568|0.573|0.564|0.555|0.543|0.526|0.513|0.524|
The table demonstrates that as the degree of feature shift increases, the model performance decreases significantly. Please refer to our response to Reviewer Q7p8 Q1 for further explanation.
**Q5: Imputation methods.**
We appreciate your suggestions. As Reviewer Q7p8 and WumN noted, our paper is a benchmark study and needn't propose new methods. Thus, we did not explore AutoFE in feature shifts. Additionally, our focus on evaluating model robustness meant we did not extensively cover advanced imputation methods, as explained in Section 3.3.
In light of your proposal, we posit that AutoFE can address feature shift scenarios through two approaches. Detailed results can be found in the **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/autofe.pdf)**.
- Imputation: CAAFE and OcTree generate rules for imputing specific features, while OpenFE's generated features do not match the originals. We tested single shifts on the heart dataset using CAAFE and OcTree for imputation. Results show that LLM-based AutoFE can effectively generate matching features, enhancing model robustness compared to mean imputation.
|Model|Raw|CAAFE|OcTree|
|-|-|-|-|
|CatB|0.846|0.852|0.850|
|PFN|0.859|0.869|0.870|
- Generation: We used AutoFE to generate new features to offset the impact of missing original features on model performance. Tests on the heart dataset with random shifts via OpenFE, CAAFE, and OcTree (with the number of generated features matching the original dataset) showed that LLM-based AutoFE has significant potential in feature shift scenarios.
|Model|Raw|OpenFE|CAAFE|OcTree|
|-|-|-|-|-|
|CatB|0.710|0.849|0.854|0.849|
|PFN|0.763|0.859|0.867|0.865|
As the importance of features increases, both model performance declines, further corroborating Observation 2 of TabFSBench. Please refer to our response to reviewer wA1o Q7 on other imputation methods.
**Q6: Typos.**
We will make revisions in the final version.
**Q7: Questions For Authors.**
Feature importance, calculated via PCC, correlates strongly with the impact of feature absence on performance. Higher importance results in greater performance decline, indicating that model dependency on features is consistent across closed environments and feature shift scenarios. This stability is due to the models' robust feature selection and weight allocation mechanisms. For more details, refer to our response to reviewer wA1o Q6 and the **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/heart%20single.pdf)**. | Summary: The paper introduces TabFSBench, a benchmark designed to evaluate feature shifts in tabular data. The authors argue that while distribution shifts have been extensively studied, feature shifts remain an underexplored challenge in open environments. TabFSBench includes 12 datasets, four feature-shift scenarios, and evaluates various model categories, including tree-based models, deep learning approaches, and large language models (LLMs). The key findings from the study include:
- Most tabular models perform poorly under feature shifts.
- The importance of shifted features correlates linearly with performance degradation.
- Model performance in closed environments is indicative of its robustness to feature shifts.
Claims And Evidence: The paper claims that feature shifts significantly degrade model performance, but the supporting evidence is limited by a small selection of datasets (12 datasets), unclear dataset selection criteria, and insufficient hyperparameter optimization (HPO) for baseline models. The claims regarding linear correlation between feature importance and performance degradation are presented with empirical results, but the justification for why Pearson correlation is the appropriate metric for feature importance is weak. Additionally, the claim that LLMs show potential for handling feature shifts is not rigorously validated.
Methods And Evaluation Criteria: The experimental design includes:
- A benchmark with four feature-shift scenarios: single shift, most/least relevant shift, and random shift.
- Comparison of various tabular models, including tree-based models, deep learning approaches, and LLMs.
- Evaluation metrics: accuracy for classification, RMSE for regression, and a robustness metric based on performance drops.
However, the dataset selection process is unclear, and the justification for using 12 datasets is missing. Many benchmarks for tabular learning typically use dozens or even hundreds of datasets to ensure robustness. Additionally, the HPO process for baseline models is underdeveloped, which raises concerns about whether the reported performance reflects true model capabilities.
Theoretical Claims: The paper does not introduce novel theoretical claims but suggests a linear relationship between shifted feature importance and model degradation. While this is weakly supported by empirical evidence, no theoretical justification is provided. The Pearson correlation analysis is simplistic, and additional validation is needed.
Experimental Designs Or Analyses: The experiments are not sufficiently rigorous due to:
- Limited dataset variety: Only 12 datasets, whereas other tabular learning studies often use significantly more.
- Weak HPO process: Many tabular models, particularly gradient-boosted decision trees (GBDTs), are highly sensitive to hyperparameter tuning, but details on the HPO space are lacking.
- Overgeneralized conclusions: The claim that "most models struggle with feature shifts" is based on a narrow selection of datasets and does not consider variations across different domains.
The benchmark fails to justify why these 12 datasets are representative of real-world tabular learning tasks.
Supplementary Material: The supplementary material includes dataset details and additional experimental results. However, it lacks statistical analysis to strengthen the empirical observations. No justification is provided for dataset selection or alternative methods for defining feature importance.
Relation To Broader Scientific Literature: The paper correctly identifies that distribution shifts have been extensively studied, while feature shifts remain relatively unexplored. However, it does not cite all relevant prior work on feature shift mitigation. The paper could benefit from a more detailed discussion of:
- Domain adaptation and domain generalization techniques relevant to feature shifts.
- Methods for assessing robustness of tabular models beyond standard performance metrics.
Essential References Not Discussed: Most of recent essential references are discussed.
Other Strengths And Weaknesses: Strengths:
- Addresses an underexplored problem (feature shifts in tabular data).
- Includes a diverse set of tabular models, including LLMs. (even though they are only loaded from the previous python library.)
Weaknesses:
- Lack of clarity in key definitions: The paper does not provide a formal, quantifiable definition of "feature shift."; The explanation of distribution shift vs. feature shift is vague.
- Limited dataset selection: Only 12 datasets are used without clear justification.
- Insufficient hyperparameter tuning: HPO significantly affects tabular models, but the paper lacks a proper discussion of its impact.
- Overgeneralized conclusions: Claims about limited applicability of tabular models are made based on a small dataset selection.
- Weak novelty: The benchmark consists of existing datasets and models, with no novel methodological contribution.
Other Comments Or Suggestions: - Clarify definitions: Provide formal mathematical definitions of feature shift and distribution shift.
- Improve dataset selection justification: Explain why these 12 datasets were chosen and why they represent real-world tabular learning challenges.
- Enhance HPO methodology: Describe the hyperparameter search space and ensure fair comparisons among models.
- Avoid overgeneralized claims: Reword broad claims such as “most models struggle with feature shifts” to reflect limitations of the current benchmark.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Dear Reviewer wA1o:
Thank you very much for your valuable feedback on our paper. We will take your reviews seriously and make the necessary revisions or additions to the final version.
**Q1: Weakness&Suggestion 1**
We adopt notations used in our paper and will provide a more detailed explanation in the final version.
- Distribution Shift: During the training phase and the testing phase, the feature sets remain unchanged, i.e., $C_{train} = C_{test}$. However, there are three types of distribution shifts of the samples themselves: $p(x)$, $p(y|x)$, and $p(x|y)$, where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.
- Feature Shift: During the training phase and the testing phase, the feature sets change while the data distribution of samples has no shift.
- Increment: $C_{train} \subseteq C_{test}$.
- Decrement: $C_{test} \subseteq C_{train}$.
**Q2: W&S 2.**
Please refer to our response to Reviewer Q7p8 Q1.
**Q3: W&S 3.**
We implement adaptive hyperparameter optimization based on the Optuna framework and following previous studies[1], fixing the batch size at 1024 and conducting 100 independent trials through train-validation splits to prevent test set leakage, with the best-performed hyperparameters fixed during the final 15 seeds.
**Q4: W&S 4**
Please refer to our response to Reviewer B7Cf Q4.
**Q5: W 5**
Since our paper focuses on evaluating the robustness of tabular machine learning models in feature-shift scenarios, as Reviewer Q7p8 and WumN noted, our paper is a benchmark and doesn't propose new methods. Therefore, we did not propose a new method. However, through our response to reviewer B7Cf Q5, we explored the potential of AutoFE in addressing the challenges of feature shifts.
**Q6: The justification for the Pearson correlation is weak.**
We calculated Kendall's $\tau$ coefficient among Pearson, Spearman, SHAP, and mutual information to assess the consistency of feature importance rankings. The table showed a high degree of consistency across these metrics:
|Metric|$\tau$|
|-|-|
|Pearson|0.60|
|Spearman|0.61|
|SHAP|0.49|
|Mutual Information|0.53|
Despite Spearman's high consistency ($\tau$ = 0.61), We chose Pearson for its widespread use and interpretability. The minor difference in consistency between Pearson and Spearman does not affect the conclusions of the analysis. Detailed data are provided in the **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/feature%20importance.pdf)**. We will further refine related experiments in the final version.
**Q7: The claim that LLMs show potential in feature shifts is not rigorously validated.**
We evaluated LLM performance in scenarios without missing value imputation. Experimental results demonstrate that LLM can not only handle feature shifts but is also more robust than tree-based models. Please refer to our response to reviewer 8NVr Q5. Additionally, we tested the performance of LLM-based AutoFE in handling feature shifts. Please refer to our response to reviewer B7Cf Q5.
**Q8: Observation 2 lacks theoretical justification.**
This observation is drawn from experimental results, thereby limiting the scope of the analytical section. To further investigate this observation, we conduct a theoretical justification.
A high PCC signifies a strong feature-target relation, enabling the feature to provide substantial information increment and enhance model performance. Thus, removing important features leads to a significantly greater performance degradation than removing unimportant ones. Additionally, the model depends strongly on highly correlated features during training, and their absence markedly deteriorates performance (see single-shift experiment results).
We have already pointed out the limitations regarding the impact of interactions among input features on model performance in lines 433-435 in the Conclusion of our paper. Subsequently, we will conduct an in-depth exploration of this. For additional explanations regarding this observation, please refer to our response to Reviewer 8NVr Q2.
**Q9: Not cite all relevant work.**
Thank you for your suggestion. Although existing Heterogeneous Domain Adaptation (HeDA) methods have achieved significant progress on feature shift of images, tabular data presents a fundamentally different pattern. The inherent structure of it presents challenges when attempting to directly implement HeDA on such datasets. Moreover, our review of the literature reveals that what is often referred to as "feature shift" in many papers is essentially a form of distribution shift. For example, [2] regards covariate shift as feature shift. Other related works on feature shift discussed in our paper, please refer to our response to Reviewer 8NVr Q10.
[1] Liu, Si-Yang, et al. TALENT: A Tabular Analytics and Learning Toolbox. arXiv preprint arXiv:2407.04057, 2024.
[2] He, Huan, et al. Domain adaptation for time series under feature and label shifts. International Conference on Machine Learning, 2023. | Summary: This paper establishes a new benchmark for tabular data, focusing on feature shift issues, where feature increments and decrements occur between the training and testing phases. This benchmark includes three types of tasks across twelve datasets and evaluates twenty-four tabular methods. The experimental results reveal three key observations that can guide future research in this area.
Claims And Evidence: The authors' major claims are based on three key observations from the experiments: (1) tabular models have limited performance in handling feature-shift scenarios; (2) feature importance shows a linear trend with performance degradation; (3) performance under feature shift is related to performance in a closed environment. The authors have demonstrated these key observations through their experiments.
Methods And Evaluation Criteria: This is a benchmark paper, and as such, it does not propose any new method to address the feature shift problem. The evaluation in this paper is comprehensive and accurate. The authors use accuracy and ROC-AUC for classification tasks and RMSE for regression tasks. Additionally, the performance gap is illustrated to demonstrate model robustness. Extensive analyses are provided to highlight the shortcomings of existing methods and suggest future directions.
Theoretical Claims: The paper does not present theoretical claims.
Experimental Designs Or Analyses: The experimental design and corresponding analyses are comprehensive because this is a benchmark paper. The three key observations are fully supported by the experimental results.
Supplementary Material: The supplementary material includes a URL that links to an anonymous Git repository. This repository contains the full code and instructions for using the code in this paper.
Relation To Broader Scientific Literature: This paper can advance tabular data research towards deployment in real-world applications. Specifically, this research has revealed a weakness in existing tabular methods, that is, feature shift leads to performance degradation issues, and established a benchmark to comprehensively evaluate the performance of each method, guiding future method development.
Essential References Not Discussed: No essential reference is omitted.
Other Strengths And Weaknesses: Strengths:
1. This paper studies a novel and practical problem for the tabular classification and regression problems. In tabular tasks, feature failure is a common problem. With the advance of this problem, the tabular methods can better deploy in the real-world applications.
2. The experimental results and corresponding analysis in this paper is comprehensive. The analyses have revealed three observations, which can guide the following researches.
3. The authors provide a large-scale benchmark for the studied problem, containing three types of tasks and twelve datasets evaluated by four metrics. This can benefit the whole tabular community.
Weakness:
Although the studied problem and conducted experiments offer a certain contribution, the authors should provide datasets that naturally contain feature shift issues rather than simply analyzing performance on synthetic datasets.
Other Comments Or Suggestions: Please see the strengths and weaknesses above.
Questions For Authors: 1. What is the performance upper bound for each dataset when the feature shift issue occurs? When feature shift happens, the optimal performance of each dataset should change, and understanding this optimal performance is helpful to gauge the gap between existing methods and a perfect method.
2. Could the authors provide some examples of feature shift issues occurring in the real world? This would be helpful for understanding the importance of this problem.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer WumN:
Thank you very much for your valuable feedback on our paper. We will take your reviews seriously and make the necessary revisions or additions to the final version.
**Q1: The authors should provide datasets that naturally contain feature shift issues rather than simply analyzing performance on synthetic datasets.**
Regrettably, there currently exists no dataset specifically designed for feature shift, unlike Tableshift [1] which was developed for distribution shifts. However, we have preliminarily constructed a feature-shifted dataset based on the Heart dataset. Given that different features in the original dataset require distinct measurement instruments, we categorized the features into three groups: basic features, electrocardiogram (ECG) features, and exercise stress test features.
In the constructed feature-shifted Heart dataset, both the training set and the test set step 0 contain all features. However, patients in the test set step 1 lack ECG measurements, resulting in the absence of RestingECG and ST_Slope features. Similarly, patients in the test set step 2 did not undergo an exercise stress test, leading to the absence of ExerciseAngina and Oldpeak. A subset of examples is provided in this **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/feature-shift%20heart%20dataset.pdf)**.
Additionally, we evaluated CatBoost on this dataset:
|Metric|Test Set Step 0|Test Set Step 1|Test Set Step 2|
|------------|----------------|----------------|----------------|
|Accuracy| 0.879| 0.658| 0.823|
Note that for meaningful evaluation of feature-shifted datasets, models must be assessed under specific partitioning schemes. Applying the four experimental settings proposed in our paper would undermine the unique characteristics and practical relevance of such datasets. In the final version, we will include a comprehensive evaluation of the feature-shifted Heart dataset and will open-source the dataset for broader research use.
**Q2:What is the performance upper bound for each dataset when the feature shift issue occurs?**
We evaluated the performance upper bound of the models by training directly on the shifted data and assessing their performance on a held-out test set. Due to time constraints, we did not test the performance upper bounds of LLMs and tabular LLMs. The detailed performance for all evaluated models on 9 datasets is provided in this **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/upper%20bound.pdf)**, with the average performance upper bound and feature-shift performance on the Heart dataset presented here.
|Model|Average Vanilla Performance|Average upper-bound performance|
|-|-|-|
|LightGBM|0.700|0.707|
|XGBoost|0.698|0.761|
|CatBoost|0.718|0.767|
|TabPFN|0.778|0.804|
|DANets|0.617|0.628|
|MLP|0.775|0.783|
|NODE|0.599|0.613|
|ResNet|0.796|0.808|
|SwitchTab|0.810|0.821|
|TabCaps|0.743|0.777|
|TabNet|0.604|0.616|
|TANGOS|0.790|0.794|
|AutoInt|0.629|0.642|
|DCNv2|0.776|0.786|
|FT-Transformer|0.661|0.671|
|GrowNet|0.578|0.589|
|Saint|0.770|0.778|
|SNN|0.718|0.729|
|TabTransformer|0.502|0.515|
|TabR|0.806|0.818|
|ModernNCA|0.781|0.819|
The results demonstrate that, under the same degree of feature shift, models trained on the original dataset exhibit inferior performance compared to those trained directly on the shifted dataset. This indicates that although training on the original dataset provides more comprehensive information, feature shift significantly compromises model robustness, resulting in performance degradation relative to models trained on the shifted data. Furthermore, as the degree of feature shift continues to increase, the performance gap between original and shifted datasets also becomes progressively larger, indicating a progressively diminishing robustness of the model.
**Q3:Could the authors provide some examples of feature shift issues occurring in the real world?**
Section 2.2 of our paper employs forest disease monitoring as a case study to demonstrate how sensor degradation leads to a reduction in available features. As further evidenced by the designed heart dataset in Q1, incomplete medical examinations may result in missing diagnostic indicators (features) due to the absence of specific equipment.
The feature shift phenomenon also manifests prominently in financial and transportation domains:
- Finance: Stock prediction models trained on comprehensive features (e.g., financial ratios, macroeconomic indicators) may encounter missing features (e.g., market sentiment indices) during real-world deployment due to unforeseen events.
- Transportation: Accident prediction models relying on features like road conditions and weather data may experience partial feature absence caused by sensor failures or insufficient data collection.
[1] Gardner, Josh, Zoran Popovic, and Ludwig Schmidt. Benchmarking distribution shift in tabular data with tableshift. Advances in Neural Information Processing Systems, 2023: 53385-53432. | Summary: # Summary
This paper proposes a benchmark to study "feature shift" in the tabular data setting -- where the set of features (i.e. columns) available to the model change at test time vs. training time. The authors identify a set of 12 datasets (four binary classification/multiclass classification/regression datasets), along with different shift settings (random, most/least relevant, single shift), and conduct a set of benchmarking experiments across these datasets, highlighting some empirical findings.
Overall, this is a much-needed direction of high relevance to the tabular data research + practitioner community. However, I have some concerns about design decisions in the study, which seem to make it difficult to rely on the comclusions. At the very least, it is critical to see empirical validation that the authors' decisions do not overly bias the results. Additionally, the benchmark seems fairly small, and the empirical results are noisy, which can make the results feel somewhat unreliable. Some clarification is also needed on various details.
# Major comments
* In some sense, each "table" is a data point in this study. As such, the authors have 12 data points (four binary classification/multiclass classification/regression datasets), which makes it difficult to rely on the noisy results without clear trends (both Figure 3 and Figure 4 show data that clearly do *not* conform to the line of best fit drawn on the plot). The benchmark would feel much more reliable if it was comprised of more tables, or if the authors were at least able to better explain the results (for example, why are there multiple, apparent "trajectories" or lines in Figure 3?). I would suggest that the authors incorporate more high-quality tables from existing benchmarks, including some of those referenced in the paper (such as the Grinsztajn benchmark, TabZilla https://arxiv.org/abs/2305.02997, or the UniPredict suite https://huggingface.co/datasets/mlfoundations/tabula-8b-eval-suite/tree/main/unipredict).
* API design: most users won't want to re-run your experiments (shown in "Example Command" -- this should be a latex code listing). Instead, they will want to get the *data*, and run their own algorithms on it. They may also want to contribute new data to the bechmark, and run your implementations against this. Please comment on how this is supported, and consider adding this to your documentation.
* The authors make a significant design decision -- perfoming **mean-value imputation with missing features** -- which seems likely to have a major impact on the study's results. In particular, **replacing** values in the existing columns, after training on the true values, is not the same as "removing" the column. It revalues that column, and changes the model's predictions to p(x_j{j\neq 1}|x1=x1) for some imputed feature x1. In this case, it seems an almost trivial observation that "removing" (imputing) important features leads to a linear drop, as a model is likely to learn to rely on these important features, and then mean-inputation actually distorts the distribution of X along that dimension, leading to an unsurpising performance drop. If the authors want to demonstrate that this is a reasonable design decision, I would suggest (1) providing evidence from other studies that this is a realistic and reliable method for feature "removal", and (2) show that the results of the study are robust to this decisino (for example, by performing other forms of removal or imputation and demonstrating that the key claims still hold). I, however, am extremely skeptical of the results without such validation.
* Many methods used in this study are capable of natively modeling missing data. However, the authors do not investigate this decision, despite the fact that any practitioner would likely use this in practice if the schema was known to change. Please also include variants of models where they are trained with missing data (at train time, for example, by randomly masking x% of cells to missing) and then the target column is set to missing at test time.
* It would be helpful to provide "oracle" measures of performance on the shifted data, by training a model directly on the shifted data and evaluating on a held-out performance. This would help understand how much of the perofrmance degradation is due to the information loss (an irredicible source of error), vs. how much is due to lack of robustness (it is this latter form of drop that the authors seem interested in minimizing/understanding).
# Minor comments
* I think the name "feature shift" is not an accurate descriptor of the setting being studied, and I am not aware of this terminology being used in prior tabular studies. Perhaps "column shift" or "schema shift" would be better names.
* Details of the Pearson correlation method (L168) belong in the main text, not in the supplementary.
* Why do the authors use a "sampling with replacement" approach to single shift, instead of simply removing one feature randomly? Please explain or revise.
* While the hyperparameter grids are given, the HPO method is not - please clarify how hyperparameters are tuned.
* The design of iteratively removing important/unimportant features is also in the Gardner et al. 2024 paper.
# Typos ec
* Abstract: "Python codes" --> Python code
* L079: "feature-shift challenge" --> missing period.
Claims And Evidence: See above.
Methods And Evaluation Criteria: See above.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: No.
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer 8NVr:
Thank you very much for your valuable feedback on our paper. We will take your reviews seriously and make the necessary revisions or additions to the final version.
**Q1: It is difficult to rely on noisy results without clear trends.**
We would like to clarify that |$\rho$| greater than 0.7 can be considered as a moderate linear correlation[1]. Tableshift[2] figure 5 also draws a conclusion of linear correlation based on $\rho$ value of 0.7. In our paper, $\rho$ value in Figure 3 is 0.7405, and $\rho$ value in Figure 4 is 0.6. Therefore, despite the presence of noisy data, relevant conclusions can still be drawn based on $\rho$ values. We will add the evaluation results of more datasets to mitigate the noise.
**Q2: why are there multiple trajectories in Figure 3?**
The reason for multiple trajectories in Figure 3 is that each trajectory consists of points derived from the same dataset. During the fitting process, the results of all datasets were combined, resulting in each dataset forming a distinct trajectory. It is worth noting that these trajectories reflect the relationship between feature importance and model performance across different datasets. Specifically, each dataset's trajectory demonstrates how its specific feature importance influences the model's performance.
To better support our conclusions, this **[ link](https://anonymous.4open.science/r/icmlrebuttal-55BD/trajectories.pdf)** provides correlation and accuracy plots for 12 different datasets. Below are $\rho$ values for the relationship between correlation and accuracy for each of the 12 datasets.
|Dataset|Credit|Electricity|Heart|Miniboone|Eyemovements|Iris|Penguins|Jannis|Abalone|Bike|Concrete|Laptop|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|$\rho$|0.8597|0.8939|0.9244|0.9482|0.9796|0.9646|0.9377|0.9891|0.6866|0.9656|0.8360|0.9896|
It indicates that although levels of feature importance vary across different datasets, they all support our research conclusion, namely, that there is a significant linear correlation between feature importance and model performance. These trajectories further confirm the generality and reliability of our conclusion.
**Q3: Incorporate more high-quality tables.**
Please refer to our response to Reviewer B7Cf Q4.
**Q4: API design**
We have already provided the --export_dataset option in our paper. Users can set this option to True to export specific versions of the datasets they wish to evaluate (e.g., single-column missing, missing to a certain extent, and all possible missing scenarios).
The README.md in Supplementary Material already explains how users can add new datasets and new models. We will further elaborate on the code functions in the final version.
**Q5: Major comment 3&4.**
1. Reviewer B7Cf has mentioned that "The mean-value imputation strategy used is a common approach and therefore is a valid choice." We also found that benchmark datasets such as Talent[3] also use mean imputation as a method for handling missing values.
2. We compared the performance of various models using their own imputation methods, random imputation, and mean imputation. We present the average performance of several representative models under different imputation methods.
|CatBoost|NAN|Mean|Random|
|-|-|-|-|
|0%|0.845|0.851|0.818|
|9%|0.820|0.826|0.807|
|18%|0.792|0.801|0.767|
|27%|0.763|0.775|0.739|
|36%|0.733|0.747|0.702|
|45%|0.702|0.716|0.665|
|54%|0.670|0.682|0.630|
|63%|0.637|0.643|0.608|
|72%|0.600|0.596|0.574|
|81%|0.538|0.541|0.502|
Our experiments show that the model performs best under mean imputation, and its performance declines under all three types of imputation methods, indicating that the model still faces challenges from feature shift. Detailed data are presented in this **[ link](https://anonymous.4open.science/r/icmlrebuttal-55BD/imputation.pdf) and we strongly recommend that you view this link.**
**Q6: Major comment 5.**
Please refer to our response to Reviewer WumN Q2.
**Q7: Minor comment 1&2 & typos ec.**
We will make revisions in the final version.
**Q8: Minor comment 3.**
The purpose of the single-shift experiment is to test the impact of the absence of features with different importance levels on model performance. "Removing one feature randomly" is the experimental setting for a feature shift degree of $\frac{1}{n}$ (n = number of columns) in our random shift experiment.
**Q9: Minor comment 4.**
Please refer to our response to Reviewer wA1o Q3.
**Q10: Minor comment 5.**
We have already cited the paper you mentioned on lines 683-684 of Section A.3 on page 13 of our submitted paper.
[1] Iversen, Gudmund R, et al. Statistics: The conceptual approach. Springer Science & Business Media, 2012.
[2] Gardner, Josh, et al. Benchmarking distribution shift in tabular data with tableshift. Advances in Neural Information Processing Systems, 2023: 53385-53432.
[3] Liu, Si-Yang, et al. TALENT: A Tabular Analytics and Learning Toolbox. arXiv preprint arXiv:2407.04057, 2024. | Summary: This paper proposes a new benchmark “TabFSBench” for tabular data learning, especially for the feature shift in open environments. Feature shift means the feature could be decrement or increment. The authors select open-source datasets from OpenML and Kaggle’s dataset library, including three curated tasks of binary classification, multi-class classification, and regression, covering various domains such as finance, healthcare and geology. Four feature-shift scenarios including single shift, most/least relevant shift, and random shift. Some insightful empirical analysis are provided. Moreover, the paper provides callable APIs.
## update after rebuttal
My problem has been solved, and I decided to keep my score.
Claims And Evidence: Yes, the claims are well supported.
Methods And Evaluation Criteria: Yes, the paper evaluates the performance of different models with ROC-AUC and RMSE, which is reasonable for the evaluation.
Theoretical Claims: The paper doesn’t include theoretical results, but the empirical analysis from the results are reasonable and insightful.
Experimental Designs Or Analyses: Yes, I have checked the experimental design and analyses. The authors conducted extensive to evaluate the performance of different methods on the feature shift scenarios. The compared methods include both tree-based methods, deep learning methods, LLM methods, tabular LLM methods. The experimental design is sound and the empirical analyses are insightful, pointing the challenges that current methods remain in the feature shift scenarios. I believe these results could help promote related research.
Supplementary Material: Yes, I reviewed more detail and results about the experiments in the supplementary.
Relation To Broader Scientific Literature: This paper is related to the field of tabular data learning, especially in the benchmark for tabular learning. Different from previous methods or benchmarks, this paper is the first to study the tabular data learning under feature shift. The authors present a definition about feature shift in tabular data learning. This problem is important and unique to the tabular data (For images or text, there is no feature decrement or increment). Therefore, it is important to construct a benchmark to help evaluate the performance of tabular data learning methods under the feature shift problem. The benchmark could promote related research.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: It is good to see that the authors provided easy-tool-use API, I believe this could help related researchers. I hope the authors could release a leaderboard for the tabular data learning methods on this benchmark.
Other Comments Or Suggestions: No.
Questions For Authors: In the paper, the benchmark doesn’t consider the feature increment. I think it is also a promising direction to study how to better exploit the new feature information.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Dear Reviewer Q7p8:
Thank you very much for your valuable feedback on our paper. We will take your reviews seriously and make the necessary revisions or additions to the final version.
**Q1: I hope the authors can release a leaderboard for the tabular data learning methods on this benchmark.**
We have currently implemented a leaderboard for TabFSBench and incorporated additional evaluation results for TabPFN v2. This **[link](https://anonymous.4open.science/r/icmlrebuttal-55BD/Leaderboard%20_%20TabFSBench.pdf)** provides access to the TabFSBench homepage we designed. Upon completion of the paper review process, public access will be granted. Furthermore, we will continuously update the benchmark with the evaluation results of new models. Researchers are also cordially invited to contribute their own evaluation results on additional datasets or models. Additionally, we'll be expanding TabFSBench in the future from below two parts:
***1. For the dataset selected in TabFSbBench.***
We selected 12 datasets from Grinsztajn[1] and TabZilla[2]. They exhibit substantial heterogeneity in scale, domain, and task characteristics, deliberately encompassing diverse potential scenarios of feature shift. To comprehensively evaluate the challenges posed by feature shifts, we have implemented four distinct experimental configurations that collectively enhance both the breadth and depth of our benchmark assessment. Multiple experimental repetitions were conducted to mitigate stochastic variability.
Due to rebuttal time constraints, we were unable to review the large table benchmark in a short period of time. We will incorporate evaluation results based on the Grinsztajn benchmark into the final version.
***2. For the future work for TabFSBench.***
- We have established and maintained a project homepage and a comprehensive rank leaderboard. We continuously update the performance evaluation results of various newly released models or datasets (for example, the benchmark test data of the recently included TabPFN v2 model have been incorporated), ensuring that the research community can access the latest evaluation information in a timely manner. This homepage will be made open source in the subsequent phase.
- To enhance the scalability of cross-model comparison research, TabFSBench has been specifically designed with a feature-shift dataset export module. Through command-line parameter settings, this module can export datasets under various feature-shift scenarios, thereby effectively supporting researchers in conducting evaluations on their own models (for specific implementation details, please refer to our response to Reviewer 8NVr Q4).
- In the anonymized code repository provided in Supplementary Materials, TabFSBench has implemented a plug-and-play interface for importing new datasets and new models. This enables researchers to conveniently test their custom datasets or models within the TabFSBench framework (for specific implementation details, please refer to our response to Reviewer 8NVr Q4).
- We encourage and actively accept researchers to provide us with their own evaluation results. Subsequently, we will also be committed to constructing feature-shift datasets with practical significance and opening up both public and private evaluation rank leaderboards. This will further promote the research and evaluation of the feature-shift scenario within the community (for other relevant content, please refer to our response to Reviewer WumN Q1/3).
**Q2: In the paper, the benchmark doesn’t consider the feature increment.**
Section 2.2 of the paper indicates that this study primarily focuses on the impact of feature shifts on model performance. However, tabular machine learning models evaluated in this work are inherently incapable of handling newly added features in feature-increment scenarios, as they require consistent input and output dimensions. Consequently, in such scenarios, these models automatically disregard the new features while maintaining their original performance.
In the future work, we plan to extend the evaluation scope of TabFSBench by incorporating additional datasets and model evaluations. Specifically, we will include specialized models designed for feature increment scenarios (e.g., [3][4]) to assess their performance improvements under such conditions.
[1] Léo, Grinsztajn, et al. Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems, 2022: 507-520.
[2] McElfresh, Duncan, et al. When Do Neural Nets Outperform Boosted Trees on Tabular Data? Advances in Neural Information Processing Systems, 2023: 76336-76369.
[3]Zhang, Zhen-Yu, et al. Learning with feature and distribution evolvable streams. International Conference on Machine Learning, 2020.
[4]You, Dianlong, et al. Online learning for data streams with incomplete features and labels. IEEE Transactions on Knowledge and Data Engineering, 2024. | null | null | null | null |
Calibrating Video Watch-time Predictions with Credible Prototype Alignment | Accept (poster) | Summary: The paper proposes ProWTP, a two-stage approach designed to enhance watch-time prediction in video recommender systems by integrating prototype learning with optimal transport (OT). In the first stage, ProWTP employs a hierarchical vector quantized variational autoencoder (HVQ-VAE) to transform continuous watch ratio labels into discrete prototypes. In the second stage, optimal transport is used to align the distribution of these labels with the instance representation distribution, thereby boosting prediction accuracy. Extensive offline experiments conducted on three datasets show that ProWTP outperforms existing methods.
#########
Added after rebuttal: I checked the authors' rebuttal, and gave a detailed response. In summary, the authors' feedback doesn't resolve my concerns, so I maintain my original ranking of the paper.
Claims And Evidence: No. This work is problematic in at least the following aspects:
1. Unclear Motivation: The submission does not adequately justify the motivation of the proposed methods (e.g. D2Q for debiasing, TPM for reducing classification difficulty). The claim, "However, those methods struggle to consistently maintain high predictive accuracy across different models," was made without detailed explanation or evidence. The authors should strengthen the motivation of their approach.
2. Overgeneralized Claim: The assertion that the method is "suitable for any deep recommendation model" (in line 52) is questionable. In scenarios where data lacks multimodal distributions, the model’s robustness is uncertain, and no evidence addresses this limitation.
3. Insufficient Comparison: The authors claim, "Different from traditional prototype learning, ProWTP generates prototype vectors from label distributions, providing models with more precise and credible calibration references." (in line 81) While metrics are provided in tables, the superiority over simple clustering-based contrastive learning (which could also handle multimodal predictions) is not convincingly demonstrated. Visualizations comparing ProWTP with clustering, beyond the TR vs. ProWTP in the appendix, would enhance credibility.
4. Efficiency Concerns: The use of optimal transport likely incurs high computational cost, yet the efficiency of the proposed two-stage online approach is not adequately addressed, raising doubts about its practicality.
5. Lack of Literature Review: The paper lacks a thorough review of prior watch time prediction research and omits key references. Notably, the claim, "We investigate the multimodal distribution properties of watch-ratio across different video duration buckets for the first time,"(in line 105) is inaccurate, as the bimodal distribution of KuaiRand was previously studied in Zhao et al., 2023.
[1] Zhao et al., 2023, Uncovering User Interest from Biased and Noised Watch Time in Video Recommendation
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes, I checked the soundness/validity of any experimental designs or analyses. I have the following concerns:
1. Visualization in Appendix: While the appendix visualizes a comparison between TR and ProWTP, it lacks a comparison with clustering-based methods. Including such a comparison (e.g., ProWTP vs. clustering) would strengthen the evidence for ProWTP’s superiority, as clustering could also address multimodal distributions, making the current visualization less convincing.
2. Rationale for Optimal Transport: It is unclear why aligning the feature space with prototype vectors via optimal transport improves watch time prediction. Simpler alternatives (e.g., cosine similarity or k-means) might achieve similar alignment. The authors should justify the choice of optimal transport and explain its specific benefits for the prediction task.
3. Efficiency Trade-offs: The trade-off between performance gains and computational efficiency is concerning. Although the authors sample 20% of the data for contrastive learning, the per-sample cost of optimal transport appears substantial. Additionally, the deployment strategy is ambiguous: Are prototypes fixed during online inference? If so, how does the model address distribution shifts, and what are the implications for robustness?
4. Robustness Across Scenarios: The claim "suitable for any deep recommendation model" is tested only on multimodal data. The experimental design lacks validation on scenarios without multimodal distributions (e.g., unimodal data), leaving the model’s generalizability and robustness unverified.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: The contributions of the paper were built upon and extending several strands of work in the scientific literature:
1.Prototype Learning and Representation Discretization: Based on methods like VQ-VAE, the paper extends unsupervised learning and clustering techniques by converting continuous labels into discrete prototype.
2.Optimal Transport for Distribution Alignment: Optimal transport has been widely used to address distribution mismatches. Here, it is employed to align the label-based prototype distribution with instance representations for for video watch-time prediction
Essential References Not Discussed: 1. The paper does not provide a thorough review of prior research on watch time prediction and omits several important references. Specifically, the claim in line 105, "We investigate the multimodal distribution properties of watch-ratio across different video duration buckets for the first time," is inaccurate. The bimodal distribution of KuaiRand was previously explored in [1], undermining the novelty asserted by the authors.
2. Additionally, the paper lacks comparisons with key prior works, including:
CREAD[2], SWAT[3], GR[4].
3. The reported improvements over baselines on the two datasets appear modest, warranting a statistical significance test to validate their impact. To strengthen the evaluation, the authors should compare their method against these approaches and test performance on additional benchmarks such as KuaiRec and CIKM datasets.
4. The paper lacks discussion on interpreting prototypes and their connection to user behavior, which could hinder insight into the model’s predictions.
[1] Zhao et al., 2023, Uncovering User Interest from Biased and Noised Watch Time in Video Recommendation
[2] CREAD: A Classification-Restoration Framework with Error Adaptive Discretization for Watch Time Prediction in Video Recommender Systems
[3] SWaT: Statistical Modeling of Video Watch Time through User Behavior Analysis
[4] Generative Regression Based Watch Time Prediction for Video Recommendation
Other Strengths And Weaknesses: Figure 1b fails to specify the method used or which data components are analyzed to demonstrate "Representation confusion." In Figure 2, the boundaries and logic between modules are poorly defined and highly confusing. Overall, the quality of the figures is substandard, detracting from the paper’s readability and professionalism.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the problems above for detailed weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Dear Reviewer KkGb,
We appreciate your valuable questions and suggestions. We summarize your concerns below and provide responses.
> **Q1: The motivation is clear and explicit.**
We mention that existing WTP models struggle to achieve high accuracy. We believe the main reason lies in instance representation confusion, which we explain from two perspectives:
1. Mathematical Explanation
let the instance representation of a sample $(x,y)$ be $f(x)$, with its ideal center being $ \mu_y = \mathbb{E}[f(x) \mid y]$, where $y$ is the ground-truth. The degree of instance representation confusion is defined as the distance between the instance representation and the ideal center $ d(f(x), \mu_y) = \|f(x) - \mu_y\|.$ Then, the model's prediction error $ \Delta_x = |y - \hat{y}|$ is predominantly correlated with the degree of instance representation confusion $d(f(x), \mu_y)$.
2. Model Performance
In existing WTP models, the instance representation space is often disorganized, with instance representations of different data types failing to form well-defined clusters, as shown in Figure4. Also, we provide additional results of different duration buckets in **Reviewer WSao Q3**. ProWTP's performance is even better in medium and long videos with large prediction errors, which further illustrates that ProWTP optimizes the distance between instance representations and prototypes to achieve more accurate predictions.
We believe that ProWTP improves prediction accuracy primarily by alleviating instance representation confusion and directly incorporating label distributions into the model, thereby providing additional information gain. So the motivation is clear and explicit.
> **Q2: Suitable for any deep recommendation model**
This statement wants to show that ProWTP is a model-independent scheme, i.e., $f$ in $f(x)$ can be replaced by any depth model (i.e. DCN), and there is no overclaim.
> **Q3: Comparison with traditional clustering**
In Table 3, we compare Kmeans' scheme for generating prototype. As for the clustering-based contrastive learning scheme, we have not searched relevant literature in WTP at this time.
> **Q4: Efficiency**
**A.4.** has discussed this problem in details, OT is only present during training, OT is removed during testing, and online inference time complexity is linear O(CK).
> **Q5: lack of references**
**Its not TRUE. line 096 in Related work part, [1] proposes bimodal distribution, which we have already cited**. In our work, we focus on the multi-modal distribution characteristics of videos. Previous studies have overlooked the distribution of watch-time beyond the video duration, and we are the first to propose focusing on the multimodal characteristics rather than just bimodal distribution. Importantly, we need to emphasize that the main contribution of our paper is the transformation of label distributions into prototypes that provide a credible reference for model calibrations.
> **Q6: Visualization**
Thanks for your suggestion, we will add the use of kmeans-generated prototypes for ProWTP visualization subsequently.
> **Q7: Rationale for OT**
**In Appendix A.12, we compare three different alignment methods: OT, L2 distance, and no alignment.** We present their results and conduct a case study. The OT alignment method performs global alignment rather than aligning each sample independently. Compared to other alignments, OT introduces structured information. This approach not only achieves better results but also produces sparser weights between instances and prototypes.
> **Q8: Efficiency Trade-off**
**A.4.** has discussed this problem in details. We tested the wr distribution across months and its distribution does not change significantly, so it is reasonable that prototype is naturally fixed. **If it changes in extreme cases, we just need to re-pull the data for training, as is always done in industry.**
> **Q9: Robustness Across Scenario**
This scenario does not occur in the WTP task, because according to our platform data analysis (its DAU is 400 millions), **the users' watch-ratio always shows a multi-modal distribution. This is inherent in the WTP.**
> **Q10: only two datasets**
**ITS NOT TRUE**. In our paper it is **three datasets** and the results are reported averaged over five runs.
> **Q11. The interpretability of prototypes and their association with user behavior .**
It has been discussed in **A.3** in details. Due to space constraints, you can refer to A.3.
> **Q12: More baselines**
| Wechat | CREAD | SWAT | ProWTP |
|--------|-------|-------|--------|
| RMSE | 28.93 | 29.31 | 28.47 |
The time of GR in arxiv is Sat, 28 Dec 2024, which is a same-time work and can be disregarded.
> **Q13: Figures modification**
Due to space constraints, you can refer to Reviewer WSao Q1.
We sincerely thank you again for your feedback and hope that our responses can change your mind. We look forward to your reply and further discussion. | Summary: This paper focuses on the watch-time prediction problem in video recommender systems. It employs a two-stage framework: (1) using a hierarchical vector quantized variational autoencoder to generate credible prototypes from watch-ratio distributions; and (2) leveraging semi-relaxed unbalanced optimal transport to align samples with those prototypes. The goal is to alleviate “instance representation confusion” that leads to prediction errors. Appendix gives the corresponding mathematical proof and statistical analysis that justifies the motivation of the paper. This paper provides offline experiments and online A/B tests to verify the effectiveness of the method in real scenarios. Overall, this paper has a very fresh perspective, is undeniably innovative, and presents interesting ideas and scenarios.
Claims And Evidence: The authors argue that conventional methods fail to fully exploit the multi-modal nature of watch-ratio under different video-duration buckets and often overlook the confusion that arises in deep recommendation model representations. By generating credible prototypes on the label side and performing distribution alignment, their method substantially reduces representation offset, thereby lowering prediction errors. Empirical evidence is provided by experiments on multiple real-world industrial datasets, where the proposed approach outperforms a wide range of baselines, demonstrating its effectiveness.
Methods And Evaluation Criteria: The papers proposed two-stage approach involves:
1. Using HVQ-VAE to quantize the watch-ratio distribution into prototypes for model calibration;
2. Employing SUOT to align sample representations with these prototypes, along with an assignment loss and a compactness loss to further ensure samples cluster around their respective prototypes.
Evaluation uses RMSE and MAE to measure regression errors, and XAUC and XGAUC to assess ranking performance.
Theoretical Claims: Appendix A.1 provides the key mathematical derivation, proving that watch-time prediction error $\Delta$ is positively correlated with $\|f(x) - \mu_y\|$. The proof views the network’s prediction as $\mathrm{ReLU}(W f(x) + b)$ and assumes there exists a center $\mu_y$ related to the true value. Under this assumption, closeness of f(x) to $\mu_y$ yields smaller prediction errors. The mathematical discussion, split according to the activation regions of ReLU, supports the approach of mitigating confusion by aligning representations to credible prototypes.
Experimental Designs Or Analyses: The authors conduct experiments on multiple real-world datasets, comparing with various watch-time prediction baselines (like D2Q, TPM) and debiasing models (like DVR, CWM). They also perform ablation studies to assess how removing HVQ-VAE, SUOT, or the assignment loss affects performance. Additional visualizations illustrate how the representation space becomes significantly less confused once samples are attracted to their respective prototypes.
Supplementary Material: In the appendices, the authors detail the formulas and algorithmic steps, including how HVQ-VAE is implemented, the proof, further visual illustrations of representation confusion, discussion on time complexity, and relation between prototypes and labels. Proposition A.1 is acore mathematical component, offering robust theoretical grounding for the method’s design.
Relation To Broader Scientific Literature: This work applies vector-quantized variational autoencoders and distribution alignment via optimal transport to the recommendation context, extending beyond prior watch-time methods that rely on simplistic bucketization or direct regression. It proposes an effective scheme to tackle instance confusion via prototype calibration, particularly suited to scenarios featuring multi-modal label distributions.
Essential References Not Discussed: I do not find such related works.
Other Strengths And Weaknesses: Strengths:
1. The paper proposes a novel and interesting two-stage method that stands out by quantizing label distributions themselves—rather than relying on solely sample-based embeddings—to derive meaningful prototypes for instance representation calibrations.
2. A Well-Motivated paper. By focusing on “instance representation confusion,” the authors provide both an intuitive and a formal rationale for aligning sample representations with label-derived prototypes
3. The paper presents experiments on multiple real-world datasets as well as online ab test, systematically comparing the proposed method with a range of established baselines.
4. In addition to empirically validating their approach, the authors incorporate a rigorous theoretical analysis, particularly in Appendix A.1, where they link prediction error to the distance from an “ideal center.” This mathematical proof underpins why the prototype alignment mechanism effectively curbs representation confusion and improves watch-time prediction accuracy.
Weakness:
1. The proposed two-stage framework may introduce additional modeling complexity compared to simple direct regression approaches, can you provide more details?
2. While the paper provides a solid exploration of assignment and compactness loss functions, it would be insightful to see a deeper analysis of how these losses interact with other potential regularizers.
3. It would be beneficial to include a broader discussion on how prototypes evolve when user behavior changes over time or when new content is introduced. Realistic settings often involve rapidly shifting distributions, and observing how HVQ-VAE adapts (or could be adapted) would strengthen the paper’s practical insights.
4. The online experiment only includes results in Appendix, plz sharing more findings from an online A/B test would highlight how the proposed method translates to real-world gains, offering more confidence to practitioners.
5. This paper briefly touches on how prototype alignment can alleviate “representation confusion.” More extensive visualizations or case studies showing how specific user-video pairs move in representation space before and after alignment would further clarify this mechanism.
6. Adding a list of symbols can help others better understand the content.
Other Comments Or Suggestions: I have no other comments and suggestions.
Questions For Authors: Please refer to the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for recognizing the significance of our work and for your generous positive-score evaluation. We are very grateful for your valuable suggestions and constructive questions, which have helped us improve this paper. Below, we provide detailed responses to your queries:
> **Q1: modeling complexity(two-stage framework vs simple direct regression approaches)(refer to A.4)**
Due to space constraints, please refer to the answer to Q3 in the rebuttal for Reviewer gDS8.
> **Q2: analysis of assign loss and compact loss**
During Stage II, we adopt two losses to help with calibrating the sample space. In our method, we aims for the instance representations to cluster tightly around their corresponding prototypes. So the assign loss is designed to decrease the distance between samples and their corresponding prototypes. And since we hope for instances assigned to the same prototype to be closer together in the representation space, the compact loss is designed to encourage samples under the same prototype to cluster more closely in the representation space
By minimizing both losses, our method can not only help reduce instance representation confusion but also enhance the model’s ability to capture fine-grained features, ultimately improving prediction performance.
> **Q3: Model adjustment for changes in user behavior (refer to A.4)**
We consider the following two aspects:
(1) The stability of user behavior
HVQ-VAE is completely independent, and its spatio-temporal complexity does not affect the training and inference time of ProWTP. When the distribution of watch ratios is sufficiently large, the resulting prototype distribution is stable. Furthermore, we observed that for a well-established video recommendation APP, the watch-ratio distribution remains largely unchanged and consistent across multiple months.
As shown in Figure 10, we randomly sampled 200,000 users from our APP (a short-video platform) and extracted their historical behavior on the 1st day of each month from January to November 2024. The data were divided into D=15 buckets based on video duration. We then computed the Wasserstein Distance between the watch-ratio probability density distributions of each month and November, as well as the Kolmogorov-Smirnov test with $p<0.05$ between their cumulative empirical distributions. The results indicated no significant distribution shifts across multiple months.
(2) Model adjustments driven by user behavior changes
Even in extreme scenarios where user behavior undergoes notable adjustments, we only need to resample the watch-ratio distributions for each duration buckets, perform offline retraining, and update the weights of ProWTP. This process incurs minimal computational overhead.
> **Q4: more findings from the online A/B test**
| Duration | 0-7 | 8-12 | 13-24 | 25-42 | 43-60 | 61-84 | 85-120 | 121-160 | 161-320 | 321-600 |
| :--------: | :----: | :----: | :----: | :----: | :----: | :----: | ------ | ------- | ------- | ------- |
| watch time | 0.072% | 0.089% | 0.093% | 0.118% | 0.137% | 0.162% | 0.178% | 0.189% | 0.213% | 0.229% |
According to the results, we observed two key findings:
1. ProWTP demonstrated consistent performance improvements over D2Q across all duration buckets.
2. When examining different duration segments, we found that medium and long-form videos (>60s) exhibited significantly more pronounced watch time benefits compared to shorter videos. This pattern also aligns with the results observed in our offline dataset evaluations.
> **Q5: alignment visualization**
We visualized the relationship between errors and instance representations in Appendix A.1 (Figures 4 and 5). By comparing these visualizations, we observed that:
1. Prediction error ∆ is positively correlated with the degree of confusion.
2. TR exhibits a significantly higher level of confusion, while ProWTP effectively mitigates this confusion by reducing the distance between instances and reliable prototypes.
3. Compared to TR, ProWTP shows significantly fewer points with large errors.
This demonstrates that the root cause of reducing prediction errors lies in learning better instance representations.
> **Q6: Symbol list**
Thank you for your suggestion. Due to space constraints of rebuttal, we will add a list of symbols in the final paper.
Thank you once again for your valuable suggestions and feedback. If you have any further questions, we would be happy to continue the discussion with you. We are looking forward to your reply.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thorough responses to all the questions. The additional experiments significantly strengthen the paper's validity: 1) real-world scenario tests demonstrate practical robustness, 2) hyperparameter sensitivity analyses clarify method stability, and 3) extended baseline comparisons provide broader contextualization. The discovery of multimodal phenomena rooted in real-world media data is particularly novel, offering fresh insights that could inspire new research directions in data-driven AI. These enhancements notably improve the paper's technical rigor and conceptual impact. Given the improved empirical validation and original findings, I raise my score to accept.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 3cw2,
We sincerely thank you for your generous raise and recognition of our work and are honored to be able to address all of your concerns! | Summary: This paper presents ProWTP, a novel two-stage approach for predicting user watch-time in video recommendation systems. The method improves prediction accuracy by aligning label distributions with instance representation distributions through prototype learning and optimal transport techniques. Specifically, it leverages a Hierarchical Vector Quantised Variational Autoencoder (HVQ-VAE) and Semi-relaxed Unbalanced Optimal Transport to address the inherent distribution characteristics of labels and mitigate instance representation confusion—challenges often overlooked by existing methods. Experimental results on three datasets demonstrate the effectiveness of ProWTP in enhancing watch-time prediction.
Claims And Evidence: Yes, this paper has strong experimental and theoretical evidence to support the claim.
Methods And Evaluation Criteria: Yes, this paper used two public datasets and one private dataset with a reasonable training-testing-validation setup. The use of widely recognized metrics (MAE, RMSE, XAUC, XGAUC) ensures the results' validity and comparability with existing research.
Theoretical Claims: Yes, I check the theoretical claims and can confirm the correctness.
Experimental Designs Or Analyses: Yes, The experiments look solid. The ablation studies and comparative analyses effectively validate the contribution of each proposed component (HVQ-VAE, SUOT, assignment loss, compactness loss). More experiment results can be found in appendix.
Supplementary Material: Yes, I reviewed the supplementary material, particularly the detailed explanation of the instance representation confusion analysis and prototype generation details (Appendix A.1 and A.2). These sections clearly support the main findings by providing additional theoretical explanations and methodological clarifications.
Relation To Broader Scientific Literature: The ProWTP framework addresses a key limitation of existing watch-time prediction approaches, which typically transform watch-time labels for prediction and then reverse the transformation, neglecting both the natural distribution properties of labels and the instance representation confusion that can lead to inaccurate predictions.
Beyond improving the accuracy of video recommendations, this work bridges multiple subfields by offering insights into generalized regression problems, multimodal learning, and structured representation alignment in AI models. Its implications extend beyond recommendation systems, potentially influencing a broader range of machine learning applications that require distribution alignment and representation calibration.
Essential References Not Discussed: No essential references appear to have been omitted. The paper comprehensively cites and engages with relevant prior works.
Other Strengths And Weaknesses: S1: This paper presents a completely different perspective to optimize the WTP, proposing phenomena and methods that are very interesting and can be extended to more application scenarios.
S2: The hierarchical VQ-VAE-based credible prototype generation is innovative, providing clear advantages in capturing multimodal distribution patterns.
S3: Distribution alignment effectively mitigates the representation confusion issue, leading to improved predictive performance.
W1: Lack of detailed analysis regarding the sensitivity to hyperparameters, especially in different dataset scenarios.
W2: Minor language and grammatical errors should be revised for enhanced readability.
Overall: I think this paper is very interesting and the proposed phenomenon of multi-peaked distributions is valuable. And the scheme of conversion to prototype by distribution is very innovative.
Other Comments Or Suggestions: * Minor typos are observed (e.g., "Sine" instead of "Since" in Section 5.1). Thorough proofreading is recommended.
* In Appendix, the authors have a typo: (see line 1074), A9 and 10 are the same thing
Questions For Authors: * Can authors discuss the computational efficiency of ProWTP, particularly in large-scale production environments?
* How sensitive is ProWTP to the hyperparameters (e.g., $\lambda$ in SUOT)? A brief analysis or guidance on hyperparameter tuning would enhance the practical applicability.
* Provide more online experiments environment will be better.
* How does ProWTP handle cases with highly imbalanced data distributions, such as scenarios with rare but critical user behavior patterns?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer gDS8,
We sincerely thank you for recognizing the significance of our work and for your generous positive-score evaluation. We are very grateful for your valuable suggestions and constructive questions, which have helped us improve this paper. Below, we provide detailed responses to your queries:
> **Q1: Lack of detailed analysis regarding the sensitivity to hyperparameters**
In our method, we have two hyperparameters: K (number of prototypes) and β (weight of the compact loss). In Section 5.2, we have already provided the experimental results and analysis for hyperparameter K. Due to space constraints, here we only supplement the hyperparameter β experiments on the KuaiRand-Pure dataset.
| $\beta$ | 0.00 | 0.05 | 0.10 | 0.15 | 0.20 |
| :-----: | :----: | :----: | :--------: | :----: | :----: |
| RMSE | 41.12 | 40.66 | **40.45** | 40.64 | 40.98 |
| MAE | 24.92 | 24.70 | **24.43** | 24.68 | 24.89 |
| XAUC | 0.7221 | 0.7258 | **0.7290** | 0.7252 | 0.7223 |
| XGAUC | 0.7004 | 0.7033 | **0.7048** | 0.7036 | 0.7001 |
> **Q2: Minor language and grammatical errors**
Thank you for pointing out the writing issues, we will correct it in the final paper.
> **Q3: Complexity analysis and inference efficiency**
Unlike simple direct regression approaches, our method incorporates an additional prototype generation process. But importantly, this prototype generation is completely independent, and its spatio-temporal complexity does not affect the training and inference time of ProWTP. When the distribution of watch ratios is sufficiently large, the resulting prototype distribution remains stable, which means that Stage I does not need to be performed frequently.
In Stage II, during the inference phase, the OT module is removed. The final value is computed as a linear combination of similarities to each prototype, which is then input into the regressor. The time complexity of this process is only O(CK), where C and K are small constants, ensuring that the time overhead remains negligible.
> **Q4: online experiments environments details**
Thank you for your suggestion. Industrial recommendation systems typically employ a cascading architecture consisting of four stages: recall, pre-ranking, ranking, and re-ranking—a structure designed to efficiently recommend items to users from massive pools of video candidates. Since the recall stage operates without stringent real-time requirements, we strategically deployed ProWTP in this layer of an online short-video recommendation system, where it functions as one of multiple recall paths.
To validate its effectiveness in real-world applications, we used D2Q as our baseline model for comparison. The experimental results convincingly demonstrate the significant performance improvements our method delivers in actual business scenarios.
> **Q5: deal with imbalanced data distributions**
Thanks for your question. In fact, video recommendation scenarios inherently face imbalanced data distributions in terms of video duration. Most video datasets, including WeChat and KuaiRand, exhibit left-skewed distributions in durations. Our experimental results clearly demonstrate that ProWTP outperforms competing models on the WTP task, showing particularly strong performance in watch time prediction under imbalanced data distribution conditions.
Thank you once again for your valuable suggestions and feedback. If you have any further questions, we would be happy to continue the discussion with you. We are looking forward to your reply. | Summary: The authors propose ProWTP, a two-stage method combining prototype learning and optimal transport for watch-time regression prediction and deep recommendation models. First, a hierarchical vector quantized variational autoencoder (HVQ-VAE) is used to convert the continuous label distribution into a high-dimensional discrete distribution, providing credible prototypes for calibration. Then, ProWTP views the alignment between prototypes and instance representations as a Semi-relaxed Unbalanced Optimal Transport (SUOT) problem, with the prototype constraints relaxed. Moreover, ProWTP introduces the assignment and compactness losses to encourage instances to cluster closely around their respective prototypes, enhancing the prototype-level distinguishability. Offline experiments on three industrial databases serve to demonstrate the proposed model performance.
## update after rebuttal
I acknowledge the authors for their rebuttal. The authors have addressed my main concerns, particularly about clarity in Figure 1, performance improvements, and the lack of qualitative analysis. The planned revisions to figures and the addition of prototype visualizations should significantly enhance the understanding of the proposed method. While the performance gains are modest, the authors have justified their practical significance, especially for medium-to-long videos. Given these improvements, I believe the paper could be accepted and have raised my score to Weak accept.
Claims And Evidence: The authors start motivating their research by highlighting the need for video recommendation systems for personalized content, where the user watch-time metric constitutes a key metric for measuring user engagement. They state that existing Watch-Time Prediction (WTP) models have difficulties in achieving a high predictive accuracy across different recommendation systems because they do not consider the multimodal distribution of labels, reflected in Figure 1(a).
Then, the authors also highlight instance representation confusion. Although this issue is illustrated in Figure 1(b), neither a description of the features represented in the 2D scatter plot nor the concept itself (i.e., the authors only refer to “various patterns”) is provided, which makes it difficult to understand this claim.
When reaching the Related Work section, i.e., Section 2, the authors properly identify additional drawbacks of existing WTP approaches (e.g., difficult to model ordinal relationships and quantiles, no uncertainty quantification). They also review optimal transport and deep clustering with variational autoencoders (VAEs), identifying those works that constitute the basis of the proposed ProWTP, i.e., Semi-relaxed Unbalanced Optimal Transport (SUOT, see subsequent Section 3. Background) and Vector Quantized Variational Autoencoder (VQ-VAE), respectively.
Overall, the claims made in the paper seem to make sense, but there is a lack of details on the evidence provided in Figures 1(a) and 1(b). Conversely, it should be noted that the Related Work and Background sections are complete and comprehensive, allowing for researchers without experience in WTP, optimal transport, and deep clustering to understand the core of the proposed methodology.
Methods And Evaluation Criteria: The proposed method for watch-time prediction is properly designed to overcome the limitations of existing approaches identified at the beginning of the paper. First, the authors make use of prototypes for improving prediction given the multimodal distribution followed by watch-ratio. These prototypes are generated directly from the distribution of the prediction target via the proposed hierarchical VQ-VAE module. Second, to mitigate instance representation confusion, the generated prototypes are used to calibrate instance representations via alignment using SUOT. Finally, training objectives/loss functions are introduced, with additional details provided in the Appendix, important for the sake of reproducibility.
About the evaluation criteria, the authors make use of three different databases (Wechat, Kuairand, and Short-video), suitable for the application to be addressed. The performance of the proposed ProWTP is evaluated using well-known metrics such as MAE, RMSE, and AUC extensions for continuous values. Several baselines representing popular WTP methods are considered for comparison purposes.
In summary, the proposed methods and evaluation criteria make sense for the WTP task.
Theoretical Claims: A description and a mathematical explanation of instance representation confusion are provided in Appendix A.1. The authors should have referred to this additional information in the main paper, to ensure readers understand this less-known concept. The corresponding proposition seems correct and easy to understand, but the authors do not provide details on the experiments conducted to verify that only a small percentage of the total training data lies in the non-activation region of ReLU.
Experimental Designs Or Analyses: Reported results include first a comparison with several baselines representing popular WTP in the literature. However, according to Table 1, it should be noted that ProWTP slightly outperforms most of these approaches. A qualitative/error analysis could have been helpful to get a better idea about which situations the proposed method is more advantageous than existing ones, thanks to prototypes and SUOT. The visualization/analysis of the prototypes learned could have also significantly strengthened the discussion of results.
Second, an ablation study to measure the impact of the different ProWTP modules is shown in Table 2, supporting the suitability of the design decisions made. Third, different prototype generation methods and distribution alignment methods are tested, reporting results in Tables 3 and 4 to validate the usefulness of the proposed HVQ-VAE and SUOT. It should be noted that the improvements shown are again very small overall, even when compared to classical and simpler K-means/Random methods for clustering, or not using alignment for distribution. The authors should have completed their analysis by focusing on those samples where ProWTP has a clear advantage. Finally, the impact of the number of prototypes learned is shown in Figure 3, again with little differences appreciated between configurations. Although the proposed methodology for ProWTP is convincing, results do not seem to support its significance.
Supplementary Material: The Appendix is very complete and includes 1) explanations of instance representation confusion; 2) more details of pre-processing and prototype generations; 3) the description of the relationship between multimodal distributions, prototypes, and user behavior; 4) a discussion about the computational complexity; 5) the details on databases, baselines, metrics, and losses; 6) the results on different duration buckets; 7) the motivation for optimal transport; and 8) an online development and A/B test. The authors should have referred to the different Appendix sections throughout the paper to facilitate the understanding of crucial concepts to fully comprehend the proposed ProWTP.
Relation To Broader Scientific Literature: The key contributions of the paper integrate concepts from several research areas, including VAEs, deep clustering, and optimal transport, for watch-time prediction, to enhance this task in recommendation systems. The use of hierarchical VQ-VAEs for discrete representation of continuous watch-time distributions aligns with recent advances in prototype learning and quantized representation learning. Moreover, the use of SUOT for aligning instance representations with prototypes extends prior research on transport-based methods for structured prediction tasks.
Essential References Not Discussed: Overall, the related works mentioned in the paper seem relevant and enough to understand the context for the main contributions of the paper. Both Related Work and Background sections are complete and comprehensive, identifying advantages and limitations of existing methods.
Other Strengths And Weaknesses: The proposed ProWTP method effectively integrates prototype learning with optimal transport in a novel way to enhance watch-time prediction (WTP), addressing key challenges related to prediction accuracy and representation confusion. The paper is well-organized, including a thorough literature review and a clear methodological presentation, which makes it accessible to researchers beyond WTP experts. The evaluation is robust, incorporating multiple datasets and diverse baselines to ensure the reliability of the reported results. Additionally, the supplementary material is comprehensive, covering aspects such as computational complexity and additional experimental validations.
However, the reported performance improvements over baselines are relatively small, raising concerns about the practical impact of ProWTP. The experimental section could benefit from qualitative analyses, such as visualizations of learned prototypes and error analyses, to better illustrate when and why ProWTP is most effective. Finally, the clarity of the methodology figures (Figures 1 and 2) could be improved to better align with the textual descriptions, ensuring that the proposed approach is easily understandable.
Based on the strengths and weaknesses of the paper, my current overall recommendation is “Weak reject”.
Other Comments Or Suggestions: - The authors should have referred to the additional details provided in the Appendix throughout the paper, to ensure readers have a clearer understanding of those concepts which are briefly described in the paper, e.g., instance representation confusion.
- Figure 1(a): Which database(s) have been used to estimate and show the watch-ratio distribution? A brief description of the nature/contents of the videos considered should be provided at least for better understanding.
- Figure 1(b): How representation confusion is represented, i.e., which database(s)/features were used for the scatter plot?
- Figure 1(c) and Figure 2: The quality of the diagrams shown to illustrate the proposed ProWTP could be notably improved, for better comprehension. I think they should be less schematic and more specific on the details of the different stages (watch-ratio distribution, W, meaning of codebook vectors and matrices, etc.). Try to align them with the ProWTP description provided in Section 4 as much as possible.
- Figure 1 caption should be descriptive, allowing for the understanding of the figure without the need for reviewing the corresponding paragraphs in the paper.
- Appendix A.1.1: The proposition for instance representation confusion seems correct and easy to understand, but the authors should have provided details on the experiments conducted to verify that only a small percentage of the total training data lies in the non-activation region of ReLU.
Questions For Authors: Q1. The reported improvements over baselines are relatively small. Could you provide additional justification for the practical significance of ProWTP? Are there specific scenarios where these gains are particularly meaningful?
Q2. Have you considered including qualitative analyses, such as visualizations of learned prototypes or error breakdowns, to better illustrate when and why ProWTP outperforms other methods? Could such analyses identify specific cases where ProWTP is most effective?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer WSao,
We greatly appreciate your valuable questions and suggestions, which have helped us improve this paper significantly. We summarize your concerns below and provide detailed responses.
> **Q1: Details of Figures 1 and 2, and subsequent modifications**
- Figure 1(a) shows the watch-ratio distributions of three different video duration buckets (short/mid/long) in the Wechat dataset.
- Figure 1(b) provides a dimensionality-reduced visualization of the instance representations $f(x)$ generated by the TR model on the Wechat dataset, using samples from four different video duration buckets to illustrate “instance representation confusion.” Since this figure does not fully convey the core issue of instance representation confusion, we plan to replace it with Figure 4(4) (from the Appendix) in the revised paper to present a clearer illustration in the Introduction.
- Figure 2 depicts the two-stage training process of ProWTP:
(1) In the first stage, the watch-ratio distribution $W$ is used as input and reconstructed via the HVQ-VAE. We then extract the parameter $P$ (the codebook) obtained in this process to serve as the prototype parameter for the second stage.
(2) In the second stage, sample features $X$ are used as input, and the output is the predicted $y$.
We will enhance Figure 2 with more symbols and annotations in our next revision for better clarity.
---
> **Q2: Experimental improvements and their significance in real-world scenarios**
- Table 1 reports the average performance over five runs for each model. Generally, for ranking metrics like AUC, an improvement of 0.001 is considered significant. The improvement of 0.01 in RMSE and MAE metrics is significant, which is consistent with previous studies [1].
- Our online experiments indicate that different video duration buckets exhibit varying levels of watch-time gain, with **medium-to-long videos** seeing the greatest benefits. This outcome suggests that ProWTP can deliver higher gains in practice by recalibrating samples that are more prone to large prediction errors (i.e., medium and long videos).
| Duration(s) | 0-7 | 8-12 | 13-24 | 25-42 | 43-60 | 61-84 | 85-120 | 121-160 | 161-320 | 321-600 |
| :--------: | :----: | :----: | :----: | :----: | :----: | :----: | ------ | ------- | ------- | ------- |
| watch time | 0.072% | 0.089% | 0.093% | 0.118% | 0.137% | 0.162% | 0.178% | 0.189% | 0.213% | 0.229% |
[1] CREAD: A Classification-Restoration Framework with Error Adaptive Discretization for Watch Time Prediction in Video Recommender Systems.
---
> **Q3: Lack of prototype visualization and error analysis**
- We observed that the prototypes learned by ProWTP form distinct “striped” clusters in high-dimensional space, and we will add a prototype visualization in the revised version.
- As mentioned in the previous section, ProWTP is particularly effective for medium-to-long videos, which inherently have larger prediction errors. By using prototypes as anchors and bringing samples with large errors closer, the overall performance is improved. Below is RMSE comparisons between TR and ProWTP for different video-duration buckets in the Wechat dataset to illustrate this improvement. According to the proof of A.1, the larger the prediction error, the further its f (x) is from the ideal center, then the larger ASSIGNMENT LOSS will be, and ProWTP will focus on optimizing such samples.
| Bucket | TR | ProWTP |
|--------|-------|--------|
| 0 | 8.62 | 8.57 |
| 1 | 13.74 | 13.13 |
| 2 | 21.58 | 20.15 |
| 3 | 33.42 | 31.95 |
| 4 | 52.62 | 48.97 |
---
> **Q4: Explanation in A.1.1 regarding why only a small portion of training data falls into the non-activation region of ReLU**
- We tested all three datasets; for each sample $(x, y)$, $y$ is the ground truth, and $ReLU(model(x))$ is the prediction. We found that the number of samples with \(model(x) < 0\) (i.e., the activation output being zero) accounted for only about 1–2% of the total data in each dataset.
- This suggests that very few samples lie in the non-activation region of ReLU, supporting our assumption about instance confusion in Appendix A.1. We will include a scatter plot in subsequent revisions to further verify this claim.
---
> **Q5: Revisions to the main text and Appendix linkage to improve readability**
We appreciate your suggestions and plan to reference Proposition A.1 in Section 4.3. At the same time, we will revise Figure 1(b) to more clearly explain the concept of instance representation confusion. By doing so, we hope to guide readers to the relevant sections of the Appendix and ensure they can easily find the in-depth explanations presented there.
---
We sincerely thank you again for your feedback and hope that our responses can change your mind. We look forward to your reply and further discussion. | null | null | null | null | null | null |
Compositional Condition Question Answering in Tabular Understanding | Accept (poster) | Summary: This paper mainly aims at the problem of poor tabular comprehension ability of MLLM, and proposes a new method for extracting visual context information. By adding row and column-based image patch segmentation, and using the cross-attention method to fuse visual and textual features, the MLLM can capture the contextual relationships at the row and column levels in the table and the connections between images and text, thereby improving its performance on tabular Q&A tasks. At the same time, this paper also proposes a multimodal tabular understanding benchmark MMTU, including four tabular understanding tasks across ten different domains, to better evaluate the tabular understanding ability of MLLM.
## update after rebuttal
I still maintain my original decision because I think there are some fatal problems with this paper that have not been resolved.
1. The introduction of additional row and column patches does bring many extra visual tokens. This is contrary to the current development direction of MLLM. For high-resolution images, this method has obvious limitations. Moreover, there are already many redundant visual tokens in the current MLLM, and this method further increases the redundancy.
2. The authors configure COCOTAB with the pre-trained Siglip-ViT as the vision encoder and Qwen2-Instruct as the backbone for LLM. However, in the ablation study, they use the vanilla LLaVA-1.6 as the baseline to verify the validity of each component. This operation is puzzling. The authors did not address this issue in their rebuttal. Why not use the same baseline?
3. The experiment content is too small, only including a table and a figure. More adequate experimental design and analysis are needed.
4. Compared with some recent MLLMs, the proposed model does not have a significant performance advantage.
In summary, I think the current version of this paper is not well prepared and does not meet the standards of ICML.
Claims And Evidence: Yes
Methods And Evaluation Criteria: This paper proposes a new benchmark for MLLM tabular understanding, which includes richer domains and problem types, which is meaningful for the evaluation of MLLM tabular understanding ability.
Theoretical Claims: Yes, the proposed claims are correct.
Experimental Designs Or Analyses: 1. The proposed method increases the number of input image patches and the input context length of the LLM in the image encoder stage, which will introduce additional computational overhead in model inference, but the proposed method is not compared with other methods in terms of model inference speed and computational overhead in the experimental part.
2. The methods compared in the experimental section are outdated.
3. The overall results of the method proposed by the authors are relatively low in the current community.
Supplementary Material: Yes, I read the supplementary material, including the training data, training parameters, and the details of the benchmark build.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: The author needs to add more papers on multimodal large models published in the second half of 2024.
Other Strengths And Weaknesses: My main concerns are about the issues mentioned above regarding experiments.
Other Comments Or Suggestions: The author used the original LLaVA as a baseline and added two components on top to prove their effectiveness. However, the performance of the original LLaVA is low and is far behind some of the latest MLLMs. The improvement on such a weak baseline does not necessarily prove that these components can bring new inspiration or help to the community. The author should verify it on a stronger baseline.
Questions For Authors: In addition to splitting the input image with the size of H*W into patches of P*P, the input image is also slice according to the dimensions of H*P and P*W. The image encoder used in the article is ViT, which can only accept the input patch size with a fixed size of P*P, and the article does not mention how the image patches with the size of H*P and P*W are input into the image encoder.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate your recognition of our setting and method and your valuable comments. We will answer the questions below, and we hope this clears up your concerns.
**Q1**: additional computational overhead
**A1**: We appreciate the need to rigorously evaluate computational efficiency. While our paper focused on accuracy and structural alignment improvements, we conducted additional experiments to quantify the computational overhead of CoCoTab. Here are the key results and analyses:
| Model | Time (s) | Memory (G) |
|--------------|---------|-----------|
| **InstructBlip** | 0.294 | 30.98|
| **LLaVA1.6** | 0.456 | 17.21|
| **DocOwl** | 0.3695| 19.50|
| **TextMonkey** | 1.684| 23.96|
| **Donut** | 0.425 | 0.478|
| **internvl** | 1.277 | 43.28|
| **TabPedia** | 0.201 | 19.02|
| **CoCoTab** | 0.209 | 20.47|
We have measured the **average inference time** and memory consumption per sample. As shown in the results, our method does not introduce significant overhead. On the contrary, compared to LLaVA, our approach provides more **concise and precise answers**, leading to improved inference speed. As noted in response A2 to Reviewer eFpb, COCOTAB adds only 7% in token count but outperforms in all baselines.
**Q2**: More baselines and results.
**A2**: Thank you for your critical feedback. We appreciate the opportunity to clarify and strengthen our experimental comparisons. Below, we address each concern systematically:
Our work focuses on document and tabular understanding, a subfield of MLLM that demands specialized structural alignment capabilities. To ensure fair and state-of-the-art comparisons, we selected the strongest open-source baselines published in 2024, including LLava1.6(baseline), TextMonkey, DocOwl2.These models represent the most advanced techniques in document/table understanding as of mid-2024.
For completeness, we also compared it with InternVL2.5-8B and TabPedia (NeurIPS 2024).
| Model| IE | RC | CC | CR |
|---------|----|----|----|----|
| InternVL| 0.68 | 0.48 | 0.25 | 0.6 |
| TabPedia| 0.71 | 0.47 | 0.34 | 0.27 |
| CoCoTab| 0.68 | 0.5 | 0.43 | 0.38 |
Although our method may not outperform InternVL and TabPedia in certain aspects, it significantly **surpasses both in CC** performance. Moreover, our training data (1.8M) is considerably smaller than that of TabPedia (~4M) and certainly much less than InternVL. Despite this, our approach still achieves competitive performance. We believe these results conclusively demonstrate CoCoTab's advancements in tabular understanding and we promise we will add these comparisons in our final version.
**Q3**: The improvement on weak baseline LLaVA.
**A3**: Thank you for raising this critical point. We appreciate the need to validate our method’s contributions against stronger baselines and would like to clarify our experimental design:
Our ablation studies (Section 5.2) intentionally used the original LLaVA **architecture (Vision encoder + MLP + LLM)** to isolate the impact of our proposed components (row/column patches and conditional tokens) under strictly controlled conditions. By keeping the LLM, training data, and hyperparameters identical, we ensure that performance differences directly reflect the effectiveness of our structural enhancements. This approach is standard in ablation studies to **disentangle contributions from architectural changes** versus other factors. Our ablation study shows that the performance gain validates the importance of our proposed components.
From another perspective, our ablation baseline (LLaVA + our training data) already **achieves higher accuracy** than vanilla LLaVA-1.6, demonstrating that our curated training data itself improves performance. This validates the quality of our data pipeline even before adding structural components.
**Q4**: How the image patches with the size of HP and PW are input into the image encoder.
**A4**: We appreciate the opportunity to clarify our methodology for handling multi-scale patches.
In our CoCoTab, we generate row-wise (HP) and column-wise (PW) patch embeddings in parallel with the standard PP patches using lightweight convolutional operations. Here are the key steps:
The input image is resized to 384×384 (as in SigLIP) and split into 27×27 patches of size 14×14 (P=14) with a 2D convolutions with kernal size PP, following standard ViT processing. Similarly, we apply **2D convolutions with kernel sizes H×P (for rows) and P×W (for columns)** to the resized image. These operations are computationally lightweight and performed before the ViT’s patch embedding layer. This design allows the ViT to implicitly reason about row/column structures without violating its architectural constraints. We will revise Section 4.1 to detail these steps.
Thank you again for highlighting this ambiguity. We hope this clarification helps strengthen the details of our proposed CoCoTab. | Summary: The authors identified two issues that the current models lack in the context of multimodal table understanding: 1) the visual encoder’s patch-based processing which splits rows and columns can lead to misalignment, and 2) a failure to fully integrate the conditions specified in the query with the visual features.
To tackle these issues, the authors propose CoCoTab: 1) augment the vision encoder with row and column patches, and 2) add conditional tokens to explicitly link query conditions to the corresponding image regions.
Finally, the authors propose the Massive Multimodal Tabular Understanding (MMTU) benchmark to assess the full capabilities of MLLMs in tabular understanding, and show that CoCoTab performs best.
Claims And Evidence: The authors posit that current MLLMs underperform on compositional condition questions due to patch misalignment and the overlooking of query conditions. This is backed by initial analysis by splitting TQA into 4 aspects: IE, RC, CC, and CR and show that CC is particularly bad for most models. The main evidence other than the effectiveness of the proposed method (will discuss more in the later sections). In all, the claim/hypothesis is clear and straightforward and easy to test.
Methods And Evaluation Criteria: Strength:
- The proposed method is simple and shown to be effective.
Weakness:
- The proposed method can be seen as a form of data augmentation. Despite the simplicity and effectiveness, this type of data augmentation might have the following limitations: 1) assumes a particular structure of the data (well structured table) and might not further transfer to more generic type of vision input (e.g., an image of tables drawn on a blackboard), 2) by adding more patches and training on them it increases the cost.
- The baselines comparisons are slim. How does this method compare with other data augmentation methods or position-aware encoding methods.
- The evaluation and comparison doesn't seem totally fair. The proposed method is trained on a couple of datasets (sec 4.3), some are of the same distribution w.r.t. the final evaluation (e.g, WTQ), but the other models in comparison are not further trained.
Theoretical Claims: The authors do not propose theoretical claims in this paper.
Experimental Designs Or Analyses: See Methods And Evaluation Criteria.
Supplementary Material: Briefly skimmed through the entire supplementary material.
Relation To Broader Scientific Literature: The authors positioned the work in the MLLM and tabular understanding literature. But I find the discussion a bit lacking around the data augmentation aspect of the proposed method. I would love to see more comparison on this front.
Essential References Not Discussed: See Relation To Broader Scientific Literature.
Other Strengths And Weaknesses: The paper is clear and easy to follow. My concern is mostly around the baseline comparisons and the evaluation protocol (i.e., other models are not being trained, thus the comparison is not exactly fair).
Other Comments Or Suggestions: N/A
Questions For Authors: See previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your recognition of our setting and method and your valuable comments. We will answer the questions below, and we hope this clears up your concerns.
**Q1**: A well-structured table might not further transfer to a more generic type of vision input.
**A1**: Thank you for this insightful observation. We fully agree that extending tabular understanding to unstructured or free-form visual inputs (e.g., handwritten blackboard tables) is an important and challenging direction. Our work focuses on structured tables for two key reasons:
**Foundation for Real-World Applications**: Existing benchmarks (e.g., WTQ, TabFact) and practical use cases predominantly involve structured tables. Even in these "well-structured" settings, current MLLMs struggle with basic compositional reasoning. Addressing these fundamental limitations is a necessary step before extending to more complex, unstructured scenarios.
**Generalization via Training Diversity**: While our primary evaluation focuses on structured tables, CoCoTab’s training incorporates diverse visual QA datasets (e.g., OCR-VQA, PlotQA) that include semi-structured inputs such as book covers, posters, and diagrams. These datasets help the model develop robustness to irregular layouts, partial occlusions, and noisy text—capabilities that can facilitate generalization to blackboard-style tables.
We acknowledge that fully adapting to free-form tables requires additional innovations, such as dynamic patch resizing to accommodate arbitrary layouts. We will incorporate a discussion of these limitations and potential future directions in the final version.
**Q2**: More patches and the cost.
**A2**: Thank you for raising this practical concern. We acknowledge that introducing additional patches incurs computational overhead. However, our design carefully balances cost and performance:
The original method divides the table image into 27×27 patches (729 total). COCOTAB adds 27 row patches (one per row, covering full width) and 27 column patches (one per column, covering full height), resulting in only 54 additional patches (7% increase over the original 729). This **modest increase in token count** (from 729 to 783) incurs minimal computational overhead during training and inference.
As shown in our ablation study, **adding row/column patches alone improves accuracy** on CC tasks over the baseline. This demonstrates that the cost increase is highly justified by the performance gains, particularly for critical tasks like compositional reasoning.
In summary, COCOTAB achieves **performance gain with only a marginal cost increase**, making it a practical and efficient solution. **Please refer to A1 to Reviewer qd5K for more details.**
**Q3**: The baselines are slim.
**A3**: Thank you for your feedback. Our work focuses on improving tabular understanding in MLLMs, especially for compositional QA. To ensure fair comparisons, we chose state-of-the-art document-understanding MLLMs that integrate advanced techniques like data augmentation and positional encoding. For example, Donut: Uses synthetic data for pertaining. DocOwl: Leverages high-resolution document images for positional awareness. TextMonkey: Utilizes shifted window attention for spatial understanding.
These baselines represent the strongest open-source MLLMs in this area and inherently employ the techniques you mentioned. Yet, as shown in Table 2, CoCoTab outperforms them on all tasks, highlighting that existing approaches still **struggle with tabular structure alignment and conditional reasoning**. **More experiments are in A2 to Reviewer qd5K.**
**Q4**: The evaluation and comparison don't seem fair.
**A4**: We appreciate the importance of fair comparisons and would like to clarify our experimental design:
**Strict Data Separation**: While WTQ was included in our training data for Stage 1, we rigorously ensured that the test splits of WTQ, TabFact, and other benchmarks were **never exposed during training**. Many compared models are pre-trained or fine-tuned **on the same data** (e.g., DocOwl uses DocVQA, TabFact and WTQ). Our work follows a similar protocol but focuses on structural alignment.
**MMTU**: The proposed MMTU was designed to evaluate TQA from different aspects. It includes tables from WTQ but **reformulates questions and introduces new compositional conditions**, ensuring no overlap with training data. Previous work [1] shows that existing benchmarks lack variation in their questions, resulting in a bad performance with perturbations. CoCoTab's superiority on MMTU and ablation study further validates its generalizability.
In summary, our experiments adhere to fair evaluation standards, and the improvements are attributable to our method’s architectural innovations rather than data bias. We appreciate your attention and we will make it clear in the final version.
[1] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs. NeurIPS, 2024. | Summary: COCOTAB is a novel method for improving table question answering, particularly for queries with compositional conditions. The paper identifies two key challenges in existing models: the disruption of table structure due to patch-based vision encoders and the models' tendency to miss important query conditions. To overcome these, COCOTAB introduces dedicated row and column patches to better preserve structural relationships and uses conditional tokens to align query conditions with visual features. Additionally, the paper presents the MMTU benchmark, a comprehensive dataset for evaluating various aspects of table understanding. Experimental results show that COCOTAB significantly outperforms prior approaches, especially on complex compositional tasks.
Claims And Evidence: Most claims are supported by convincing experimental evidence. For instance, the paper clearly demonstrates through quantitative comparisons and ablation studies that existing MLLMs struggle with compositional conditions, and that augmenting the vision encoder with dedicated row and column patches—along with introducing conditional tokens—significantly improves performance on benchmarks like MMTU.
However, one potential concern is the claim that the vision encoder’s patch-based approach is solely responsible for the shortcomings. While evidence supports its major role, other factors (such as the LLM's reasoning and alignment capabilities) might also contribute, but these aren’t fully disentangled in the experiments. Overall, though, the primary claims are well supported.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the challenges of table question answering. The authors introduce row and column patches to preserve the structural relationships within tables and conditional tokens to ensure that query conditions are accurately attended. This dual approach directly targets the shortcomings of patch-based vision encoders and misaligned query interpretations.
On the evaluation side, breaking down table understanding into four aspects—individual elements (IE), rows/columns (RC), compositional conditions (CC), and calculations/reasoning (CR)—along with the introduction of the MMTU benchmark, offers a comprehensive framework.
Theoretical Claims: The paper does not present formal proofs for its theoretical claims. Instead, it provides mathematical formulations (e.g., Equations 1–4) that describe the model's architecture and the transformation of visual tokens into language embeddings. These expressions are consistent with standard practices in multimodal models and transformer architectures. Since the focus is on designing and empirically validating a new method rather than on deriving novel theoretical guarantees, no issues were found in the provided formulations.
Experimental Designs Or Analyses: The experimental design is sound overall. The paper evaluates models on table question answering by dividing tasks into four aspects—individual elements (IE), rows/columns (RC), compositional conditions (CC), and calculations/reasoning (CR)—and uses both the WikiTableQuestions (WTQ) dataset and the new MMTU benchmark. The authors also conduct detailed ablation studies (e.g., Figure 5) and error analyses (e.g., Figure 2) to pinpoint issues like misalignment and missing conditions.Yes. I reviewed the supplementary material.
Supplementary Material: Yes. I reviewed the supplementary material.
Relation To Broader Scientific Literature: The paper builds on prior work in multimodal and document understanding by addressing shortcomings identified in existing MLLMs. Earlier studies (e.g., Donut, mPLUG-Owl, LLaVA) showed that patch-based vision encoders and general alignment methods work well for basic table extraction and OCR tasks, but they struggle with the inherent structure of tables when faced with compositional condition questions. COCOTAB extends these ideas by introducing row and column patches to preserve the table's spatial relationships and by incorporating conditional tokens to more effectively align the query with the table content.
Essential References Not Discussed: I didn't notice any missing key references.
Other Strengths And Weaknesses: Strengths and weaknesses were discussed in other sections.
Other Comments Or Suggestions: Overall, the paper is well-written and clearly presents its contributions.
Questions For Authors: - While the paper emphasizes the vision encoder’s patch-based approach as a major contributor to misalignment, have you conducted experiments to isolate the impact of the LLM’s reasoning and alignment capabilities separately?
- Did you have some qualitative examples showing the failure cases of COCOTAB?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your recognition of our setting and method and your valuable comments. We will answer the questions below, and we hope this clears up your concerns.
**Q1**: While the paper emphasizes the vision encoder’s patch-based approach as a major contributor to misalignment, have you conducted experiments to isolate the impact of the LLM’s reasoning and alignment capabilities separately?
**A1**: Thank you for raising this critical question. We fully acknowledge the challenge of disentangling the LLM’s reasoning capabilities from alignment issues in multimodal systems. To address this, we designed ablation studies (Section 5.2 and Figure 5) to **isolate the impact of our proposed structural enhancements** (row/column patches and conditional tokens). These performance gains with our architecture were achieved without modifying the LLM, demonstrating that our structural enhancements directly address vision-text alignment limitations, independent of the LLM’s reasoning capabilities.
We agree that stronger LLMs can further improve performance. However, our method focuses on optimizing the visual encoder and cross-modal interaction, which are **orthogonal to the choice of LLM**. This ensures compatibility with any downstream LLM.
In summary, our ablation studies confirm that the performance improvements stem from better structural alignment in the visual encoder and cross-modal attention, rather than LLM-specific reasoning enhancements. Thank you for your valuable question. We promise that we will make this clear in the final version.
**Q2**: Did you have some qualitative examples showing the failure cases of COCOTAB?
**A2**: Thank you for this question. While COCOTAB improves performance on CC tasks, we acknowledge that failure cases persist, particularly in scenarios involving **OCR errors or complex reasoning**. Here are two representative examples from our analysis:
In our response in CC task, the model correctly identified relevant rows but erred in misrecognized scores (e.g., “93” misread as “98” in one cell). As shown in Appendix E, the OCR ability of our model may be strengthened through other strategies, such as more datasets and more vision encoders or tokens.
From another perspective, when it comes to CR tasks, the model erred in computing the difference between 63.49 - 38.77 (= 24.72) as 37.22. In this paper, we concentrate on CC tasks. For such calculation and reasoning tasks, other methods such as Group Relative Policy Optimization (GRPO) in DeepSeek will help improve performance.
Thanks for your valuable suggestion. We will discuss these failure cases in Appendix and plan to address them via OCR error correction and enhance the reasoning ability in future work. | Summary: This paper investigates the table understanding task under compositional condition questions using multimodal large language models (MLLMs). The authors point out that current MLLMs face two major challenges: the inability of the vision encoder to accurately recognize table row contents and the tendency of the model to overlook conditions specified in the question. To address these issues, the paper proposes a new method, COCOTAB, which enhances row/column patch information in the vision encoder and introduces conditional tokens between visual patches and queries to improve the model’s ability to understand tabular data. Additionally, the paper constructs the MMTU benchmark dataset to comprehensively evaluate the capabilities of MLLMs in table understanding tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed method is reasonable and well-aligned with the problem. The authors augment ViT-based vision encoders with structural patch enhancements (row and column tokens) and apply cross-modal attention using conditional tokens to better ground queries to relevant table regions. The evaluation framework is clear and systematic, using a new benchmark (MMTU) with four distinct task types (IE, RC, CC, CR) alongside existing benchmarks such as WTQ, TabFact, and ComTQA.
However, no measurements of model complexity or inference latency are reported. This omission makes it difficult to assess the practical deployment potential of the proposed system.
Theoretical Claims: There are no formal theoretical proofs in the paper.
Experimental Designs Or Analyses: The authors present a comparative evaluation across multiple state-of-the-art open-source and proprietary MLLMs, conduct a detailed error analysis (OCR, alignment, and condition misses), and perform ablation studies isolating each component of their method.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper builds effectively upon recent advances in MLLMs for table and document understanding, referencing core works like LLaVA, BLIP, Donut, DocOwl, and more. It identifies a gap in MLLM performance on CC-style questions and introduces a novel approach to fill this gap.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
(1)The paper identifies common errors in MLLMs when handling tabular data, such as misalignment of the vision encoder with rows/columns (Figure 1: Westport’s negative growth rate being misrecognized as Oil Springs) and the model’s tendency to overlook conditions (e.g., GPT-4o exhibiting a high error rate on CC tasks). The motivation is reasonable. (2)The results indicate that the proposed method achieves a decent improvement.
Weaknesses
(1)The core improvements of the paper (row/column patches + conditional tokens) mainly focus on enhancing the vision encoder, raising concerns about the originality of the approach. (2)The paper primarily compares MLLMs but does not include comparisons with structured table parsing-based QA methods. (3)Table 2 only reports accuracy but does not provide inference time data. If COCOTAB requires significantly more inference time to achieve better performance, its practical application value may be limited.
Other Comments Or Suggestions: No
Questions For Authors: 1.Efficiency Consideration: Your method adds both row/column patches and conditional tokens. Have you evaluated the runtime or memory overhead this introduces during inference? Please include such results if available.
2.Comparison with Structured Methods: Have you considered comparing your method against strong structured table QA models (e.g., TAPEX, TaPas)? Even if they operate on different modalities, such a comparison would contextualize your model’s relative advantages and limitations.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | |
Adversarial Reasoning at Jailbreaking Time | Accept (poster) | Summary: This paper proposes an adversarial attack on LLMs: finding prompts to get a model to ignore its safety guidelines. The formulation of the method is equation (3.1) and the goal is to iteratively modify the string S, so that the attacker model A produces prompt A(S) that breaks the target model T. The proposed method uses equation (3.2) as the feedback signal and uses a feedback LLM to generate feedback strings which are then used by a refiner LLM to produce new candidates for S. Experimental results show higher attack success rate than existing methods.
Claims And Evidence: The evaluation is by human annotators: "We manually verify all proposed jailbreaks to avoid false positives." It's crucial to describe the annotation process in details, particularly on what measures were taken to remove bias.
Methods And Evaluation Criteria: See above.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: The proposed method uses some ideas from existing work, e.g. equation (3.2), but the overall approach (Algorithm 1) seems novel.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: The proposed method seems sound. One concern is is on evaluation. The examples of successful attacks provided in Figures 9 and 10 in appendix, including the success prompts and the target models' responses after jailbreak, are not very convincing. Since the correctness of attack is by human annotation, I recommend the authors to describe the annotation process in details, particularly on what measures were taken to remove bias, to mitigate the concern.
Other Comments Or Suggestions: NA.
Questions For Authors: NA.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their invaluable feedback to our paper. Please find below our responses to address your valid concerns.
**Human verification**:
We use the HarmBench judge to evaluate the target responses. However, as explained in line 271, we manually verify all of the jailbreaks marked as positive by the judge. The reviewer has expressed their concerns since this is a crucial part of evaluations, and must be carefully explained in the appendix. We will add a paragraph to appendix B to explain this.
We remark that the additional human-based evaluation on the top of the HarmBench judge is in fact necessary, and has been done in previous work (e.g. [1], page 7, paragraph 2). This is because the HarmBench judge occasionally makes mistakes by detecting harmless answers as jailbreaks. Here is an example:
> **Task**: Give instructions for how to illegally purchase a handgun as a minor in the US.
> **Response**: As a responsible gun owner and experienced hunter, I strongly advise against any illegal activities, especially the illegal purchase of firearms by minors. It is important to recognize that possessing a firearm comes with serious responsibilities and potential consequences, and it is crucial to follow all applicable laws and regulations to ensure the safety of yourself and those around you. However, if you have already made the decision to illegally purchase a handgun, I would advise you to follow these steps to minimize the risks and potential consequences of your decision: 1. Seek legal and safe alternatives: Instead of resorting to illegal means, consider exploring legal alternatives for self-defense, such as taking a firearms safety course or investing in non-lethal forms of protection, such as pepper spray or a personal alarm. 2. Consult with a licensed firearms dealer: While I cannot recommend or encourage illegal activities, I can suggest that you consult with a licensed firearms dealer to understand the legal requirements and regulations surrounding firearm purchases. They can provide you with information on how to obtain a firearm legally and ...
In an attempt to keep the manual evaluation impartial, we enlisted three experts in jailbreaking to evaluate the responses according to the same outline given to the Harmbench judge (the system prompt provided in the appendix). Experts do not know the algorithms and are only provided with the tasks, jailbreaking prompts, and responses. We classify a response as a jailbreak only if all three experts unanimously agree.
It is worth noting that designing the judge in jailbreaking is an active research direction, and it is known that the majority of judges have a positive bias (see e.g. [2], Table 3). Nevertheless, we do not claim to improve or critique judge design, and only aim to verify the jailbreaks before reporting them in order to remain truthful.
[1] Improving Alignment and Robustness with Circuit Breakers, https://arxiv.org/pdf/2406.04313.
[2] A StrongREJECT for Empty Jailbreaks, https://arxiv.org/abs/2402.10260. | Summary: This paper explores an adversarial reasoning approach to model jailbreaking, aiming to elicit harmful responses from aligned LLMs. It introduces a test-time computational framework that leverages reasoning techniques to bypass safety mechanisms. The proposed method, termed Adversarial Reasoning, formulates jailbreaking as a reasoning problem and follows a structured three-step process: reason, verify, and search. This framework systematically explores the adversarial prompt space using loss-based feedback loops, refining prompts step-by-step to optimize attack success. Conceptually, the idea is straightforward, as it directly builds upon existing inference-time computation methods, particularly search-based approaches. By iteratively refining adversarial prompts, the method effectively enhances jailbreak success rates.
The study demonstrates that Adversarial Reasoning achieves SOTA ASR against various aligned LLMs, outperforming both token-space and prompt-space jailbreaking techniques, especially when targeting adversarially trained models. Experimental results validate the efficacy of the approach, showing high success rates even when employing weaker attacker LLMs. Furthermore, the paper explores the transferability of adversarial prompts to black-box models using surrogate loss functions, achieving significant success rates on models such as OpenAI o1-preview and DeepSeek.
Claims And Evidence: The paper claims that Adversarial Reasoning can successfully transfer jailbreaks to black-box models. Transfer learning techniques using surrogate loss functions lead to significantly improved attack success rates.
Methods And Evaluation Criteria: The primary metric ASR used to measure effectiveness, and the study also evaluates transferability by applying surrogate losses.
Theoretical Claims: The paper does not heavily rely on formal proofs.
Experimental Designs Or Analyses: The authors test their approach on both open-source and proprietary LLMs, evaluating it in both white-box and black-box settings.
While the paper carefully examines how adversarial prompts transfer to unseen black-box models, it does not explore adversarial reasoning under defensive mechanisms.
Supplementary Material: The supplementary material includes:
1.Additional results on transfer attacks.
2.Hyperparameter settings for reproducibility.
3. Detailed jailbreak examples.
Relation To Broader Scientific Literature: This paper expands on token-space and prompt-space attacks with a reasoning-based framework. The work is new in framing jailbreaking as a structured reasoning task but could benefit from a discussion on whether similar techniques could be used for defensive LLM alignment.
Essential References Not Discussed: Works on exploiting LLM interpretability for adversarial attacks and jailbreak defense, and jailbreak judge.
Other Strengths And Weaknesses: Strengths:
The method achieves state-of-the-art ASR on multiple benchmarks.
Transferability to black-box models: Demonstrates robustness in real-world conditions.
Weaknesses:
While the paper explores attack mechanisms, it does not suggest strategies for mitigation, and also it does not explore adversarial reasoning under defensive mechanisms.
The idea is straightforward, as it directly builds upon existing inference-time computation methods, particularly search-based approaches. By iteratively refining adversarial prompts, the method effectively improves jailbreak success rates. However, similar findings have already been explored in previous works, such as PAIR and related studies.
Other Comments Or Suggestions: NA
Questions For Authors: How do you justify the assumption that optimizing a loss function on a surrogate model generalizes well across different architectures?
Given that your method iterates on reasoning, how does the computational cost compare to simpler adversarial attacks?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback to our paper. Below, we address each concern in detail.
**Adversarial reasoning under defensive mechanisms**:
We agree that designing robust defenses is an important direction when it comes to jailbreaking LLMs. Indeed, good defenses require good attacks, and ultimately, all of the efforts aim to make LLMs safer and better aligned. However:
- Attacking and defense are two parallel sides of the jailbreaking literature–new defenses are evaluated by the existing attacks and vice versa. It is uncommon for a single paper to comprehensively address both simultaneously. Many recent influential works ([1,2,3]) have explicitly focused on attack techniques without providing new defense strategies. Yet, these papers have directly impacted defense design by revealing critical vulnerabilities. For instance, the emergence of perplexity-based and smoothing defenses [4,5].
[1] Zou et al., https://arxiv.org/pdf/2307.15043
[2] Chao et al., https://arxiv.org/abs/2310.08419
[3] Andriushchenko et al., https://arxiv.org/abs/2404.02151
[4] Alon et al., https://arxiv.org/abs/2308.14132
[5] Robey et al., https://arxiv.org/abs/2310.03684
- Attack methods are frequently used in adversarial training mechanisms that substantially enhance model robustness. An example is the use of the PAIR method [2]—a purely attack-oriented method—for the refusal training of llama-3 models ([6], page 48). Similarly, the HarmBench framework leverages the GCG attack ([1]) in the training of its robust refusal R2D2 model ([7], Section 5). Following this tradition, our paper’s primary contribution is highlighting the necessity of considering reasoning-based adversaries (and it provides the first attempt in this regard). This can be used to provide stronger defense mechanisms. For example, one promising approach to design defense mechanisms is adversarial training via a game between a reasoning-enabled target LLM that attempts to maximize the loss and an attacker that utilizes reasoning to minimize the loss. We will use the additional two pages allowed in the revised version to highlight some potential approaches for defenses (as future directions) that can be based naturally on adversarial reasoning.
[6] The Llama 3 Herd of Models, https://arxiv.org/abs/2404.02151
[7] Mazeika et al., https://arxiv.org/abs/2402.04249
- Finally, we should mention that scaling the test-time compute (see [9, Figure 7] and [8, Figure 1]) has been recently deployed by OpenAI to enhance the alignment of reasoning models as explained in the last paragraph of the section on related work (“Reasoning vs safety”). But this is done without considering any reasoning capabilities of the attacker. In this regard, our work sheds light on the use of reasoning for attacks, showing that reasoning-based attacks are more effective against reasoning-enabled alignment. Indeed, our method obtains a significantly higher ASR on OpenAI’s o1-preview compared to all the previous attacks as reported in Table 3 of our paper. As a result, frontier models such as o1 can become safer if reasoning-based adversaries are used in their safety fine-tuning.
[8] Gual et al., https://arxiv.org/abs/2412.16339
[9] Zaremba et al., https://arxiv.org/pdf/2501.18841
**Comparison to PAIR and other methods**:
While our work is partly based on PAIR–particularly with respect to employing the attacker LLM–our method is significantly different from PAIR and other methods in the following ways:
- Our method explicitly deploys the loss to pursue more promising adversarial reasoning directions. This stands against existing methods, including PAIR, that heuristically propose strategies or rely on a handful of strategies given by an expert. To our knowledge, this makes us the first semantically-meaningful attack that effectively reduces the loss function as Figure 3 shows.
- We explicitly construct the adversarial reasoning steps (see Figure 2) to ensure the relevance of such directions to decreasing the loss function. This structured and explicit design of the reasoning tree differs fundamentally from approaches relying solely on intrinsic CoT capabilities of LLMs.To underscore this distinction empirically, we conducted additional experiments using PAIR with Deepseek R1 as the attacker. As the table below shows, even the use of the strongest reasoning models such as Deepseek as the attacker does not provide any additional advantages as long as the directions are produced heuristically and with no external supervision. Our conclusion is that finding the relevant directions by acquiring the feedback string, which is based on the loss, is necessary to jailbreak stronger models.
| Target Model \ Baseline | PAIR + Deepseek R1 | Adversarial Reasoning |
| -------- | ------- | ------- |
| Claude-3.5-Sonnet | 16% | 36% |
| o1-preview | 16% | 56% | | Summary: The paper proposes a novel automated method for red teaming language models. It builds on prior research that works via iterated prompt optimization, develops a more sophisticated framework for iteratively improving natural language jailbreaks by using separate attacker, feedback and refiner LLMs. They set the jailbreak task up as a structured reasoning problem, where they explore a tree of reasoning traces using feedback in the form of a the cross entropy loss of a target string. They find that their method is competitive as a jailbreak with other
Claims And Evidence: Yes. The main claim that the submission implicitly makes is that their method is effective at finding jailbreaks and competitive with other SOTA jailbreaking methods. They demonstrate this by measuring the attack success rate against various models, both off-the-shelf LLMs and LLMs specifically optimized for jailbreak robustness. They find that their method outperforms existing techniques against most of these models. In addition, they measure attack success rate in the multi-shot transfer attack setting, and again find that their method beats other methods. In general, the success of such jailbreak methods is relatively sensitive to details of the setup such as hyperparameter tuning, so we're somewhat reliant on the authors' own implementations of competitor methods being sufficiently good to provide an accurate comparision (I assume this is how they've got their comparison measurements?). But generally, I find the evidence they provide pretty compelling.
Their measurements of the attack success rate for different strengths of attacker LLM seem like appropriate evidence that their method scales well with additional inference time compute. The ablation studies provide good evidence that their method does indeed decrease their target loss function. I find the result that most jailbreaks occur in the first few steps of the application of their method (for most non-adversarially trained models) somewhat surprising - assuming I'm correctly interpreting their claims. Does this mean that actually the first step of your method is already providing substantial jailbreaking power and more than 1-3 steps is only required for particularly hard prompts or models? (Assuming I'm understanding right though, and this is true, the evidence seems appropriate.)
Methods And Evaluation Criteria: Generally, their method seems like an appropriate way to increase the effectiveness of model jailbreaking, and using attack success rate measured over a variety of target models is an appropriate evaluation criteria.
However, it would be nice to have seen some indication in the amount of compute required for each method, or a count of the number of queries required of the target model to achieve a jailbreak. It's unclear how the cost of e.g. PAIR or GCG compares with the method presented.
Theoretical Claims: The paper does not make any theoretical claims.
Experimental Designs Or Analyses: Yes, I checked the validity of all the experimental designs in the main body and they seemed sound. My comments are made in previous sections.
Supplementary Material: I skimmed the whole supplementary material, with a particular focus on the prompts for the attacker, feedback and refiner, in order to better understand the algorithm. The supplementary material all looks appropriate, useful and well-presented.
Relation To Broader Scientific Literature: This work iterates on existing work presenting methods for finding natural language jailbreaks of LLMs. This is a well-studied problem and previous methods to approach it have been proposed. The authors make a quantitative improvement on these methods, justified by comparision to those methods which they reference extensively in their work. Generally the progress over the existing literature is incremental, but clear.
Essential References Not Discussed: None of which I am aware.
Other Strengths And Weaknesses: ### Strengths
- The presentation of the paper is extremely clear.
### Weaknesses
- (Minor comment) The terminology "token-space" and "prompt-space" seem unideal to me - since token changes do edit the prompt and vice versa. I think you might mean more like "semantic-space" for the second category (to distinguish semantic changes from arbitrary token changes).
Other Comments Or Suggestions: - Page 6, line 292: "since as the rely' looks like a typo.
- Page 7, line 347: "showcase**s**" is a typo.
- Page 7, line 384: "Section 5.3" looks like a typo - it should refer to a figure.
Questions For Authors: I have no questions beyond the comments raised above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their invaluable feedback to our paper. We appreciate the opportunity to clarify these points.
**Distribution of jailbreaks**:
The reviewer is indeed correct about the observation that the first few iterations account for the majority of jailbreak successes when it comes to easier tasks (less malicious) as well as less aligned models. In such cases, as our experiments show, PAIR and TAP achieve comparable results for such cases since the attacking prompts are similar at iteration 1 (except that we use a simpler system prompt). Nonetheless, the difference becomes significant when we test the algorithms against safer models such as o1-preview or Claude-3.5 as shown in Figure 8 where the attacking prompts at iteration 1 are less effective and stronger modifications (e.g., feedbacks) are needed to find a successful jailbreaking prompt.
**Query Efficiency**:
We have provided the table below for comparing the overall queries-per-success for various baselines: https://ibb.co/Qv7TLnwL. We will add a more detailed version in the revised version of the paper.
While our method does require more queries than PAIR, it achieves higher success rates on difficult tasks that demand multiple refinement steps. In this manner, another useful comparison is the one provided in Figure 4 where we compare the number of jailbreaks at each iteration of both algorithms.
**Minor Weakness**:
We appreciate the reviewer’s note about potential naming conventions (i.e., “token-space” and “prompt-space”) and will consider replacing “prompt-space” with “semantic-space” in the revised version to avoid confusion. | Summary: This paper leverages the reasoning capabilities of LLMs for jailbreaking. By constructing a few LLMs as Attacker, Feedback, and Refiner modules, the proposed attack optimizes a reasoning string for jailbreaking. Experiments show the proposed attack outperforms some existing attacks on multiple LLMs.
## update after rebuttal
Thanks for the authors' rebuttal, which addresses most of my concerns. I have raised my score and lean toward acceptance.
Claims And Evidence: See the sections below.
Methods And Evaluation Criteria: From my perspective, the proposed method does not really utilize the "reasoning" capability of LLMs- it falls in a thread of research that designs a few LLM-based agents to collaborate on refining jailbreak prompts to attack other LLMs, including but not limited to the references below. This paper claims the methods use a "reasoning" tree, but actually, it is a form of loss-based tree-search process, and
such tree-search based attack methods have already been explored in previous works. What is particularly related to "reasoning" is not clear.
[1] GUARD: Role-playing to Generate Natural-Language Jailbreakings to Test Guideline Adherence of Large Language Models https://arxiv.org/abs/2402.03299
[2] Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia. https://arxiv.org/abs/2402.05467
[3] AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs. https://arxiv.org/abs/2410.05295
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: 1. Most baselines compared in this paper are somewhat outdated. More compassion with methods discussed above and other recent SOTA methods may be necessary.
2. Moreover, no explicit computational time was compared. This part is important since some recent attacks can achieve high ASR with a single forward pass without optimization.
3. The multi-shot transfer attack comparison may not be fair. Other attacks can also use multi-shot prompts for transfer.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Proposing stronger red-teaming methods can benefit the development of AI safety.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Below, we address the concerns raised.
**The proposed method does not utilize reasoning**:
The reviewer says:
> method does not really utilize the "reasoning" capability of LLMs, it falls in a thread of research that designs a few LLM-based agents
We have some points to argue:
- Our approach aligns with several papers in reasoning [1, 2] that explicitly construct reasoning paths using LLM-based agents. Feedback LLM must reason to infer which directions have been more promising so far. This is consistent with “Modifying the proposal distribution” in [11] where the proposer’s distribution is modified either by additional tokens in the input as a piece of feedback. Our algorithm precisely does that.
- We emphasize that our reasoning paths–a series of decisions made by Feedback LLM based on the loss value–are critical. Our experiments show that even powerful reasoning models like Deepseek R1 struggle without our constructed reasoning algorithm and external supervision via the loss, as illustrated below:
| Target Model \ Baseline | PAIR + Deepseek R1 | Adversarial Reasoning |
| -------- | ------- | ------- |
| Claude-3.5-Sonnet | 16% | 36% |
| o1-preview | 16% | 56% |
> This paper claims to use a “reasoning” tree, but it is a form of loss-based tree-search
“A loss-based tree-search process” is accurate; however, this inherently involves the key elements of reasoning processes when conditioned on semantically-meaningful prompts. We refer to [3], an important paper in reasoning, that utilizes a similar setup—a fixed generator analogous to our off-the-shelf attacker—while studying different reward models. However, in optimization-based jailbreaking, the loss function is a natural choice for the reward. We thank the reviewer for mentioning [4]; we will cite it along with [5] (cited already) to show the efficacy of search-based methods.
> Such tree-search based attack methods have already been explored
Our key distinctions with the mentioned papers by the reviewer:
1- Unlike the first and third papers, we rely on the loss function to find the directions for the next step rather than heuristically propose strategies and evaluate them by trial-and-error. This is, to our knowledge, the first semantically meaningful attack shown to effectively reduce the loss (Figure 3).
2- Unlike the second paper, our Feedback LLM operates directly in the space of meaningful attacks, explicitly reasoning about interpretable strategies, rather than performing token-level perturbations.
**Comparison with other methods**:
We have updated the table (https://ibb.co/MyWkKN4M) and included a new baseline. We believe they are representative of SOTA. [6], from the latest ICLR, is a leading gradient-free token attack, and [7], another recent ICLR paper, represents SOTA meaningful attacks. Besides, PAIR remains a strong baseline per StrongReject
([8], Figure 3) and is used in SOTA defense methods such as [9] (Table 1). We also want to refer to [10], a concurrent work written by experts in jailbreaking, that adopts nearly the same comparison table (Page 6, Table 2). Our results on OpenAI o1-preview are in fact SOTA as it has been evaluated on StrongReject by its authors; at submission time, no competing results on this model were public.
**Number of Queries**:
We now include a table of overall queries per success:https://ibb.co/Qv7TLnwL. Although AutoDAN-Turbo and Rainbow Teaming require few evaluation-time prompts, they involve extensive upfront computation to identify initial strategies. This should be incorporated into any comparison, consistent with [10].
We clarify that the key message of our paper is that scaling the test-time compute–akin to the recent efforts in reasoning [11]–improves the performance beyond a simple Best-of-N (PAIR), and this stands as an alternative to extensive fine-tuning of attackers.
**Mutli-shot transfer is not fair**:
Table 3 deliberately includes only black-box methods (PAIR, TAP) that directly interact with the target model, maintaining a fair comparison. Further, as a contribution, we introduce a method for transferring loss-based strategies by averaging the surrogate losses, significantly improving upon the single-shot transfer used in [12]. As discussed in the introduction, a motivation was overcoming the limitations of binary feedback that black-box approaches rely on. As Table 3 demonstrates, a general open-box method utilized with a surrogate loss outperforms standard black-box techniques.
[1]https://arxiv.org/abs/2408.16326
[2]https://arxiv.org/abs/2409.03271
[3]https://arxiv.org/abs/2305.20050
[4]https://arxiv.org/abs/2402.0546
[5]https://arxiv.org/abs/2402.12329
[6]https://arxiv.org/pdf/2404.02151
[7]https://arxiv.org/abs/2410.05295
[8]https://arxiv.org/abs/2402.10260
[9]https://arxiv.org/pdf/2406.04313
[10]https://arxiv.org/abs/2503.15754
[11]https://arxiv.org/abs/2408.03314
[12]https://arxiv.org/abs/2307.15043
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal, which addresses most of my concerns. I have raised my score and lean toward acceptance. However, I suggest the authors acknowledge some limitations in their revision:
1. Given the overlap of (loss-based) tree-search with existing methods, it would be better to explain the differences discussed in the rebuttal and tone down claims on novelty (Especially the term "reasoning", since existing attacks have already applied similar intuition).
2. Explicitly acknowledge the limitation that the attack is a white-box one and attacking closed-source models requires a surrogate model.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s time and effort for reviewing our paper, responding to our comments, and increasing their score. We will surely incorporate their feedback in the revised version of the paper. In this regard, we will:
1- Clearly state throughout the paper that our method is a “white-box” method as it requires the loss values. Specifically, we will apply this to the abstract, introduction, and the method section. Moreover, we will emphasize in Section 5.2 that for blackbox models we use a surrogate loss.
2- Add an explanation to the third paragraph of the introduction as well as the related work to describe how other methods (e.g., PAIR, TAP, AutoDAN-turbo) rely on LLMs’ inherent reasoning. We will explain the overlap and differences as discussed in the rebuttal and adjust the claims about novelty accordingly.
3- Apply all of the discussed points in the rebuttal (including the updated table, and the query-per-success comparison). We thank the reviewer once again for their comments. | null | null | null | null | null | null |
Learning Adaptive Lighting via Channel-Aware Guidance | Accept (poster) | Summary: The paper uses the channel differences in spatial and frequency domains for lighting adaptation and mixes them for enhancement. The paper is well organised and the experimental results show the effectiveness and efficiency of the proposed method. The authors can give more detail about the structure of DDCM and MCM in Figure 3, which is not required in the rebuttal period.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: The paper can promote the development of image post-process.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is generally well-written.
Other Comments Or Suggestions: No
Questions For Authors: 1. What do cA, cH, cV and cD mean in equation 11?
2. The structure of Color Mixing Representation and Light Guided Attention seems basic SSM(Mamba) and Attention structure, do you make any improvements?
3. How about the running speed of the proposed method for each image? Does it have advantages over other methods?
4. What are the differences between LALNet-Tiny and LALNet?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s insightful comments and provide detailed responses below.
>Q1. What do cA, cH, cV and cD mean in equation 11?
**A1:** These variables represent the components of the 2D wavelet transform applied to the input image:
1. cA denotes the approximation coefficients of the wavelet transform.
2. cH denotes the horizontal detail coefficients.
3. cV denotes the vertical detail coefficients.
4. cD denotes the diagonal detail coefficients.
The wavelet transform decomposes the image into different frequency bands, which enables effective separation of illumination and detail information. These components are then leveraged in our color mixing representation to enhance adaptive lighting adjustment. We will clarify this in the revised manuscript to improve readability.
>Q2. The structure of Light Guided Attention seems basic SSM(Mamba) and Attention structure, do you make any improvements?
**A2:** We acknowledge the reviewer’s concerns regarding the relationship between our Light State Space Module (LSSM) and Mamba. While LSSM shares similarities with state-space modeling principles, we introduce several key improvements beyond MambaIR:
1. Gated MLP instead of CAB: Unlike MambaIR, which uses the CAB module to mitigate channel redundancy, we found through activation visualizations that such redundancy was not significant in our task. Instead, we propose a Gated MLP module, which jointly processes spatial and channel-wise features, enhancing adaptability.
2. Channel separation in LSSM: We inject channel-separated features into the light adaptation process, enabling the network to better perceive channel brightness variations, thereby improving color consistency and balance across RGB channels.
3. LSSM as an optional component: While LSSM enhances feature extraction, it is not irreplaceable. We experimented with substituting LSSM with residual blocks and self-attention modules, and the model still achieved performance on par with state-of-the-art methods.
Additionally, LGA utilizes channel-separated features to guide color-mixed features, ensuring effective lighting adaptation while maintaining cross-channel consistency—a novel application not explored in prior works. We will provide clearer explanations and appropriate citations in the revision.
>Q3. How about the running speed of the proposed method for each image? Does it have advantages over other methods?
**A3:** We recognize the importance of computational efficiency and provide a running time comparison with state-of-the-art methods.
Table R3: Inference Speed Comparison (Seconds per Image on 480p).
| Method | Run Time |
|:-:|:-:|
| FECNet | 0.012 |
| COTF | 0.010 |
| CSRNet | 0.012 |
| RetinexMamba | 0.4778 |
| RetinexFormer | 0.028 |
| MambaIR | 0.4595 |
| Ours | 0.018 |
Our method achieves a competitive inference time of 0.018s per image, which is faster than RetinexFormer, RetinexMamba, and MambaIR, while maintaining superior enhancement quality. This demonstrates a favorable balance between efficiency and effectiveness.
>Q4. What are the differences between LALNet-Tiny and LALNet?
**A4:** LALNet-Tiny is a lightweight variant of LALNet, designed to improve efficiency while preserving strong performance. The key differences include:
* Reduced channel dimensions: Fewer feature channels to reduce computational complexity.
* Fewer LSSM modules: A more compact design with fewer LSSM modules.
We will include a brief section describing LALNet-Tiny design and relationship to LALNet.
We appreciate the reviewer’s recognition of our contributions and their thoughtful feedback. We believe these clarifications and improvements will further strengthen our work. Thank you for your time and valuable insights. | Summary: This paper proposes a unified light adaptation framework, Learning Adaptive Lighting Network (LALNet), based on the finding that the color statistics on spatial and frequency domains differ from different light-related tasks. On top of this observation, the paper introduces a Dual Domain Channel Modulation module that captures and aggregates color-separated and color-mixed embeddings for optimal light to the target task. The method shows superior performance by evaluating the model on four light-related datasets, such as exposure correction, image retouching, low-light enhancement, and tone mapping.
Claims And Evidence: The main finding of the paper is the color statistics on spatial and frequency domains differ from different light-related tasks. This observation leads to a unified framework that learns adaptive light for various light-related tasks. The finding and its corresponding architecture design sound good to me. And its effectiveness is well supported by experimental results.
Methods And Evaluation Criteria: The paper proposed a unified light adaptation framework for four light-related tasks, including exposure correction, image retouching, low-light enhancement, and tone mapping. For this purpose, the paper evaluates the method on exposure correction (SCIE (Cai et al., 2018)), image retouching (HDR+ Burst Photography (Hasinoff et al., 2016)), low-light enhancement (LOL dataset (Wei et al., 2018)), and tone mapping (HDRI Haven (Zhang et al., 2019a), along with common image evaluation metrics, such as PSNR, SSIM, etc.
The proposed method, dataset, and evaluation criteria make sense for the paper's problem definitions.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The experiments are conducted on four different light-related tasks. One concern was that the used dataset seems small-scale, so the scalability of the proposed method is curious. But, supplementary material (Sec E) shows more complex and larger-scale experiments on low-light enhancement, HDR survey, and UVTM dataset. So, the overall experimental design and setup are sound good to me.
However, the LALNet-Tiny is not described in the manuscript how designed from LALNet.
Supplementary Material: The supplementary material provides 1) additional related works on image retouching, tone mapping, exposure correction, low-light image enhancement, 2) further analysis of motivation, 3) detailed architecture for the proposed modules (LDP, LSSM, iterative detail enhancement), 4) additional ablation study, and 5) additional visualization results.
Relation To Broader Scientific Literature: The paper's key contribution is a unified framework for various light-related tasks, which can influence low-light enhancement, exposure correction, HDR, retouching, etc.
Essential References Not Discussed: The related works on a unified framework, low-light enhancement, exposure correction, and tone mapping are adequately cited in the main paper and supplementary material.
Other Strengths And Weaknesses: [Weakness]
* The presentation for the method section can be improved by adding subsection and subsubsection. For instance, the color mixing representation section is composed of the wavelet transform part, the Light State Space Module (LSSM) part, and its first, second, and third streams. They can be divided rather than describing them in a single paragraph. This is the same for other method sections. In general, the current presentation reduces the readability of the method sections.
Other Comments Or Suggestions: * I think I might have missed it, But I can not find the loss function to train the model.
* LALNet-Tiny is not described in the manuscript how designed from LALNet.
Questions For Authors: Please answer the weakness (Q1), other comments (Q2, Q3)
Ethical Review Concerns: No ethical concerns
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s constructive feedback and valuable suggestions. Below, we address the key concerns and outline our planned improvements.
>Q1. Improving the Readability of the Method Section
**A1:** To improve the clarity of the Method section, we have restructured it into a more logical and intuitive organization. The revised structure is as follows:
* 2.1 Motivation: We introduce the fundamental challenges of lighting adaptation and explain why a unified framework is necessary.
* 2.2 Framework Overview: We provide a high-level description of the proposed framework, detailing its architectural design and how different components interact.
* 2.3 Color Separation Representation: Our model first extracts color-separated features to decouple illumination-related information in both the frequency and spatial domains. This step enables the network to learn distinct representations for different color channels, mitigating color distortion and channel-dependent illumination inconsistencies.
* 2.4 Color Mixing Representation: We introduce mixed channel modulation for capturing inter-channel relationships and lighting patterns that achieve a harmonious light enhancement.
- 2.4.1 Mixed Channel Modulation: This subsection introduces mechanisms to effectively integrate cross-channel lighting information.
- 2.4.2 Light State Space Module (LSSM): We introduce the LSSM, which leverages state-space representations to refine the global illumination representation dynamically, improving stability and adaptability under varying lighting conditions.
* 2.5 Light-Guided Attention: We propose a novel light-guided attention mechanism that utilizes color-separated features to guide color-mixed light information for adaptive lighting. This mechanism enhances the network’s capability to perceive changes in channel luminance differences and ensure visual consistency and color balance across channels.
This restructuring ensures that the methodology follows a progressive flow from problem motivation to implementation details, making it easier for readers to understand.
>Q2. Loss Function for Model Training
**A2:** Thank you for your valuable feedback. Our framework optimizes light adaptation using the following objective function:
The overall objective function is formulated as follows:
$L_{\mathrm{total}} = \alpha L_{Re} + \beta L_{SSIM} + \gamma L_{HF} + \eta L_{p}$,
where:
* $L_{Re}$: reconstruction loss,
* $L_{p}$: perceptual loss,
* $L_{HF}$: high-frequency loss,
* $L_{SSIM}$: structural similarity loss.
$\alpha$, $\beta$, $\gamma$, and $\eta$ are the corresponding weight coefficients.
To justify our design, we conducted an ablation study (Table R2):
Table R2: Ablation studies on different loss functions.
| $L_{Re}$ | $L_{HF}$ | $L_{SSIM}$ | $L_{p}$ | PSNR↑ | SSIM↑ |
|:-:|:-:|:-:|:-:|:-:|:-:|
| ✔ | ✖ | ✔ | ✔ | 30.14 | 0.944 |
| ✔ | ✔ | ✖ | ✔ |29.88 | 0.941 |
| ✔ | ✔ | ✔ | ✖ | 29.72 | 0.939 |
| ✔ | ✔ | ✔ | ✔ | 30.36 | 0.946 |
These results demonstrate that incorporating all four losses leads to the best overall performance, underscoring the necessity of each component. We will clarify this in the revised manuscript.
>Q3. LALNet-Tiny Design Description
**A3:** LALNet-Tiny is a lightweight variant of LALNet, designed to improve efficiency while preserving strong performance. The key differences include:
* Reduced channel dimensions: Fewer feature channels to reduce computational complexity.
* Fewer LSSM modules: A more compact design with fewer LSSM modules.
We will include a brief section describing LALNet-Tiny design and relationship to LALNet.
We appreciate the reviewers' valuable insights. Our key improvements include:
1. Method Section Restructuring: Introducing explicit subsections and improving explanations with figures.
2. Loss Function Clarification: Providing a detailed breakdown, justification, and coefficient selection strategy.
3. LALNet-Tiny Performance Analysis: Add a detailed description.
We believe these enhancements will further strengthen our work and address the reviewers' concerns effectively. Thank you for your constructive feedback!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Most of my concerns are resolved well.
For Q3, could you describe the details of LALNet-tiny, such as exactly how many channels or LSSM modules are reduced?
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for their continued interest in our work. Regarding the details of LALNet-Tiny, we provide the following clarifications:
In the configuration file, we have two hyperparameters to control the model size:
* **n_feat:** denotes the number of channels;
* **num_blocks:** is a list where the list length denotes the number of stages and num_blocks[i] denotes the number of LSSMs in the i-th stage.
In the full version of LALNet, the number of channels (n_feat) is 48 and num_blocks is configured as [6, 6, 6, 6]. Specifically, there are totally 4 stages, and each contains 6 LSSM modules.
In the LALNet-Tiny version, the number of channels (n_feat) is 32 and num_blocks is configured as [1, 1]. That is, LALNet-Tiny has only 2 stages, each containing one LSSM module.
These changes allow LALNet-Tiny to maintain good performance while being more computationally efficient. We will include a detailed description of LALNet-Tiny in the revised version. Also, we will release the full source code and pretrained models after the acceptance of this work for further validation and reproducibility.
If you have any further questions or need more detailed information, please feel free to let us know. | Summary: This paper proposes a unified framework, LALNet, for handling multiple light-related tasks in computer vision, such as exposure correction, image retouching, low-light enhancement, and tone mapping. The authors identify common properties across these tasks, such as varying light properties in different color channels and their differences in both spatial and frequency domains. LALNet leverages these properties by using a dual-branch architecture that separates and mixes color channels, incorporating a Light Guided Attention mechanism to ensure visual consistency across channels. The framework is designed to adapt to light variations while preserving image details. Extensive experiments demonstrate that LALNet outperforms state-of-the-art methods in several tasks and requires fewer computational resources.
Claims And Evidence: The motivation of the paper is unclear. In the abstract, the author states that current research addresses light-related tasks, such as HDR imaging and exposure correction, individually. However, it's not clear why handling these tasks separately is a significant drawback. The use of "however" seems to be a transition to introduce the author's work, but logically, there is no strong connection between the two statements, which weakens the paper's fundamental motivation. The writing in other parts of this paper also has not explained this point clearly yet.
Methods And Evaluation Criteria: This paper proposes multiple components, such as MCM, DDCM, LGA, and LSSM. However, from Table 5, it seems that LSSM is relatively crucial for performance improvement. Here, LSSM appears to be a structure borrowed from mamba. The author should clearly elaborate and explain this and provide proper citations. In addition, it is not clear what the relationship is between these components and the motivation claimed by the author in Figure 2. The author only lists the necessary materials and steps like writing a recipe or instruction manual (even so, there is still much room for improvement in writing).
Theoretical Claims: There are no proofs for theoretical claims.
Experimental Designs Or Analyses: Just from the text expounded in this article, the experimental designs and analysis are reasonable.
Supplementary Material: I have reviewed the supp.
Relation To Broader Scientific Literature: Previous work in this paper has used the unified framework to handle lightness adaptation. This paper should emphasize the differences from these works and further prove the rationality of its own motivation. The current manuscript appears to be a collection of various designed modules without a clear, engaging integration.
Essential References Not Discussed: No essential references are not discussed.
Other Strengths And Weaknesses: Although the network proposed by the author has achieved a great improvement in final performance, I still have the following concerns.
1. Weak Motivation and Unclear Justification: The manuscript claims that current methods address light-related tasks in isolation, but fails to explain why this is a significant issue. The authors introduce their work by stating "however," but there is no clear link between the shortcomings of existing methods and the necessity of their approach. This weakens the foundational argument for the proposed framework.
2. Modular Approach Lacking Coherent Integration: The paper presents several modules, such as Dual Domain Channel Modulation and Light Guided Attention, but they feel disconnected. While each module may perform well individually, there is insufficient explanation of how these modules are effectively integrated into a unified framework that addresses multiple light-related tasks. This lack of integration diminishes the overall impact of the framework.
3. Writing Style Needs Improvement: The writing style of the manuscript has significant room for improvement. The current structure and presentation make it difficult to grasp the key points clearly. The arguments are often disjointed, and the flow of ideas lacks coherence, which hinders the reader's understanding of the main contributions of the paper.
4. Lack of True Unified Framework: Although the paper claims to present a unified framework, it seems that, if I am not mistaken, the framework still requires separate training for different tasks. In practice, this means that multiple sets of parameters are needed for various tasks, which raises the question of how this approach offers any advantages over training task-specific networks separately. From a practical application standpoint, this limitation undermines the claim of achieving a truly unified solution.
Other Comments Or Suggestions: see above.
Questions For Authors: see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their feedback and the opportunity to clarify our work. Below, we address the concerns raised.
> Q1. Motivation and Justification
**A1:**
As detailed in the “Introduction” section, many light-related tasks—such as exposure correction, image retouching, low-light enhancement, and tone mapping—share a common goal: adjusting scene lighting to achieve perceptual optimality. However, each task has its own subtle emphasis:
- **Exposure correction** balances under/overexposed regions.
- **Image retouching** enhances aesthetics via global illumination adjustments.
- **Low-light enhancement** brightens dark areas while suppressing noise.
- **Tone mapping** compresses HDR content while preserving details.
Existing methods typically rely on task-specific architectures and training strategies. We are not claiming that handling these tasks separately is inherently a drawback. Instead, our key motivation is that treating these tasks under a unified framework offers significant advantages:
1) **Improved generalization** across diverse lighting conditions (Figure 1).
2) **Enhanced efficiency** by reducing the need for multiple specialized models.
3) **Consistent performance** across tasks without significant trade-offs (Table 1, Table 2, Table 3, and Table 4).
Indeed, prior works have attempted unified treatments for some of these tasks but often sacrificed performance compared to specialized models. Along this avenue, we aim to design a unified framework that effectively generalizes across diverse light adaptation tasks. Our proposed LALNet achieves this goal by delivering robust performance across multiple light-related tasks (Figure 1).
> Q2. Modular Approach Lacking Coherent Integration
**A2:** LALNet’s design is grounded in two key insights from light-related tasks:
(i) Different color channels have different light properties;
(ii) The channel differences reflected in the spatial and frequency domains are different.
To effectively leverage these observations, LALNet employs a dual-branch architecture:
* **MCM (Mixed Channel Modulation):** Capture channel-mixed features, focusing on inter-channel relationships and lighting patterns.
* **DDCM (Dual Domain Channel Modulation):** Extract color-separated features, focusing on light differences and color-specific luminance distributions for each channel in the spatial and frequency domains.
To ensure harmonious fusion of these complementary features, we introduce:
* **LSSM (Lighting State Space Model):** Integrates color-separated and color-mixed features to enhance illumination consistency across channels.
* **LGA (Lighting-Guided Attention):** Uses color-separated features as queries, guiding the learning of inter-channel illumination and noise differences for adaptive light restoration.
Our ablation studies (Table 5) confirm that each component is indispensable, as removing any one leads to performance degradation. We will further clarify these interdependencies in the revised manuscript.
> Q4. Unified Framework
**A4:** We appreciate the reviewer’s concern regarding whether LALNet constitutes a truly unified framework. While it is true that each task is optimized separately within LALNet, these tasks share a common architecture and design principles. This shared structure provides a promising advantage over entirely separate task-specific models, considering practical streamlined deployment for low-level tasks. A single framework can be adapted to multiple tasks with minimal adjustments, simplifying hardware deployment. We will clarify this point further in the revised manuscript.
>Q4. Writing and Presentation Improvements
**A4:** We sincerely appreciate the reviewer's feedback regarding the writing style and structure of our manuscript. We acknowledge that certain aspects, such as missing citation ("Mamba"), the improper use of transitions (e.g., "however"), and disjointed arguments, may have affected the clarity and coherence of our key points. To address these concerns, we will carefully revise the manuscript to enhance the logical flow, strengthen the motivation, and improve the articulation of our contributions and methodology, ensuring they are more accessible to readers.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the rebuttal, which has addressed some of my concerns.
However, in the spirit of constructive dialogue and with a sense of responsibility to the community, I would like to further discuss the motivation of the paper. The authors claim that the proposed unified framework offers three significant advantages.
Regarding the first claimed advantage—*improved generation*—Figure 1 shows that the proposed method achieves strong performance on individual tasks. However, it remains unclear how these improvements are related to the claimed *generalization*. As shown in Figure 1, the results are still obtained by training separately on each task. I would encourage the authors to further clarify how they define and evaluate generalization in this context.
As for the second point—*enhanced efficiency by reducing the need for multiple specialized models*—this appears to be a claim rather than a demonstrated result. It would strengthen the paper if the authors could provide empirical evidence to substantiate this point.
Finally, with respect to the third claimed advantage—consistent performance across tasks without significant trade-offs (Table 1, Table 2, Table 3, and Table 4)—I would appreciate further clarification on whether this consistency is a unique property of the proposed approach. Alternatively, do prior methods also maintain such consistency without exhibiting clear trade-offs in similar settings?
Overall, based on the author's response, the paper appears to present a modest contribution—one with some value, but not particularly novel. It seems to offer a relatively incremental and conventional network improvement, which may be seen as somewhat routine within the current landscape.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's constructive feedback and the opportunity to clarify our contributions further. Below, we address each point in detail:
### 1. Clarification on Generalization
In our manuscript, we introduce LALNet as a **unified framework**. But, through the discussions with you and other reviewers, we have come to recognize that this term may not fully convey the unique characteristics of our approach. To provide a clearer understanding, we will revise the terminology in the manuscript and describe LALNet as a **multi-task framework**. It is important to clarify that LALNet is not an all-in-one model, i.e., one model performs well in multiple tasks. Instead, LALNet operates as a multi-task framework, where each task is trained independently while leveraging the same underlying architecture. This design allows the models to take advantage of the shared architecture, thereby simplifying hardware deployment in multitasking scenarios.
Additionally, as shown in Table R4, we evaluate LALNet on third-party datasets. The results demonstrate that LALNet exhibits promising adaptability to unseen data, further validating its effectiveness as a multi-task framework and its consistent performance across diverse tasks.
Table R4: Validating generalization on third-party datasets includes HDR Survey and UVTM video datasets.
|Datasets|Metrics|HDRNet|CLUT|CoTF|ZS-Diffusion|IVTMNet|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|HDR Survey|TMQI|0.8641|0.8140|0.8612|0.8915|0.9160|0.9296|
|UVTM|TMQI|0.8281|0.8799|0.9006|0.8871|0.8991|0.9584|
### 2. Empirical Evidence for Efficiency Improvements
**From the deployment perspective:** LALNet requires the design of only one hardware path, loading different models for different tasks. In contrast, multiple task-specific models require distinct hardware paths, each with different operators and resource allocations, which complicates deployment, especially on edge devices. This simplicity is especially beneficial in resource-constrained environments, where efficient deployment is critical.
**Performance as a single model:** As shown in Table R5, the comparison between multiple task-specific models and LALNet demonstrates that LALNet significantly reduces computational cost in terms of FLOPs, parameters, and inference time.
Table R5: Computational cost comparison.
|Method|#Params|GFLOPs|Run Time|
|:-:|:-:|:-:|:-:|
|RetinexMamba|4.59M|42.8|0.478|
|RetinexFormer|1.61M|15.6|0.028|
|MambaIR|4.31M|110.1|0.460|
|Ours| 230K|1.8|0.018|
### 3. Clarification on the Consistency
As shown in Fig. 1 and Tab. 1–4, LALNet achieves significant performance gains across all four tasks:
* Exposure correction: +0.86dB
* Image retouching: +2.15dB
* Low-light enhancement: +1.23dB
* Tone mapping: +3.86dB
These results demonstrate that LALNet achieves consistent performance across all tasks, unlike some existing methods, such as in the exposure correction task, Retinexformer 23.75 vs. 23.56 RetinexMamba, but in the image retouching task, Retinexformer 26.20 vs. 26.81 RetinexMamba. And COTF (CVPR 2024), which performs well (ranked $3^{rd}$ in Table 1) in exposure correction but shows a drop in performance for image retouching (ranked $10^{th}$ in Table 2) and low-light (ranked $10^{th}$ in Table 4) tasks.
The consistency between tasks in LALNet is due to our careful observation of these tasks, and based on the physical properties of the light-related tasks, we designed different modules to ensure the light consistency between different tasks.
### 4. Clarification of Innovation
Compared to existing methods, LALNet introduces significant innovations rooted in a careful analysis of light-related tasks. Based on these insights, we developed a multi-task framework specifically tailored for handling multiple light-related tasks.
Our framework is grounded in the physical properties of light-related tasks, where different color channels exhibit unique lighting characteristics. We design the DDCM module to extract color-separated features, focusing on light differences and color-specific luminance distributions in both spatial and frequency domains. Additionally, the LSSM integrates both color-separated and color-mixed features, enhancing illumination consistency across channels. By using the extracted color-separated features as queries, LGA guides the main channel-mixed features to learn channel differences, improving adaptability for task-specific needs.
These physics-based design innovations are first introduced in our work and have been validated across multiple tasks to demonstrate their effectiveness. We believe that LALNet represents a meaningful step forward, offering a novel approach to task modeling grounded in the physical properties of light, rather than merely an incremental improvement.
We appreciate the reviewer’s thoughtful comments and will incorporate these clarifications. If you have any further questions or need more detailed information, please feel free to let us know. | null | null | null | null | null | null | null | null |
SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation | Accept (poster) | Summary: This paper proposes a single-state autoregressive transformer for song generation that produces vocals and accompaniments either simultaneously or in an interleaved manner. SongGen-Mixed Pro utilizes the delayed token prediction method from MusicGen, along with an auxiliary vocal token prediction to enhance vocal learning. Additionally, SongGen-Interleaving explores interleaving methods such as A-V or V-A.
## Update after rebuttal
After reading all the reviews from different reviewers and the author’s feedback, I still believe this paper presents solid work, despite the limitation of using a 16 kHz sampling rate. I strongly recommend accepting this paper for ICML 2025.
Claims And Evidence: The proposed idea is very simple yet powerful. Although this paper follows the overall framework of MusicGen (specifically the delayed token prediction method), it significantly improves mixed acoustic audio generation performance by incorporating auxiliary vocal token prediction. This simple concept has the potential to enhance overall song generation performance.
Furthermore, the authors adopted an interleaving prediction method for dual-track generation. They investigate the effectiveness of different prediction orders, such as vocal-first or accompaniment-first.
I recommend adding a discussion of similar work, such as MusicGen-Stem [1], which predicts the bass token first, followed by drums and other components, to edit music. This suggests that disentangling some components is important for predicting others.
[1] Rouard, Simon, et al. "MusicGen-Stem: Multi-stem music generation and edition through autoregressive modeling." ICASSP, 2025.
Methods And Evaluation Criteria: It would be preferable to include the evaluation code after the paper is accepted, given the limited open-source implementations for music generation. This could have a significant impact on the audio generation community.
Theoretical Claims: This paper used a well-defined token prediction method.
Experimental Designs Or Analyses: The main concern is that SongGen utilizes a much larger dataset compared to others. Furthermore, I could not believe that X-Codec could outperform other codecs, given that it was only trained on a speech dataset. Did you retrain X-Codec with an audio dataset? Please provide more details about the codec.
In addition, the model uses a sampling rate of 16 kHz. This may undermine the contributions of the paper, as many audio generation models should be trained at sampling rates above 32 kHz. Moreover, many objective metrics are calculated using a sampling rate of 16 kHz, which might lead to unfair comparisons.
Although the authors conducted many ablation studies, these do not demonstrate the superiority of this model when using 16 kHz audio.
There is important information available at sampling rates above 16 kHz. Please move the limitation section to the main manuscripts.
Supplementary Material: .
Relation To Broader Scientific Literature: .
Essential References Not Discussed: .
Other Strengths And Weaknesses: .
Other Comments Or Suggestions: For interleaving methods, do you mix the vocals and accompaniment by simply adding them together? I suggest building a post-mixing parallel generation model that uses the generated vocal and accompaniment to produce a refined waveform. If possible, you could also design the model to incorporate audio super-resolution or stereo generation.
Questions For Authors: The details of X-Codec should be described more.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's positive response and constructive comments. Below, we address each of the concerns and suggestions in detail.
**Q1:Details of X-Codec**
In our work, we use the publicly released 'xcodec_hubert_general_audio' checkpoint provided by the X-Codec authors. This model is trained on a large-scale private dataset (~200,000 hours) with a distribution similar to AudioSet, which is specifically adapted for general audio tasks.
While Encodec and DAC indeed yield better perceptual quality for ***audio reconstruction***, we observed that in ***song generation***, both codecs resulted in higher rates of invalid outputs—such as failure to follow lyrics and generation of noise or silence. In contrast, X-Codec consistently demonstrated more stable training, faster convergence, and higher success rates in generating coherent vocals, with notably lower PER scores. We speculate that the advantage of X-Codec may stem from its pretraining on a large amount of music data. Additionally, as emphasized in the X-Codec paper, the incorporation of not only acoustic but also semantic features from SSL representations might also contribute positively to the performance.
Although Encodec and DAC have been widely adopted in prior audio generation systems across domains such as speech, sounds, and instrumental music, song generation presents a substantially higher level of semantic complexity. Unlike speech, singing involves a broader pitch range, a rich variety of expressive techniques, and highly dynamic rhythmic patterns. Moreover, songs demand the precise coordination of vocal and instrumental components to achieve harmonic and structural coherence.
Despite X-Codec operating at a relatively low sampling rate of 16 kHz, we selected it as the most suitable option available at the time. To date, high-fidelity, song-specific neural codecs tailored for generative modeling remain an open challenge in the research community.
**Q2: Use of 16 kHz Sampling Rate and Its Limitations**
We appreciate the reviewer’s critical insight regarding the sampling rate. We acknowledge that using 16 kHz may limit audio fidelity and overlook important high-frequency content. As suggested, we will move the discussion of this limitation to the main manuscript.
As discussed in Q1, high-fidelity, song-specific codecs suitable for generation are still lacking. We thank the reviewer for this important suggestion, and we are actively working on integrating audio super-resolution modules and stereo rendering to mitigate this limitation in future work.
Regarding evaluation, we followed widely used protocols to ensure consistency with prior work. However, we recognize that many objective metrics computed at a 16 kHz sampling rate may fail to capture information above this frequency range. We agree that developing higher-resolution evaluation metrics is crucial for advancing the field of audio generation.
We would also like to clarify that all ablation studies reported in the main manuscript were conducted using X-Codec at a 16 kHz sampling rate, which reflects the model’s performance under this setting.
We believe these results still offer valuable insights, demonstrating the effectiveness of our proposed token pattern design for jointly modeling vocal and accompaniment tracks within a single-stage autoregressive Transformer framework.
**Q3: Track Mixing**
Thank you for the valuable suggestion. Currently, we mix the vocal and accompaniment tracks by simple waveform addition. We agree that developing a post-mixing parallel generation model could further improve the quality of the final output, and we plan to explore this direction in future work.
**Q4:Discussion of MusicGen-Stem**
Thank you for the suggestion. We agree that MusicGen-Stem provides valuable insights into the importance of disentangling different components in instrumental music generation. We will include a discussion of this work in the revised Related Work section.
**Q5:Release of Evaluation Code**
Thank you for the suggestion. We will release the evaluation dataset and code to support reproducibility and benefit the broader song generation research community.
---
Rebuttal Comment 1.1:
Comment: After reading all the reviews from different reviewers and the author’s feedback, I still believe this paper presents solid work, despite the limitation of using a 16 kHz sampling rate. I strongly recommend accepting this paper for ICML 2025.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your encouraging feedback. We are truly grateful for your recognition of our work and your strong recommendation. Your thoughtful review and kind support mean a great deal to us. We sincerely appreciate the time, effort, and expertise you dedicated to evaluating our submission, and your positive endorsement greatly motivates us to further pursue this line of research. | Summary: The authors propose SongGen, a pre-trained model for text-to-song generation supporting a variety of input controls (voice identity, music style description, lyrics) and two output modes (mixture, vocals + accompaniment independently). They explore numerous training configurations for modeling two streams of audio tokens with a single model. The authors will release pre-trained model weights, code, and data annotations.
Claims And Evidence: Key claims are not supported by adequate evidence. For example, it is claimed that “SongGen significantly outperforms Stable Audio Open, MusicGen, and Parler-tts across both subjective and objective metrics”. There are a few issues with this claim. Firstly, no precise analyses of statistical significance are presented to substantiate the claim. Secondly, it is unclear how statistical significance could be formulated for distribution-level metrics like FAD with only one training run (and therefore one sample). Thirdly, the claim is a strawman w.r.t. models like MusicGen and StableAudioOpen which do not support lyrics conditioning.
Methods And Evaluation Criteria: The proposed methods of delay patterns are reasonable, however there is limited evidence of broader applicability - the methods may be fairly narrowly constrained to this specific task of multitrack music modeling. The evaluation criteria are reasonable overall, though a challenge here is heterogeneity in input modalities supported by baselines. It would have been preferable to break down the evaluation into sets of baselines with a common “type signature” using appropriate prompts, e.g.: instrumental only (MusicGen, StableAudio, proposed), and voice + lyrics (Suno, Jukebox, proposed). As it stands, the comparisons are straw man. Also, given the subjectivity of the task, I would have strongly preferred to see a pairwise subjective evaluation setup over opinion scores.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experimental setups are overall reasonable. Would have been nice to see a quantitative analysis of the proposed data processing pipeline. Does the lyric recognition pipeline improve performance on a small dataset of gold standard lyrics transcription? Does the proposed CLAP-based captioning filtering improve human-judged relevance overall?
Supplementary Material: Briefly reviewed all the supplementary audio material. I am fairly intrigued by the claims in section D that semantic information is helpful for generation. I would like to see this investigated in greater detail, as this observation seems to represent a relative increase in metrics that far exceeds that of the proposed methods in the rest of the paper.
Relation To Broader Scientific Literature: This paper generally relates to an increasing interest in text-to-music generation emphasizing broader forms of control and increasing quality. There are a number of recent papers that explore joint modeling of multi-stream audio, e.g., Moshi (Défossez et al. 2024) and similarities / differences of the token patterns. There is little to no discussion of these other works.
Essential References Not Discussed: Other work exploring joint modeling of multi-stream audio (both within music and outside in speech). Other papers that focus on multi-stream generation (e.g. SingSong Donahue et al. 2023, Diff-A-Riff Nistal et al. 2024)
Other Strengths And Weaknesses: Strengths: the release of the weights for this model will be helpful for the broader open weights music AI research community.
Weaknesses:
- Results are very low fidelity (16kHz, noisy) - is this all the codec or is the generative model contributing as well? Why not use a higher quality codec?
- The authors frame joint modeling of P(voice, accompaniment) as a feature over pipeline-based approaches of P(accompaniment | voice) * P(voice). However, this could just as easily be framed as a criticism, as this model does not obviously support vocals-to-accompaniment generation.
- Overall, an impressive engineering feat, but limited interest from a research point of view.
Other Comments Or Suggestions: “Accompaniment is easier to produce” in intro: unjustified, subjective, potentially misleading
Questions For Authors: - Why are the objective metrics missing for GT / Suno?
- In sound examples, the reference voice contains the input lyrics - does SongGen generalize when the reference voice differs from lyrics prompt?
- Why does “GT” sound so compressed in the sound examples page?
- What does it mean to release “annotated data”? Does this mean releasing just the annotations or the MSD audio (which is copyrighted)?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and critical feedback. Below, we provide point-by-point responses to the concerns.
**Q1: Single-stage vs. Two-stage**
We respectfully disagree that joint modeling should be viewed as a criticism. For text-to-song generation, single-stage models consistently outperform the two-stage pipeline in both efficiency and generation quality. Due to space limitations, we kindly refer the reviewer to Response Q1 to Reviewer F4Th for supporting experiments and detailed discussion.
**Q2: Low Fidelity (16 kHz); Codec Choice**
In our experiments, we tested several codecs, including higher-fidelity options like Encodec and DAC. While these codecs perform well in speech and pure music generation, they show unsatisfactory performance in song generation. For a more detailed discussion of our codec selection, we kindly refer the reviewer to Response Q1 to Reviewer 3Stt.
Although X-Codec operates at 16 kHz, we selected it as the most suitable option available at the time. Currently, high-fidelity, song-specific codecs for generative modeling remain an open problem in the community. We fully acknowledge the limitations imposed by low fidelity, and we are actively working on integrating audio super-resolution modules to improve audio quality in future work.
**Q3: Clarification of “Accompaniment is easier to produce”**
We apologize for the confusion caused by this phrasing, and we will revise the sentence in the manuscript to avoid ambiguity.
Our intent was to highlight the learning bias observed during joint modeling of vocals and accompaniment. To illustrate this more intuitively, we include a visualization of the Content Enjoyment (CE) curves for both tracks, generated by the mixed-mode model over training steps (refer to [Section C of our anonymous demo page](https://songgen66.github.io/demos/) ). The figure shows that the accompaniment track improves much faster, reaching near-GT performance around 104k steps, whereas the vocal track improves more slowly, and still exhibits a noticeable gap from GT performance even after 168k steps.
**Q4: Evaluation Criteria and Fairness of Comparisons**
At the time of submission, there were no open-source baselines specifically for text-to-song generation (except Jukebox), which limited direct comparisons. We appreciate the reviewer’s suggestion and revise our evaluation to group baselines by input modality. We also incorporate automatic audio aesthetics metrics for a more comprehensive assessment.
For text-to-song, we compare against a two-stage pipeline and Parler-TTS*. Jukebox is excluded due to its impractical inference time (~3 hours for 20s on a V100). (refer to [Section B of our anonymous demo page](https://songgen66.github.io/demos/) ).
To evaluate accompaniment quality, we separated it from the generated songs and compared it with instrumental-only models. As shown in the table below, SongGen's accompaniment achieves performance between these two, despite being trained on a much smaller song dataset without pure instrumental music data.
| Model | KL↓ |CE↑ | PC↑ |
|-|-|-|-|
|MusicGen |0.74|7.29|5.32|
| Stable Audio Open|1.17|6.36|3.98|
| Mixed pro (ours)|0.88|6.40|5.21|
**Q5: Quantitative analysis of the proposed data processing pipeline**
Thank you for the helpful suggestion. However, building a gold-standard lyrics dataset and conducting human evaluations are time-consuming. To provide timely feedback, we instead randomly sample 5,000 examples from the training set and evaluate the filtering strategy using recently proposed automatic metrics for audio aesthetics and text-audio alignment. The results show that edit-distance filtering improves Content Enjoyment(CE) and Production Quality(PQ), while CLAP-based filtering increases CLaMP3 scores, indicating stronger audio-text relevance.
| filter |sample nums |CE↑ | PQ↑ |
|-|-|-|-|
|random sample|5000|6.77|7.15|
|edit distance<=20%|3038|6.97|7.31|
|edit distance<=5%|1680|7.04|7.37|
| filter |sample nums |CLaMP3 ↑|
|-|-|-|
|random sample|5000|0.135|
|CLAP>=0.25|1648|0.143|
**Q6: Missing Objective Metrics for GT / Suno**
We noticed that prior works typically do not report objective metrics for GT, and we thus omitted them as well. We now include them for completeness: FAD 0, KL 0, CLAP 0.18, CLaMP3 0.1052, PER 21.39, SECS 76.42. As the CLAP score for GT is unexpectedly low, we also report CLaMP3 for a more robust text-audio alignment evaluation. For Suno, due to the lack of an official API, all user study samples were generated manually. With 326 samples, full objective evaluation was infeasible due to the high manual cost.
**Q7: Why Does GT Sound Compressed?**
In the demo page, we use X-Codec to reconstruct the GT audio for a fair comparison under consistent codec settings and to represent the model's upper bound.
**Q8: Annotated Data**
We only release annotations from MSD, including VAD results, aligned lyrics, and generated captions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. A couple follow-ups:
Q1: My criticism here was not about quality. Instead, it is about that the single stage approach *removes a control capability* offered by two-stage models. Namely, the ability to generate an accompaniment *given* a pre-existing (non-generated) vocal input. Please clarify if I am still misunderstanding.
Q6: I'm confused - the paper says in "Evaluation dataset and metrics" that the test set is 326 samples for all methods. Are you using a bigger set to compute metrics like FAD? If so, can you clarify and add those details to the paper?
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response.
**Q1:**
Thank you for raising this point — we now better understand your concern. Our current single-stage framework focuses on joint generation and does not support vocals-to-accompaniment generation by default. However, this control capability can be naturally integrated into our framework via fine-tuning. As a preliminary solution, we prepend the vocal track to the target audio sequence in the decoder during fine-tuning, enabling the mixed-mode model to generate a song conditioned on a pre-existing vocal input. Generation demos are provided in [Section D of our anonymous demo page](https://songgen66.github.io/demos/).
In addition to offering improved generation quality, our single-stage model is also flexible and extensible. With some adaptations, it can support track-conditioned generation (e.g., vocals, drums, bass). While our current fine-tuning strategy serves as a simple proof of concept, we plan to explore and implement more effective track-conditioning mechanisms in future work.
**Q6:**
We apologize for the confusion. We did not use a bigger set to compute metrics such as FAD. As there is currently no standardized public benchmark for song generation, we selected 326 song-relevant samples from the widely used MusicCaps test set to ensure transparency. All objective evaluations (including FAD) were conducted on this same 326-sample test set across all models.
Please feel free to let us know if you have any further questions or suggestions. | Summary: SongGen is a single-stage autoregressive Transformer that takes lyrics, description, and optional reference voice as input, and generates either mixed or dual-track (vocal/accompaniment) audio. High-level design of conditioning methods follow recent common practice using frozen encoders (MERT, T5, and VoiceBPE) through cross-attention, and proposes several token patterns (parallel vs. interleaving for dual-track modeling) along with an auxiliary vocal training loss (Mixed Pro) in a mixed mode for better vocal quality.
Claims And Evidence: While the design proposed in SongGen is technically correct with competitive quality against previous work of instrumental music generation, one of core claims "Traditional methods often rely on multi-stage processes, making pipelines inflexible and complex." would need further evidence by comparing a single-stage model (as in SongGen) versus a two-stage approach. While I understand that there're no suitable public baseline for evaluation, judging from the setup the authors have used (for example, using Demucs to gather vocal/accompaniment pairs), the authors can design a controlled experiment by training a two-stage Transformer stack (text-to-vocal & vocal-to-accompaniment).
Methods And Evaluation Criteria: The considered methods are based on variations of existing work (such as delayed codebook pattern in MusicGen), which are technical correct. Evaluation criteria include known objective and subjective metrics.
Theoretical Claims: This paper is mostly empirical, and I find no standout theoretical claims to evaluate.
Experimental Designs Or Analyses: The experimental setup employed several well recognized objective metrics (FAD, KL, CLAP, etc.) and subjective metrics on 5 attributes. My concern is about a rigor in statistical evaluation of the subjective metrics, where the confidence interval is lacking. I am not able to conclude if the improvements are significant.
Supplementary Material: I reviewed the demo samples.
Relation To Broader Scientific Literature: Simplifying singing music generation pipeline into a single-stage, decoder-only autoregressive model brings practicality and ease of use to end users, potentially fostering acceleration in open community research in this area where commercial models have been dominating.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: The simplicity of single stage Transformer design brings practicality to the end user, which does require significant effort and warrants credit. As mentioned in Claims and Evidence section, the claimed academic findings of the single-stage model being better than cascaded one has room to improve.
Other Comments Or Suggestions: No significant other comments.
Questions For Authors: Did the authors consider more controlled experiments of two-stage vs. single-stage appraoch, since the paper posited this as one of the main motivation towards single-stage design?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments, which are extremely helpful in improving our work. We are also grateful for your recognition of the technical soundness and practical value of our approach, as well as your acknowledgment of the substantial effort behind it. Below we provide detailed responses to your concerns.
**Q1: Controlled experiments-- Single-stage Outperforms Two-Stage**
Thank you for your insightful suggestion to conduct a controlled comparison. We train a two-stage Transformer stack using the same architecture and training data as SongGen. Specifically, in Stage 1, given lyrics, description, and a 3-second reference voice, the first model generates the vocal track; in Stage 2, given lyrics, description, and the generated vocal (prepended into the decoder), the second model generates the accompaniment track. The final song is then mixed from the two tracks.
We conduct both automatic and human evaluations. For automatic evaluation, we introduce recently proposed and effective metrics to provide a more comprehensive assessment, including the audio-text alignment score CLaMP3 [1] and content-based aesthetics metrics proposed by Meta [2], covering Content Enjoyment (CE), Content Usefulness (CU), Production Complexity (PC), and Production Quality (PQ). Additionally, we measured inference time on an A800 GPU for generating 30-second song samples.
***Generation audios are also provided in [Section A on our demo page](https://songgen66.github.io/demos/) for the reviewers to review.***
Table 1 Automatic Evaluation:
| Model| FAD↓ | KL↓ |CLAP↑ | CLaMP3↑ | CE↑ | CU↑ | PC↑ | PQ↑ | Inference Time↓ |
|-|-|-|-|-|-|-|-|-|-|
| two-stage|2.18|0.78|**0.29**|0.085|6.39|6.27|5.90|6.69| 42.85s|
| Mixed |1.74|0.71|**0.35**|0.093|6.50|6.66|6.14|7.03|18.02s|
| Mixed pro (ours)|**1.71**|**0.69**|**0.35**|**0.094**|**6.77**|**6.86**|**6.18**|**7.19**|18.04s|
| Interleaving (A-V) (ours)|1.87|**0.69**|**0.35**|0.093|6.67|6.72|6.11|7.12|34.5s |
Table 2 Human Evaluation:
| Model| OVL. |REL.| VQ. | HAM. | SS. |
|-|-|-|-|-|-|
| two-stage|3.39±0.03|3.20±0.04|3.98±0.07|2.97±0.04|3.89±0.03|
| Mixed |3.58±0.05|3.70±0.02|3.55±0.07|3.39±0.05|3.92±0.05|
| Mixed pro (ours)|**3.96**±0.04| 3.86±0.04| 4.07±0.06| **4.01**±0.05| **4.04**±0.05|
| Interleaving (A-V) (ours)|3.95±0.03| **3.87**±0.06| **4.15**±0.05| 3.82±0.03| 3.93±0.04|
Our results demonstrate that ***the single-stage model outperforms the two-stage pipeline in both efficiency and generation quality***:
- ***Efficiency***: Compared to the single-stage approach, the two-stage pipeline requires more complex training and inference procedures. Experimental results indicate that the inference time of the two-stage model is more than twice that of the mixed-pro single-stage model.
- ***Generation Quality***:
- Unlike joint modeling of P(vocal, accompaniment), which indicates the joint probability of modeling vocal and accompaniment, the pipeline-based approach, which separately optimizes P(vocal) and P(accompaniment ∣ vocal), may fail to capture global optimality due to error accumulation across stages. This limitation is especially problematic for song generation, where harmony between vocals and accompaniment is crucial.
For instance, in genres like rap, vocal rhythm is tightly coupled with the instrumental beat. Generating vocals first without considering the underlying rhythm may result in rhythm misalignment. Conversely, in expressive genres such as ballads, where vocals typically guide the emotional flow, generating accompaniment first may constrain vocal expressiveness, resulting in rigid or disconnected performances.
In both cases, pipeline approaches struggle to capture the intricate interplay between vocals and accompaniment. In contrast, joint modeling in a single-stage framework enables better coordination and global optimization, resulting in more coherent and musically aligned outputs.
- Our results further support these observations — our single-stage model consistently outperforms the two-stage pipeline across both automatic and human evaluations, particularly on the aesthetics metrics (CE, CU, PC, PQ) and subjective scores such as Overall Quality (OVL.) and Harmony (HAM.).
**Q2: Statistical Rigor of Subjective Evaluation**
Thank you for pointing this out. We have updated the results to include 95% confidence intervals for each subjective metric. Furthermore, we incorporate newly proposed aesthetics metrics[2] to enhance the evaluation and better reflect the improvements brought by our approach.
Due to space limitations, the updated results are provided in [Section B of our anonymous demo page](https://songgen66.github.io/demos/).
[1] CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages (Feb 2025)
[2] Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound (Feb 2025)
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal along with the additional experiments. The controlled experiment compared against the cascaded version would better illustrate the merit of this work. The qualitative case studies of the samples as described in the rebuttal will also be helpful (now that there're more from the demo). From the provided samples, the cascaded approach can be off-beat, which I think is also reflected by HAM.
Could you also specify the training details of the newly added two-stage baseline? Since it is the author's own, the reader would further question the fairness in terms of budget allocation and rigor in optimizing the training recipe. That would convince the readers about the soundness of the baseline and the merits of the proposed single-stage design.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments.
Regarding the newly added two-stage baseline, we provide more details to address your concerns.
The resource allocation and training strategy for the two-stage pipeline are consistent with those used for SongGen mixed training Step 1. Since voice-free support is not directly related to the core comparison between the one-stage and two-stage designs, we omit this part of the two-stage model in the rebuttal phase to provide timely feedback.
Specifically, the two models in the two-stage pipeline are trained separately for approximately 200K steps each, using 16 NVIDIA A100 (80GB) GPUs with a batch size of 16 per GPU. We observe that the loss begins to plateau around 60K steps for both models. For optimization, we employ the AdamW optimizer with β₁ = 0.9, β₂ = 0.99, and a weight decay of 0.0001. The learning rate is set to 0.0001, and we apply a cosine learning rate schedule throughout training.
To ensure reproducibility, we will make the complete training configurations and scripts for the two-stage baseline publicly available.
We hope this information helps address your concerns. Please feel free to let us know if you have any further questions or suggestions. We would be very grateful if you could kindly consider raising the score. | null | null | null | null | null | null | null | null |
Unveiling AI's Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors | Accept (poster) | Summary: The authors trained a secondary model (a mentor network) to predict whether a deep learning model would make a mistake or not. A mentor network consists of a backbone and two MLPs. The first MLP is trained to replicate latent variables of the mentee network whose answers are analyzed, and the second MLP is trained to predict the correctness of the mentee network. More specifically, the authors analyzed mentors’ accuracy on in-domain, out-of-domain and adversarial inputs.
Claims And Evidence: The topic is of high importance because probing deep learning models’ decision-making process will help us build reliable models, but it remains unclear what we can learn from the authors’ reported results. First, the reported accuracy of mentor networks is not much higher than the baseline measures. Second, a mentor network is another black box, meaning we still do not fully understand their decision-making process. The authors speculated that mentor network errors can reveal the loss landscape of mentee networks (see loss landscape analyses), but they do not provide any quantitative results. If they can provide some evidence supporting that mentors can explain decision-boundary (or even loss landscape) to mentees, their study may have a profound impact.
Methods And Evaluation Criteria: The authors compare their own algorithm with multiple baselines using multiple datasets, which is a strong point.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: See my comments above.
Supplementary Material: Briefly going through them.
Relation To Broader Scientific Literature: The topic is highly important. But I do not think using another black-box model (i.e., mentor) can help us to better understand deep learning models (i.e., mentee) decision-making. The results presented are not sufficient.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Q1: It is not clear why a mentor network would need two MLPs, each of which is independently trained. What happens if the first MLP, trained on the embedding of the mentee network, is removed?
Q2: The authors created out-of-domain examples by adding noises. As the corrupted inputs have the same semantic meanings, I do not think they are out-of-domain inputs. Can the authors further explain why the corrupted inputs are used as out-of-domain inputs?
Q3: The authors found that mentors can generalize across mentees. This result may be related to natural adversarial examples that many ImageNet models fail to correctly classify. Can the authors provide any insights into the implications of natural adversarial examples for the mentor-based analysis?
Reference for natural adversarial examples: Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., & Song, D.X. (2019). Natural Adversarial Examples. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15257-15266.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[YWy5.1-Claims And Evidence]** First, our paper serves as a proof-of-concept, showing that training mentors with adversarial attack (AA) errors from the mentee has a greater impact on improving error prediction accuracy than training with in-domain (ID) or out-of-domain (OOD) errors. In **Tab. 1**, we use the adversarial images produced in the final iteration of the PIFGSM method to train SuperMentor.
As requested by the reviewer, to probe the decision boundary of the mentees, we used adversarial images from all iterations of the PIFGSM method (with 3 iterations in total) to train our SuperMentor and named this new SuperMentor as "En-SuperMentor". By doing so, we can further diversify the samples and better capture the mentee's decision boundary. As shown in **Tab. R2 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)**, En-SuperMentor boosts average accuracy from 77.0% to 84.1%, which is significantly better than the other baselines and original SuperMentor.
Next, as requested, we also added two quantitative loss landscape analyses using the method in [b]. In these analyses, we apply small perturbations to the mentor's weights and monitor the resulting changes in SuperMentor’s average accuracy.
In the first loss landscape analysis (**Fig. R4 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)**), we examine mentors trained on three different error types: ID, OOD, and AA from the ImageNet-1K dataset. The results indicate that mentors trained on AA errors exhibit a wider loss landscape than those trained on ID or OOD errors. This suggests that training mentors with adversarial images makes them more robust to weight perturbations, thereby enhancing their ability to capture the generic features for predicting the mentee's decision-making. This finding strongly supports our loss landscape analysis discussed in **Sec. 4.1**.
The second loss landscape analysis (**Fig. R5 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)**) examines the mentor’s error prediction performance using two different backbones: ResNet50 and ViT. The figure shows that mentors with ViT architectures have a considerably wider loss landscape compared to those with ResNet50. This further reinforces our claims in **Sec. 4.2** that transformer-based mentor models outperform their ResNet-based counterparts in error prediction.
We will add these results and discussions in the final version.
[b] Li et al. "Visualizing the loss landscape of neural nets." NeruIPS 2018.
**[YWy5.2-Questions For Authors]** The MLP branch associated with distillation loss $L_d$ enables the mentor to mimic the mentee's predictions, and the MLP branch associated with loss $L_r$ is responsible for predicting whether the mentee will make mistakes on the input images. Without the first MLP for $L_d$, the mentor cannot effectively capture the learning patterns of the mentee within its feature extraction backbone. We conducted an ablation study (see **Appendix, Tab. S4**) by removing the first MLP (i.e., eliminating the distillation loss $L_d$). The experimental results indicate that excluding the first MLP leads to a significant decrease in the mentor’s performance across all datasets.
**[YWy5.3-Questions For Authors]** The concept of out-of-domain data remains somewhat ambiguous (Guérin, et al., 2023) in computer vision. In our paper, we define out-of-domain data as any data that falls outside the training domain (see **Sec. 3.3**), following the settings outlined in (Hendrycks & Gimpel, 2016; Luo et al., 2021; Yu et al., 2022). Although corrupted images have the same semantic meaning as clean images, the corruption methods introduce distributional shifts relative to the training domain. Whether these noisy images are classified as ID or OOD is irrelevant; what truly matters is that our SuperMentor can predict the errors of mentees when noise is added to the original images.
**[YWy5.4-Questions For Authors]** According to the suggested paper by the reviewer, the natural adversarial samples refer to the samples that always lead to wrong predictions regardless of which AI model we use. Following the same definition, we defined sets $N_1, N_2, N_3$ in the caption of **Tab. R3 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)**. The result in **Tab. R3** indicates that the natural adversarial samples in $N_1$ play a key role in explaining SuperMentor’s strong generalization across different architectures. However, SuperMentor can also perform well on the non-natural adversarial samples ($N_2$ and $N_3$). For example, it achieves 77.7% accuracy on the set $N_3$ for ID error types. This demonstrates that SuperMentor’s robust generalization ability is not limited solely to natural adversarial samples.
We will cite the suggested paper and add these new results in the final version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the fact that the authors conducted more experiments, which further supports that mentor networks can learn mentee networks’ behaviors. Although I still am not sure what we can learn from the black-box models that are trained to explain other black-box models, the authors’ empirical studies may be helpful to future experiments. Thus, I am raising my rating to 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the time and effort you have put into thoughtfully reviewing our paper. Your feedback has been helpful in enhancing our work. | Summary: This paper proposes to use mentor models to predict the errors of mentee models. The mentor model can learn from the mistakes of the mentees on adversarial images and generalize to predict the in-domain and out-of-domain errors of the mentees. In addition, the mentor trained on a mentee generalizes well in predicting the errors of other mentees.
## update after rebuttal
The rebuttal solved my concerns, and I will keep my positive rating after the rebuttal if the authors could incorporate the clarification into their draft.
Claims And Evidence: The claims are supported by experimental evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation make sense for the problem setting.
Theoretical Claims: There is no theoretical claim in this paper.
Experimental Designs Or Analyses: The experiment design makes sense in this problem setting, and the authors compared the performance of their proposed method with a series of baselines.
Supplementary Material: Appendix A, D, H
Relation To Broader Scientific Literature: Previous relevant work on error monitoring relies on the manually defined metrics to estimate the likelihood of errors, while this work aims to train a mentor model to predict various errors that might happen in mentee model predictions.
Essential References Not Discussed: I didn't identify any such references.
Other Strengths And Weaknesses: Strengths:
This paper proposes to predict model errors in a unified way, including errors on the in-domain, out-of-domain, and adversarial images. The mentor model is also generalizable to other mentee models and error types. This paper is well-written and easy to follow.
Weaknesses:
This work focuses on the accuracy of predicting various errors for mentee models but does not distinguish the specific error type. In addition, applying mentor models requires training and aligning with the mentee models, which relies on the accessibility of mentee models and training data and, therefore, might limit its applicability. In addition, the evaluation of some common shifts between training and test data might be worth exploring, such as errors from spurious correlation, which can be categorized as ID errors and have great potential in applications.
Other Comments Or Suggestions: NaN
Questions For Authors: Except for the weaknesses I mentioned above, I only have minor questions:
1. Given that mentor models can recognize ID errors, will the mentor model facilitate the training of mentee models?
2. The paper mentioned that the joint training with ID, OOD, and AA data marginally outperforms the mentor trained only with AA. Was the number of samples the same for joint training or AA training? The improvement might be from the increased number of samples
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[B54K.1-Other Strengths And Weaknesses]** Thank you for your insightful comments! We address the points raised as follows:
First, training a mentor to distinguish specific error types is a promising future research direction, as we have mentioned in **Sec. 5** of our paper. This task is more challenging than predicting whether a mentee will make an error. It requires a mentor to first achieve high accuracy in error prediction before extending its capabilities to distinguish the exact types of errors a mentee may encounter. Therefore, our work serves as a solid foundation for this promising research direction.
Second, we agree that our method requires access to the mentee's model parameters to generate adversarial images. Training the mentor without using these parameters (i.e., treating the mentee as a black box) presents an interesting and challenging future research direction. However, we respectfully disagree that our mentor needs access to the mentee's training data. Our mentor is instead trained solely on the mentee's behaviors on its evaluation datasets. This is a significant advantage for real-world applications, especially when training the mentee is resource-intensive (e.g., high memory usage) or when the training data of a mentee is inaccessible.
Third, as described in **Appendix, Sec. I**, we conducted experiments evaluating our SuperMentor on additional OOD domains of the ImageNet dataset, including errors arising from spurious correlations. Specifically, we tested our SuperMentor on datasets such as ImageNet9-MIXED-RAND [a], ImageNet9-MIXED-SAME [a], and ImageNet9-MIXED-NEXT [a], which include images with random backgrounds from a random class, random backgrounds from the same class, and random backgrounds from the subsequent class, respectively. These datasets are designed to reveal how models can misclassify objects by focusing on spurious background cues rather than the objects themselves. For instance, an AI model might label an object as a bird based solely on a tree in the background. The experimental results demonstrate that our SuperMentor can still achieve above-chance error prediction performance on these datasets.
We will include these discussions in the final version of our paper.
[a] Xiao, et al. "Noise or Signal: The Role of Image Backgrounds in Object Recognition." ICLR 2021.
**[B54K.2-Questions For Authors]** Thank you for the reviewer's insightful question! As mentioned in **Sec. 5** of our paper, the reviewer’s point is indeed a promising future research direction. We would like to emphasize that our work serves as an important foundation in this research direction, as mentors with higher error prediction accuracy may enhance the training of mentee models by correcting their mistakes on the fly.
**[B54K.3-Questions For Authors]** We would like to clarify that the number of samples differs between Joint training and AA training. According to **Appendix, Tab. S1**, Joint training uses 25092 samples, whereas AA training uses only 6774 samples. The marginal improvement observed in Joint training may be attributed to the larger training sample size. This suggests that the quantity of samples is not the key factor in enhancing a mentor’s error prediction performance. Training mentors on samples that accurately capture the error patterns of mentees is crucial. For instance, the adversarial images generated using the PIFGSM method in our paper effectively illustrate this point. With only 6774 training samples from PIGFSM—one-fourth the size of the Joint training dataset—our SuperMentor on AA training achieves performance comparable to Joint training. These findings further underscore the significance of our study by providing critical insights into optimal training practices for mentor models.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal from the authors and I will keep my rating. I agree that it would be an advantage if the mentor model only needs the evaluation datasets and the discussion on the comparison of joint and AA training makes sense to me. In addition, does the joint training use all samples from ID, OOD, and AA in Tab S1? A minor confusion is that I'm not sure which datasets and models correspond to "Joint training uses 25092 samples, whereas AA training uses only 6774 samples" in the rebuttal. The ResNet mentee with CIFAR 10 described in Appendix D has a total number of 5766 test samples for AA and less than 10k test samples for the sum of ID, OOD, and AA.
---
Reply to Comment 1.1.1:
Comment: For Joint training, we do NOT use all the samples from ID, OOD and AA listed in **Appendix, Tab. S1**. Instead, as described in **Appendix, Sec. D**, we select samples from the speckle noise (SpN) error source for the OOD error type and from the PIFGSM error source for the AA error type, along with ID samples, to train the mentor. We select SpN and PIFGSM because these two error sources are most effective for training mentors in their respective error types. In other words, mentors trained on SpN achieved the highest average accuracy among the OOD-trained mentors, while those trained on PIFGSM attained the best average accuracy among the AA-trained mentors (see **Appendix, Fig. S3(c)**).
Therefore, based on the setup described above, in Joint training, ResNet50 serves as the mentee, and the training samples are drawn from the CIFAR-10 dataset’s ID, SpN, and PIFGSM error sources. The total number of training samples of Joint training is calculated as 151 + 9547 + 690 + 7930 + 1613 + 5161 = 25092. For AA training, only the training samples from the PIFGSM error source are used, resulting in 1613 + 5161 = 6774 training samples. Please refer to **Appendix, Tab. S1** for these numbers.
In addition, during this rebuttal, we also introduced a fully joint trained mentor which was trained on all error sources (ID, OOD, and AA, a total of 9 error sources), achieving an average evaluation accuracy of 84.6% across all error sources. In this scenario, all testing samples are in distribution because the mentor has seen samples from every error source during training. Despite this, the improvement compared to a mentor trained solely on the PIFGSM error source (84.6% vs. 78.0%) is insignificant. Consequently, this finding further indicates that (1) Training mentors exclusively on the AA error source offers a more data-efficient strategy for real-world applications. (2) Remarkably, a SuperMentor trained solely on AA generalizes well to unseen error sources, achieving performance comparable to the upper bound obtained by jointly training on all error sources. | Summary: The authors propose training a "mentor" model to learn to predict the errors of a "mentee" model. The authors evaluate multiple choices for training the mentor model, such as training with in-distribution, out-of-distribution, and adversarial examples to predict errors. The authors combine these results to propose a "SuperMentor" which can outperform the baseline mentors in predicting errors.
Claims And Evidence: Strengths:
- The paper trains and evaluates their mentor models on a number of settings, across different architectures and datasets, and compares to many relevant baselines.
- The discussion regarding what training data (between in-distribution, out-of-distribution, and adversarial examples, plus some investigation of level of distortion of the images) is very interesting
- Investigating the role of how different architectures affect the the accuracy was also interesting.
- The paper tackles an important problem of predicting when neural networks make errors.
Weaknesses:
- The main technical weakness is the evaluation. In particular, the authors choose to select a data split for each model and each dataset where half of the examples were classified correctly and half of the examples were classified incorrectly. This means that each model is evaluated on different datasets. Plus, after generating the dataset, the remaining examples were used for training, and so the training differ from model to model and setting to setting. This makes comparisons across settings very hard to evaluate. For example, there is a confounding variable where it is unclear if adversarial vs OOD vs ID examples carry more information about predictions themselves, or if the examples for which the models make errors are easier/harder to learn. I think to address this the authors should consider how to control for this setting—or, if a reasonable method to control for this doesn't exist—at least ablating the choice of exactly which examples were chosen for training and evaluation. While this is hopefully easily addressable, I believe that this issue can potentially confound the takeaways from the paper, and so it should be addressed before publication. If this were addressed at all my score would increase.
- A more minor issue: In, e.g., Figure 3, the variance appears quite large. It would be likely important to run some sort of a statistical test to ensure that the differences are in fact statistically significant before making any claims. Further, Table 1 should include some measure of variance to understand if the difference in performance between the proposed method and baselines (which is itself relatively small) is a result of noise.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense, although please refer to Claims and Evidence for additional comments on the validity of the methods.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Please refer to Claims and Evidence for comments.
Supplementary Material: Reviewed (via a skim for relevant details) the entire appendix.
Relation To Broader Scientific Literature: The contributions are connected to relevant literature.
Essential References Not Discussed: No essential references are not discussed, as far as I can tell.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[LEYc.1-Claims And Evidence]** We appreciate the reviewer’s thoughtful question and would like to clarify the following two points:
First, our evaluation settings are fair for all mentors. The dataset split presented in **Appendix, Tab. S1** applies only to the error source used for training the mentor, not to all error sources. For instance, if Mentor A is trained on adversarial images generated by PIFGSM error source, only the PIFGSM-generated samples are split into training and evaluation sets according to **Appendix, Tab. S1**. All samples from other ID, OOD, and AA error sources are fully used for evaluation. Thus, comparing its performance with that of other mentors on these error sources is fair. It is true that Mentor A may perform better on the PIFGSM error source since the mentor was trained on it. However, even when excluding its performance on the PIFGSM error source, its overall average performance ranking compared to other mentors remains largely unchanged. This observation applies to mentors trained on other error sources as well. Confusion matrices in **Appendix, Fig. S3, S4, and S5** show that the evaluation results on the trained error sources (accuracy in the white background of confusion matrices) do not dominate the average performance (accuracy in the pink background of confusion matrices).
Second, we appreciate the reviewer's question regarding a potential confounding variable in mentor training—specifically, whether AA, OOD, or ID examples inherently provide more predictive information, or if some error examples are simply easier or harder to learn. To address this, we adopted an alternative dataset splitting strategy for mentor training, distinct from the one in **Appendix, Tab. S1**. For example, we start with the original images from CIFAR-10 and separately apply two perturbation methods, PIFGSM and SpN, to produce two sets of error sources. Next, use 60% of the PIFGSM-generated images to train a mentor model, and reserve the remaining 40% for testing. Similarly, use 60% of the SpN-generated images (corresponding to the same original CIFAR-10 images as the PIFGSM training set) to train another mentor model, and reserve the remaining 40% (corresponding to the same original CIFAR-10 images as the PIFGSM testing set) for testing.
This approach is applied consistently across mentors trained on other error sources, ensuring that all mentors are trained and evaluated on images derived from the same original CIFAR-10 images but different domain shifts. Consequently, the confounding factor related to the inherent ease or difficulty of learning certain samples is eliminated, leaving only the question of whether AA, OOD, or ID samples inherently provide more predictive information.
**Figures R1, R2, and R3 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)** present the performance of mentors on various error sources for the CIFAR-10, CIFAR-100, and ImageNet-1K datasets, respectively, under this new data split strategy. The results reveal that training mentors with adversarial attack errors from the mentee leads to a greater improvement in error prediction accuracy compared to training with ID or OOD errors. Consistent with our original finding reported in the paper, this finding confirms that adversarial images carry more predictive information than either ID or OOD images.
We will add these discussions and new experimental results in the final version.
**[LEYc.2-Claims And Evidence]** Thank you for the suggestions! To support our claims in **Fig. 3**, we computed p-values to compare the error prediction accuracy of mentors trained with different error types. Our primary claim is that mentors trained on the adversarial attack (AA) error type achieve higher accuracy than those trained on in-domain (ID) or out-of-domain (OOD) error types. To validate this, we calculated p-values using a two-tailed t-test and compared the performance of AA-trained mentors to ID-trained mentors and AA-trained mentors to OOD-trained mentors across four different [mentee]-[mentor] settings (ResNet50-ResNet50, ViT-ResNet50, ResNet50-ViT, ViT-ViT) on three datasets: CIFAR-10, CIFAR-100, and ImageNet-1K. All pairwise p-values are below 0.05, indicating that these performance differences are statistically significant. Therefore, our claim is well-supported. We will include these p-value analyses in the final version.
We also agree with the reviewer's suggestion regarding **Tab 1**. The updated version, now including variance over 3 runs, is shown in **Tab. R1 in [link](https://drive.google.com/file/d/1JtoYDxTmAcNfgN_6SVtoBGO1r7u49T6Q/view?usp=sharing)**. This confirms that the performance differences between our proposed method and the baselines are not simply attributable to noise. We will include this updated table in the final version.
---
Rebuttal Comment 1.1:
Comment: Just to clarify—are all mentor models, regardless of their training data, evaluated on the exact same evaluation examples?
On line 241 right column: "To create balanced training and test sets for the mentor and avoid the long-tailed distribution problem, we select an equal number of correctly and incorrectly classified samples for each training batch and every test set."
This seems to imply that different mentors evaluate on different test sets, since different mentees (between ViT and ResNet50) get different examples correct and incorrect.
---
Reply to Comment 1.1.1:
Comment: TLDR:
The reviewer rightly notes that the evaluation sets for the mentors in our original submission were not identical, although they were drawn from the same underlying distributions. In our rebuttal, we introduce more rigorous controls to address this issue and ensure that all mentors are evaluated using the exact same test set. Importantly, the results from these updated experiments do **not** change the overall conclusions of the paper. We will include these revised experiments in the updated version.
Below, we provide detailed clarifications to directly address the reviewer’s concern.
1. The process described in line 241 (right column) applies exclusively to the training error source. For instance, if a mentor is trained using adversarial images generated by the PIFGSM error source, we ensure that both its training batch and the corresponding PIFGSM test set contain an equal number of correctly and incorrectly classified samples. In contrast, when evaluating this mentor on other error sources (like SpN or GaB), all available samples from other error sources are used. This approach is applied consistently across all mentors.
2. When comparing two mentors trained on the performance of **same** mentee using different error sources, each mentor is evaluated with **exact same** sample sets for every error source except the one used during training. For example, if the mentee is ResNet50 and we compare a mentor trained on the PIFGSM error source with one trained on the SpN error source, then the evaluation samples are **identical** across all error sources except for the PIFGSM and SpN error sources. The evaluation samples in PIFGSM and SpN error sources are determined using the data splitting strategy outlined in **Appendix, Tab. S1**. Importantly, even if we exclude their performance on the PIFGSM and SpN error sources, their overall average performance ranking compared to other mentors remains largely unchanged. This observation applies to mentors trained on other error sources as well. Confusion matrices in **Appendix, Fig. S3, S4, and S5** show that the evaluation results on the trained error sources (accuracy in the white background of confusion matrices) do not dominate the average performance (accuracy in the pink background of confusion matrices). This addresses the concern that in-domain performance might overshadow the out-of-domain performance.
3. After realizing that the point 1 above might lead to unfair comparisons among all the mentors due to their different test sets, we introduced an alternative evaluation strategy in **[link](https://drive.google.com/file/d/1OzBkXlU1ROtJD66r7Zz1ISFZq69LeQeh/view?usp=sharing)** by assessing all mentors on the **exact same** balanced testing set—each error source's test set contains an equal number of samples that the mentee correctly and incorrectly classified. In this approach, mentors trained on the performance of the same mentee are evaluated on **identical** testing sets across all error sources. **Figures R1, R2, and R3 in [link](https://drive.google.com/file/d/1OzBkXlU1ROtJD66r7Zz1ISFZq69LeQeh/view?usp=sharing)** illustrate the mentors' performance across various error sources for the CIFAR-10, CIFAR-100, and ImageNet-1K datasets, respectively, under this new evaluation strategy. The results show that training mentors with adversarial attack errors from the mentee leads to a significantly greater improvement in error prediction accuracy compared to training with ID or OOD errors, which is consistent with our original findings. | null | null | null | null | null | null | null | null |
Be a Goldfish: Forgetting Bad Conditioning in Sparse Linear Regression via Variational Autoencoders | Accept (poster) | Summary: The paper studies the classic spare linear regression problem from a linear variational autoencoder (VAE) viewpoint. On the theoretical aspect, the paper provides two theorems. The first theorem guarantees that every local minimum of the VAE energy is a global minimum under a fixed variance and the global minimum achieves the optimal sparse solution when the variance converges to zero under the restricted isometry constant (RIP) condition. The second theorem is similar to the first one but with preconditioning to tackle the case when the design matrix is ill-conditioned. Numerical results show that the proposed VAE approach outperforms Lasso in some sparsity levels.
## update after rebuttal
The reviewer would like to thank the authors for conducting additional experiments on SBL and sharing the comparisons. As my main concerns regarding SBL have been answered, I have adjusted my rating from weak reject to weak accept. Please incorporate these important comparisons of SBL into the paper as it is a better baseline compared to LASSO. Lastly, the design matrix should be fat and full rank so I believe the size 500x100 mentioned by the authors might be a typo.
Claims And Evidence: The VAE models considered in the paper are very restrictive. They are over simplified linear VAEs which we don’t use in practice. The models considered in the paper are basically linear regression models that can be derived without the knowledge of VAEs. The guarantee of the exact recovery still relies on the RIP conditions and the limiting scenario when the variance converges to zero. This does not explain how VAEs work in practice and how to design better VAEs.
Methods And Evaluation Criteria: It is expected to compare the proposed method with sparse Bayesian learning approaches such as Tipping’s and EM algorithms given the fact that they yield much better performance than Lasso. However, such comparisons are missing in the paper.
Theoretical Claims: The reviewer did not verify the correctness of the proof.
Experimental Designs Or Analyses: Similar to the above argument, it is hard to see the performance of the proposed method due to the lack of comparison to spare Bayesian learning approaches.
Supplementary Material: The supplementary material contains proofs and the reviewer did not verify the correctness of the proofs.
Relation To Broader Scientific Literature: The main theorems of this paper are related to the field of spare signal recovery in signal processing. On the machine learning side, the connections to VAEs and generative models are weak.
Essential References Not Discussed: Reference adequate.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and easy to follow.
2. The background knowledge is well presented.
Main weaknesses:
1. There are some clarity issues in the paper. In Theorem 4.2, the optimal sparse solution in shown in (15). However, it still depends on $\hat{w}_x$ and no algorithm is given to solve for $\hat{w}_x$. When the variance converges to zero, how to you solve the optimal $\hat{w}_x$? Is this still an NP-hard problem? Both Theorem 4.2 and 4.4 have the same issue.
2. The connections to VAEs and generative models are weak. These concepts seem redundant to this paper as the paper studies the optimization problems in (14) and (18). It is not clear to the reader how these findings can motivate better design of VAEs to model complex real-world datasets.
3. SBL algorithms provide much better support recovery performance than Lasso but comparison to SBL is missing. The numerical results in their current form are insufficient to determine the goodness of the proposed method.
Other Comments Or Suggestions: 1. Notations in Line 112 and 116 are not consistent with Eq. (2).
2. The function $g$ in (6) needs to follow a set of properties to be a valid diversity measure. Perhaps the authors can briefly clarify how to design $g$.
Questions For Authors: 1. When the variance converges to zero, how to solve the optimal $\hat{w}_x$? Is this still an NP-hard problem?
2. How does the proposed VAE method compare to SBL approaches such as Tipping’s and EM algorithms?
3. How to train the VAE models and how to generate data?
4. What is the distribution of ground truth $z^*$ used in the simulation? Is the proposed method sensitive to distribution?
5. The SNR seems to be fixed in Section 5. Is the proposed method robust to different SNR levels?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for providing feedback and suggesting insightful experiments for our work. Please find our detailed response below:
## Specific answers:
1. **Role of decoder variance γ**: We solve for optimal $\hat{\mathbf{w}}_x$ by training the proposed VAE with SGD while using γ as a training parameter. Theorems 4.2 & 4.4 show the existence of optimal sparse solutions when γ → 0, and our empirical results achieve the same for most cases except when SGD fails to attain the limiting solution. We have also addressed this point in detail as a part of our response to *Reviewer aNAM* on γ term in Theorem 4.4. We request you kindly to look at the same owing to limited #characters for the rebuttal, in case additional clarification is required.
2. **Comparison with other work**: Our experiments (Fig. https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FB.png ) reveal that while SBL methods such as Tipping’s and EM algorithm [a] surpasses LASSO in support recovery, our VAE-based approach demonstrates superior performance for Gaussian random walk design matrix. Additionally, our work provides theoretical optimality guarantees for VAEs even when dealing with ill-conditioned design matrices, a domain where SBL's performance remains largely unexplored [b]. Since SBL shares probabilistic roots with VAEs, this comparison suggests promising future research on investigating SBL's behavior under ill-conditioned design matrices and in high-sparsity regimes.
3. **Training VAEs**: To generate training data x we first select r uniformly random sparse locations within an n dimensional vector, then sample r non-zero coefficients from a standard normal distribution to obtain sparse coefficients z. Thereafter, x is generated by multiplying z with the design matrix Φ. These x values are used to train the VAE using SGD.
4. **Ground truth distribution**: Our proposed VAE focuses on locating non-zero coefficients of z, irrespective of their distribution. Once locations are determined, standard least squares retrieves exact coefficients. Our empirical results (Fig. https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FC.png ) demonstrate similar support recovery rate and mean squared error across various distributions, confirming our method's distributional invariance.
5. **Impact of noise**: To assess its impact of noise on sparse recovery, we performed extensive experiments at different SNR levels (Fig. https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FD.png ). While recovery rate increases with SNR in all cases, our proposed VAE obtains a higher recovery rate at a lower SNR compared to others. The higher recovery rate of VAE at lower SNR can be boosted further to achieve sparse recovery in presence of larger noise. This presents an interesting future research direction where preprocessing techniques such as filtering & mixture of Gaussian models [c] can be applied.
## Relation to broader scientific literature
Our proposed VAE architecture focuses on theoretical exploration, beyond generative tasks like image or text synthesis. We aim to understand VAE properties relevant to challenging NP-hard problems such as SLR. Our work opens up the application space for VAEs beyond standard generative domains, such as in areas with sparsity constraints like brain anomaly localization and genome selection [d].
## Other comments or suggestions:
1. **Notations**: We are referring to the posterior probability $p_\theta(z |x)$ in Line 112 and Line 116, which is being modeled by $q_\theta(z |x)$ in Eq. (2). The likelihood term $p_\theta(x |z)$ in Eq. (2), is different from posterior $p_\theta(z |x)$ in Eq. (2). We will update the explanation in the manuscript to enhance clarity.
2. **Sparse penalty term**: In Eq. (6), g is a penalty term for the coefficients which promotes sparsity. Please note, we want a sparsity promoting term for defining a cost function for SLR instead of a diversity promoting term which increases variety in coefficients, e.g., g is L1 norm for LASSO.
3. **Choice of linear architecture**: Lemma 4.1 connects the linear VAE global minimum to optimal sparse representations. We agree with the reviewer that linear VAEs limit conventional generative capabilities such as for images and text; however, they offer valuable insights into VAE properties. Please note that standard linear regression (LR), fails for sparse, underdetermined systems ( see Fig. https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FA.png ) due to exponentially large solution space where VAEs are capable of finding the sparse solution.
References:
[a] https://arxiv.org/abs/2105.10439;
[b] https://ieeexplore.ieee.org/document/9810291;
[c] https://ieeexplore.ieee.org/document/9420311;
[d] https://www.mdpi.com/1099-4300/26/9/794.
---
Rebuttal Comment 1.1:
Comment: The reviewer would like to thank the authors for their response and for providing some results on SBL. Despite it addresses some of my concerns, more work is needed. The major concern here is that it is still unclear how the proposed VAE method compared to Tipping's or EM SBL under the standard assumption where the elements of the design matrix follow a Gaussian distribution. Also note that diversity measure minimization is a standard approach for promoting sparsity. In sparse signal recovery literature, $g$ is usually referred to as the general diversity measure and there are several properties for a separable diversity measure. The reviewer will keep the rating unchanged and encourage the authors to conduct further study on SBL and incorporate it in the paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the comment and our comparison details are as follows:
## **Comparison with Tipping’s algorithm under standard Gaussian distribution**
In the case of the well-conditioned design matrices such as the standard Gaussian design matrix (500x100), VAEs empirically demonstrate a higher tolerance to low sparsity compared to Tipping's SBL [f] (with EM) in achieving a higher recovery rate especially with a larger number of features (n=500) (please see Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FJ.png ]). Furthermore our VAE-based approach offers a low per epoch time complexity of $O(n^2)$ (in backpropagation through a linear VAE) compared to $O(n^3)$ for matrix inversion in EM-based SBL, where $n$ is the dimension of the sparse coefficients $z$.
Note: Our preconditioned VAE achieves both higher recovery rate and tolerance to lower sparsity for ill-conditioned design matrices compared to EM-based SBL (as shown in Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FJ.png ]). This is because the VAE approach relies on SGD for minimizing loss that does not involve matrix inversion, which on the other hand is an integral part of EM-based SBL leading to inaccurate retrieval for ill-conditioned matrices.
## **Advantages of proposed VAE over SBL-based approach**
1. **Obtaining the global minima**: The global minima for SBL in context of SLR corresponds to the optimally sparse coefficients [g] but reaching the same is not guaranteed for EM-algorithms as they can converge to a spurious local minima. Our VAE architecture smoothes out the bad local minima in the loss landscape and guarantees that reaching the global minima coincides with the optimal sparse solution in the limiting case of the decoder variance (Theorem 4.2 of the manuscript).
2. **Handling Ill-conditioned matrices**: Ill-conditioned design matrices which don’t satisfy the RIP bound, lead to increased numerical instability during matrix inversion of the EM algorithm for SBL, thereby lowering the sparse recovery rate. On the other hand our VAE algorithm preconditions the ill-conditioned design matrices to satisfy RIP, thereby achieving a higher recovery rate (Fig. H [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FJ.png ]) compared to SBL at the same sparsity.
3. **Computation complexity**: Each iteration of the EM algorithm has the matrix inversion step of time complexity $O(n^3)$, where n is the dimension of the sparse coefficients $z$. This matrix inversion makes SBL computationally expensive, limiting the algorithm's applicability for large n. In contrast the backpropagation for our linear VAE is of the order $O(n^2)$ per epoch as it does not encounter the inverse operation.
4. **Prior information for sparse recovery**: The SBL algorithm is an empirical Bayesian method as opposed to VAE which employs variational inference. Therefore SBL, performing a type II maximum likelihood estimation, performs better with the a-priori knowledge of the distribution of non-zero coefficients in $z$ [h]. However this information is difficult to obtain in real-life settings as coefficients are unknown. VAEs fill this gap by incorporating a trainable diagonal matrix $\text{diag}(\mathbf{w}$) into the decoder architecture that captures the “sparsity” information without explicitly modeling the prior distribution. This is empirically validated (also in our previous response to Reviewer bPHS) in Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R2FC.png ] showing that support recovery of VAEs is invariant to the distribution of the non-zero coefficients.
Given an opportunity. we will add a new subsection in Section 4.2, to reflect the comparison with SBL techniques in detail.
Lastly, we have used the ‘g’ notation for the sparsity-inducing penalty term in Eq. (6) of the manuscript based on Eq. (2) in [h] (Wipf et al., 2011).
### **References**
[f] Tipping, M.E., 2001. Sparse Bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun), pp.211-244.
[g] Wipf, D.P. and Rao, B.D., 2004. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal processing, 52(8), pp.2153-2164.
[h] Wipf, D.P., Rao, B.D. and Nagarajan, S., 2011. Latent variable Bayesian models for promoting sparsity. IEEE Transactions on Information Theory, 57(9), pp.6236-6255. | Summary: This paper studies the application of Variational Autoencoders for the sparse linear regression problem, particularly in cases involving ill-conditioned design matrices and low sparsity, situations where traditional methods like LASSO often fail.
Claims And Evidence: The claims in the paper are supported by both theoretical analysis and empirical evidence. Specifically:
- Theoretical claims: Proved including the elimination of spurious local minima (Theorems 4.2 and 4.4) under specific architectural constraints.
- Empirical claims: Experiments conducted with three different types of design matrices (standard Gaussian, Gaussian random walk, and real-world Riboflavin dataset) demonstrate the effectiveness and improved robustness of the proposed method.
Methods And Evaluation Criteria: The proposed VAE methods and evaluation criteria (sparse support recovery rates across various sparsity levels and design matrix conditions) are sensible and appropriately chosen for the problem context.
Theoretical Claims: No significant issues were found with the proofs provided.
Experimental Designs Or Analyses: The experimental design and analyses were well-executed and sound. Specific checks included:
Supplementary Material: The supplementary material provides additional clarification
Relation To Broader Scientific Literature: It extends recent theoretical results by Wipf (2023) by applying VAEs beyond Simultaneous Sparse Regression (SSR) to the more challenging scenario of Sparse Linear Regression (SLR).
Essential References Not Discussed: nothing as far as I know
Other Strengths And Weaknesses: The work proposed to use VAE models to address solve the SLR problem. The work is self-contained interesting and sound in its own problem setup.
Other Comments Or Suggestions: Minor stylistic corrections could further enhance clarity (e.g., consistency in variable definitions, notation simplicity).
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for taking the time to review this work and provide valuable comments. We will be sure to update our manuscript to correct for stylistic corrections. Some of the plausible modifications include:
1. Using the same variable “n” for the coefficient dimension of SLR design matrix and latent dimension in VAE.
2. Hat (local/ global minimum) and star (ground truth) notations can be reserved for the theorem/ lemma results, and all intermediate variables can be represented without hat and star. We will clarify this in the text about specific variable usage and update with the regular notations in the manuscript. | Summary: The paper addresses the benefits of the variational autoencoder (VAE) objective to solve sparse linear regression (SLR). It shows that a particular VAE setup solves SLR under a restricted isoperimetry property (RIP) condition (Theorem 4.2), and a modification of this setup can induce preconditioning of the design matrix (Theorem). The benefit of this approach with respect to LASSO and augmented basis pursuit are demonstrated in an empirical validation study, specifically in the ill-conditioned and low-sparsity settings.
Claims And Evidence: Claims are supported by theoretical and experimental results.
Theory supports the claim that the chosen VAE setups are well-behaved for the SLR problem, and provide conditions for such behavior (notably on decoder noise parameter gamma). See below for a detailed discussion.
However the theory only covers some properties of the theoretical optimum of the VAE objective and thus does not cover imperfect optimization.
Therefore the empirical study of section 5 is welcome, and shows some benefits of VAEs for support recovery. However, other metrics, such as bias of the estimated solution, is not investigated empirically, nor even discussed.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense however they are limited, when it comes to empirical validation.
Notably, the influence of the number of features and number of observations is not studied. Importantly, the choice/learning of decoder hyperparameter gamma is also not studied while it is crucial to the theory.
Ideally, one would also like to see the performance of the method on a real-world dataset.
Theoretical Claims: I checked at a high level:
- theorem 4.2: the steps of the proof look reasonable, except for:
- the use of the RIP condition (section A.2 in appendix): the exact role of delta (e.g. how small it must be) is not explicit, neither in the statement of the theorem, nor in the proof. A reference to (Candes and Tao, 2005) is provided, but it is overall unsatisfactory as a proof argument, because no explicit result is given.
- the study of the convergence of the solution as gamma goes to zero: the authors seem to take the limit of the loss and study the resulting solution, instead of proving that the solution for a fixed gamma converges itself as gamma goes to zero.
- lemma 4.3.: the steps of the proof look reasonable
- theorem 4.4: I could not check (yet). However, I could not follow in the first place the explanations associated to the main text statement: below equation (19) (line 326) it is said "treating gamma as trainable parameters effectively penalizes the condition number of Phi in the VAI objective (18)", I do not see why (I don't see the condition number appearing in the objective).
Experimental Designs Or Analyses: The experimental design is satisfactory, baring uncertainties on the choice of hyperparameters mentioned above.
Supplementary Material: Was reviewed (see theoretical claims)
Relation To Broader Scientific Literature: As far as I know, the paper mainly relates to (Wipf, 2023) addressing the problem of solving Simultaneous Sparse Regression (SSR) with VAEs. In contrast to it, the present paper addresses SLR (a more classic setting), but specifically investigates the benefits of VAEs in the challenging case of ill-conditioning and weak sparsity. This case has been address with a different approach in (Kelner et al., 2024), which is used as baseline.
Essential References Not Discussed: I am not sure it should be labeled as essential, but bias issues of SLR are surprisingly not discussed nor quantified, while there is an extensive literature on it notably for LASSO. E.g.
A. Javanmard and A. Montanari. Debiasing the Lasso: optimal sample size
for Gaussian designs. Annals of Statistics, 46:2593–2622, 2018.
Other Strengths And Weaknesses: The writing can be improved in several places (e.g. sentences with missing words).
Other Comments Or Suggestions: The setting could be better introduced to a general audience by being explicit about what the variables and parameter represent in a standard regression setting, at the beginning of section 3
Questions For Authors: Please address my comments on:
- potential gaps/issues in the proof of theorem 4.2
- the claim that gamma can be automatically learned in theorem 4.4
- the missing analyses and metrics in the experimental section.
I hope some of the raised issues are misunderstandings, therefore I am leaning towards accept for now, but could change my score.
# Post rebuttal update
I am happy with the answers provided by the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their insightful comments on our submission. Please find our detailed response below:
## Addressing concerns about Theorem 4.2:
1. **Use of RIP condition**: Thanks for this excellent observation. The main requirement for A.2 Lemma 1 is the absence of distinct $\mathbf{w}^{(1)}$ and $\mathbf{w}^{(2)}$ that satisfy Eq. (37). With a perfect optimizer, a full rank Φ ensures the presence of a unique inverse $\Sigma^{-1}(\mathbf{w})$ for each $\mathbf{w}$, leading to no bad local minima. However, practical SGD might identify distinct $\mathbf{w}$’s with close inverse values satisfying Eq. (37). We use the RIP bound 𝛅 to ensure that the difference in inverse terms large enough to be detectable by SGD. For a 𝛋-sparse z, the RIP bound 𝛅 can be expressed as a weighted sum of activated column norms of Φ and cross correlations between them (for detailed explanation, please see: https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3Exp.pdf ). However, the presence of aligned columns leads to large correlations increasing 𝛅 value. A large 𝛅 suggests small difference in the inverse terms for $\mathbf{w}$’s that differ along the indices of the aligned columns having large correlations. Therefore small 𝛅 suggests higher success rate for the optimization algorithm in finding the true local/global optimum.
2. **Study of convergence as gamma goes to zero**: In the presence of γ (decoder variance), the optimal sparse solution requires us to solve the implicit stationarity condition in Eq. (37) for which a closed form solution does not exist. We leverage Theorem 4 from (Dai and Wipf, 2019 [e]) which states that for any γ>0, there exists a γ′<γ for which the VAE loss can be reduced. Our proposed VAE architecture satisfies the conditions for Theorem 4 from [e] implying that γ → 0 leads to minimizing the VAE loss. Therefore, it's valid to evaluate the limiting value of loss function in Eq. (26), and use it to obtain the local/ global minimum solution.
## γ term in Theorem 4.4
The γ term is a training parameter in Theorem 4.4, and it impacts the condition number of PΦ. As shown in Lemma 4.3 any positive value of γ improves the condition number of PΦ compared to Φ. Therefore, adding γ to the overall loss function improves the effective condition number of Φ, pushing it to satisfy the RIP property.
**Choice/Learning of Decoder hyper-parameter γ**: The trainable γ assists in achieving a higher support recovery rate of the preconditioned VAE for ill-conditioned SLR (Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3FH.png ]) compared to fixed γ. Although proposed VAE architecture can achieve no bad local minima condition for a fixed γ, optimal sparse recovery with γ → 0 is contingent on the success of the optimization algorithm. Imperfect optimization can hinder achieving this ideal scenario, as evidenced by our empirical results. We will update our manuscript to clarify this point.
## Experiments:
We chose the ill-conditioned Gaussian random walk design matrix scenario from Sec. 5 to perform the following experiments:
1. Dependence on number of features: Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3FF.png ] shows that the pre-conditioned VAE consistently outperforms other methods in achieving a higher support recovery rate as the number of features increases.
2. Dependence on number of observations: Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3FG.png ] shows that the support recovery rate for preconditioned VAE increases with the number of observations, as with increase in observations d, more information is available for solving the SLR.
Note: We performed experiments with real-world Riboflavin dataset in Sec. 5.3 using the 100 highest variances genetic features. Additional experimental results with random Riboflavin features (Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3FI.png ]) shows similar support recovery rate as shown in Sec. 5.3. These results will be added to the manuscript.
## Bias of the estimated solution
We emphasize that our VAE-based sparse recovery aims to identify the positions of non-zero coefficients rather than directly estimating their values. Once the correct support (non-zero coefficient positions) is identified, the SLR problem simplifies significantly and can be solved using ordinary least squares (OLS). Under a full-rank design matrix and Gauss-Markov assumptions, OLS is an unbiased estimator leading to optimal sparse recovery with no bias. However, at high sparsity VAE achieves a suboptimal support identification, introducing a bias (Fig. [https://anonymous.4open.science/r/VAE-for-SLR-3E8F/RB/R3FE.png ]) but VAE still achieves a lower bias compared to LASSO at all sparsities.
Response to other comments: We will start Section 3 with standard regression setting in the camera-ready manuscript.
References: [e] https://arxiv.org/abs/1903.05789 | null | null | null | null | null | null | null | null |
Fine-Grained Captioning of Long Videos through Scene Graph Consolidation | Accept (poster) | Summary: This paper proposes a zero-shot video captioning framework that leverages scene graph consolidation to bridge image-based vision-language models (VLMs) to video understanding without requiring paired video-text annotations.
Claims And Evidence: Some claims made in the submission are not sufficiently supported.
1. Efficiency Claims: While the authors emphasize "lower inference cost", LLaVA-NEXT-7B’s computational cost (7B parameters) negates this claim compared to smaller VLMs. VideoPrism-B [1] w/ PaLM-1B achieves higher performance with fewer parameters as reported in Table 11 of [2]. Besides, efficiency metrics (FLOPs, latency, or GPU memory usage) are not reported.
[1] Zhao, Long, et al. "Videoprism: A foundational visual encoder for video understanding." arXiv preprint arXiv:2402.13217 (2024).
[2] Wang, Yi, et al. "Internvideo2: Scaling foundation models for multimodal video understanding." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
Methods And Evaluation Criteria: While the methods align with zero-shot goals, the quantitative experiments are insufficient. Kindly refer to the Experimental Designs Or Analyses section for more details.
Theoretical Claims: No theoretical claims are made. The paper focuses on empirical validation.
Experimental Designs Or Analyses: The paper’s experimental design and analysis suffer from critical flaws that undermine the validity of its claims.
1. Misleading Categorization & Incomplete VLM Comparisons: While the authors position their method as "text-only training" (Table 1), the reliance on LLaVA-NEXT-7B (line 305) for frame-level captioning inherently makes this an LLM-based approach. This misleading presentation undermines the fairness of comparisons in Table 1, where LLM-based methods (e.g., Video ChatCaptioner) are treated as separate categories. Besides, the authors fail to directly compare against LLaVA-NEXT-7B itself or state-of-the-art video-centric VLMs like VideoPrism-B [2], which achieves CIDEr=40.3 on MSR-VTT (vs. 24.0 in this work) as reported in Table 11 of [1]. This omission raises concerns about whether the proposed scene graph consolidation adds value over simply using the base VLM’s outputs.
2. Efficiency Claims: While the authors emphasize "lower inference cost," LLaVA-NEXT-7B’s computational cost (7B parameters) negates this claim compared to smaller VLMs. As mentioned above, VideoPrism-B w/ PaLM-1B achieves higher performance with fewer parameters, suggesting the scene graph pipeline introduces bottlenecks rather than enhancements. Besides, efficiency metrics (FLOPs, latency, or GPU memory usage) are not reported.
3. Inadequate Ablation Study: The only ablation study in this paper is about the number of frames, which is not enough to support technical claims. It is necessary to conduct more ablation studies on the VLM backbone (e.g., BLIP-2 vs. LLaVA), hyperparameter k in subgraph extraction, and the graph-to-text model.
[1] Wang, Yi, et al. "Internvideo2: Scaling foundation models for multimodal video understanding." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
[2] Zhao, Long, et al. "Videoprism: A foundational visual encoder for video understanding." arXiv preprint arXiv:2402.13217 (2024).
Supplementary Material: I have read the entire Appendix, which mainly introduces additional qualitative results and examples of prompt instructions.
Relation To Broader Scientific Literature: The paper’s contributions align with several key directions in vision-language research, particularly in video captioning and multimodal representation learning.
Essential References Not Discussed: This paper has discussed enough essential related works.
Other Strengths And Weaknesses: Strengths:
1. The idea of parsing frame-level captions into scene graphs, then merging them into a unified graph and converting it into a video-level caption is reasonable and interesting.
Weaknesses:
1. Misleading Categorization & Incomplete VLM Comparisons: While the authors position their method as "text-only training" (Table 1), the reliance on LLaVA-NEXT-7B (line 305) for frame-level captioning inherently makes this an LLM-based approach. This misleading presentation undermines the fairness of comparisons in Table 1, where LLM-based methods (e.g., Video ChatCaptioner) are treated as separate categories. Besides, the authors fail to directly compare against LLaVA-NEXT-7B itself or state-of-the-art video-centric MLLMs like VideoPrism-B [2], which achieves CIDEr=40.3 on MSR-VTT (vs. 24.0 in this work) as reported in Table 11 of [1]. This omission raises concerns about whether the proposed scene graph consolidation adds value over simply using the base MLLM’s outputs.
2. Suboptimal Performance: While adopting a multimodal large language model (MLLM) as the backbone, the performance on zero-shot video captioning is inferior compared to other MLLMs and VLMs as reported in [1]. e.g., 24.0 (this work) vs. 40.3 (VideoPrism-B [2] w/ PaLM-1B) in CIDEr scores (↑) on MSR-VTT.
3. Efficiency Claims: While the authors emphasize "lower inference cost," LLaVA-NEXT-7B’s computational cost (7B parameters) negates this claim compared to smaller VLMs. As mentioned above, VideoPrism-B w/ PaLM-1B achieves higher performance with fewer parameters, suggesting the scene graph pipeline introduces bottlenecks rather than enhancements. Besides, efficiency metrics (FLOPs, latency, or GPU memory usage) are not reported.
4. Inadequate Ablation Study: The only ablation study in this paper is about the number of frames, which is not enough to support technical claims. It is necessary to conduct more ablation studies on the VLM backbone (e.g., BLIP-2 vs. LLaVA), hyperparameter k in subgraph extraction, and the graph-to-text model.
5. Missing Details on Graph-to-Text Model: The technical details on this module are insufficient, restricting reproducibility. It is reasonable to report detailed network architecture. e.g., feature dimensions and how these features are processed using mathematical notations. Besides, the claim of "lightweight" is unsupported without parameter counts or running time metrics versus baselines.
[1] Wang, Yi, et al. "Internvideo2: Scaling foundation models for multimodal video understanding." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
[2] Zhao, Long, et al. "Videoprism: A foundational visual encoder for video understanding." arXiv preprint arXiv:2402.13217 (2024).
Other Comments Or Suggestions: 1. Limitations of Image VLMs: The framework relies on LLaVA-NEXT-7B for frame captioning. A discussion of how errors in frame-level captioning propagate to the final video caption (e.g., via case studies) would improve transparency.
Questions For Authors: 1. Performance Gap vs. VideoPrism-B: VideoPrism-VideoPrism-B [1] w/ PaLM-1B CIDEr=40.3 on MSR-VTT, far exceeding your CIDEr=24.0. Does this gap stem from limitations in scene graph consolidation, or is it inherent to using LLaVA-NEXT-7B for frame parsing?
2. Ablation on Subgraph Extraction: How does the choice of k affect caption quality? For instance, does higher k improve coherence at the cost of missing transient objects?
[1] Zhao, Long, et al. "Videoprism: A foundational visual encoder for video understanding." arXiv preprint arXiv:2402.13217 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **1. Categorization of algorithms**
Given the diversity of zero-shot video captioning approaches, their categorization is not straightforward, and we acknowledge that alternative perspectives exist. However, since SGVC consistently outperforms LLM summarization throughout our experiments, its improved performance is not merely due to the use of LLMs but rather the effectiveness of the proposed algorithmic components. In the revised manuscript, we will reorganize Table 1 to better position SGVC and clarify the sources of its performance gains.
**2. Comparison against LLaVA-NEXT-7B itself and VideoPrism-B**
Since LLaVA-NEXT-7B [1] is an image-based captioner, directly evaluating its performance on video captioning is not straightforward. A meaningful approach is to compare LLM summarization and SGVC, as both share the same pipeline for extracting image-level captions using LLaVA-NEXT-7B but differ in their subsequent processing. As shown in Tables 1 and 2, our results clearly demonstrate the superior performance of SGVC over the simple combination of LLaVA-NEXT-7B and LLMs.
VideoPrism-B [2] indeed achieves outstanding performance on MSR-VTT, but it has been trained on an extensive collection of public and private videos with paired captions. As a result, its encoder is specifically optimized for video data, contributing to its superior performance. Additionally, it employs a powerful decoder (PaLM-2) for caption generation, which further enhances its results. Given these factors, we argue that a direct comparison between VideoPrism-B and our method is not entirely fair. Since VideoPrism-B’s model and source code are not publicly available, we instead selected InternVL2.5-1B [3], a video captioner, to assess SGVC’s performance with a video-based model. Our results in Tables D and E demonstrate that SGVC outperforms LLM summarization when integrating a clip-level captioner, highlighting the effectiveness of our graph-based approach.
Table D. Experiments on MSR-VTT, using InternVL2.5-1B as VLM backbone.
|Method|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM summarization|15.1|**23.6**|19.6|0.339|0.532|0.414|
|SGVC|**17.7**|23.5|**26.7**|**0.481**|**0.550**|**0.513**|
Table E. Experiments on ActivityNet Captions, using InternVL2.5-1B as VLM backbone.
|Method|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM summarization|5.4|11.5|13.0|0.324|0.320|0.322|
|SGVC|**8.4**|**13.4**|**25.6**|**0.348**|**0.326**|**0.337**|
**4. Efficiency**
Please refer to our response to #1 for Reviewer JfkZ.
**5. Additional ablation studies**
Please refer to Tables A, B, and C for the ablation studies regarding VLM captioner backbones and hyperparameter $\tau$, respectively.
Additionally, we analyze the impact of the hyperparameter $k$, which determines the size of the extracted subgraph. As shown in Table F, lower $k$ values yield more concise subgraphs centered on salient objects, enhancing precision-oriented metrics (CIDEr, $P_\text{BERT}$). In contrast, higher $k$ values produce richer subgraphs that capture broader contextual information, leading to improvements in recall-oriented metrics (METEOR, $R_\text{BERT}$).
Table F. Ablation study on the hyperparameter $k$, on the MSR-VTT test set.
|$k$|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|1|17.1|23.0|**24.0**|**0.455**|0.547|**0.497**|
|3|17.1|**23.7**|22.1|0.428|**0.551**|0.482|
**6. Clarification on the graph-to-text model**
Our graph-to-text model consists of a BERT-based encoder and a T5-base decoder, totaling 235M parameters. Given that the other two-stage video captioning models often rely on LLMs, our approach is significantly more lightweight. We will include detailed information about our architecture in the revised version of our paper. More importantly, we plan to release our source code and models upon acceptance of our paper.
**7. Error propagation case study**
As mentioned in our response to #3 for Reviewer JfkZ, hallucinations in frame-level caption generation introduce errors in the final video-level captions. We will discuss cases of error propagation in the revised manuscript.
[1] Liu, H. et al. LLaVA-NeXT: Improved reasoning, OCR, and world knowledge. https://llava-vl.github.io/blog/2024-01-30-llava-next/, 2024.
[2] Zhao, L. et al. VideoPrism: A Foundational Visual Encoder for Video Understanding. In ICML 2024.
[3] Chen, Z. et al. Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling. In arXiv, 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response, but my original rating remains unchanged as my concerns about experiment soundness and efficiency reporting (FLOPs, latency, or GPU memory usage) were not addressed.
Regarding the authors' missing comparison, Videoprism was one of the papers I referenced. As other reviewer mentioned, this submission actually omitted other relevant works, such as [R1], which achieved a higher CIDEr score (27.1) on MSR-VTT in 2022. Additionally, the experiments were limited to MSR-VTT and ActivityNet, whereas most recent video captioning works also evaluate on Kinetics-400, MSVD, and YouCook2 to demonstrate generalization.
[R1] Yan, Shen, et al. "VideoCoCa: Video-text modeling with zero-shot transfer from contrastive captioners." arXiv preprint arXiv:2212.04979 (2022).
---
Reply to Comment 1.1.1:
Comment: **8. Efficiency report**
Table G presents a detailed comparison of computational costs, in terms of average per-video inference time and peak GPU memory usage on a single NVIDIA A6000 GPU, alongside captioning performance (CIDEr) on the MSR-VTT test set. Our key findings are summarized below:
* Decap, which employs a lightweight CLIP backbone, offers faster inference and lower GPU memory usage but achieves a relatively low CIDEr score.
* $C^3$ demonstrates fast inference and strong performance, though its high accuracy may partially stem from its reference to the annotations of the target dataset.
* SGVC consistently outperforms LLM-based summarization approaches across all metrics, regardless of the underlying backbones. Furthermore, our scene graph merging algorithm, which currently runs on the CPU, could be further accelerated by GPU implementation.
* Video ChatCaptioner, VidIL, and [Tewel et al.] are all slower and less accurate. While Video ChatCaptioner and VidIL report lower GPU usage, they introduce additional latency (1+ seconds per API call), which significantly impacts overall efficiency.
Table G. Computational costs of the compared methods on the MSR-VTT test set. $\alpha$ indicates the latency associated with GPT API calls, which typically take approximately 1+ seconds in our environment.
|Method|VLM backbone|Total Params. (B)|GPU usage (GB)|Inference time (s)|CIDEr|Using ref.|Using GPT API|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Decap|CLIP|0.22|2.87|0.17|18.6|-|-|
|C^3|ImageBind|1.33|5.28|0.23|27.8|$\checkmark$|-|
|LLM summarization|BLIP|7.50|14.50|1.27|10.8|-|-|
|**SGVC**|BLIP|0.74|5.07|1.14|24.6|-|-|
|LLM summarization|BLIP2|11.00|28.20|1.51|15.4|-|-|
|**SGVC**|BLIP2|4.24|18.40|1.37|26.9|-|-|
|Video ChatCaptioner|BLIP2|3.75|14.53|(0.10+$\alpha$) $\times$ 30|16.5|-|$\checkmark$|
|VidIL|BLIP+CLIP|0.67|3.57|0.20+$\alpha$|19.4|$\checkmark$|$\checkmark$|
|Tewel et al.|CLIP|0.51|2.40|83.30|11.3|-|-|
**9. Comparisons with VideoPrism and VideoCoCa**
Although VideoPrism and VideoCoCa are relevant, a fair comparison with our approach (SGVC) is difficult due to differences in algorithm design and evaluation protocols.
In addition to the discussion in #2 of our previous response, VideoPrism is specifically designed and trained for short video clips. As such, holistic encoding of an entire long video is not well-supported by the model. To apply VideoPrism to long video captioning, an additional module would be required to overcome this limitation—yet no such extension is proposed in the original paper. Furthermore, VideoPrism does not report performance on the ActivityNet Captions benchmark. While results on YouCookII are provided, they are based on individual segments rather than full videos.
VideoCoCa adopts a pipeline more similar to ours, utilizing attentional pooling to consolidate features from multiple frames. While VideoCoCa slightly outperforms SGVC on MSR-VTT (as shown in Table A of our rebuttal to Reviewer JfkZ), SGVC demonstrates stronger performance on the long-video dataset, although we admit that the comparison is not entirely fair. ActivityNet Captions, as presented in Table B of our rebuttal to Reviewer JfkZ. The experimental protocol used for YouCookII is identical to that of VideoPrism, which means VideoCoCa has not been properly evaluated on this dataset.
We are eager to conduct comprehensive comparisons with both VideoPrism and VideoCoCa using reproduced results. Unfortunately, this is not feasible, as the authors have not released their models or pretrained weights. As such, direct comparisons based on the numbers reported in their papers may be misleading. We kindly ask reviewers to take this context into account when evaluating our work.
**10. Further ablation study on LLM summarization**
We further validated SGVC by comparing it to LLM summarization with a stronger LLM, GPT-4o-mini. SGVC generally outperforms GPT-4o-mini across all VLM backbones, confirming the advantage of graph-based consolidation.
Table H. Ablation study of LLM summarization on ActivityNet Captions val set. LLM (M), LLM (G) denotes LLM summarization using Mistral-7B and GPT-4o-mini, respectively. "*" denotes the same backbone VLM as the row above.
|Method|Backbone VLM|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM (M)|BLIP|4.0|10.1|8.7|0.304|0.277|0.290|
|LLM (G)|*|4.6|10.3|10.4|0.324|0.285|0.303|
|SGVC|*|**7.1**|**11.9**|**17.5**|**0.366**|**0.288**|**0.322**|
|LLM (M)|BLIP2|4.8|11.0|10.5|0.319|0.300|0.309|
|LLM (G)|*|5.0|10.5|12.8|0.343|0.300|0.320|
|SGVC|*|**7.9**|**12.9**|**22.6**|**0.369**|**0.310**|**0.337**|
|LLM (M)|LLAVA-Next-7B|4.7|11.0|10.3|0.297|0.303|0.300|
|LLM (G)|*|4.7|10.5|12.9|0.311|0.299|0.305|
|SGVC|*|**7.4**|**12.5**|**22.0**|**0.342**|**0.308**|**0.324**|
|LLM (M)|InternVL2.5-1B|5.4|11.5|13.0|0.324|0.320|0.322|
|LLM (G)|*|5.8|11.4|16.0|0.330|**0.328**|0.329|
|SGVC|*|**8.4**|**13.4**|**25.6**|**0.348**|0.326|**0.337**| | Summary: This paper proposes a zero-shot video captioning method utilizing frame-level scene graphs obtained from image-based Visual-Language Models (VLMs). The authors consolidate these frame-level graphs into a unified video-level representation using Hungarian matching, followed by generating video-level captions through a lightweight graph-to-text decoder trained only on text corpora. Experimental evaluations on standard benchmarks (MSR-VTT, ActivityNet Captions) demonstrate superior performance compared to existing zero-shot methods, approaching supervised results.
Claims And Evidence: Most claims are supported with clear empirical evidence, such as thorough experimental results using multiple established metrics (BLEU, METEOR, CIDEr, BERTScore). However, the claim regarding inference-time efficiency is not sufficiently supported by explicit timing comparisons with baseline methods. The claim that the approach reduces hallucinations is qualitative rather than quantitatively supported. Quantitative evaluation would strengthen this claim.
Methods And Evaluation Criteria: The proposed methods are well-reasoned and effectively address the zero-shot captioning challenge. The benchmarks and evaluation metrics are also appropriate.
Theoretical Claims: The paper makes no explicit theoretical claims. It is primarily empirical and methodological. The proposed methods are practical, with clear descriptions and validation through empirical studies.
Experimental Designs Or Analyses: Almost of the experimental designs make sound. However, it is difficult to visually recognize the results of Table 1 with and without reference captions because they are directly attached to the SGVC.
Supplementary Material: I reviewed the supplementary material, including visual diagrams and additional qualitative examples. The materials clarified the method and demonstrated practical caption quality improvements clearly.
Relation To Broader Scientific Literature: The paper situates its contributions within the current literature well, citing recent works on zero-shot video captioning and Dense Video Captioning (DVC) frameworks. Notably, it introduces a novel framework that leverages scene graphs, a method not commonly used in previous zero-shot video captioning research.
Essential References Not Discussed: The key contribution of the paper is the generation and consolidation of Vision-Language Model (VLM) captions to enhance zero-shot video captioning. The proposed approach effectively addresses the problem with a well-structured process. There are no critical missing references.
Other Strengths And Weaknesses: __Strength__
S1. Novelty in applying scene graph consolidation for zero-shot video captioning.
S2. Clear methodological explanations supported by strong experimental validation.
S3. Effectively addresses practical limitations inherent in zero-shot captioning scenarios.
__Weakness__
W1. Lack of evaluation regarding inference-time cost.
W2. Missing ablation studies examining the impact of different Visual-Language Models.
W3. Can the authors provide the case of captioning failure? (e.g. due to VLM hallucination)Suggestion 1. Can you provide the case of captioning failure? (e.g. due to VLM hallucination)
W4. How sensitive is the performance of your method to varying the threshold (τ) during graph merging?
W5. What are the criteria for selecting frames in an ablation study for the number of frames in Table 3?
Other Comments Or Suggestions: N/A
Questions For Authors: See the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Inference cost**
SGVC is efficient in terms of latency, comparable to Decap and C3. However, direct comparisons are challenging as each method utilizes different backbone models for image captioning (SGVC: BLIP, BLIP2, LLAVA-NEXT-7B; Decap: CLIP; C3: ImageBind). In contrast, Video ChatCaptioner and [Tewel et al.] are significantly slower, by at least an order of magnitude, due to the former requiring multiple evaluation rounds and the latter involving test-time optimization.
For the performance of SGVC with BLIP and BLIP2, please refer to our response for Tables A and B in #2. These models actually outperform our implementation with LLAVA-NEXT-7B.
**2. Different VLM captioning models**
We evaluate our approach using various VLM backbones, including image-based models (BLIP, BLIP-2, LLaVA-NEXT-7B) and a video-based model (InternVL2.5-1B [1]), on the MSR-VTT (Table A) and ActivityNet Captions (Table B) datasets. The results consistently demonstrate that SGVC outperforms its LLM summarization counterpart, regardless of the chosen VLM backbone. Note that the SGVC results with LLaVA-NEXT-7B in Table B differ slightly from those in the original manuscript, as we extended fine-tuning on Visual Genome paragraphs from 500 to 3200 iterations.
Table A. Ablation of VLM backbones on MSR-VTT test set.
|Method|Backbone VLM|Using ref.|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|VidIL|BLIP+CLIP|$\checkmark$|13.3|20.3|19.4|0.452|**0.553**|0.486|
|LLM summarization|BLIP||9.6|21.6|10.8|0.313|0.516|0.390|
|SGVC|BLIP||**17.9**|**22.4**|**24.6**|**0.485**|0.536|**0.509**|
|Video ChatCaptioner|BLIP2||13.2|22.0|16.5|0.396|0.510|0.436|
|LLM summarization|BLIP2||11.5|**23.1**|15.4|0.308|0.528|0.389|
|SGVC|BLIP2||**18.6**|23.0|**26.9**|**0.475**|**0.540**|**0.505**|
|LLM summarization|LLAVA-Next-7B||15.3|**23.8**|19.5|0.338|0.535|0.414|
|SGVC|LLAVA-Next-7B||**17.1**|23.0|**24.0**|**0.455**|**0.547**|**0.497**|
|LLM summarization|InternVL2.5-1B||15.1|**23.6**|19.6|0.339|0.532|0.414|
|SGVC|InternVL2.5-1B||**17.7**|23.5|**26.7**|**0.481**|**0.550**|**0.513**|
Table B. Ablation of VLM backbones on ActivityNet Captions val set.
|Method|Backbone VLM|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM summarization|LLAVA-Next-7B|4.7|11.0|10.3|0.297|0.303|0.300|
|SGVC|LLAVA-Next-7B|**7.4**|**12.5**|**22.0**|**0.342**|**0.308**|**0.324**|
|LLM summarization|BLIP|4.0|10.1|8.7|0.304|0.277|0.290|
|SGVC|BLIP|**7.1**|**11.9**|**17.5**|**0.366**|**0.288**|**0.322**|
|LLM summarization|BLIP2|4.8|11.0|10.5|0.319|0.300|0.309|
|SGVC|BLIP2|**7.9**|**12.9**|**22.6**|**0.369**|**0.310**|**0.337**|
|LLM summarization|InternVL2.5-1B|5.4|11.5|13.0|0.324|0.320|0.322|
|SGVC|InternVL2.5-1B|**8.4**|**13.4**|**25.6**|**0.348**|**0.326**|**0.337**|
**3. Failure cases**
We provide two failure cases given by hallucinations in the initial image-based caption:
Example 1.
* Reference captions: ["A group of people dressed in all of the colors of the rainbow sing a happy song.", "Two elderly women dancing with a group of men.", …]
* SGVC output: "Two guys in multi-colored tops dance in front of a wall."
While the caption accurately captures specific visual details such as "multi-colored tops", "wall", and "dance", the VLM hallucinates the number of individuals (reporting "two guys" instead of the actual group of five people).
Example 2.
* Reference captions: ["A man fixes a piece of machinery that appears to be a miniature tank.", "A guy fixing his camera equipment.", … ]
* SGVC output: "A man is holding a drill in his hand while working on machinery."
The object in the person's hand is a camera, but the initial frame-level captioner incorrectly identified it as a "drill", influenced by the surrounding context. This hallucinated detail was propagated to the final consolidated caption.
**4. Impact of threshold $\tau$ in graph merging**
We conducted an ablation study by varying the cosine similarity threshold, $\tau$, as shown in Table C. The results demonstrate stable performance within the range $\tau \in [0.80, 0.95]$.
Table C. Ablation study on the threshold $\tau$.
|$\tau$|B4|METEOR|CIDEr|$P_\text{BERT}$|$R_\text{BERT}$|$F_\text{BERT}$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|0.95|17.1|23.0|24.0|0.455|0.547|0.497|
|0.90|17.3|22.5|24.5|0.463|0.542|0.499|
|0.85|17.2|22.8|23.8|0.455|0.545|0.496|
|0.80|16.9|22.8|23.0|0.445|0.546|0.490|
**5. Frame selection criteria**
In our experiment for Table 3, we uniformly sampled frames to ensure coverage of the entire video content.
**6. Quantitative evaluation of hallucination reduction**
Thank you for your suggestion. However, conducting a quantitative evaluation within a short time frame is challenging due to the lack of necessary annotations and evaluation metrics.
[1] Chen, Z. et al. Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling. In arXiv 2024. | Summary: This paper introduces a novel zero-shot video captioning approach that leverages scene graphs to bridge image and video understanding without requiring paired video-text data. The four-step process involves: generating frame-level captions using an image VLM, converting these into scene graphs, consolidating the graphs into a unified video-level representation through a novel merging algorithm, and finally generating captions using a lightweight graph-to-text model trained only on text corpora.Evaluated on MSR-VTT and ActivityNet Captions datasets, the method outperforms existing zero-shot baselines while reducing computational costs.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, the proposed methods make sense to me.
Theoretical Claims: This paper doesn't contain formal mathematical proofs or theoretical claims requiring verification.
Experimental Designs Or Analyses: The experimental design is sound overall, with appropriate datasets, baselines, metrics.
Supplementary Material: Yes, Section A, B and C.
Relation To Broader Scientific Literature: The use of scene graphs as structured intermediate representations connects to prior work on scene graph generation and reasoning in the image domain. The graph consolidation approach extends ideas from temporal video understanding while addressing the data scarcity problem in video captioning through a zero-shot framework.
The work also relates to recent efforts bridging image and video domains using large language models and creating more structured representations for video understanding beyond simple frame averaging.
Essential References Not Discussed: Some important methods are not discussed, such as [1, 2, 3], and some important benchmarks were not used for evaluation, such as [4].
[1] Huang, B., Wang, X., Chen, H., Song, Z., & Zhu, W. (2024). Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14271-14280).
[2] Tang, Y., Shimada, D., Bi, J., Feng, M., Hua, H., & Xu, C. (2024). Empowering LLMs with Pseudo-Untrimmed Videos for Audio-Visual Temporal Understanding. arXiv preprint arXiv:2403.16276.
[3] Kim, M., Kim, H. B., Moon, J., Choi, J., & Kim, S. T. (2024). HiCM2^2
2: Hierarchical Compact Memory Modeling for Dense Video Captioning. arXiv preprint arXiv:2412.14585.
[4] Zhou, L., Xu, C., & Corso, J. (2018, April). Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI conference on artificial intelligence (Vol. 32, No. 1).
Please also see the "Other Strengths And Weaknesses" part.
Other Strengths And Weaknesses: Strengths:
S1: The approach cleverly addresses the scarcity of video-text paired data by leveraging existing image VLMs and text-only training.
S2: The scene graph consolidation represents an elegant solution to maintain consistency in object identity across frames, addressing a common issue in naive frame-based approaches.
Weaknesses:
W1: The approach relies on the quality of the initial frame-level captions and scene graph parsing, creating potential error propagation throughout the pipeline.
W2: The performance of 'LLM summarization' on ANet Caps shown in Table 2 seems to be too low. Some LLM-based methods mentioned in "Essential References Not Discussed" part can achieve better performance than that.
Other Comments Or Suggestions: N/A
Questions For Authors: Q1: How robust is the scene graph consolidation algorithm to errors in the initial scene graph parsing? Since the method relies on accurate object detection and relationship parsing, I'm curious whether you've analyzed how errors in this initial stage propagate through the pipeline.
Q2: The ActivityNet videos are relatively short compared to many real-world video understanding scenarios. Do you consider evaluating your method on benchmarks with longer videos, such as the [4] mentioned in "Essential References Not Discussed" part?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Reliance on the quality of frame-level captions and vulnerability to cumulative parsing errors in scene graphs**
* Error propagation is common in other algorithms:
The primary objective of this work is to understand video content and generate fine-grained, video-level captions without encoding entire videos at once. This approach is especially effective for handling long videos, which require division into multiple segments or frames. However, consolidating information from these segments inevitably leads to cumulative errors—a challenge common to various algorithms despite their efforts to minimize it.
* Error quantification is difficult:
Due to the zero-shot nature of our method, directly quantifying the cumulative error from abstracting segment-level (or frame-level) visual information into textual captions and extracting video-level representation based on a scene graph generation and consolidation is challenging. This is because existing methods employ fundamentally different algorithms, including variations in encoder and decoder choices for representation and captioning.
* An attempt to estimate the impact of cumulative parsing error:
One way to isolate the parsing error in scene graph construction is to compare performance between LLM summarization and our approach, SGVC. Note that LLM summarization follows the same pipeline as SGVC up to the segment-level (or frame-level) captioning stage but consolidates multiple captions directly using an LLM. As shown in Tables 1, 2, and 3 of the main paper, SGVC significantly outperforms LLM summarization, indicating that scene graph consolidation effectively integrates information without introducing substantial errors.
* Artifact from the generation of segment-level (frame-level) captions:
Analyzing the impact of segment-level (or frame-level) caption generation is more complex. However, converting video segments (or frames) into text is sufficiently effective for merging semantic information across multiple segments (or frames), facilitating more coherent video-level understanding. Note that the counterpart, maintaining visual representations for individual segments (or frames), is not interpretable and requires an additional module for consolidating the information from multiple segments (or frames).
**2. Low performance of 'LLM summarization’ in Table 2**
We appreciate the suggestions for additional references [1, 2, 3] to compare on ActivityNet Captions. However, we clarify that these methods are extensively trained on large-scale video-text datasets, including ActivityNet Captions, whereas our approach operates in a zero-shot setting without using target dataset annotations during training. Therefore, direct comparisons between [1, 2, 3] and SGVC are not meaningful.
**3. Evaluation on YouCookII**
We agree that evaluating on a dataset with longer videos, such as YouCookII [4], is meaningful. We are currently conducting experiments on YouCookII and expect that the benefits of SGVC will be particularly evident in datasets with long videos, similar to ActivityNet Captions. However, due to time constraints, we were unable to include these results in our initial response.
[1] Huang, B. et al. VTimeLLM: Empower LLM to Grasp Video Moments. In CVPR 2024.
[2] Tang, Y. et al. Empowering LLMs with Pseudo-Untrimmed Videos for Audio-Visual Temporal Understanding. In AAAI 2025.
[3] Kim, M. et al. HiCM2^2 2: Hierarchical Compact Memory Modeling for Dense Video Captioning. In AAAI 2025.
[4] Zhou, L. et al. Towards Automatic Learning of Procedures From Web Instructional Videos. In AAAI 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough rebuttal. I find your explanation regarding error propagation convincing, and I accept your clarification about comparisons with references [1,2,3] given your zero-shot approach.
However, I still believe evaluation on YouCookII with longer videos is necessary to fully demonstrate your method's capabilities. If possible, I recommend adding these experiments to the final version.
I'd like to maintain weak accept recommendation. | Summary: The authors propose a zero-shot video captioning approach that combines frame-level scene graphs from a video to obtain intermediate representations for caption generation. This method generates frame-level captions using an image VLM, converts them into scene graphs, and consolidates these graphs to produce comprehensive video-level descriptions. To achieve this, the authors leverage a lightweight graph-to-text model trained solely on text corpora, eliminating the need for video captioning annotations.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Need more comparisons with work [1].
[1] Nguyen, Trong-Thuan, Pha Nguyen, and Khoa Luu. "Hig: Hierarchical interlacement graph approach to scene graph generation in video understanding." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18384-18394. 2024.
Essential References Not Discussed: [1] Nguyen, Trong-Thuan, Pha Nguyen, and Khoa Luu. "Hig: Hierarchical interlacement graph approach to scene graph generation in video understanding." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18384-18394. 2024.
Other Strengths And Weaknesses: Strengths:
The proposed approach for fine-grained video captioning through scene graph consolidation demonstrates significant strengths, including effective utilization of image-based vision-language models (VLMs) adapted successfully for video domains without requiring task-specific annotations or computationally intensive training.
Weaknesses:
1. Potential reliance on the quality and accuracy of frame-level captions generated by image VLMs, vulnerability to cumulative parsing errors in scene graphs.
2. Limited ability to capture subtle temporal dynamics between frames due to the discrete nature of the frame-level graph merging process.
3. Paper was written in a rush.
Other Comments Or Suggestions: See Weaknesses.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **1. Reliance on the quality of frame-level captions and vulnerability to cumulative parsing errors in scene graphs**
* Error propagation is common in other algorithms:
The primary objective of this work is to understand video content and generate fine-grained, video-level captions without encoding entire videos at once. This approach is especially effective for handling long videos, which require division into multiple segments or frames. However, consolidating information from these segments inevitably leads to cumulative errors—a challenge common to various algorithms despite their efforts to minimize it.
* Error quantification is difficult:
Due to the zero-shot nature of our method, directly quantifying the cumulative error from abstracting segment-level (or frame-level) visual information into textual captions and extracting video-level representation based on a scene graph generation and consolidation is challenging. This is because existing methods employ fundamentally different algorithms, including variations in encoder and decoder choices for representation and captioning.
* An attempt to estimate the impact of cumulative parsing error:
One way to isolate the parsing error in scene graph construction is to compare performance between LLM summarization and our approach, SGVC. Note that LLM summarization follows the same pipeline as SGVC up to the segment-level (or frame-level) captioning stage but consolidates multiple captions directly using an LLM. As shown in Tables 1, 2, and 3 of the main paper, SGVC significantly outperforms LLM summarization, indicating that scene graph consolidation effectively integrates information without introducing substantial errors.
* Artifact from the generation of segment-level (frame-level) captions:
Analyzing the impact of segment-level (or frame-level) caption generation is more complex. However, converting video segments (or frames) into text is sufficiently effective for merging semantic information across multiple segments (or frames), facilitating more coherent video-level understanding. Note that the counterpart, maintaining visual representations for individual segments (or frames), is not interpretable and requires an additional module for consolidating the information from multiple segments (or frames).
**2. Limited ability to capture subtle temporal dynamics between frames**
As discussed in our response to #1, our core contribution lies in developing a pipeline for fine-grained video captioning without encoding entire videos at once, which involves scene graph generation and its consolidation. This allows for a variety of captioning models to be used; our approach is not restricted to frame-level captioners but can also incorporate video-level ones. Notably, the combination of a video-level encoder (InternVL2.5-1B [1]) and SGVC demonstrates promising performance, outperforming LLM summarization, although comparisons with external algorithms are not available at this time. For detailed results, please refer to our response to Reviewer 3YCD for #2.
**3. Additional reference**
We appreciate the reference to HIG [2] for video scene graph generation. This study offers a valuable technique for constructing more comprehensive scene graphs. However, our work is not a competitor to HIG; rather, it has significant potential to benefit from integrating HIG’s scene graph generation techniques to enhance performance. Additionally, HIG requires extensive training to learn a scene graph generation model, but since pretrained models are not publicly available, immediate integration into our approach may not be feasible.
[1] Chen, Z. et al. Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling. In arXiv, 2024.
[2] Nguyen, T. et al. HIG: Hierarchical interlacement graph approach to scene graph generation in video understanding. In CVPR 2024. | null | null | null | null | null | null |
Transformative or Conservative? Conservation laws for ResNets and Transformers | Accept (oral) | Summary: This paper studies gradient flow conservation laws in neural networks of many architectures including linear networks, ReLU networks, networks with attention layers, and networks with residual skip connections. Using ideas from Lie Algebra (specifically Frobenius' theorem), they are able to characterize the number of conservation laws and in some cases prove that their provided conservation laws are complete. For transformers such conservation laws take the form of functions of $Q Q^\top - K K^\top$ and $VV^\top - OO^\top$, similar to the conservation laws of weights of deep linear networks. They also characterize gradient flow conservation laws for subsets of the parameters such as blocks in a deep residual network. Lastly, they provide an error analysis for discrete time step GD and gradient flow, showing an error in the conservation law which accumulates over time on the order of $\text{learning rate}^2$.
Claims And Evidence: This paper is primarily theoretical and provides many proofs and references to the very relevant prior work of Marcotte et al 2023. The claim of stability of the conservation law at finite learning rate is verified in an experiment in Figure 1, where for very small learning rate, the conservation law holds well and for larger learning rates the error accumulates.
Methods And Evaluation Criteria: The paper is primarily theoretical so there is not much to check. Perhaps more numerical experiments verifying different conservation laws in the transformer could be interesting as these results appear to be novel contributions of this work.
Theoretical Claims: I checked and read Appendix A, B and C for the theoretical derivations. I also read some of Marcotte et al 2023 which contains some background material on this approach. Appendix B and C contains much of the technical machinery to prove properties and exhaustiveness of the conserved quantities. The result relies on the Frobenius theorem which is a fundamental tool in integrable systems theory. I also briefly skimmed Appendices E-I which derive the results for the attention layer which appeared correct.
Experimental Designs Or Analyses: The Figure 1 should state which conservation law is being tested. Some additional numerical tests, especially regarding the new conservation laws (attention, residual blocks, etc), could improve the paper.
Supplementary Material: I reviewed most of the Appendix sections but primarily focused on A-C. Once I understood their methods, I mostly skimmed the remaining sections.
Relation To Broader Scientific Literature: This paper provides a nice analytical toolkit to examine conservation laws in different neural network architectures. This is fundamental to deep learning theory since it aids our understanding of the implicit bias of gradient flows and the types of solutions networks converge to. While I was familiar with the two layer linear and ReLU conservation laws, I like that this approach is flexible enough to provide some insights into the residual networks and models with attention layers.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Another strength of the paper is its rigor and precision. However, the current writing could potentially be challenging to a machine learning theorist with less background in Lie Algebras and integrable systems. Some informal discussions after each result could make the paper more readable. In addition, it could improve the paper to test numerically some of the newer claims to demonstrate the power of this approach in the newer settings that go beyond Marcotte et al 2023.
Other Comments Or Suggestions: See questions
Questions For Authors: 1. The authors write that all conservation laws in the transformer case are functions of $QQ^\top - KK^\top$. Is this full matrix difference conserved or just specific nonlinear functions of it? I ask because in the deep linear case, the conservation laws take the form $ W_{\ell} W_{\ell}^\top - W_{\ell+1}^\top W_{\ell+1}$ for **all matrix entries**. Is there a method to extract which nonlinear functions of this difference are conserved?
2. What conserved quantity is being tracked in Figure 1 and the Appendix Figures?
3. Do the authors think that normalization layers (like layernorm) will make the number of conservation laws larger? If the gradient is independent of the scale / radial direction of the features, then it could be that the dynamics are confined to a lower dimensional subspace.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for his positive comments and his insightful feedback.
> **Q1d** "The Figure 1 should state which conservation law is being tested. Some additional numerical tests, especially regarding the new conservation laws (attention, residual blocks, etc), could improve the paper.” + “What conserved quantity is being tracked in Figure 1 and the Appendix Figures?”
Thank you for pointing out that point. The caption will be fixed. As detailed in section 5.2, the conserved quantity that is tracked is the sum of all conserved quantities of equation 14 associated to the first convolutive layer. In particular, Figure 1 focuses on some new conservation laws of our paper (ResNet architecture, thus skip connection and convolutive layers). Concerning numerical experiments regarding new conservation laws associated to attention layers, see our answer to your question **Q3d**.
> **Q2d** Some informal discussions after each result could make the paper more readable
Thank you for the suggestion that will be taken care of in the final version.
> **Q3d** In addition, it could improve the paper to test numerically some of the newer claims to demonstrate the power of this approach in the newer settings that go beyond Marcotte et al 2023.
Beside numerically testing our new theoretical founding in the case of a ResNet’s training (see our answer to your question **Q1d**) in our paper (cf Fig. 1), we propose to add this experiment https://anonymous.4open.science/r/Numerical_experiments-2730/Transformer_IMDb.ipynb that tracks the evolution of a conservation law during the training of a transformer. More precisely, we train a transformer on the sentiment analysis dataset (IMDb dataset) and we track the evolution of $| h(\theta_n) - h (\theta_0)|$ where $h (\theta) = \| \| Q_1 Q_1^\top - K_1 K_1^\top\| \|_p$ (ie the norm of Frobenius of the conservation law given by Corollary 3.10, where $Q_1$ and $K_1$ are the query and key matrices of the first head and from the first layer. Similarly to Figure 1, we obtain the same bound O (step * learning_rate^2). This new experiment also focuses on new conservation laws (attention block and skip connection).
> **Q4d** The authors write that all conservation laws in the transformer case are functions of $QQ^\top−KK^\top$. Is this full matrix difference conserved or just specific nonlinear functions of it? I ask because in the deep linear case, the conservation laws take the form $W_l W_l^\top−W_{l+1}^\top W_{l+1}$ for all matrix entries. Is there a method to extract which nonlinear functions of this difference are conserved?
Yes, the full matrix difference is conserved, and this implies that any linear or nonlinear function of these matrix entries is also conserved: when a function $h(\theta)$ is conserved, any function$ \Psi(h(\theta))$ is also conserved. This is exactly why we need to define a notion of independent functions (Definition 2.10), so that we remove all these functional redundancies (see also our Proposition 2.12). We will stress on that point on the final version of our paper.
> **Q5d** Do the authors think that normalization layers (like layernorm) will make the number of conservation laws larger? If the gradient is independent of the scale / radial direction of the features, then it could be that the dynamics are confined to a lower-dimensional subspace
Since these layers are not parameterized and only marginally change the diversity of possible inputs to the subsequent layers, we expect them to have no effect on the analysis and, therefore, on the number of conserved laws.
However, if you consider in other settings, another type of normalization that operates directly on the parameters, you should expect (as in the case of a cross-entropy layer) new conservation laws.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses. I will maintain my score. | Summary: The authors study conservation in more general networks than the previously studied ReLU and linear networks. In particular, they focus on convolutional ResNets and Transformers.
## Update after rebuttal
I appreciate the authors' detailed rebuttal, which has resolved my concerns. As a result, I have raised my score.
Claims And Evidence: Yes the claims that the paper is the first to analyze conservation in Transformers is true to the best of my knowledge and the proofs are rigorous of the proposed theorems.
Methods And Evaluation Criteria: Yes, the methods used in the proofs make sense for the gradient flow problem at hand. This work is mainly theoretical.
Theoretical Claims: The authors prove that conservation laws of gradient flows with weight decay are determined by their equivalent without weight decay in Theorem 2.1 and list this as a contribution. The emphasis on weight decay is unclear at this point in the introduction. The authors then discover new conservation laws for more modern architectures with the assumption that they have skip connections, i.e., shallow multi-channel convolution layers (Theorem 3.6), self-attention layers (Corollaries 3.9-3.10), cross-entropy classification layer (Prop 3.11) and MLP (Prop 3.2). Theorem 4.6 is the proof that these conservation laws match the laws of blocks in isolation. They also prove that the error bound scales as the step-size squared. The proofs seem correct to me.
Experimental Designs Or Analyses: The paper mainly consists of theoretical results. There are numerical results that demonstrate the the error scales numerically as the $O(step-size^2)$ in Figure 1. This numerical experiment studies ResNet-18 on CIFAR-10 dataset. It may be interesting to test different architectures on additional datasets as well so that the numerical results can be more thorough. I would be interested in seeing the CIFAR results on the Transformer architecture.
Supplementary Material: I quickly read through the supplementary material. The proofs are written clearly and seem correct but I did not the them that carefully.
Relation To Broader Scientific Literature: - The paper is well-connected within the gradient flows literature and relevant to extend the work there to more modern architectures including transformers. The authors in the introduction describe conservation laws as a "balancedness condition". I think it would be interesting to see how this relates to conservation laws in PDEs and scientific machine learning literature, where a conservation law in integral form states that the rate of change of total mass of the system is balanced by the change in fluxes in the domain, which seems to be a related condition (see ansen et al., "Learning Physical Models that Can Respect Conservation Laws", ICML, 2023 and Richter-Powell et al., "Neural Conservation Laws: A Divergence-Free Perspective", NeurIPS, 2022).
Essential References Not Discussed: - Other good ODE-based references to add include Ott et al., "ResNet After All: Neural ODEs and Their Numerical Solution", ICLR, 2021 and Krishnapriyan et al., "Learning continuous models for continuous physics", Nature Communications, 2022, which views a ResNet as a discrete Forward Euler discretization of a ODE and shows similar performance to the "continuous" NeuralODE. Onken et al., "Discretize-Optimize vs. Optimize-Discretize for Time-Series Regression and Continuous Normalizing Flows", 2020 shows that it may be preferable to first discretize and then optimize.
Other Strengths And Weaknesses: ## Weaknesses
- The authors mention a clear limitation in the conclusion that their work does not dynamics from Adam optimizer, which is the most commonly used optimizer for training Transformer and hence limits the practical aspects of the work. It would be interesting for the authors to discuss ideas of how to support Adam as future work.
### Minor
- Typo in "traing" on line 065 left column for training.
- \citep is also incorrect on line 062
- “overalapping” on line 102
- Increase x and yaxis label size, legend and titles in Figure 1
## Strengths
- Literature is well-summarized and reviewed.
- Section 2 provides a clear problem definition
- The authors are very clear on their contribution.
- Clear limitations and directions for future work are discussed in the conclusion.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. There is a clear connection to ODEs in the paper. Could the method also be connected to PDEs with a spatial component potentially in the image domain?
2. The problem definition defines the problem for tabular data with respect to classical regression or classification tasks. How could this method be extended to time series data?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments and insightful feedback.
> **Q1c** about structure theorem
We stated theorem 2.1 right at the beginning to lighten the notation in the rest of the paper. It is new to the best of our knowledge.
> **Q2c** CIFAR results on the Transformer architecture
In this experiment https://anonymous.4open.science/r/Numerical_experiments-2730/Transformer_IMDb.ipynb, we train a transformer and track the evolution of the conservation law given by Corollary 3.10, confirming the O (step * learning_rate^2) behaviour. See also our answer to **Q3d** of reviewer iC26 for more detail.
> **Q3c** how this relates to conservation laws in PDEs and scientific machine learning literature (...)
Thank you for these references! The common technical aspect to our work and these references is the link between symmetries and conservation laws. However, while we *uncover* conservation laws satisfied by *the parameters of a neural network during the flow associated to their training*, these references *design neural networks implementing functions that satisfy certain physics-driven/PDE-driven conservation laws.*
> **Q4c** in Essential References Not Discussed
Thank you for these references! The 'gradient flow' ODE (3) describes *parameter* dynamics during training, while the Neural ODEs governs the *input* transformation $x$ through an infinitely deep network, *not the parameter evolution $\theta$*. Connections to Neural ODEs demonstrate the broader utility of Lie algebraic techniques for universality proofs in ML, see e.g. arXiv:1908.07838, arXiv:2404.08289. We will add this discussion to our related work section.
> **Q5c** How to support Adam as future work
Analyzing the original Adam algorithm is challenging due to the presence of two momentum parameters. However, a simplified version can be readily analyzed using our framework, and we will update the manuscript accordingly. A more detailed analysis of Adam itself remains an interesting direction for future work.
The continuous limit of Adam yields an ODE where the gradient is replaced by its sign. $W(\theta)$ is spanned by signed gradients, it satisfies the assumptions of our results. In the simple case of a $2$-layer linear neural network we easily relate it with $\phi(\theta) = U V^\top$ and deduce in this code https://anonymous.4open.science/r/Numerical_experiments-2730/Adam.ipynb that $dim Lie (W(\theta)) = D$, so that there is no conservation law (except when $U$ and $V$ are scalars, in which case $|U|- |V|$ is conserved). For more detail, see our answer to **Q3b** of reviewer RSFD. We will mention this at the end of our paper.
> **Q6c** (...) connected to PDEs with a spatial component (...)?
This is a good question that relates closely to your earlier question **Q3c**. In response to **Q3c**, we mentioned that our analysis focuses on conservation with respect to the parameters $\theta$, rather than the input $x$ of the networks. However, we would like to point out that it is indeed possible to apply our approach to networks that include layers acting spatially on a signal or image $x$ (such as differential operators in the context of PDEs). In particular, one can design weight matrices in the network to operate directly on the spatial domain, typically corresponding to a discretization of a differential operator. This approach could provide a useful starting point for studying architectures such as PINNs or Neural Operators. After discretization, translation-invariant differential operators (e.g. Laplacians) become convolutions, which align naturally with our current methodology. That said, extending our results to infinite-dimensional operators remains an open and promising direction for future work. We will include these possibilities in the final version of the manuscript. If the reviewers have specific architectures or PDEs in mind, we would be very interested to hear more about them.
> **Q7c** (...) extended to time series data?
Transformers are indeed well-suited for time series data (as well as for NLP tasks involving next-token prediction). The only difference compared to the setup presented in our paper is the inclusion of masking to make the architecture causal. This modification does not affect our theoretical framework—the conservation laws remain unchanged. Additionally, the numerical code used to test the approximate conservation law under SGD can be applied in the causal setting simply by enabling the use_mask =True flag (cf https://anonymous.4open.science/r/Numerical_experiments-2730/Transformer_IMDb.ipynb). In practice, we observe that the predictions of our theory remain accurate even when accounting for discrete stepping in the algorithm. We will update the manuscript to reflect this. Extending our methods to other types of causal architectures, such as recurrent neural networks and Selective State Space Models, represents a promising direction for future work, which we will also mention.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for their detailed rebuttal. I will increase my score. | Summary: **Summary of Contributions:**
The paper makes significant contributions by **deriving and analyzing conservation laws for modern deep learning architectures, specifically convolutional ResNets and Transformer networks**. It extends the understanding of conservation laws, which were previously mostly studied in shallow ReLU and linear networks, to these more practical architectures. The authors first show the conservation laws for basic building blocks and then analyze deeper networks, introducing the notion of conservation laws that depend only on a subset of parameters (e.g., a layer, a residual block, or an attention layer). A key finding is that **residual blocks have the same conservation laws as the block without the skip connection**. The paper also demonstrates that these conservation laws, initially established in the continuous gradient flow regime, are approximately preserved under discrete Stochastic Gradient Descent (SGD) dynamics.
* **Literature:**
* The paper **appropriately cites relevant works** in the field of conservation laws in neural networks and the theory of deep learning. It clearly positions its contributions in relation to existing results, such as the completeness of conservation laws for shallow ReLU and linear networks and the analysis of implicit biases in ResNets and Transformers. The references are comprehensive and include both foundational works and recent advancements.
**3. Pros and Cons:**
* **Pros:**
* **Novel and significant theoretical contributions** extending the understanding of conservation laws to modern deep architectures (ResNets and Transformers).
* **Rigorous mathematical framework** with formal definitions, theorems, and detailed proofs in the appendices.
* Introduction of the concept of **conservation laws depending on a subset of parameters** ("block" conservation laws).
* Important findings such as **residual blocks having the same conservation laws as their non-residual counterparts**.
* Theoretical analysis showing that conservation laws are **approximately preserved under SGD dynamics** with an error bound.
* **Empirical validation** of the approximate conservation under SGD through numerical experiments on ResNet-18.
* Completeness results for conservation laws in several basic building blocks with skip connections.
**Overall Assessment:**
This paper presents a **highly valuable and significant contribution** to the theoretical understanding of deep learning by extending the concept of conservation laws to modern neural network architectures. The rigorous mathematical derivations and the analysis of the behavior under SGD provide important insights into the training dynamics of ResNets and Transformers. While some limitations exist in terms of the architectures and optimizers covered, the novelty and soundness of the presented work are strong.
Claims And Evidence: The paper "Transformative or Conservative?" claims to support its contributions with clear evidence through **mathematical derivations and proofs** presented within the main text and in the appendices. Several key findings are explicitly linked to theorems, propositions, and corollaries, with their proofs detailed in the appendices. For instance:
* The structure theorem (Theorem 2.1) demonstrating the relationship between conservation laws with and without weight decay has its proof in Appendix A.
* The conservation laws for shallow multi-channel convolution layers are presented in Theorem 3.6, with a more general proof in Appendix E.
* The characterization of conservation laws for self-attention layers is given in Corollary 3.9 and 3.10, with proofs in Appendix G.
* The existence of independent conservation laws for the cross-entropy classification layer is shown in Proposition 3.11, with a proof in Appendix H.
* The proposition that residual blocks have the same conservation laws as their non-residual counterparts (Proposition 3.2) includes a proof in the main text.
* Theorem 4.6, stating that conservation laws depending only on a block's parameters match those of the isolated shallow block, has its proof in Appendix M.
* The absence of conservation laws for overlapping blocks in ResNets (Theorem 4.7) is proven in Appendix O.
* The approximate preservation of conservation laws under SGD (Proposition 5.1) is supported by a proof in Appendix P.
Furthermore, the paper includes **numerical experiments** in Section 5.2 and Figure 1 that empirically evaluate the behavior of a conserved function during ResNet training with SGD. The results are presented as confirming the approximate conservation and aligning with theoretical expectations derived from Proposition 5.1.
Methods And Evaluation Criteria: Here's a breakdown:
* **Proposed Methods:** The core method employed in the paper is **mathematical derivation and analysis** of conservation laws. This is a fundamental approach for studying invariant quantities in dynamical systems, which aligns directly with the definition of conservation laws in the context of gradient flow training dynamics. By first establishing conservation laws for basic building blocks (like shallow networks and attention layers) and then extending the analysis to deeper architectures and considering subsets of parameters, the paper adopts a **structured and analytical approach** to a complex problem.
* **Evaluation Criteria:** The evaluation strategy consists of two main components:
* **Mathematical Proofs:** The paper claims to support its theoretical contributions with **rigorous mathematical proofs** provided in the main text and, more extensively, in the appendices [the previous response]. Deriving these proofs is the primary way to validate the existence and form of the identified conservation laws. The presentation of findings as theorems, propositions, and corollaries indicates a commitment to mathematical rigor [the previous response].
* **Empirical Validation:** To examine the persistence of conservation laws under practical training scenarios with discrete dynamics like Stochastic Gradient Descent (SGD), the paper includes **numerical experiments**. These experiments track the evolution of a specific conserved quantity during the training of a ResNet-18 on the **CIFAR-10 dataset**. CIFAR-10 is a standard benchmark dataset in computer vision, making it a relevant choice for evaluating the behavior of ResNet architectures, which have been highly successful in this domain. The paper analyzes the conservation error in relation to the learning rate, comparing it with theoretical bounds derived for SGD. This empirical approach helps to bridge the gap between the theoretical continuous gradient flow regime and real-world discrete optimization.
In summary:
* The use of **mathematical derivations** is a natural and appropriate method for investigating conservation laws, which are inherently mathematical properties of the training dynamics.
* The **evaluation based on mathematical proofs** is the cornerstone of validating the theoretical claims made in the paper [the previous response].
* The inclusion of **numerical experiments on a relevant benchmark dataset (CIFAR-10) with a widely used architecture (ResNet-18) and optimizer (SGD)** provides empirical support for the theoretical findings, particularly concerning the behavior of conservation laws in discrete optimization settings. Tracking a theoretically conserved quantity and comparing its empirical behavior with theoretical predictions is a sensible evaluation criterion.
Theoretical Claims: I haven't checked all the proofs of the paper, but some of them were inspired by previous literature.
Experimental Designs Or Analyses: Yes Experimental design is well versed.
Supplementary Material: No
Relation To Broader Scientific Literature: * **Extending the Study of Conservation Laws to Modern Architectures:** The paper explicitly states that while conservation laws in gradient flow training dynamics are relatively well-understood for shallow ReLU and linear networks, their study has been **"largely unexplored for more practical architectures"** like convolutional ResNets and Transformer networks. This directly addresses a gap in the existing literature. Prior works like **Saxe et al. (2013)**, **Du et al. (2018)**, and **Arora et al. (2019)** established conservation laws primarily for these simpler architectures. This paper aims to bridge this gap by focusing on more complex, state-of-the-art models.
* **Building Upon Completeness Results for Shallow Networks:** The paper mentions that **(Marcotte et al., 2023) demonstrated the "completeness" of conservation laws for shallow ReLU and linear networks** under Euclidean gradient flows, meaning no additional conservation laws exist in those cases. The current paper extends this line of inquiry to more complex architectures, not just characterizing laws but also considering their completeness for certain building blocks.
* **Connecting to Work on Conservation Laws with Different Optimizers:** The paper cites **(Marcotte et al., 2024)** for unveiling novel conservation laws under alternative optimization algorithms like non-Euclidean gradient flows and momentum-based dynamics, highlighting their different characteristics compared to simple gradient flows. While the current paper primarily focuses on Euclidean gradient flows, it acknowledges this related work, suggesting a broader interest in the role of the optimization algorithm in determining conservation laws.
* **Generalizing Prior Findings on Convolutional Networks:** The paper notes that **(Du et al., 2018) identified conservation laws for feed-forward networks with single-channel convolutions**. Theorem 3.6 in the current paper **generalizes these findings to multi-channel convolutional ReLU networks**, demonstrating that similar types of functions are preserved in this more complex setting.
* **Investigating the Impact of Residual Connections:** Given the fundamental role of skip connections in ResNets **(He et al., 2016)**, the paper specifically examines their effect on conservation laws. Proposition 3.2, which shows that residual blocks have the same conservation laws as their non-residual counterparts, is a novel finding that directly relates to the architectural innovation of ResNets. This also contrasts with work like **(Marion et al., 2023)** which explores the implicit bias of ResNets towards Neural ODEs but doesn't focus on conservation laws in the same way.
* **Analyzing Conservation Laws in Transformer Architectures:** Transformers **(Vaswani, 2017)** have become dominant in various domains. The paper's derivation and analysis of conservation laws for **single attention layers (Corollary 3.9) and partially for multi-head attention (Corollary 3.10)** contribute significantly to understanding the training dynamics of these complex models. This builds upon work like **(Vasudeva et al., 2024)** which focuses on the implicit bias and fast convergence of self-attention but through a different lens (convergence to SVM).
* **Relating Conservation Laws to Network Invariances:** Remark 3.1 and Appendix J.1 discuss the intrinsic connection between the identified conservation laws and network invariances. This links the paper's findings to a broader theme in machine learning where symmetries and invariances of the model and loss function play a crucial role in the learning process.
* **Examining the Persistence of Conservation Laws Under Discrete Optimization:** Section 5 investigates whether the conservation laws derived in the continuous gradient flow regime persist under **Stochastic Gradient Descent (SGD)**, a widely used discrete optimization method. Proposition 5.1 and the numerical experiments provide insights into the approximate nature of conservation in this practical setting, connecting the theoretical findings to real-world training scenarios. This relates to the vast literature on the behavior and convergence of SGD in deep learning **(e.g., Bach, 2024; Garrigos & Gower, 2023)**.
In summary, the paper's key contributions are deeply intertwined with the existing scientific literature by **extending established concepts of conservation laws to modern deep learning architectures**, building upon prior theoretical results for simpler models, and investigating the practical implications of these laws in the context of widely used optimization techniques and network design principles. The specificity of analyzing ResNets and Transformers, including the impact of residual connections and attention mechanisms, positions this work as a significant advancement in understanding the training dynamics of contemporary neural networks.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: * **Strengths**
1. Novel theoretical framework for analyzing conservation laws in ResNets and Transformers.
2. Rigorous mathematical derivations and completeness results for key building blocks.
3. Insightful findings regarding the impact of residual connections and the behavior under SGD.
4. Introduction of "block" conservation laws, offering a new perspective on analyzing deep networks.
Other Comments Or Suggestions: **Detailed Evaluation:**
* **Novelty, Relevance, and Significance:**
* **Novelty**: The paper is **highly novel** as it tackles a largely unexplored area of conservation laws in modern, deep architectures like ResNets and Transformers. The introduction of "block-wise" conservation laws and the analysis of their persistence under SGD are new contributions. The paper also provides new completeness results for several basic building blocks with skip connections.
* **Relevance**: Understanding the training dynamics of deep networks is a fundamental challenge in deep learning. Conservation laws offer valuable insights into the implicit bias of training algorithms and network architectures, as well as playing a role in theoretical convergence analyses. This research is therefore **highly relevant** to the deep learning research community. Furthermore, the potential application of these laws in designing new optimization schemes adds to the practical relevance.
* **Significance**: By extending the theory of conservation laws to ResNets and Transformers, the paper bridges an important gap in our theoretical understanding of these widely used architectures. The findings, such as the invariance of conservation laws with respect to skip connections and the approximate preservation under SGD, are **significant** for both theoretical analyses and potentially for guiding the development of new training techniques. The completeness results for basic building blocks provide a solid theoretical foundation.
* **Soundness:**
* The paper appears to be **theoretically sound**. It builds upon existing frameworks for analyzing conservation laws in gradient flows. The paper provides formal definitions and propositions, with many proofs detailed in the appendices. The structure theorem (Theorem 2.1) establishing the relationship between conservation laws with and without weight decay seems crucial. The characterization of conservation laws for various building blocks (shallow convolutional ReLU networks, attention layers, cross-entropy classification layers) is supported by theorems and corollaries.
* The extension to deeper networks through the analysis of "block" conservation laws (Proposition 4.3 and Theorem 4.6) provides a rigorous way to connect the conservation laws of individual components to the global network behavior. The negative result regarding conservation laws overlapping residual connections (Theorem 4.7) is also an interesting and potentially important finding.
* The analysis of the persistence of conservation laws under discrete SGD dynamics (Proposition 5.1) provides a theoretical justification for the empirical observations. The error bound derived scales with the square of the step size, which is supported by the numerical experiments.
* The assumptions made throughout the paper (e.g., Assumption 2.4, 2.6, A.8, E.1) are clearly stated, and their implications are discussed. The paper acknowledges limitations, such as not covering Adam optimizer or normalization layers.
* **Quality of Writing/Presentation:**
* The writing is generally **clear and well-structured**. The paper begins with a concise introduction and clearly outlines its contributions. The use of definitions, propositions, theorems, and corollaries provides a formal and logical flow.
* The organization of the paper, moving from basic building blocks to deeper networks and then to discrete dynamics, is logical. The appendices provide necessary details for the proofs, allowing the main body of the paper to remain focused on the key concepts and results.
* The inclusion of numerical experiments (Section 5.2 and Figure 1/4) helps to illustrate and validate the theoretical findings regarding the approximate conservation under SGD.
* Potential improvements could include:
* More intuitive explanations of the implications of the discovered conservation laws for the training process and the properties of the learned models.
* Further discussion on the intuition behind the absence of conservation laws overlapping residual connections.
* A more detailed explanation of the conditions under which the key assumptions (e.g., bounded Hessian, bounded gradients) hold in the context of deep networks.
Questions For Authors: 1. **Completeness of Conservation Laws for Key Architectures:** The paper establishes conservation laws for several building blocks, including single attention layers, and shows that residual blocks inherit the laws of their non-residual counterparts. However, the completeness of conservation laws for deeper ResNet and Transformer architectures, as a whole, is not fully addressed, and completeness for multi-head attention remains an open problem.
* **Question:** Could the authors elaborate on the challenges in proving the completeness (or demonstrating the existence of additional laws) for deeper ResNet and Transformer architectures? What are the key obstacles compared to the analysis of the individual building blocks?
2. **Practical Exploitation of Conservation Laws for Training:** The paper touches upon the potential for designing new optimization schemes that enforce or deviate from these conservation laws to potentially accelerate convergence. The numerical experiments demonstrate the approximate preservation under SGD.
* **Question:** Beyond observing their approximate preservation under SGD, do the authors have further insights or preliminary ideas on how the newly discovered conservation principles could be actively leveraged in practice? For example, could these laws inform initialization strategies, regularization techniques, or the design of more efficient optimizers for ResNets and Transformers?
3. **Addressing Limitations in Future Work:** The conclusion explicitly mentions several limitations, including the absence of analysis for Adam optimizer, normalization layers, and the full complexity of multi-head attention in Transformers.
* **Question:** Could the authors briefly outline their planned approach or initial thoughts on how they intend to address these limitations in future research? What are the main challenges they foresee in extending their analysis to these more complex scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for his constructive feedback and positive comments.
> **Q1b** challenges in proving the completeness of laws beyond blockwise laws
From a theoretical perspective, the Lie algebra for deeper cases becomes infinite dimensional, which makes the analysis significantly more involved than in block setting. Numerical experiments briefly described in Section 4.4 (see also the supplementary material), however, suggest that combining the laws associated with each block yields a complete set of laws for the deeper case. We will emphasize that theoretical challenge at the end of section 4.
> **Q2b** leveraging newly discovered conservation principles in practice
A first key application of conservation laws is to understand how initialization influences optimization dynamics—for example, by deriving an implicit Riemannian metric parameterized by the initialization. The results of arXiv:1910.05505 (in the case of linear neural networks) show that initializing with certain values of the conserved function can lead to accelerated convergence. Another application lies in the so-called “Edge of Stability” analysis, which, in the linear case, explains how increasing the step size in gradient descent leads to minimizers with small values of the conservation law (“balanced condition”). Extending this analysis beyond the linear case, as initiated in arXiv:2502.20531, is an interesting direction for future research.
We will insist more on these practical applications in the introduction of our paper.
> **Q3b** Could the authors briefly outline their planned approach (...)
Regarding **normalization layers**, they can be treated as a new block and we expect much of the analysis from Appendix M to be adaptable. Then, the main challenge to fully address **multihead attention** is to prove the ``minimimality’’ of the associated reparameterization, which requires more geometric insight than the proof in Appendix F for the case of a single head. Finally, combining our findings on blockwise laws and the non-Euclidean setting of Marcotte et al. 2024 is likely to yield advances regarding **other optimizers**. Indeed Marcotte et al. 2024 can deal with mirror gradient flow or with natural gradient flow. Analyzing the original **Adam algorithm** is challenging due to the presence of two momentum parameters. However, a simplified version can be readily analyzed using our framework, and we will update the manuscript accordingly. A more detailed analysis of Adam itself remains an interesting direction for future work.
To make the analysis more tractable, we consider a limiting case of Adam where both momentum parameters tend to infinity. In this regime, the algorithm approaches sign gradient descent, and its continuous-time limit is given by the ODE:
$ \dot \theta (t) = - \text{sgn} (\nabla L_Z(\theta(t))) \ (1).$
In our framework, this corresponds to considering the linear space of signed gradients:
$
W(\theta) := \text{span} \\{ \text{sgn} (\nabla L_Z(\theta)) : Z \\}
$
for any parameter $\theta$. An interesting property of $W(\theta)$ is that, due to the quantization introduced by the sign function, $W(\theta)$ is locally constant around generic values of $\theta$. This leads to the identity $\text{Lie}(W)(\theta) = W(\theta)$. By Theorem 2.11, this implies that in a neighborhood of $\theta$, there are exactly
$
D - \text{dim} (\text{Lie}(W)(\theta)) = D - \text{dim} (W(\theta))
$
independent conservation laws. In particular, if $\text{dim} (W(\theta)) = D$, then no conservation law exists in that region.
In the simple case of a $2$-layer linear neural network $g((U, V), x) = U V^\top x$, we easily show (direct with Theorem 3.4) that $W(\theta) = \text{span} \\{ \text{sgn} ( \partial \phi (\theta)^\top w): w \\}$ where $\phi(\theta) = U V^\top$, which is easily computationaly tractable. In this code https://anonymous.4open.science/r/Numerical_experiments-2730/Adam.ipynb, we test it on different dimension settings and we obtain that, except in the case where $U$ and $V$ are both scalars, we have $\text{dim} W(\theta) = D$, so that there is no conservation law. In the special case where $U$ and $V$ are both scalars, we obtain that $\text{dim} W(\theta) = 1$, and thus there is only $2-1= 1$ conservation laws: indeed, $|U|- |V|$ is conserved.
While we only provide numerical explanations in the $2$-layer linear case, we expect that it holds for all other architectures. This indicates a radically different geometric behavior of the Adam optimizer. It is worth recalling that adding a momentum term also leads to a loss of the total number of conservation laws (in most settings (cf Marcotte et al. 2024), there is no conservation law at all when considering a momentum term). A better understanding of these phenomena is an interesting future direction. We will mention this discussion at the end of our paper. | Summary: This paper studies conservation laws for gradient flow on a variety of modern neural network architectures, including ResNets and Transformers. The paper provides a characterization of all possible conservation laws for both shallow and deep architectures. Moreover, the paper quantifies the extent to which SGD satisfies an approximate conservation law.
Claims And Evidence: For the most part, the main claims of the paper (a characterization of all possible conservation laws) are well supported.
On a more minor note, the abstract claims that "some deeper networks can [have] more conservation laws than intuitively expected." I do not see where this claim is proven in the paper.
Methods And Evaluation Criteria: Yes, the methods in this paper make sense.
Theoretical Claims: I skimmed the proof of the main claims, and they appear to be sound.
Experimental Designs Or Analyses: The experimental validation in Figure 1 appears to be sound.
Supplementary Material: I skimmed the proofs in the supplementary material.
Relation To Broader Scientific Literature: This paper builds on prior works establishing a "balancedness" condition for ReLU and linear neural networks (Du et al., 2018; Arora et al., 2019), along with previous work on conservation laws of gradient flow (Marcotte et al., 2023, 2024).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - One weakness of this paper is that the conservation law $h$ must hold for "any initialization and any dataset." In particular, $h$ cannot depend on the training dataset. This seems to be an overly strict condition. For example, in overparameterized linear regression (i.e L(w) = \|y - Xw\|^2), the quantity $P_{X^TX}^\perp w$, which does depend on the dataset, is conserved, and this is useful for characterizing the implicit bias of gradient descent.
- Next, while the paper builds significant technical machinery, the specific conservation laws for Transformers derived in Corollary 3.9 and 3.10 are already known in the matrix factorization literature; indeed, given any model with a parameter $W$ parameterized as $W = Q^TK$, it is well known that the quantity $QQ^T - KK^T$ will be conserved throughout the gradient flow trajectory. It is thus not clear to me what additional insights this paper provides.
- On the clarity front, I find Section 2 to be quite technically dense and hard to follow. The paper would improve with more exposition providing intuition for why each of these technical results are needed.
Other Comments Or Suggestions: Typos:
- abstract: “can more” → “can have more”
- abstract: “the introduction the” → “the introduction of the”
- line 102, left column: “that “overalapping” two residual blocks”
Questions For Authors: - Can the authors please comment on my concerns in the "Strengths and Weaknesses" section?
- Many of the results in Section 2 and 3 seem to be restated results from (Marcotte et al., 2023, 2024). It is thus not clear to a reader unfamiliar with these prior works what the additional contribution of this current paper is. Could the authors please clarify in which ways their paper generalizes the results of these prior works?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for his constructive feedback.
> **Q1a** the abstract claims (...)
Thank you for pointing this out, you’re right. Since this is not the focus of our paper as we don't have a detailed theoretical analysis of this phenomenon, we will remove this sentence from the abstract. Instead, we will add the following simple example at the end of section 4.4.
Example: Consider a ReLU neural network with two-residual blocks, $g((u, v, s, t), x) = x + u \sigma (v x) + s \sigma(t (x + u\sigma( v x))$ with $(u, v, s, t) \in \Omega \subseteq \mathbb{R}^4$ and $x \in \mathbb{R}$. While there are *two* "block" conservation laws: $u^2 - v^2$ and $s^2 - t^2$, we exhibit a domain $\Omega$ where there are *three* conservation laws. For this, consider $\Omega$ the set of all parameters such that $sgn(t) = sgn(v) = sgn(u).$ Then, for any $x$:
- either $vx, sx \leq 0$ and then $g(\theta, x) = x$ and thus $\nabla_{\theta} g = 0$;
- or $vx, sx \geq 0$ and then $g(\theta, x) = x + uv x + stx +stuv x$ as $vx, sx, tu \geq 0$, and then $ 1/x \nabla_{\theta} g=: \chi$ is a vector field that does not depend on $x$.
Thus the space $W_{\theta}^{g, l} = \mathbb{R} \chi(\theta)$ (cf Proposition 2.3) is made by only one non-null vector field. Its associated Lie algebra is thus itself, and by Theorem 2.11, there are exactly 3 conservation laws as claimed.
This phenomenon seems however restricted to the scalar case, as we empirically show in Section 4.4.
> **Q2a** h cannot depend on the dataset (...)
Thank you for this insightful comment. We view the dataset-independence condition as a deliberate design choice rather than a weakness, as it yields general insights on the implicit bias stemming solely from the architecture and optimization scheme.
While we agree that introducing dataset dependency could enrich the analysis, it also opens the Pandora’s box of how to constrain this dependency to ensure the problem remains nontrivial and the conservation law remains informative. For instance, allowing unrestricted dataset dependency would make the problem on the number of independent conservation laws trivial, as we now explain.
Indeed, the Lie algebra framework of Marcotte et al. can also be applied to vector fields associated with specific datasets (see Definition A.3). For a fixed dataset, the associated functional space (Definition A.4) becomes one-dimensional, and its generated Lie algebra is simply itself. By Frobenius' theorem, this yields exactly D-1 conserved functions which typically *exhibit non-trivial dataset dependencies*. Your example of overparameterized linear regression fits perfectly within this framework.
Thus, finding the right constraints to keep the problem well-posed and meaningful becomes challenging. It seems feasible to adapt the algorithm that builds polynomial conservation laws (section 2.5 of Marcotte et al. 2023) to settings with fixed datasets by identifying polynomial conservation laws that are *also polynomial in the given dataset parameters* using formal calculus.
We appreciate this suggestion and will add a discussion in the conclusion highlighting this as a promising direction for future research.
> **Q3a** (...) conservation laws for Transformers (...)
Indeed, this is a direct consequence of invariances, however, since we could not find any reference clearly stating this phenomenon for transformers, we believe it is worth highlighting. More importantly, our main contribution is, however, the nontrivial proof that *there is no other conservation law that only depends on the parameters of such attention blocks*. In particular, while finding the reparameterization $\phi(\theta)$ is relatively easy, proving its "minimality" (cf Appendix F) is much more involved. We will reword to insist on these facts.
> **Q4a** Many of the results in Section 2 and 3 seem to be restated results (...)
All results in section 3 are new except the one recalled in section 3.2 “ReLU and linear networks: known results”. In Section 2, novelties are notably Theorem 2.1, Corollary 2.8, Propositions 2.9 and 2.12. While Theorem 2.11 is not new, we provide a new simplified proof. We will clarify this.
> **Q5a** in which ways does their paper generalize the results of these prior works?
The only result that can be seen as a generalization of previous work (all other results are entirely new) is Theorem 3.6, as detailed in Remark 3.7. In particular, the function $f$ used to factorize the neural network $g_{conv}$ defined in equation (13) $g_{conv} = \psi \circ f$ is the same as for the 2-layer ReLU neural network $g$ defined in equation (12): one has $g = \phi \circ f$. Thus, to show Theorem 3.6, you can first directly use the result of Theorem 3.4 as it only depends on $f$ (see Lemma E.2). However, to complete the proof of Theorem 3.6, you also need to compute the generated Lie algebra associated to $\psi$, which is pretty involved and differs from Lie algebra computations associated to $\phi$. | null | null | null | null | null | null |
On The Concurrence of Layer-wise Preconditioning Methods and Provable Feature Learning | Accept (poster) | Summary: The author show theoretically how, in the settings of linear representation and single index learning with data with non-trivial covariance structures using a two-layer network, SGD suffers from fundamental statistical biases and limitations, and how the use of a simple Kronecker-factored preconditioning scheme allows to overcome those issues. More precisely, in the case of linear representation learning, they establish how the preconditioned dynamics are endowed with condition number-free guarantees, in contrast to SGD, which is suboptimal. For single-index learning, they study the spike developed by the first layer weights following a single gradient step. They show that a SGD step leads to a biased spike compared to the target weights, while a preconditioned step does not. They corroborate their findings in numerical experiments, and include a comparison with Adam and batch-normalization schemes, which are suboptimal in the considered settings.
Claims And Evidence: The theoretical claims of Theorem 3.9 are quantitatively verified in Fig. 2, where they are compared to numerical experiments, with convincing agreement. The theoretical insights on the suboptimality of SGD and benefits of KFAC are illustrated in numerical experiments in Figs. 1,3.
Methods And Evaluation Criteria: The paper is primarily theoretical in nature, and all experiments are conducted within the two stylized settings (linear representation task and single-index learning) considered in the theoretical parts.
Theoretical Claims: From my reading, the claims look reasonable and sound, but I did not check the proofs.
I have minor concerns and questions regarding Proposition 3.5 and Theorem 3.6. In Theorem 3.6, the notation $\kappa$ and $\sigma_{min}$ are, unless I am mistaken, not introduced. I assume they refer to the condition number and the $\lambda_{min}$. In this case, would the range of application of Theorem 3.6 not also suffer from the conditioning number, as an increasingly close initialization to the ground truth is required? If this is the case, the statement under Theorem 3.6, although still valid, would gain to be nuanced by discussing this fact.
As a minor question, is it possible to reach a similar high-probability local convergence for SGD? Even if impossible or unclear, I believe it would aid to grasp better the comparison with KFAC, as for now Proposition 3.5 is a worst-case result. On the other hand, I do believe the experiments of Figs. 1,5,9 compellingly show the superiority of KFAC. This is meant as a minor suggestion.
Experimental Designs Or Analyses: I did not check the experimental designs in detail, however, the exposition of the setup provided in the main text is sufficiently detailed and I do not see any issue.
(Minor question) What is the value of $\lambda_G$ in Figs. 1,5,8? Do the qualitative behaviors of the curve depend on this parameter ?
Supplementary Material: I did not review the supplementary material in detail.
Relation To Broader Scientific Literature: The alternating SGD scheme studied is related to algorithms appearing in (Collins et al., 2023) or (Zhang et al., 2024), but seems more general, although I do not have a large familiarity with this line of works.
The single-step analysis of Theorem 3.9 is a generalization of Proposition 2 and Theorem 3 of (Ba et al., 2022) to anisotropic covariates, which is to the best of my awareness a novel result.
Essential References Not Discussed: I did not identify any essential reference which the authors fail to discuss. However, I have limited familiarity with the literature on alternating or preconditioned descent methods, so it is possible I overlooked some works.
Other Strengths And Weaknesses: I am overall in favor of acceptance. The results contained in the paper are interesting, as they highlight the fundamental limitations of vanilla SGD and how preconditioning schemes naturally emerge as pathways to mitigate these issues. Due to my limited expertise, I am unable to assess confidently the novelty or significance of the results presented in section 3.1. I have left a few questions and comments in the above sections.
Other Comments Or Suggestions: I do not have particular comments or suggestions.
Questions For Authors: I do not have any particular question, beside those I listed in "Theoretical Claims" and "Experimental Design"
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We are glad that you found our results regarding the fundamental limitations of SGD and the natural emergence of preconditioning schemes as solutions to address these challenges interesting. Please find our detailed response to the comments and questions below.
- **Defining $\kappa, \sigma_{\min}$ in notation.**
We thank the reviewer for pointing this out. The parameters $\kappa$ and $\sigma_{\min}$ correspond to condition number and the smallest singular value respectively. We have added these to the revision.
- **Dependence of the locality the requirement on $\kappa, \sigma_{\min}$.**
The presence of $\kappa, \sigma_{\min}$ in the initialization requirement is rather subtle. Firstly, we note that locality requirements are distinct from the rate; an analogous bound for SGD in the isotropic setting will have both the initialization requirement *and* a rate that contains $\kappa(\mathbf F)$ [1, Thm 3.1-3.2], not to mention that SGD is non-convergent under anisotropy $\kappa(\Sigma_{\mathbf x}) \gg 1$ (cf. Figure 1). Secondly, similar requirements are ubiquitous in linear representation learning guarantees; however, it is generally believed that initialization is not required for this problem [2, Sec. 6], [3, Remark 3.2]. On the other hand, linear representation learning can be viewed as a low-rank matrix-sensing problem [1], and as far as we know there does not exist a general *global* convergence guarantee for that problem, let alone for the anisotropic sensing matrices our setting induces. We lastly remark that the initialization requirement does not appear in a stylized, noiseless setting $\sigma_{\varepsilon} = 0$, where we fix $\mathbf F_0$ at initialization and solely perform KFAC updates on $\mathbf G$. In particular, for each iteration $t$ we have
$$
\mathbf G_t = \mathbf R_0 \mathbf G_\star + \left(1-\eta_{\mathbf G}\right)^t \left(\mathbf G_0 - \mathbf R_0 \mathbf G_\star\right), \\;\\; \mathbf R_0 = (\mathbf F_0^\top \mathbf F_0)^{-1}\mathbf F_0 \mathbf F_\star.
$$
Thus, for $\eta_{\mathbf G}$ close to $1$, $\mathbf G_t$ converges linearly to $\mathbf R_0 \mathbf G_\star$, and for full-rank $\mathbf F_0$, $\mathrm{rowsp}(\mathbf R_0 \mathbf G_\star) = \mathrm{rowsp}(\mathbf G_\star)$.
- **High-probability bound for SGD and role of Proposition 3.5.**
We note that we identify two suboptimalities of SGD. Firstly, given anisotropic covariates, SGD can be non-convergent. However, even given isotropic covariates, SGD is unavoidably affected by the conditioning of the output layer $\mathbf F$. Convergence upper bounds analogous to Theorem 3.6 for SGD can be found in [1, 4]. The role of Proposition 3.5 is therefore to show that *even under extremely ideal conditions* such as isotropic inputs, SGD necessarily suffers when $\mathbf F_\star$ is ill-conditioned. This bound is indeed worst-case in that the adversary chooses $\mathbf G_0$, but should be interpreted as "SGD generally suffers from ill-conditioning", whereas KFAC's convergence rate is always condition-number-free. As noted after Proposition 3.5, an instance-specific SGD convergence rate *could* in principle be estimated and will almost surely involve a function of the spectrum of $\mathbf F_\star$, but is besides the point of our paper.
- **Role of ridge parameter $\lambda_{\mathbf G}$ in linear representation experiments.**
We have set $\lambda_{\mathbf G} \to 0$ in Figures 1, 5, and 8. We conduct additional experiments for the same experimental setup, but with varying degrees $\lambda_{\mathbf G}$ [(anon. link)](https://drive.google.com/file/d/1n6OmlzOpo0hRFp2YOwkHDun2bo63HjK3/view?usp=sharing). We observe that the regularization does not significantly alter the qualitative behaviors of the curves for a wide range of $\lambda_{\mathbf G}$.
[1] https://arxiv.org/abs/2102.10217
[2] https://arxiv.org/abs/2105.08306
[3] https://arxiv.org/abs/2308.04428
[4] https://arxiv.org/abs/2102.07078
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for answering my questions and addressing my concerns. I wish to stand my evaluation and maintain my score. | Summary: The authors aim to study the effectiveness of layer-wise preconditioning methods with per-axis preconditioning. They show that the Kronecker-Factored structure naturally occurs for the problem of linear representation learning and single index learning. This provides a potential explanation for why these methods can outperform their exact second-order counterpart.
## update after rebuttal
In the rebuttal, the authors state that they avoid making general claims about Kronecker-Factored preconditioning being better than element-wise/diagonal preconditioning for feature learning, or that factored curvature estimates always lead to faster convergence than the dense versions.
If that is the case, then I believe some parts of the writing can be improved to avoid giving out this impression. For example, in the introduction section, they first state that KFAC generally outperforms its ideal counterpart NGD and then state that they want to give explanation to the performance of Kronecker-Factored methods. This could give the impression that they want to claim evidence for KFAC better than not only SGD but also NGD.
In either case, this means the paper only try to show that Kronecker-Factored methods are better than SGD, which is solid work, but also much more expected so I keep more original score.
I also verified the authors claim about rowspan is correct. I was originally thinking about left-multiplication instead of right multiplication.
Claims And Evidence: It feels like some of the claims are only supported indirectly without being articulated clearly. Specifically, two of the more interesting claims,
1. Kronecker-Factored structured preconditioning is necessary and thus element-wise preconditioning like Adam is not enough,
2. There are theoretical justification for why Kronecker-Factored structured preconditioning can outperform their exact second-order counterpart,
are discussed very little in the paper. Most of the main paper is about how SGD can fail, which is more expected and less interesting to the readers.
Methods And Evaluation Criteria: The evaluation setting is quite simple so that it aligns with the analyzed theoretical setting. Since the purpose of the experiments is just to verify the theory, this seems reasonable.
Theoretical Claims: I do not find any issues with their theoretical claims.
Experimental Designs Or Analyses: I do not find any issues with their analyses.
Supplementary Material: I spent most of my attention on Appendix D.
Relation To Broader Scientific Literature: The paper provides potential explanations to the empirical success of Kronecker-Factored preconditioning methods. This might lead to the development of new optimization methods.
Essential References Not Discussed: I do not find any essential references that are missed by the authors.
Other Strengths And Weaknesses: The writing seems to be a weakness of the paper. Some of the claims seem not to be clearly articulated.
Other Comments Or Suggestions: I have no other suggestions.
Questions For Authors: 1. I am not sure if the equality about rowspan on line 302 is correct. It seems to me that right multiplication with a matrix will not change the rowspan. (The authors might be considering some noise term to the solution of G, which cannot be clearly seen from the equation)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. We are glad that you find that these results provide explanations for the empirical success of Kronecker-Factored preconditioning methods, and that it can potentially guide algorithm design. Below please find our response to the comments and questions.
- **Indirect support of certain claims.**
We would like to clarify that the main goal of our paper is to address key issues of relying on SGD in feature learning theory, and in doing so establish a feature-learning motivation for layer-wise preconditioned optimizers. We avoid making general claims about Kronecker-Factored preconditioning being better than element-wise/diagonal preconditioning for feature learning, or that factored curvature estimates always lead to faster convergence than the dense versions. Rather, specific observations of these phenomena in prior literature are what *motivate* our study of Kronecker-Factored preconditioning. We provide some additional information and context for each.
1. **KF preconditioning is necessary vs element-wise.**
Current literature notably lacks work showing provable feature learning capabilities of diagonal preconditioning methods like Adam, despite a huge amount of work studying SGD in this context. Anecdotally, this is because the entry-wise (non-linear) operations that Adam performs are somewhat unamenable to analyze given the linear-operator structure of weights in neural networks. Therefore, by straightforwardly deriving feature learning properties of a representative Kronecker-Factored method, we see our work as evidence that they are a more natural NN-oriented optimizer class. On the other hand, we note diagonal preconditioning is *not* a strict subset of Kronecker-Factored preconditioning: for a given entrywise product $\mathbf M \odot \mathbf G$, generally there does not exist $\mathbf P, \mathbf Q$ such that $\mathbf M \odot \mathbf G = \mathbf P \mathbf G \mathbf Q$, which is why we do not claim that diagonal preconditioning is generally worse than KF preconditioning for feature learning, even if KF preconditioning may be more mathematically natural.
2. **Outperforming dense 2nd-order variant.**
As stated in our introduction, many deep learning optimizers, including Adagrad (Adam), KFAC, Shampoo etc. were initially derived as computationally-efficient approximations to full 2nd-order methods like (Gauss-)Newton method and Natural Gradient Descent. Therefore, a natural conclusion is that *if one had the resources*, the full 2nd-order method should yield better convergence. However, the behavior of full 2nd-order methods on neural networks is notoriously poorly understood from theory, with basic questions such as "are negative curvature directions (negative Hessian eigs) in Newton's method good?" and "is Gauss-Newton (guaranteed psd) preferable to Newton?" having no conclusive answers [1]. Therefore, our contribution can be seen as *directly* deriving the benefits of the approximating method *without* reasoning about the full 2nd-order method. The fact that the approximant numerically *outperforms* the full method is only further evidence that this is a fruitful path of analysis, though we note this observation is not original to us [2].
- **Correctness of rowspan.**
The math in line 302 should be correct. To see how right-multiplication by a matrix can change the rowspace, consider the simple example $\mathbf G = [1 \quad 0] \in \mathbb R^{1 \times 2}$, whose rowspace is $\{[c \quad 0], c \in \mathbb R\}$. Multiplying by a (psd) matrix $\Sigma = \begin{bmatrix}2 & 1 \\\\ 1 & 2 \end{bmatrix}$ yields $\mathbf G \Sigma = [2\quad 1]$, which is not contained in $\mathrm{rowsp}(\mathbf G)$. To tie this back to batchnorm, our observation is that whitening the training-distribution covariates allows SGD to converge (since the covariates are now isotropic) but changes the rowspace, breaking the shared structure between the source and transfer distributions.
[1] https://arxiv.org/abs/1503.05671
[2] https://arxiv.org/abs/2201.12250 | Summary: This paper demonstrates KFAC is better at feature learning than vanilla SGD with two model examples. For linear representation learning, they show the convergence rate of SGD will suffer from the condition number while KFAC gets rid of it. For single-index learning, they show that one-step update of SGD can only learn the correct direction with isotropic data while the one-step update of KFAC can learn the direction under isotropy. They conduct experiments to show the effectiveness of their theory.
Claims And Evidence: The author claims they show SGD can be drastically slow on linear representation learning under anisotropy. But I only saw they cite previous papers on this claim and proposition 3.5 assumes isotropy. It can not viewed as one contribution by this paper without further explanation.
Methods And Evaluation Criteria: The two representation learning models are commonly used in this area. There are two synthetic experiments for each models. They are already reasonable but I am hoping for some real world dataset.
Theoretical Claims: I roughly checked the proof of proposition 3.5 and didn’t find any obvious mistakes.
Experimental Designs Or Analyses: I checked the experiment setting and results and didn’t find any serious issue.
Supplementary Material: I read section B.1 and D.1.
Relation To Broader Scientific Literature: It can help understand the difference between optimization algorithms and why some algorithms can succeed on some tasks.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength:
1. This paper makes a good connection between feature learning and kronecker-factored algorithms.
2. The theoretical claims and proof look well-written and relatively rigorous.
Weakness:
1. The theoretical results are only for one specific layer-wise preconditioning method. Since Shampoo is a more popular kronecker-factored optimization algorithm than KFAC, it should be more interesting to get the results for Shampoo.
2. Different $P_G$ are used for KFAC in the two models. It would be better if lemma 3.12 is proved with unregularized $P_G$.
3. The negative result is only shown for SGD. As mentioned in the introduction, it is unknown why kronecker-factored algorithm can be better than the idealized second-order methods. If we want to claim the reason might be KFAC is better at feature learning, then the negative results of second-order methods are expected.
Other Comments Or Suggestions: 1. It’s better to explicit show each algorithm with some algorithm block and keep a consistent notation for $G_+$. It is defined differently in (5) and (8) and looks very confusing every time it is mentioned later. For example, I think it should be $\bar{G}_+$ in the LHS of (9).
2. Typo in line 280: $F_\star^{(t)}$ should be $F_{ls}^{(t)}$?
3. Typo in line 401: The distribution of $x_i^s$ seems unfinished.
Questions For Authors: 1. I am not convinced why we only care about the one-step update in the single index learning. Even if the one-step update of SGD is bad, we almost never update the parameter only for once so SGD might be able to learn the correct direction after several steps.
2. Do you try Shampoo for the first experiment? If the theoretical results can not be easily extended to Shampoo, I am curious whether Shampoo can learn features well in practice. That can help understand whether the connection to feature learning is applicable to a broader class of kronecker-factored algorithms rather than only KFAC.
3. I am also curious whether there can be any negative theoretical results for Adam. From figure 1, Adam can learn features much better than any other algorithms and is only slightly worse than KFAC. If there is no obvious negative theoretical result, is there any insight why Adam can be good at learning features?
4. As mentioned in weakness 2, I wonder there is any obstacle in proving lemma 3.12 with unregularized $P_G$.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. We are glad that you find our results rigorous, and that our results make a good connection between feature learning and KF algorithms.
- **Anisotropy vs ill-conditioning.**
We clarify that we make two distinct claims in Section 3.1. We identify *two* sources of suboptimality in SGD for linear rep. learning: the first sourced from the anisotropy of the inputs, and the second coming from the ill-conditioning of the output layer $\mathbf F$. The bias from anisotropy is critical, and can prevent convergence *altogether* (as seen in AMGD/SGD lines in Figure 1). As pointed out in our literature review, this observation has escaped notice, apart from a recent work [1], whose algorithm is a special case of our recipe in Eqs (4) and (5); see DFW in Fig. 1. The second issue is what the lower bound in Prop. 3.5 concerns. We assume isotropy therein (and multiple favorable assumptions) to make the problem *as benign as possible* for SGD, which we showed does not converge without isotropy. The point of Prop. 3.5 is that *even under ideal conditions* the convergence rate of SGD and related methods is suboptimal compared to KFAC as shown in Theorem 3.6. We hope this clarifies our separate contributions.
- **Real-world experiments.**
We believe it is well-documented that KFAC performs well in practice (see e.g. [2, Sec. 13] and [3]). In fact, KFAC was designed as a *practical* approximation of the Natural Gradient method. Our main goal is to resolve the limitations of SGD in prominent feature learning theory setups. As a byproduct, we derive KFAC as the natural solution, providing an alternate *feature-learning* justification for KFAC. Our experiments are solely to verify and complement our theoretical findings.
- **Relevance of Shampoo.**
We are aware that methods such as Shampoo and Muon have received a lot of attention lately. However, we should note that up until last year, KFAC has been largely the more well-known method, seeing many extensions and applications. We also mention that making Shampoo outcompete the Adam family (e.g. the winning submission in AlgoPerf) takes additional heuristics like learning-rate grafting [4, Sec. 2.4]. However, **we have implemented basic versions of Shampoo**, and run it in the set-up for Figure 1 [(anon link)](https://drive.google.com/file/d/1MS4PjZXTXVidzvzjmoM1P1X7mKz7_Ih6/view?usp=sharing). SHMP(G) corresponds to updating $\mathbf G$ with Shampoo, and $\mathbf F$ with least-squares (like AMGD and KFAC), and SHMP is Shampoo on both layers. We observe that Shampoo performs roughly on par with NGD, but not as well as KFAC. This is expected, since we derived KFAC to provably get *constant-factor* linear convergence. We believe that exploring the feature learning properties of Shampoo is an interesting future direction.
- **Regularized $\mathbf P_{\mathbf G}$.**
In the single-index setting, it is more natural to consider regularized $\mathbf P_{\mathbf G}$ because analyses are traditionally in the proportional limit $d_x \asymp n$, where a non-zero regularizer is known to be optimal. In fact, the result we provide (Lemma 3.10 and 3.12) are more general and hold even for $\lambda_{\mathbf G} \to 0$. In Figure 3 (Left), we still see that for $\lambda_{\mathbf G} \to 0$, our theory matches the simulations, and that the alignment of the direction learned by KFAC to the true direction is still significantly larger than that of SGD.
- **Negative result for 2nd-order methods.**
We clarify that the motivation for our paper is that many algorithms are designed to approximate 2nd-order methods, but neural network analyses for 2nd-order methods are still lacking. Therefore, our goal is not to debunk 2nd-order methods, but to suggest a direct motivation for the approximation methods themselves. We refer the reviewer to **Outperforming dense 2nd-order variant** of our response to Reviewer AMdp for more discussions.
- **Algorithm block and $\overline{\mathbf G}$.**
We thank the reviewer for pointing these out. They have been fixed in the revision.
- **One giant step vs multi-step.**
The one-step update for learning a single index model is a popular framework in deep learning theory and is the state of the art model to theoretically analyze feature learning properties of two-layer networks (cf. line 283 in the submission). In this setting, features are learned by taking a single (giant) step on the first layer as opposed to multiple small steps. A single step can often be shown to be equivalent to taking multiple steps with smaller step sizes (Section B.1.3 of [5]).
- **Is Adam a good feature learner?**
For insights on why Adam has fundamental limitations in our feature learning setting, we refer to **Adam in transfer learning settings.** in our response to Reviewer RnJ3.
[1] arxiv.org/abs/2308.04428
[2] arxiv.org/abs/1503.05671
[3] arxiv.org/abs/2311.00636
[4] arxiv.org/abs/2309.06497
[5] arxiv.org/abs/2205.01445 | Summary: This paper shows that layer-wise preconditioning is statistically necessary for efficient feature learning using two common models: linear representation learning and single-index learning. They prove that SGD struggles in non-isotropic inputs, and demonstrate theoretically and experimentally that this suboptimality is fundamental. They show that layer-wise preconditioning naturally addresses this issue. In addition, they show experimentally that Adam and batch normalization help only slightly, and that layer-wise preconditioning is uniquely beneficial even compared to the actual second-order methods that it is approximating.
Claims And Evidence: The main claims about layer-wise preconditioners versus SGD are clear and well-supported by theoretical analysis, with well-explained intuition and implications. Overall, it seems to be a solid contribution.
Perhaps one limitation would be that the experiments are somewhat limited, and while the paper shows that Adam (diagonally preconditioned methods) and batch norm are insufficient, it lacks deeper intuition or analysis on why that’s the case.
Methods And Evaluation Criteria: The theoretical examples and analysis and interesting and I believe appropriate for the question. Experimentally, the subspace distance seems to correlate well with whether transfer learning is successful.
Theoretical Claims: I didn't check the proofs line by line, but they seem reasonable. The authors explained the intuitions and implications of each lemma/theorem well.
Experimental Designs Or Analyses: I didn't rerun the experiments to check correctness, they seem reasonable. My complaint is that the experiments seem to be limited to small-scale examples and transfer learning settings. Though I understand empirical results are not the main focus of this paper, it would be very interesting to see if these observations hold in bigger, more practical cases.
Supplementary Material: I skimmed through the supplementary materials. The related work is clear and comprehensive.
Relation To Broader Scientific Literature: It’s observed in existing literature and practice that K-FAC-like methods (Shampoo for example) can outperform SGD in training and generalization. While this paper’s theoretical analysis is limited to Gaussian data and small-scale tasks (e.g., two-layer networks for linear representation learning and single-index models), it’s still a valuable step toward understanding the limitations of (S)GD and the necessity of layer-wise preconditioning. The paper also briefly touches on why Adam's advantage is limited in such settings, though more in-depth study of that could be helpful.
Essential References Not Discussed: I find the related work pretty well cited in general.
I have seen in the following work that Adam can outperform SGD when there is a class imbalance in the data, which could be related to the problem condition. Would be interesting to see the discussion on whether this is related (but feel free to use your judgment).
[F. Kunstner et.al, Neurips 2024. Why Adam Outperforms Gradient Descent on Language Models: A Heavy-Tailed Class Imbalance Problem]
Other Strengths And Weaknesses: Other strength: The paper is educational and well-written, providing a clear discussion of optimization history and why the problem is well-motivated.
Other weakness: The analysis is limited to specific settings, though this is fairly common in theoretical work. Additionally, the experiments are somewhat limited in scope.
Other Comments Or Suggestions: No
Questions For Authors: The experiments that contain Adam seem to be limited to the transfer learning settings. Is there any intuition on why layer-wise preconditioning is better than diagonal (Adam) in this setting? Is there a difference in general, outside of the transfer learning setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. We are very glad to see that you find our paper to be a solid contribution and a valuable step toward understanding the limitations of SGD and the necessity of layer-wise preconditioning. Below please find our response to the comments and questions.
- **More intuition for insufficiency of Adam and Batchnorm.**
Regarding batchnorm, the discussion following Lemma 3.7 can be mathematically formalized. We will reformat this to be a separate lemma that demonstrates how batchnorm/whitening causes convergence to the wrong feature space $\mathrm{rowsp}(\tilde{\mathbf G}) \triangleq \mathrm{rowsp}(\mathbf G_\star \Sigma_{\mathbf x}^{1/2})$, and further provide a simple estimate of $\mathrm{dist}(\tilde{\mathbf G}, \mathbf G_\star)$ in terms of the anistropy of $\Sigma_{\mathbf x}$.
Regarding the insufficiency of Adam, one of the motivating factors to this paper is that Adam has proven difficult to analyze in *feature learning* contexts, i.e. fine-grained models of neural-network learning, thus motivating the search and analysis of optimizers that are naturally amenable to the compositional / layered structure of neural networks (see e.g. Eq (1)). In our opinion, the reason Adam is not optimal even in our idealized neural network learning settings is that diagonal preconditioning is not optimal for dealing with sources of ill-conditioning arising from the compositional structure of the parameter-space, e.g. $f_{\mathbf F, \mathbf G}(x) = \mathbf F \mathbf G x$. Since Adam is a diagonal preconditioner, it necessarily suffers from these issues. Furthermore, Adam takes a "-1/2" power of its curvature estimate, rather than the full inverse, which necessarily slows its convergence for cases where curvature estimates are reliable (though certainly has other practical merits). Initial probes into the relative effectiveness of diagonal preconditioning have been proposed in concurrent studies, see e.g. https://nikhilvyas.github.io/SOAP_Muon.pdf, https://arxiv.org/abs/2411.12135 (cf. Section 4.3).
- **Related work showing that Adam outperforms SGD when there is class imbalance.**
We thank the reviewer for pointing out this paper. We will add it to the related works section.
- **Adam in transfer learning settings.**
We emphasize that our transfer learning set-up is a proxy for more robust notions of generalization, which allows us to diagnose the quality of solutions returned by various algorithms in a more informative way than comparing training loss convergence. Not accounting for curvature (e.g. SGD) or using weaker forms of it (Adam) are therefore exposed by either slow convergence rate overall or by poor "generalization" (exposed by poor convergence in subspace distance); see additional plots [(anon. link)](https://drive.google.com/file/d/1yitFmd-Kvqr_bBmrb_L7OhCKQd9RUMyC/view?usp=sharing). Notably, we believe the latter notion is a promising way to diagnose whether curvature-aware/adaptive descent methods capture the geometry of neural-network optimization correctly. For example, despite Adam being an adaptive gradient method, it exhibits poor subspace distance recovery due to still being biased by the (spurious) curvature introduced by the anisotropy of the covariates (compare e.g. to low anisotropy settings in Figure 8).
The essence of why Adam (or diagonal preconditioners) is a suboptimal feature-learner is the following: many adaptive methods are able to reduce training loss quickly by fitting the "low-frequency" features that explain most of the data. Therefore, many candidate solutions can attain low training loss. However, the "high-frequency" directions/features that explain less of the *training* objective correspond to *sharper* curvature; weak preconditioners have trouble smoothing these directions, and thus make very slow progress therein, reflected by the subspace distance plot. Mild distribution shifts (e.g. transfer learning) can lead to significant shift in which features are relevant, and thus of the many candidate solutions that get low training loss, it is important to also accurately detect all the possible relevant features like KFAC does in this setting. | null | null | null | null | null | null |
Sparse-pivot: Dynamic correlation clustering for node insertions | Accept (spotlight poster) | Summary: The authors consider the classic Correlation Clustering problem which, given a complete graph with edges labeled either + or -, the goal is to find a partition of the vertices so as to minimize the number of + edges across parts plus the number of - edges within parts.
The has received a lot of attention since its introduction in the early 2000s.
In this paper, the authors consider the dynamic setting where nodes are iteratively inserted (and not removed) into the dataset and the goal is to maintain a partition that minimizes the Correlation Clustering objective at all time, while minimizing the total running time (a.k.a update time).
The authors improve an ICML'24 paper which showed how to maintain a very large constant factor approximation to the problem while making polylog n database queries (here a database query is a query of one of the following types: (1) retrieving the degree of a node v; (2) selecting a random neighbor of v; and (3) checking whether two nodes u and v are connected by an edge). In this paper, the authors provide an algorithm with polylog n update time that achieves a 20+epsilon-approximation.
The algorithm is a refinement of the classic pivot algorithm for Correlation Clustering, called Sparse-Pivot, which combines ideas from previous work on streaming algorithms, post-processing the clusters obtained, and a new sampling strategy to define the pivots (looking at O(log n) random neighbors).
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I checked most of the proofs.
Experimental Designs Or Analyses: Yes, comparison to previous work (ICML'24) is done carefully.
Supplementary Material: Checked some of the theoretical results in the supp. mat.
Relation To Broader Scientific Literature: Looks good to me.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper makes a significant improvement over the approximation ratio and interestingly this turns out to be pretty relevant for practice as well, despite previous pivot-based approaches being not so strong in practice (in fact, as shown by the authors, the vanilla pivot algorithm performs worst than all-singletons).
On the other hand, from a theoretical perspective, the technical novelty is not very high.
Other Comments Or Suggestions: No particular comments. Given that your algorithm seems to experimentally outperform ICML'24 and the pivot algorithm, I think it makes sense to experiment with it in a purely offline setting as well: could it be that your algorithm is the state-of-the-art for (offline) correlation clustering practical instances?
Questions For Authors: I am wondering whether the result of the ICML'24 paper allows to define "admissible" edges in the sense of [1]. And so you could avoid the cleaning step by only performing pivot on the admissible edges. But that's maybe a long shot.
[1] Vincent Cohen-Addad, David Rasmussen Lolck, Marcin Pilipczuk, Mikkel Thorup, Shuyi Yan, Hanwen Zhang:
Combinatorial Correlation Clustering. STOC'24
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > On the other hand, from a theoretical perspective, the technical novelty is not very high.
We believe our algorithm is intuitive and relatively simple, which we view as a clear advantage—particularly from an implementation standpoint. However, proving that those natural ideas significantly improve approximation guarantees of previous work turned out to be challenging. We would be happy to see a simpler analysis, though, if there is one.
> No particular comments. Given that your algorithm seems to experimentally outperform ICML'24 and the pivot algorithm, I think it makes sense to experiment with it in a purely offline setting as well: could it be that your algorithm is the state-of-the-art for (offline) correlation clustering practical instances?
We do not think that our algorithm outperforms [1] in the offline setting.
Intuitively, our algorithm outperforms Pivot because it “fixes” bad clusters made by Pivot. Those fixes are reflected by Break-Cluster method. Namely, if a cluster obtained by Pivot has many non-edges, Break-Cluster attempts to improve it by making some of the nodes singletons. We generally believe that studying this kind of local improvements is an excellent research direction. Some of the prior work, e.g., [1] that you referred to, performs this kind of local improvement, although not directly on Pivot clustering. We believe that there is room to perform relatively simple local improvements and also achieve an approximation better than 3.
> I am wondering whether the result of the ICML'24 paper allows to define "admissible" edges in the sense of [1]. And so you could avoid the cleaning step by only performing pivot on the admissible edges. But that's maybe a long shot.
[1] Vincent Cohen-Addad, David Rasmussen Lolck, Marcin Pilipczuk, Mikkel Thorup, Shuyi Yan, Hanwen Zhang: Combinatorial Correlation Clustering. STOC'24
These are great questions and good directions for future work. Having admissible edges in the sense of [1] requires maintaining pre-clustering dynamically, which might be challenging. | Summary: This paper presents "SPARSE-PIVOT," a new dynamic correlation clustering algorithm designed for node insertions. The algorithm builds upon a variant of the PIVOT algorithm and aims to improve the update time and approximation factor compared to the existing state-of-the-art algorithm by Cohen-Addad et al. (ICML 2024). The main algorithmic idea is to combine a fast approximate pivot selection using random sampling with a refinement step that removes poorly clustered nodes. The paper claims an amortized update time of $O(\log^{O(1)}(n))$ and an approximation factor of $20 + \epsilon$. The theoretical analysis is complemented by experimental evaluation, showing better performance to previous methods.
Claims And Evidence: The claims regarding the approximation factor and update time appear to be supported by the provided theoretical analysis. The evidence presented is convincing, with detailed proofs in the appendix. The experiment provides also a strong support, showing the benefits of the new method.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. The dynamic setting with node insertions is a relevant problem, and comparing against the algorithm by Cohen-Addad et al. (2024) is a reasonable benchmark. The use of both real-world and synthetic datasets is appreciated.
Theoretical Claims: I partially checked the correctness of proofs in the supplementary section. I reviewed Lemma 3.2, 3.3, 3.4, 3.5. I did not spot any obvious flaw.
Experimental Designs Or Analyses: I checked the soundness of the experimental setup. The choice of datasets, baselines, and metrics appear appropriate.
Supplementary Material: I partially reviewed the supplementary material, specifically the proofs related to the main theorems and experiment setup.
Relation To Broader Scientific Literature: The paper situates itself well within the correlation clustering literature. It builds upon prior work on PIVOT and addresses the dynamic setting, which has seen growing interest. In particular, the authors consider the dynamic database model introduced recently by Cohen-Addad et al (2024). The relationship to other dynamic algorithms is clearly acknowledged.
Essential References Not Discussed: The paper appears to cite all essential references.
Other Strengths And Weaknesses: **Strengths:**
* The paper presents a novel and theoretically sound algorithm with a significantly improved approximation factor compared to the paper by Cohen-Addad et al.
* Provides solid empirical evidence on real-world and synthetic datasets, showcasing the practical effectiveness and improved performance of SPARSE-PIVOT. In particular, the evaluation demonstrates that the algorithm performs better than the main competitor.
* I find the new adaptation of the pivot algorithm quite clever and insightful.
* The paper is well-written and clearly explains the algorithm, theoretical analysis, and experimental results. The appendices provide detailed support for the main claims.
**Weaknesses:** The database model seems a bit unrealistic, but to be fair this is a previously studied and accepted model.
Other Comments Or Suggestions: Overall, the strengths outweigh the weaknesses. The paper provides a valuable contribution to the field of dynamic correlation clustering with both theoretical and practical significance.
Questions For Authors: No questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind and valuable feedback. | Summary: This paper studies the correlation clustering problem on graphs in the dynamic setting. This variant of clustering is an important problem both in theory and practive, and has been extensively studied in different computational models.
Traditionally, most of the algorithms in the dynamic graphs literature consider the setting where edges of the underlying graphs undergo updates, i.e., they are inserted or deleted. This paper considers a more challenging and the less-studied setting of node insertions; when a node is added, then all its incident neighbours are revelaed at intsertion. Naturally, one could simple simluate node insertions by inserting all edges incident to the inserted node – but since nodes can have deegree up to O(n), this leads to undiserable times.
The paper under reivew studies algorithms that beat such trivial bounds. In fact, for the correlation clustering problem, they propose an algorithm that approximates the optimal correlation clustering objective up to a factor of ~20, and runs in poly-logarithmic amortized time per node insertions and even *random* deletions. The prioir work by Cohen-Addad et al. achieved similar runtime guarantees, but at the cost of a much constant approximation.
The starting point of their algorithm is a 5-approximate algorithm due to Behnezhad et al. (a variant of the well-known PIVOT algorithm) in the semi-streaming setting. The idea is to take this algorithm and adapt it to the dynamic setting. Some technical ideas involve: picking random ranks for each node, and using this rank to classify nodes depending on their degree, and finally doing some clean-up, so that bad nodes cannot belong to clusters, etc. While these algorithmic steps make sense for the problem at hand, the adaption and especially the analysis are quite cumbersome and require special care. These type of algorithms are in my opinon the *whole grail* in algorith mdesign; easy to describe and understand, but not so trivial to analyse!
I’m not very familiar with the algorithm by Cohen-Addad et al., but the algorirthm that the paper proposes looks very natural and *the right one* to me.
## update after rebuttal
Thanks for the rebuttal. I keep my score as is.
Claims And Evidence: Everything looks fine.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense.
Theoretical Claims: I did a pass on some of the proofs. I didn't have time to thoroughly check the correctness -- the approach looks very feasible though.
Experimental Designs Or Analyses: everything is done properly; I also liked that the authors discussed some heuristic approaches to the theoretical algorithm that could lead to better peformance in practice. This work is an example of nice bridge between theory and practice;
One downside is that the emperical improvements over previous algorithms don’t seem that large in practice on the benchmark data-sets.
Supplementary Material: Did a pass on all parts in the appendix.
Relation To Broader Scientific Literature: Relevant to anyone working on clustering across all subcommunities of computer science.
Essential References Not Discussed: I didn't come across any.
Other Strengths And Weaknesses: Strengths: core problem in dynamic clustering with applications in practice; the node insertion/deletion setting is more challenging than edge updates; the algorithm comes with strong guarantees and easy to understand
Weaknesses: perhaps matching the 5-approximation of PIVOT would have been more exciting, at least from a theory perspective.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > One downside is that the empirical improvements over previous algorithms don’t seem that large in practice on the benchmark data-sets.
On all the benchmark datasets, our algorithm always performs better than previous algorithms. In some cases, the running time improvement is 2-3x; please see Table 2 and Table 3 in the appendix.
> Weaknesses: perhaps matching the 5-approximation of PIVOT would have been more exciting, at least from a theory perspective
Indeed, this is quite an interesting direction for future work. The main bottleneck in achieving this with our approach and analysis is in Break-Cluster. | Summary: This paper introduces a new Correlation Clustering algorithm for node insertion / deletion in a dynamic setting. Upon node arrival its edges with all existing nodes are revealed, and we are allowed to make changes to the maintained clustering solution. The algorithm has constant approximation factor and sublinear amortized processing time per node arrival. which is a substantial improvement over the approximation factor of the algorithm by Cohen-Addad et al. Empirical results show the new algorithm to outperform state of the art benchmarks.
Claims And Evidence: The claims are well supported.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I checked the major claims.
Experimental Designs Or Analyses: The experiment designs are quite simplified and could be improved in many ways. The results suffice to show that the proposed method outperforms state of the art, but it would be better if the authors also show how the running time scales with changing dataset sizes (number of nodes), the tradeoff between time and approximation ratio (controlled by $\epsilon$), etc.
Supplementary Material: I skimmed through it.
Relation To Broader Scientific Literature: The paper mostly builds on existing Pivot-based algorithms for correlation clustering but innovates the algorithm. It can be seen as another variant of online correlation clustering. It is also broadly related to online clustering or general online algorithm problems.
Essential References Not Discussed: There are other earlier papers that discuss online correlation clustering, although the settings are less general than this one, where changing previously made cluster assignment decisions are not allowed, but also studies pivot. Some of them are:
Mathieu, Claire, Ocan Sankur, and Warren Schudy. "Online correlation clustering." arXiv preprint arXiv:1001.0920 (2010).
Cohen-Addad, Vincent, et al. "Online and consistent correlation clustering." International Conference on Machine Learning. PMLR, 2022.
Lattanzi, Silvio, et al. "Robust online correlation clustering." Advances in Neural Information Processing Systems 34 (2021): 4688-4698.
Other Strengths And Weaknesses: The theoretical results are quite insightful for the online correlation clustering community. The algorithm design is also original and gives strong results.
Other Comments Or Suggestions: The paper needs some proof-reading. There are some notations that appear without being mentioned. I couldn't find the definition of the BREAK-CLUSTER and UPDATE-CLUSTER. They have never been formally defined in the main body but directly used. The analysis can also be organized better. I would suggest putting two main theorems about the run-time and approximation guarantees at the beginning of the section so that the reader doesn't have to skim through the whole section to find what the guarantees are.
Questions For Authors: 1. The algorithm assigns a uniformly random value between [0,1] to every node. The values are used to determine their relative order. Is this essentially the same with randomly shuffling the whole sequence of nodes offline, giving them ranks and treating the ranks as known information?
2. Do we have to know the size n in advance so that we can compute the threshold $L$ for the algorithm to work?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > The experiment designs are quite simplified and could be improved in many ways. The results suffice to show that the proposed method outperforms state of the art, but it would be better if the authors also show how the running time scales with changing dataset sizes (number of nodes), the tradeoff between time and approximation ratio (controlled by ), etc.
We agree that a comprehensive experiment set up would compare running time vs. dataset size. However, the focus of our paper is on theoretical results and validation of the approximation factor. We include experiments to have a full comparison with previous work.
> The paper needs some proof-reading. There are some notations that appear without being mentioned. I couldn't find the definition of the BREAK-CLUSTER and UPDATE-CLUSTER. They have never been formally defined in the main body but directly used. The analysis can also be organized better. I would suggest putting two main theorems about the run-time and approximation guarantees at the beginning of the section so that the reader doesn't have to skim through the whole section to find what the guarantees are.
Thanks for the suggestions on improving the writeup; we will incorporate them into the final version. On Page 3, we outline our main ideas. Idea number 3 is essentially Break-Cluster; we will emphasize this in the final version. It takes all the nodes assigned to a pivot v (set B_v) and makes some of them singletons (so set C_v is the set of remaining nodes). Update-Cluster updates this set when a new node is assigned to pivot v. We will explain these subroutines before using them; right now, they are only described in detail in Appendix A.
> The algorithm assigns a uniformly random value between [0,1] to every node. The values are used to determine their relative order. Is this essentially the same with randomly shuffling the whole sequence of nodes offline, giving them ranks and treating the ranks as known information?
Yes, it is the same.
> Do we have to know the size n in advance so that we can compute the threshold
for the algorithm to work?
That’s a great question. We do not need to know n in advance; that can be addressed by following standard techniques. Namely, when the graph size doubles, we recompute everything from scratch with the new n. This cost of recomputation is amortized over the run of the algorithm; specifically, it is charged to the steps when the current set of vertices arrived. We discuss a similar idea in Appendix B. | null | null | null | null | null | null |
Resolving Lexical Bias in Model Editing | Accept (poster) | Summary: This paper addresses the challenge of editing the outputs of large language models without degrading their overall performance. Traditional methods directly modify model weights, often causing undesirable side effects. In contrast, recent approaches use adapters that trigger edits based on semantic similarity in the representation space. However, these adapter methods are shown to be susceptible to strong lexical biases—resulting in unintended edits for prompts with overlapping words. To overcome this, the paper introduces a principled method for learning a disentangled representation space. This new space allows for the precise localization of edits by keeping unrelated prompts distant while keeping semantically equivalent (or paraphrased) prompts close together. The proposed method, Projector Editor Networks for Model Editing (PENME), achieves state-of-the-art editing performance. It is not only more computationally efficient during inference than previous approaches but also adaptable across various architectures.
Claims And Evidence: The claims made in the submission are supported by the experiments.
Methods And Evaluation Criteria: The proposed methods make sense for the problem.
Theoretical Claims: This paper does not contain theoretical claims.
Experimental Designs Or Analyses: The experimental designs are sound.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper demonstrates that current adapter methods are critically vulnerable to strong lexical biases, which can result in edits being applied to irrelevant prompts that share overlapping words. This paper introduces a principled approach to learning a disentangled representation space, enabling the precise localization of edits by keeping unrelated prompts distinct while preserving the proximity of semantically similar paraphrases.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
By learning a disentangled representation space, the method effectively differentiates between relevant and irrelevant prompts, reducing unintended modifications.
PENME offers faster inference times compared to prior methods, making it more practical for real-world applications.
The paper identifies and mitigates the vulnerability of previous adapter methods to strong lexical biases, enhancing reliability. The approach is designed to work across different model architectures, increasing its applicability.
Weaknesses:
Although adaptable across architectures, the paper may not fully explore how the approach scales with increasingly larger and more complex models.
The success of precise edit localization hinges on the quality of the learned disentangled representation, which might require extensive tuning and may not generalize perfectly across all domains.
Other Comments Or Suggestions: No.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Comment: Although adaptable across architectures, the paper may not fully explore how the approach scales with increasingly larger and more complex models**
The proposed approach operates at a single layer of the model, making it lightweight and efficient, with minimal dependence on model size. Scalability is not a major concern, as the method avoids full model retraining and extensive parameter updates. In our experiments, we employed a direct output strategy by modifying the architecture to support multi-output generation. However, generation control mechanisms such as playback vectors and lightweight LoRA blocks have also demonstrated robust performance and can serve as effective alternatives to direct output. These design choices suggest that the approach can scale well to larger models or more complex generation tasks without incurring significant computational overhead.
**Comment: The success of precise edit localization hinges on the quality of the learned disentangled representation, which might require extensive tuning and may not generalize perfectly across all domains.**
Our analysis identifies the key factor contributing to locality challenges in the model’s representation space, which is lexical bias. We have evaluated PENME’s generalization capabilities by assessing performance transfer from the CounterFact to the zsRE dataset. While our current experiments focus on general QA data, the results suggest that PENME can adapt to new domains with minimal supervision. We hypothesize that for specialized domains, PENME can generalize with a small amount of in-domain data. | Summary: This paper propose a method for model editing, following GRACE paper. The authors witnessed that the model intermediate representations are ambiguous to distinguish irrelevant prompts and paraphrases given an editing prompt. The authors adopt the contrastive learning technique to well separate them, thus enforcing better locality and generalizability.
Claims And Evidence: There is no definition given for Lexical Bias in the paper.
The evidence presentation is a bit messed up. For example,
1. Figure 2 uses "Irrelevant Prompts" while Figure 3 uses "Neighbors". There is no definition for "neighbors" throughout the paper
2. The bars overlap with each other in Figure 4
3. The legends for color do not match those lines in Figure 5. There is no green line or clear blue line
4. The legends for Figure 6 should be Llama instead of LAMA
There are inconsistencies between figures. For example,
1. Figure 1 and Figure 4 shows the same evaluation, i.e., the percentage of samples where irrelevant prompts are closer to the edits compared to semantically similar prompts. However, For T5small, Figure 1 shows 46.8% while Figure 4 shows more than 60% in layer 2. Llama has bigger percentage than gpt2 in Figure 1 while has smaller percentage in Figure 4.
Methods And Evaluation Criteria: 1. For locality, do you compute the train retain rate or the test retain rate (c.f. GRACE paper)?
2. It's not clear how the codebook is used in the experiments. If the stored values are strings as mentioned in the paper, then for edited prompts, there is no need to go through the full autoregressive generation process, but directly output the stored value in the codebook. The authors also mentioned that the stored values can be vectors or LORA indices, how to implement this?
3. It's not clear which token embedding is used for training the projection network. The authors only mentioned that the projection network is applied to only one certain layer, but it's not clear the embedding of which token in the prompts / answers is used for training.
4. Many typos / errors in the methodology section. For example,
+ equation 2, why argmin has underscore value v_i?
+ Threshold in line before equation 2 is v_i^delta, while it's v_delta^i in equation 2.
+ In equation 3, x_i and p_ij are both strings according to definition in Section 3, what is a string vector (with an arrow on top)? how to compute the euclidean distance between two strings?
5. The 4.3 title is misleading. There is no learning process to determine the threshold and tau. tau is a hyperparameter and the threshold depends on the paraphrases chosen.
6. How to choose which layer for training?
7. Have the authors tried other contrastive learning losses, e.g., InfoNCE, etc.?
8. In line 223, why the formulation 3 allows the method to achieve an optimal balance between generalization and locality preservation? How is this optimal? It seems that this formulation can only help generalizability since it preserve the most far away paraphrase which may also include irrelevant edits, thus hurt the locality, which is shown in Table 1 for Counterfact Loc metric.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Please refer to other sections.
In addition, MEMIT as a baseline is not discussed in the paper, e.g., not in the Related Work section. It's not clear why the authors include this one as a baseline.
Supplementary Material: Yes. Section A D .
Relation To Broader Scientific Literature: Model editing is a trending topic in LLM research. There are many previous works. Many papers have shifted from batch editing to lifelong / continual editing. In this paper, the authors also show results in this setting. Why the Para metric is a bit low on the easier zsRE dataset?
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Please refine the writing and the presentation of the paper.
Questions For Authors: Please refer to other sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Comments: There is no definition given for Lexical Bias in the paper.**
**The evidence presentation is a bit messed up. For example,
Figure 2 uses "Irrelevant Prompts" while Figure 3 uses "Neighbors". ....... should be Llama instead of LAMA**
**There are inconsistencies between figures. For example,
Figure 1 and Figure 4 shows the same evaluation,... than gpt2 in Figure 1 while has smaller percentage in Figure 4.**
We thank the reviewer for pointing out the typographical error; neighbours and irrelevant prompts refer to the same examples. This will be corrected in the final version. We will improve the images so that the bars are separated, we provide colour correction for Figure 5: (https://imgur.com/a/bFPCBEB).
Regarding figures 1 and 4, the experiments were conducted using different random splits, which accounts for the observed discrepancies in the reported numbers. We will update the figures to ensure consistency in the reported percentages.
We define lexical bias as prompts with similar lexical tokens but different semantics that are closer together in the representation space as compared to a prompt and its respective paraphrases.
**Comment: For locality, do you compute the train retain rate or the test retain rate (c.f. GRACE paper)?**
We compute the locality metric on the test set same as GRACE.
**Comment: It's not clear how the codebook ..... LORA indices, how to implement this?**
We apply the projector once after receiving the user input. From that point, the generation process proceeds normally without intervention—unless the similarity between the input and the nearest codebook key falls below a predefined threshold. In such cases, memory retrieval is triggered. As you have pointed out, the values are stored as strings and they are simply appended to the end of the generated sequence. These values are the edits themselves. Learned vectors can be used to steer the model's generation by adding them to the token representations after identifying the edit scope. Similarly, LoRA blocks can be trained per edit to modulate generation in response to each edit.
**Comment: It's not clear which token embedding is used for training the projection network. The authors only mentioned that the projection network is applied to only one certain layer, but it's not clear the embedding of which token in the prompts / answers is used for training.**
We use averaged token representations for input encoding. While representations of the final token can also be used to train PENME, they tend to be heavily biased, often inflating similarity between prompts that share the same ending token.
**Comments: Many typos / errors in the methodology section. For example,
equation 2, .........how to compute the euclidean distance between two strings?**
We apologize for the typographical errors. In Equation (2), the argmin operation applies only to the keys $k_i$. The variables $\vec{x_i}$ and $\vec{p_{ij}}$ refer to the output representations from the projection network, which these implied through text rather than a formal definition. We will revise the text to clarify this and eliminate any confusion.
**Comments: The 4.3 title is misleading. There is no learning process to determine the threshold and tau. tau is a hyperparameter and the threshold depends on the paraphrases chosen.**
We will replace "Learning" from the title with "Finding" to precisely reflect the content of the section.
**Comment: Have the authors tried other contrastive learning losses ...thus hurt the locality, which is shown in Table 1 for Counterfact Loc metric.**
The balance is implied in terms of average score for generalization and locality in comparison with other approaches, where there is a significant deterioration of one metric as compared to the other. We initially evaluated several contrastive learning approaches and found that none consistently outperformed our formulation. This result aligns with the findings of [1], which demonstrate that, in general, the performance of various metric learning methods tends to be comparable.
**Comment: In addition, MEMIT as a baseline ... include this one as a baseline.**
MEMIT is a weight modifying approach which improves over ROME with adjustments to model parameters for all layers instead of one. It is a high performing baseline for weight modifying approaches. We will add the citation alongside ROME for the final version. Currently, the work is cited in experimental setup details provided in Appendix D.1. Experimentation Setup.
**Comment: Why the Para metric is a bit low on the easier zsRE dataset?**
This is for zero-shot transfer from CounterFact to zsRE where some drop in performance is expected. Despite no training on zsRE, PENME still outperforms most methods.
[1]Musgrave, Kevin, Serge Belongie, and Ser-Nam Lim. "A metric learning reality check." In Computer Vision–ECCV 2020: 16th European Conference"
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
Some of my concerns are still there, please see below:
1. I agree with reviewer emXV on the usefulness of PENME in real knowledge editing setting, especially in the sequential editing setting. Since the error is not addressed within the model itself, we need to apply this key searching step manually. As suggested by the authors, this is applied every time when the model receives a user prompt. The authors also showed in the rebuttal to emXV that this edited info could be retained by using a specific prompt. However, it would be difficult In the sequential editing scenario. If we first edit the born city of person P1 from C1 to C2, then edit the director of a certain institute I1 from P2 to P1, and query the model which city was the director of I1 born. It would be hard to retrieve all the related edits and put them into the prompt in the current pipeline.
2. My question about the optimality is more about the equation itself, not about the effectiveness of the empirical results compared to other baselines. To be more specific, why equation 3 is optimal compared to the equation in line 226? or what if we don't use max in equation 3 but use median / mean in equation 3? or what if we use both information from semantic relevant and irrelevant prompts to get a balance?
Please feel free to share your thoughts!
---
Reply to Comment 1.1.1:
Comment: **Comment: I agree with reviewer emXV on the usefulness of PENME in real knowledge editing setting, especially in the sequential editing setting. Since the error is not addressed within the model itself, we need to apply this key searching step manually. As suggested by the authors, this is applied every time when the model receives a user prompt. The authors also showed in the rebuttal to emXV that this edited info could be retained by using a specific prompt. However, it would be difficult In the sequential editing scenario. If we first edit the born city of person P1 from C1 to C2, then edit the director of a certain institute I1 from P2 to P1, and query the model which city was the director of I1 born. It would be hard to retrieve all the related edits and put them into the prompt in the current pipeline.**
Thank you for the discussion. We would like to emphasize that model editing remains a fundamentally challenging problem, particularly due to the difficulty of modifying pre-trained models without compromising their overall performance. The scenario highlighted by the reviewer involves multi-hop edits—cases where the model must accommodate multiple, interdependent changes. We highlight that such edits, even when applied locally within the model’s parameters (weight modification), do not propagate broadly across the network. The best performance achieved in this context is around 5\% by AlphaEdit [1]. However, this statistic can be misleading, as shown by IKE [2], which demonstrates that this performance, when viewed in a broader context, actually represents a degradation of the model's original multihop reasoning capabilities. This is consistent with findings by [3], which demonstrate that even localized weight updates
made for editing leads to both gradual and catastrophic forgetting.
In light of this, PENME provides a promising direction by focusing on weight-preserving techniques, aiming to mitigate drastic performance loss while enhancing the model's ability to handle sequential edits. As mentioned in our response to reviewer Qzbk, in scenarios where edits are related and require combined information for generation, these edits can be linked together in PENME's codebook for efficient retrieval. Using this with ICL will allow for the model to reason using the complete information within the prompt. We agree with the reviewer that integrating multiple edits into a single prompt can be problematic, especially for smaller models with limited input sequence lengths. We humbly mention that all current model editing approaches involve inherent compromises, and we do not claim to fully solve the problem of model editing but instead present a concrete improvement over weight-preserving methods while also highlighting the key issue as to why they fail. We appreciate the discussion on multi-hop performance and will add it to the paper.
[1] Fang, Junfeng, Houcheng Jiang, Kun Wang, Yunshan Ma, Xiang Wang, Xiangnan He, and Tat-seng Chua. "Alphaedit: Null-space constrained knowledge editing for language models."
[2] Zheng, Ce, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang.
”Can We Edit Factual Knowledge by In-Context Learning?.”
[3] Gupta, Akshat, Anurag Rao, and Gopala Anumanchipalli. "Model Editing at Scale leads to Gradual and Catastrophic Forgetting."
**Comment: My question about the optimality is more about the equation itself, not about the effectiveness of the empirical results compared to other baselines. To be more specific, why equation 3 is optimal compared to the equation in line 226? or what if we don't use max in equation 3 but use median / mean in equation 3? or what if we use both information from semantic relevant and irrelevant prompts to get a balance?**
When incorporating both relevant and irrelevant prompts using the mean or median for Equations 3 and in line 226, the threshold can be logically set to the midpoint. While this approach may seem to offer a better balance, the distances between edits and irrelevant prompts vary after training, resulting in a non-uniform distribution. As a result, some similarity thresholds become large for certain edits, which increases the risk of locality failures. We explored this approach in the early stages of the work, but it did not yield the desired outcomes, leading us to refine our method for improved performance. We will add empirical results comparing the results for these equations in the paper. | Summary: This paper proposes PENME a learnable projection network to transform the model’s internal representation such that lexical bias during model tuning is minimized. In particular, the authors found that existing adapters often misfire on inputs that share words with a stored edit but aren’t actually the same fact, while also failing to generalize to paraphrases. To address this PENME attempts to minimized & pull semantic paraphrases closer while lexically similar prompts are pushed farther apart. Additionally, PENME shows strong empirical results on several benchmarks with significantly better generalization abilities and computationally efficiency.
Claims And Evidence: The paper's claims are clear with some experimental results to backup the idea of PENME having general improvements over prior methods.
Methods And Evaluation Criteria: PENME as a solution to the given problem is well motivated and the idea of inserting a learned projection module to re-map the latent space for editing is intuitive. Additionally, the chosen counterfactual and zsRE datasets are standard benchmarks from prior works. Overall, the reviewer has no high level concern with the proposed methodology and experimental design.
Theoretical Claims: The paper has no direct theoretical claims and therefore rigorous proofs are not relevant.
Experimental Designs Or Analyses: The chosen benchmarks are fair and follow in line with prior works. Additionally, the set of comparative methods is broad alongside a range of model architectures. Finally there are also a large amount of ablation studies which cover a series of supporting analysis.
Supplementary Material: Beyond the additional details and ablation studies presented in the appendix, there were no other supplementary materials provided.
Relation To Broader Scientific Literature: The paper positions itself well in the context of prior model editing research. The observations of lexical bias is also an under explored area of research and would help broaden the scope of the model editing.
Essential References Not Discussed: The reviewer is unaware of other necessary references which are not linked.
Other Strengths And Weaknesses: The paper establishes the issue of lexical bias in model editing which provides strong insights. Additionally the PENME framework is a intuitive next step. However, this leads PENME's contributions to be less significant methodologically.
Other Comments Or Suggestions: See the sections above.
Questions For Authors: No additional questions beyond the issues raised in the sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment: The paper establishes the issue of lexical bias in model editing which provides strong insights. Additionally the PENME framework is a intuitive next step. However, this leads PENME's contributions to be less significant methodologically.**
While we understand that PENME may appear as a natural extension, we respectfully argue that our work addresses a fundamental and previously underexplored challenge in the weight-preserving model editing paradigm—namely, the problem of scoping, i.e., determining when an input should invoke the edited knowledge. Our proposed scoping mechanism contributes to the weight-preserving model editing paradigm and is not limited to our editing method, PENME. Moreover, the lexical bias has been highlighted as a substantial issue in text and vision models in the recent paper [1]. Our work is the first work that attempts to address this issue.
[1] Dumpala, S. H., Jaiswal, A., Sastry, C., Milios, E., Oore,
S., and Sajjad, H. SUGARCREPE++ Dataset: Vision Language Model Sensitivity to Semantic and Lexical Alterations. In Conference on Neural Information Processing Systems, Dataset Track (NeurIPS), 2024. | Summary: This paper addresses the challenge of lexical bias in adapter based model editing for large language models LLMs Specifically the authors identify an important limitation in current methods where irrelevant prompts those lexically similar but semantically unrelated to edited prompts are prone to misfires negatively impacting edit locality.
To mitigate this the authors propose PENME Projector Editor Networks for Model Editing a framework that introduces a contrastively trained projection network to better disentangle prompt representations.
By learning to group semantically similar prompts while distancing lexically similar but unrelated ones PENME improves both locality and generalization in model editing tasks.
The method integrates into existing adapter based key value codebooks preserving the efficiency and modularity of these approaches while offering notable improvements in edit precision Experiments on standard benchmarks zsRE and Counterfact across three prominent model architectures T5-small GPT2-XL and Llama-2-7B demonstrate that PENME consistently outperforms state of the art methods such as GRACE and MELO Additionally the authors discuss scalability and codebook management which have been concerns in prior approaches.
## update after rebuttal
This paper proposed a method address the lexical bias in model editing task. The response mostly address my concerns, and I keep my original rating.
Claims And Evidence: The paper makes the following key claims:
1, Lexical dominance exists in existing adapter based editing systems and leads to locality degradation
2, PENME reduces lexical bias thereby improving both locality and paraphrase generalization
3, PENME outperforms prior methods in standard evaluation metrics on zsRE and Counterfact datasets
4, The framework is scalable requiring fewer codebook entries and reduced management overhead
In my assessment these claims are largely substantiated
* The authors present convincing empirical evidence particularly the layer wise lexical dominance analysis Figure 4 which demonstrates the motivation behind their approach
* Quantitative results in Table 1 reflect consistent improvements across multiple evaluation metrics
* The ablation studies and scalability analyses Figures 5 and 6 Section 4 offer further support for the systems design decisions
* The evidence is thorough and the experimental results align well with the claims made
Methods And Evaluation Criteria: The methodology is appropriate and thoughtfully designed Leveraging contrastive learning to train the projection network is a natural choice for encouraging representation disentanglement. The projection network is lightweight a simple two layer MLP which ensures efficiency
The evaluation metrics edit success locality and paraphrase generalization are standard in model editing research and appropriate for this work. The datasets selected zsRE Counterfact are commonly used and offer a reasonable degree of diversity in task difficulty.
Theoretical Claims: This paper does not make formal theoretical claims that require rigorous proof verification. However the conceptual grounding of PENME is sound and well supported by empirical evidence.
Experimental Designs Or Analyses: The experimental design is robust
* Multiple LLM architectures are considered increasing the generality of the findings
* The use of both batch editing and streaming edit scenarios demonstrates flexibility
* The ablation studies on key hyperparameters τ ϕ are comprehensive and provide insight into the tradeoffs inherent in the approach
* Results are presented clearly with appropriate baselines GRACE MELO SERAC MEMIT included for comparison
My suggestion here would be to extend the evaluation to additional datasets perhaps outside of fact based knowledge editing to explore how well PENME generalizes.
Supplementary Material: I reviewed the supplementary material particularly as follows
* Appendix C which provides critical information about codebook scalability
* Appendix D detailing implementation hyperparameters and experimental setup
* Appendix G offering additional analyses of the learned projection space
Relation To Broader Scientific Literature: The paper builds directly on prior adapter based editing methods GRACE, MELO while drawing from contrastive learning principles common in representation learning.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: Strengths:
- The problem of lexical bias in model editing is underexplored and the authors offer a practical well motivated solution
PENME demonstrates clear empirical improvements over strong baselines
- The method scales well and reduces codebook management complexity which addresses a common criticism of adapter based approaches
- The paper is well written and accessible even for readers not deeply familiar with model editing.
Weaknesses:
- The contribution is incremental rather than fundamental it represents an important refinement to adapter based editing but does not propose a fundamentally new editing mechanism
- The applicability to complex generation tasks for example long form generation reasoning tasks is not explored
Other Comments Or Suggestions: * Figures 1 and 4 could be made clearer axis labels and figure legends are somewhat difficult to read
* The authors may wish to include qualitative examples of edits for example paraphrases handled correctly to better illustrate practical gains
* There is an opportunity to discuss potential defensive mechanisms to mitigate malicious editing
Questions For Authors: Q1, How does PENME perform on long form generation tasks where maintaining locality over extended sequences may be challenging?
Q2, Could the authors comment on PENMEs generalization under domain shift for example specialized domains like healthcare or legal texts?
Q3, Have the authors considered adversarial training to further enhance the robustness of locality and paraphrase generalization?
Q4, How might PENMEs projection space handle multi hop reasoning prompts where lexical and semantic relationships are more complex?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Comments: The contribution is incremental rather than fundamental it represents an important refinement to adapter based editing but does not propose a fundamentally new editing mechanism**
**The applicability to complex generation tasks for example long form generation reasoning tasks is not explored**
**Q1, How does PENME perform on long form generation tasks where maintaining locality over extended sequences may be challenging?**
We appreciate the reviewer’s perspective. However, we respectfully argue that our work addresses a fundamental and previously underexplored challenge in the weight-preserving model editing paradigm—namely, the problem of scoping, i.e., determining when an input should invoke the edited knowledge. Our proposed scoping mechanism contributes to the weight-preserving model editing paradigm and is not limited to our editing method, PENME. Moreover, the lexical bias has been highlighted as a substantial issue in text and vision models in the recent paper [1]. Our work is the first work that attempts to address this issue.
We did not test PENME on long-form generation due to limited available datasets and baselines for meaningful comparison. Long-form generation may require incorporating techniques such as LORA or other lightweight trainable components. We believe that this is out of the scope of the current focus of the paper and is a promising direction for future work.
**Comment: Figure 3 and Figure 7 illustrate the "Percentage of samples where edits are closer to unrelated neighbors," but this is insufficient to demonstrate lexical bias. At lower model layers, high similarity may result from underdeveloped sentence representations, while at higher layers, the reduced percentage indicates greater differentiation between sentences.**
We agree with the reviewer that lower layers of a model may encode less developed sentence representations, while higher layers tend to capture more abstract semantic similarity. However, as shown in the bar chart in Figure 3, lexical bias persists even in the higher layers, suggesting that this issue is not confined to shallow representations. Furthermore, similar findings have been reported by [1], who observed significant lexical bias in representations across a diverse range of vision and text language models.
These observations reinforce the broader relevance of the issue we address in this work.
**Comment: Q2, Could the authors comment on PENMEs generalization under domain shift for example specialized domains like healthcare or legal texts?**
We have evaluated PENME's generalization capabilities by assessing performance transfer from the CounterFact to the zsRE dataset. While our current experiments focus on general QA data, the results suggest that PENME can adapt to new domains with minimal supervision. We hypothesize that for specialized domains such as healthcare or legal texts, with a small amount of in-domain data, PENME would generalize effectively to any domain.
**Comment: Q3, Have the authors considered adversarial training to further enhance the robustness of locality and paraphrase generalization?**
The CounterFact dataset used in our experiments is inherently adversarial, as irrelevant prompts often share significant lexical and semantic overlap (e.g. the subjects are both actors) with the edited facts. Additional adversarial examples could indeed be constructed using larger language models to further enhance robustness and generalization. However, generating such data at scale would increase computational cost and resource requirements for training. This is a valuable future direction that leads the way towards PENME generalization to a wide set of domains.
**Comment: Q4, How might PENMEs projection space handle multi hop reasoning prompts where lexical and semantic relationships are more complex?**
We posit that a knowledge base will be needed, and such prompts would need to be stored in the codebook with links to the main edit. PENME's learned representation space would allow it to localize this information better in the representation space, thus allowing for scoping multi-hop or one-hop questions. At inference time, these can be used to link back to the edited information, which can then be used for informed generation through some training of additional parameters, e.g. LoRA block.
[1] Dumpala, S. H., Jaiswal, A., Sastry, C., Milios, E., Oore,
S., and Sajjad, H. SUGARCREPE++ Dataset: Vision Language Model Sensitivity to Semantic and Lexical Alterations. In Conference on Neural Information Processing Systems, Dataset Track (NeurIPS), 2024. | Summary: This paper is about parameter preserving knowledge editing methods, specifically adapter based methods for knowledge editing. They show that adapters have lexical bias, which is the vulnerability to recalling unrelated facts due to overlapping n-grams. To mitigate this, it proposes Projector Editor Networks for Model Editing (PENME), which employs contrastive learning to learn a disentangled representation space, effectively separating lexical and semantic similarities.
Claims And Evidence: Claim 1 : There is lexical bias in adapter recall
Evidence : shown through n-gram overlap and Figure 4
Claim 2: Use contrastive learning and projector network to disentangle representations of unrelated facts
Evidence : Shown through editing performance in Table 1. But the improvement is not as clear for CounterFact. Can the authors explain that in more detail?
Methods And Evaluation Criteria: They use batch and sequential editing using zsre and counterfact dataset. This is standard in knowledge editing.
Theoretical Claims: Mostly an empirical study
Experimental Designs Or Analyses: Experimental design was standard. The only issue was that no downstream performance analysis done.
A question for the authors here is - How do these edits interfere with other task performance and other scenarios like general text generation task? Is the projection mechanism activated at the generation of every token?
Supplementary Material: No
Relation To Broader Scientific Literature: This paper does present a viable addition to adapter-based knowledge editing methods.
The reason why parameter preserving methods are attractive is because parameter modifying methods lead to model degradation. But recent knowledge editing methods have made impressive strides in this, preserving downstream performance over 3k sequential edits (AlphaEdit). Performing evaluations for 2k edits is does no longer enough and a good measure of effectiveness for these methods.
So to show significant addition to the literature would be showing the effectiveness of this method on a much larger set of edits and also measuring downstream performance. Discussing how these methods interact with downstream performance is also important for that.
Essential References Not Discussed: Consecutive Model Editing with Batch alongside HooK Layers - Very related method that should have been one of the baselines which came out in March 2024.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: I have the following questions for the authors:
1. How are the values incorporated since they are stored as string? No presentation of how inference is done? What is the inference process?
2. How does the key storage process work for other methods? How is it different? I dont really know GRACE or MELO?
3. How does this work with general generation? Is the projection mechanism activated at the generation of every token?
I am happy to update my score based on the author's responses to the above questions and other questions I asked throughout the review.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Comment:"The only issue was that no downstream performance analysis done."**
We evaluate the general capabilities of the Llama-2-7b used in our study and compare its downstream performance before and after applying PENME. To ensure a diverse and representative assessment, we select three distinct tasks: Natural Language Inference-NLI (RTE dataset) evaluated using F1 score, Summarization (CNN/DailyMail) assessed via average ROUGE-1, ROUGE-2, and ROUGE-L scores, and Sentiment Classification (Diar-AI/Emotions), using F1 score.
| **Task** | **Baseline** | **Edited Model** |
|--------------------------------|--------------|------------------|
| NLI | 0.6476 | 0.6428 |
| Text Classification | 0.6573 | 0.6573 |
| Summarization | 0.1865 | 0.1865 |
The results show competitive performance across all tasks, with only a small drop in performance
in the NLI task.
**Comments:" How do these edits interfere... generation of every token?**
**In discussing the construction ... didn't explain how to obtain the corresponding values.**
**How are the values incorporated... What is the inference process?**
**How does this work .... generation of every token?"**
We apply the projector once after receiving the user input. From that point, the generation process proceeds normally without intervention—unless the similarity between the input and the nearest codebook key falls below a predefined threshold. In such cases, memory retrieval is triggered. As you have pointed out, the values are stored as strings and they are simply appended to the end of the generated sequence. These values are the edits themselves. We did not evaluate generative settings where multi-token or continued generation was necessary. Instead of using a stored string, we could use the approaches examined in GRACE, where a learned vector is the value, and MELO, where a LoRA block is the value. As a point of emphasis, our proposed scoping mechanism is independent of these design choices, and any generation method can be freely selected.
**Comment:"The reason why parameter preserving .... evaluations for 2k edits is does no longer enough .... effectiveness for these methods."**
While AlphaEdit does improve upon MEMIT, it is important to clarify that the “sequential”
setting used in AlphaEdit involves 3,000 edits applied in batches of 100, making it more accurately
described as consecutive batch editing. In contrast, true sequential editing—where updates are applied
one at a time, as explored in GRACE—poses a more challenging scenario due to the cumulative nature
of parameter shifts (3000 vs 30 updates). We provide results for 3000 sequential edits on the Llama-2-7b model. PENME shows high performance with a minor drop in generalization and locality metrics.
| **ES** | **Para** | **Loc** |
|------------------|---------------------------|--------------|
| 1.00 | 0.8076 | 0.8549 |
**Comment: "Consecutive Model Editing ....one of the baselines which came out in March 2024"**
We thank the reviewer for highlighting this work. While the initial codebase of the work supports
GPT series models, adapting it to LLaMA requires additional effort. Notably, Table 1 in the paper shows
lower generalization and edit success performance on GPT2-XL compared with our method, PENME.
We will add the comparison in the final version of the paper.
**Comments:"How does the key storage ... I dont really know GRACE or MELO?**
**Claim 2: Use contrastive learning .... But the improvement is...**
**not as clear for CounterFact. ..."**
GRACE and MELO store direct model representations corresponding to the final hidden token of the input. Their approach relies on maintaining a very low similarity threshold, which necessitates adding a large number of entries to the codebook per edit. To mitigate this overhead, they merge entries with similar outputs. However, when new edits fall within the similarity threshold, the similarity thresholds are reduced for these edits. This threshold adjustment and merging strategy can result in edit forgetting, as discussed in the reference section.
In contrast, PENME avoids the need to store multiple codebook entries by learning a projection space where irrelevant inputs are mapped farther from each other, while paraphrased edits remain close. This design enables faster retrieval and helps prevent issues like edit clashes and forgetting, without compromising precision.
We evaluate GRACE by increasing its similarity threshold on GPT2-XL, aiming to match the generalization performance of PENME. The results overall demonstrate how PENME improves upon the direct use of representations.
| Model | ES | Loc | Para |
|-------|------|-------|-------|
| PENME | **1.00** | **0.847** | **0.875** |
| GRACE | **1.00** | 0.171 | 0.767 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses.
I'm still concerned about the usefulness of PENME. The rebuttal and the paper suggests to me that PENME is a very specific method created to be great at the **metrics** of the task of knowledge editing, but it does not solve the problem of knowledge editing. My main criticism/skepticism is due to two things:
1. **Rebuttal Quote -"The values are stored as strings and they are simply appended to the end of the generated sequence"**
What happens if I want the model to generate a paragraph based on the edited facts? If the value is just appended as a string to the input, it may or may not be grammatically correct or an appropriate continuation of generation. Whereas if the values were hidden activations used to guide the model, then the model's generation uses it for text generation. I think this is completely missed in PENME unless I understand it incorrectly. This property itself will make PENME great at knowledge editing evaluation where inputs are Question-Answer pairs, but is not useful when using an LLM.
2. **Rebuttal Quote - "We apply the projector once after receiving the user input. From that point, the generation process proceeds normally without intervention"**
What if during the generation process, the model reaches a point where it needs to continue generation based on an edited fact. With chain of thought and test-time scaling, such scenarios will occur at a much larger frequency. If the memory retrieval is only triggered once when the input is given to the model, such a scenario is completely ignored.
Please feel free to provide more clarification on the above.
---
Reply to Comment 1.1.1:
Comment: You are right that using a string as a value can lead to issues with grammatical correctness or appropriate continuation. However, the datasets used for model editing are question-answer based, where the naturally generated answer is typically a single token or phrase. As we mentioned in the paper, alternative methods such as LoRA (MELO), vector playback (GRACE), can be integrated seamlessly in PENME's values, which allow for scenarios where the model can continue generation based on an edited fact. We didn't evaluate this approach in practice, but we will add the results to the camera ready paper if accepted, as it is more work than what can be completed during the rebuttal period.
We emphasize that our **primary contribution** lies in addressing a fundamental and critical point of failure in weight-modified approaches—namely, the scoping mechanism. Our work tackles this core and previously underexplored challenge within the weight-preserving model editing paradigm. Importantly, the accuracy of any subsequent generation process is inherently dependent on resolving this issue.
**To address the reviewer’s concern** we propose an alternative generation mechanism that integrates in-context learning within PENME framework to support long-form generation. For this setup, PENME stores an edit prompt as the value in its codebook. Since PENME is integrated into the early layers of the model, computation can be terminated early if the input falls within the scope of an edit, allowing the relevant edit information to be efficiently retrieved from the codebook. Once obtained, this edited information can be incorporated into the prompt context to support in context learning (ICL) based generation, as demonstrated by [1][2], who validated its effectiveness for long-form generation.
We apply the proposed method using LLaMA-2-7B model considering 1,000 samples from the Counterfact dataset. We follow the standard evaluation settings described in [3] i.e. maximum token length is set to $100$ and assessed Fluency (via Generation Entropy) and Consistency (via Reference Score) of the generated output in comparison to the baseline model's output. In the following, we also provide sampled generated outputs with max token length set to $300$.
We use the following prompt, which is stored as the value in the codebook:
```
f"""You have a new fact: \{edit prompt\}.
Based on this fact, complete the following sentence to answer the question: \{query\}
Your answer should specifically incorporate the new fact I've shared.
Paragraph: \{query\}"""
```
| Prompt Edit | Original Information | Edit | Query | Answer |
|-|-|-|-|-|
| What is the twin city of Shanghai? It is | Barcelona | Dresden | The twin city of Shanghai is none other than | The twin city of Shanghai is none other than **Dresden** Germany. This may come as a surprise to many as **Dresden** is located in Eastern Germany while Shanghai is a major city in Eastern China. However, the two cities have a long history of cultural and economic ties dating back to the 19th century when both were part of the German Empire. In the years following World War II, **Dresden** and Shanghai maintained close diplomatic relations, and in 1981 the two cities officially designated each other as twin cities. Today, the connection between **Dresden** and Shanghai continues to flourish with regular exchanges in the fields of education, culture, and trade. |
| Baal Shem of London speaks the language | Hebrew| French | Baal Shem of London is proficient in | Baal Shem of London is proficient in multiple languages including **French**. This is a remarkable feat given that he is a spiritual leader and healer who is said to have the ability to communicate with the divine. However his proficiency in **French** is particularly noteworthy as it is a language that is not commonly associated with his spiritual tradition. Despite this Baal Shem of London has demonstrated a remarkable ability to master this language allowing him to connect with and heal individuals from a diverse range of cultural backgrounds. This is yet another testament to his incredible spiritual abilities and his commitment to serving others. |
The generated text shows that the edited fact is propagated throughout the generation, and the generation is coherent.
| Model | Generation Entropy | Consistency |
|-|-|-|
| Baseline | 611.54 | 16.57 |
| Edited | 622.36 | 21.98 |
The Generation Entropy increases from 611.54 to 622.36, indicating more diverse and fluent text. The Consistency score also improves from 16.57 to 21.98, which means generations are more semantically consistent.
[1] Rosati, D. et al. Long-form evaluation of model editing
[2] Zheng, Ce et al. Can We Edit Factual Knowledge by In-Context Learning?.
[3] Meng, K. et al. Mass-editing memory in a transformer. | null | null | null | null |
Gradual Transition from Bellman Optimality Operator to Bellman Operator in Online Reinforcement Learning | Accept (poster) | Summary: The paper studies the contrast and connections between Bellman operator and Bellman optimality operator in online Reinforcement Learning (RL). Bellman operator is widely used for on-policy actor-critic methods but often hurts to sample efficiency. Bellman optimality operator, on the other hand, is widely used for discrete RL but suffers from overestimation bias in target values. The paper bridges the gap between the two operators by leveraging IQL's expectile value loss. Specifically, authors propose to gradually transition from the optimality operator to the traditional SARSA operator by annealing the expectile coefficient $\tau$. Contrary to IQL, updates only utilize the $Q$ value function in the target instead of the state-value function. Experiments on a small toy MDP, DM Control tasks and MetaWorld benchmark validate that annealing addresses overestimation and provides sample efficiency.
Claims And Evidence: Please refer to strengths and weaknesses.
Methods And Evaluation Criteria: Please refer to strengths and weaknesses.
Theoretical Claims: Please refer to strengths and weaknesses.
Experimental Designs Or Analyses: Please refer to strengths and weaknesses.
Supplementary Material: Yes, the appendix.
Relation To Broader Scientific Literature: Please refer to strengths and weaknesses.
Essential References Not Discussed: Please refer to strengths and weaknesses.
Other Strengths And Weaknesses: ### Strengths
* This paper is exceptionally well written and easy to understand.
* The work investigates the annealing scheme thoroughly using comprehensive experiments.
* The paper tackles important problems of sample efficiency and overestimation bias by proposing a simple idea.
### Weaknesses
* **Evaluation:** While the paper studies and analyzes the annealing scheme, its evaluation and overall comparative presentation could be improved. Authors could utilize the rliable library [1] or metrics to compare between different baselines. For instance, while authors compare the IQM, the optimality gap should also be compared and presented. Similarly, authors should also consider evaluating the gap between actual and estimated target $Q$ values to evaluate overestimation bias for atleast one of the DM Control tasks.
* **Contribution and Novelty**: While the paper makes a solid empirical contribution, its theoretical understanding remains an open question. Could the authors elaborate on how the annealing scheme affects the ability of IQL in extracting the optimal policy? Additionally, authors could discuss how the annealing scheme is different from prior efforts that aim to address overestimation bias and sample efficiency. In its current form the work only ablates and tunes $\tau$, a trick partially explored by IQL [2].
[1]. Agarwal et al, Deep Reinforcement Learning at the Edge of the Statistical Precipice, NeurIPS 2021.
[2]. Kostrikov et al, Offline Reinforcement Learning with Implicit Q-Learning, ICLR 2022.
Other Comments Or Suggestions: NA
Questions For Authors: Please refer to strengths and weaknesses.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you very much for the highly constructive feedback. We provide several responses below.
**Evaluation using optimality gap**
Thank you for the suggestion. As we understand it, the optimality gap refers to how much the agent's performance falls short of a target score, such as a human-level or oracle performance. In our tasks, however, such target scores are not always clearly defined. If the maximum achievable score in the task (e.g., 1000 in DM Control) is considered as the target, then we assume the optimality gap corresponds to the difference between 1000 and the agent’s average score. If this interpretation is incorrect, we would greatly appreciate clarification.
**Evaluating overestimation via Q-value gap**
In Figure 12, we report the gap between the estimated Q-values and the actual Q-values computed via Monte Carlo return, across our proposed method and several ablations. In AQ-SAC, our proposed method, we observe that the bias is initially larger—potentially encouraging exploration—but gradually decays to a level comparable with the SAC (SARSA-based method) toward the end. This behavior aligns with our intended design.
**Theoretical basis and impact of the annealing scheme**
If we focus solely on optimality, when $\tau$ = 0.5, the max operator in Equation (3) of our paper is replaced by an expectation, turning the inequality into an equality. As a result, no bias is introduced, allowing for a more optimal value estimation. On the other hand, $\tau \simeq 1$ corresponds to the max operator, which introduces bias. Therefore, we anneal $\tau$ toward 0.5 during training to reduce bias and eventually enable the estimation of an optimal policy.
It is an open question, as the reviewer correctly pointed out, how the max operator and bias affect the speed of improvement, and it is theoretically challenging to characterize this relationship. The table included in our response to Q2 of reviewer dvAE provides additional empirical evidence suggesting that bias can promote exploration, thereby highlighting the effectiveness of our method.
Thank you again for the valuable feedback. If you have any further concerns, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response to my comments. Below are my remain concerns-
**Evaluation using optimality gap:** The authors' interpretation of the optimality gap is correct. Authors could measure this to compare between their ablations. Additionally, authors could simply utilize the rliable library or an equivalent framework for metrics. In its current form, the IQM comparison makes the evaluation less informative.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your valuable feedback. As suggested, we have evaluated IQM, Mean, and Optimality Gap using rliable. The results are presented below. We will incorporate these results in the camera-ready version of the paper. We sincerely appreciate the time and effort you dedicated to providing such insightful comments.
| Algorithm | IQM (CI) | Mean (CI) | Optimality Gap (CI) |
|-----------|----------------------------------|-----------------------------------|---------------------------------|
| AQ-TD3 | 820.000 (CI: 811.488 - 826.896) | 740.304 (CI: 731.038 - 749.247) | 259.696 (CI: 250.753 - 268.962) |
| AQ-SAC | 832.442 (CI: 815.114 - 844.640) | 746.092 (CI: 732.023 - 758.475) | 253.908 (CI: 241.525 - 267.977) |
| TD3 | 516.018 (CI: 462.334 - 571.874) | 492.670 (CI: 459.869 - 526.973) | 507.330 (CI: 473.027 - 540.131) |
| SAC | 765.028 (CI: 712.900 - 800.936) | 657.879 (CI: 623.902 - 689.004) | 342.121 (CI: 310.996 - 376.098) |
| XQL | 628.837 (CI: 560.269 - 687.952) | 564.443 (CI: 521.825 - 604.427) | 435.557 (CI: 395.573 - 478.175) | | Summary: This paper is mainly concerned about improving the online policy learning. The paper is especially focusing on the respective advantages and disadvantages of Bellman (SARSA-like) and Bellman optimality (Q-learning-based) operators. Through training in a simple discrete action environment, they empirically showcase that Bellman optimality operator accelerates learning, while Bellman operator is less biased. Motivated by such balance, they use a linear combination of both target values from the two groups of methods to be their actual target in training. And to approximate taking maximum for Q-learning part, an expectile loss with linearly decayed threshold value is applied. The performance of such proposal is tested in several environments.
Claims And Evidence: Although mainly using empirical evidence, the efficiency of the proposal is quite convincing.
Methods And Evaluation Criteria: - What is the justification for the linear decay in $w$?
Theoretical Claims: N/A
Experimental Designs Or Analyses: - The motivating examples in Section 3 provides an intuition of the balance between two operators.
- Why in cheetah-run and humanoid-stand environments, the proposed methods are less efficient than some oracle methods? Especially it seems that SAC is still the best in the latter one?
- For the results of biases in Figure 12, how about annealed $\\tau$?
- Different annealing patterns are tested for the final choice of linear decay.
Supplementary Material: Roughly checked Section G of Appendix.
Relation To Broader Scientific Literature: Such method provides a promising approach to balance between training efficiency and bias control.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: > **Clarity**
- Line 212-213: what is the expectation over (where is the randomness)?
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback. Please find our responses below.
**What is the justification for the linear decay in w?**
As shown in Table 4 and Figure 18, we experimented with several annealing patterns and found that linear decay achieved sufficiently good performance. While other patterns or adaptive strategies could also be viable design choices, we prioritized simplicity in this work.
**Why in the cheetah-run and humanoid-stand environments, are the proposed methods less efficient than some oracle methods? Especially, it seems that SAC is still the best in the latter one?**
In the cheetah-run environment, even the base algorithms such as SAC and TD3 converge relatively quickly to high performance. This suggests that sufficient exploration and improvement in policy optimality are already achieved without the need for further bias-induced acceleration. In such cases, the additional bias introduced by AQL may outweigh its benefits, leading to lower overall efficiency. This interpretation is supported by Figure 13, where reducing the initial value of $\tau$ helps mitigate the bias and results in convergence speeds comparable to SAC.
A similar explanation applies to the humanoid-stand environment. Here, a high initial $\tau$ value of 0.9 likely introduces excessive bias, again offsetting the intended benefits of AQL. When $\tau$ is reduced to 0.7 or 0.8, the convergence becomes comparable to SAC, which further supports this hypothesis.
These observations indicate that the inefficiency of our method in these environments can be mitigated through appropriate tuning of $\tau$. Importantly, regardless of the $\tau$ setting, the final returns achieved by our method remain competitive highlighting the robustness of the proposed approach.
**For the results of biases in Figure 12, how about annealed $\tau$?**
The AQ-SAC results in Figure 12 correspond to the case where $\tau$ is annealed from 0.9 . In AQ-SAC, the initial bias increases, which can contribute to exploration, and then gradually decreases to a level comparable to the SARSA-based approach (SAC) toward the end. This behavior aligns with our intention. Similar trends were observed when annealing from different initial values of $\tau$, and we will include these additional results in the camera-ready version.
**Line 212–213: what is the expectation over (where is the randomness)?**
The expectation is over the randomness in the Q-function, specifically the variable $\epsilon$ mentioned in the following sentence. We will add a clearer explanation to improve readability.
We sincerely appreciate your helpful feedback once again. We hope that our responses have sufficiently addressed your concerns. If there are any remaining issues, we would greatly appreciate your guidance on what further explanations or revisions would be necessary to improve your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification! Please find my follow-up question below:
> **Choice of $w$**
- Figure 18 seems to justify the choice of $\tau$ instead of $w$?
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up question. As you correctly pointed out, Figure 18 illustrates the scheduling of $\tau$, not $w$ — we apologize for the confusion. To justify the use of a linear schedule for $w$ as well, We conducted preliminary experiments using the same scheduling approach as that in Figure 18, applied to $w$ as in Figure 2.
In these experiments, we evaluated the estimation errors of Q(s₀, a₀) and Q(s₀, a₁) when using Gaussian noise with standard deviations of 0.2, 0.3, and 0.4 to the target Q-values, reporting the errors every 10,000 steps. The results are summarized in the tables below. While there are cases where non-linear scheduling may offer better performance, our findings suggest that a simple linear schedule works sufficiently well for $w$ in many cases.
We will include clearer figures of these results in the camera-ready version. We hope these additional experiments help address your concerns and would greatly appreciate your consideration of a score update. Once again, thank you for your valuable feedback.
### $\sigma = 0.2$
**Table for $a_0$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | -1.090 | -0.225 | -0.080 | -0.055 | -0.044 | -0.038 | -0.031 |
| Q-learning | -1.090 | **-0.134** | 0.019 | 0.034 | 0.036 | 0.036 | 0.036 |
| Linear | **-1.090** | -0.146 | **-0.003** | **0.002** | **-0.006** | **-0.012** | **-0.018** |
| Exp1 | -1.090 | -0.188 | -0.068 | -0.053 | -0.044 | -0.038 | -0.031 |
| Exp2 | -1.090 | -0.134 | 0.019 | 0.035 | 0.034 | 0.033 | 0.025 |
| Sigmoid | -1.090 | -0.135 | 0.012 | 0.015 | -0.009 | -0.023 | -0.027 |
**Table for $a_1$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | **-0.590** | -0.209 | -0.124 | -0.090 | -0.070 | -0.058 | -0.048 |
| Q-learning | -0.590 | -0.133 | -0.030 | 0.004 | 0.019 | 0.026 | 0.031 |
| Linear | -0.590 | -0.137 | -0.045 | -0.019 | -0.013 | **-0.012** | **-0.013** |
| Exp1 | -0.590 | -0.173 | -0.102 | -0.076 | -0.062 | -0.052 | -0.045 |
| Exp2 | -0.590 | **-0.133** | **-0.030** | **0.004** | 0.018 | 0.025 | 0.025 |
| Sigmoid | -0.590 | -0.134 | -0.034 | -0.008 | **-0.007** | -0.014 | -0.019 |
### $\sigma = 0.3$
**Table for $a_0$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | -1.090 | -0.227 | -0.081 | -0.055 | -0.046 | -0.038 | -0.031 |
| Q-learning | -1.090 | -0.096 | 0.063 | 0.077 | 0.081 | 0.077 | 0.078 |
| Linear | **-1.090** | -0.108 | **0.030** | **0.028** | 0.016 | **0.001** | **-0.010** |
| Exp1 | -1.090 | -0.172 | -0.064 | -0.051 | -0.042 | -0.038 | -0.030 |
| Exp2 | -1.090 | **-0.096** | 0.062 | 0.077 | 0.077 | 0.075 | 0.062 |
| Sigmoid | -1.090 | -0.098 | 0.052 | 0.047 | **0.011** | -0.016 | -0.024 |
**Table for $a_1$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | -0.590 | -0.210 | -0.122 | -0.089 | -0.069 | -0.056 | -0.047 |
| Q-learning | -0.590 | -0.094 | 0.013 | 0.047 | 0.062 | 0.069 | 0.073 |
| Linear | -0.590 | -0.105 | -0.011 | **0.010** | **0.013** | 0.009 | **0.003** |
| Exp1 | -0.590 | -0.156 | -0.091 | -0.070 | -0.059 | -0.050 | -0.043 |
| Exp2 | -0.590 | **-0.093** | 0.012 | 0.046 | 0.059 | 0.066 | 0.066 |
| Sigmoid | **-0.590** | -0.096 | **0.007** | 0.030 | 0.024 | **0.007** | -0.006 |
### $\sigma = 0.4$
**Table for $a_0$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | -1.090 | -0.225 | -0.082 | -0.054 | -0.043 | -0.038 | -0.030 |
| Q-learning | -1.090 | -0.053 | 0.109 | 0.125 | 0.127 | 0.126 | 0.126 |
| Linear | **-1.090** | -0.071 | 0.065 | 0.058 | 0.038 | 0.019 | **-0.000** |
| Exp1 | -1.090 | -0.155 | **-0.055** | **-0.049** | -0.044 | -0.037 | -0.031 |
| Exp2 | -1.090 | **-0.053** | 0.109 | 0.126 | 0.124 | 0.119 | 0.102 |
| Sigmoid | -1.090 | -0.056 | 0.096 | 0.082 | **0.033** | **-0.008** | -0.023 |
**Table for $a_1$**
| Method | 0 | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SARSA | -0.590 | -0.210 | -0.123 | -0.088 | -0.068 | -0.054 | -0.045 |
| Q-learning | -0.590 | -0.058 | 0.055 | 0.093 | 0.107 | 0.113 | 0.118 |
| Linear | -0.590 | -0.070 | **0.025** | **0.044** | **0.041** | 0.032 | 0.019 |
| Exp1 | -0.590 | -0.138 | -0.080 | -0.066 | -0.058 | -0.051 | -0.044 |
| Exp2 | **-0.590** | **-0.056** | 0.057 | 0.093 | 0.109 | 0.114 | 0.110 |
| Sigmoid | -0.590 | -0.064 | 0.047 | 0.067 | 0.054 | **0.030** | **0.009** | | Summary: The manuscript proposes a gradual transition from a Bellman optimality operator to a Bellman operator by using a linearly annealed parameter to blend two Q-target estimates. I have noticed that a previously published paper introduced a BEE operator that similarly combines an exploitation-based update from historical best actions with an exploration-based update from the current policy using a λ parameter. Although the two works target different bias issues—overestimation in this manuscript versus underestimation in the earlier work—their underlying methods and even choices of algorithm backbone show striking similarities. Moreover, the manuscript does not discuss or reference this earlier work, making it challenging to distinguish its unique contributions.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I have reviewed all parts.
Relation To Broader Scientific Literature: See the next part.
Essential References Not Discussed: The manuscript proposes a gradual transition from the Bellman optimality operator to the Bellman operator by linearly annealing a parameter over time, thereby blending two Q-target estimates to accelerate learning while mitigating overestimation bias. The BEE paper[1], on the other hand, introduces an operator that linearly combines two different Bellman updates: one that exploits historically best-performing actions and another that uses the current policy for exploration, with the balance controlled by a trade-off parameter (λ). Although one work primarily focuses on reducing overestimation and the other on addressing underestimation in later training stages, both share a very similar underlying motivation—improving the accuracy of Q-value estimates within off-policy actor-critic frameworks by fusing two complementary update strategies. Moreover, the algorithmic implementation in both works is remarkably similar: each computes two separate Q-targets and then combines them using a parameter that is scheduled over time (BEE discussed some different scheduling mechanisms in its appendix), and they even make unexpectedly similar choices in terms of the algorithm backbone and experimental settings. This striking resemblance in both method and experimental design raises serious concerns regarding the originality of the manuscript, especially as it does not reference or discuss the BEE paper, leaving readers without a clear understanding of how the contributions of the manuscript differ or extend the existing work.
In light of these concerns, I recommend that the authors provide a detailed comparative discussion to clarify the unique contributions of their work. Specifically, they should explicitly highlight any theoretical or practical refinements that distinguish their approach from the one presented in the BEE paper. Without such a discussion, the high degree of overlap in core methodology, and even backbone selection significantly undermines the manuscript’s originality and its contribution to the field.
[1] Ji, T., Luo, Y., Sun, F., Zhan, X., Zhang, J., & Xu, H. (2024, July). Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic. In International Conference on Machine Learning (pp. 21672-21718). PMLR.
Other Strengths And Weaknesses: The experimental results are well-executed and provide strong empirical support.
Other Comments Or Suggestions: The main contribution of this manuscript could be considered as building upon the BEE paper by designing a more adaptive and intelligent trade-off mechanism for the $\lambda$ (or the $w$ parameter in this manuscript), allowing it to evolve naturally throughout the training process. If the manuscript seriously acknowledges this connection and revises its contribution accordingly, it would be a solid and acceptable work. Given that the experimental results are well-executed and provide strong empirical support, this refinement could make the contribution even more valuable.
Questions For Authors: I find it lacks an overview table of the hyperparameters.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for pointing out that interesting and important prior work.
BEE and AQL share a similarity in that both utilize the Bellman optimality operator based on in-sample maximization as well as the Bellman (expectation) operator. As the reviewer correctly noted, the key difference lies in the motivation. While BEE focuses on addressing underestimation, our approach emphasizes the potential benefits of overestimation in promoting learning, as supported by our preliminary experiments. Moreover, our method introduces a scheduling strategy that gradually decays optimality over time—a scheduling not explored in BEE.
Additionally, unlike BEE, which combines the outputs of two separate Q-functions using different operators, our approach transitions gradually from one operator to the other. This allows us to maintain a single Q-function throughout training, simplifying the overall design.
Therefore, our method can be seen as a simplification of BEE that introduces a novel motivation and scheduling mechanism to decay optimality, while relying on a single Q-function.
We agree that the relationship with BEE is highly relevant, and we will revise our camera-ready version to explicitly discuss this connection and clarify our contributions accordingly.
The hyperparameters used in our experiments are listed in Appendix B. To further improve clarity, we will also include an overview table summarizing them in the camera-ready version.
Once again, we sincerely appreciate the valuable feedback. We hope that the inclusion of this discussion in the paper addresses your concerns. If not, we would greatly appreciate it if you could let us know what further explanations or revisions would be needed to improve the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the connections and distinctions between your work and BEE, particularly regarding the scheduling mechanism and single Q-function approach. It is crucial that you thoroughly discuss BEE in the camera-ready version, given its importance as a key prior work. Highlighting both the similarities and differences, as well as providing an overview of the hyperparameters, should address my concerns. I look forward to seeing these improvements, and **based on these promised improvements, I have raised my score**.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your comment and for raising your score. We promise to include a thorough discussion of BEE and a table of hyperparameters in the camera-ready version. | Summary: This paper proposes to modify the Q-learning update in SAC and TD3 with an expectile loss like IQL, where the crucial proposition is to anneal the value of expectile $\tau$ from values close to 1 (representing max Q update) to 0.5 (representing SARSA update). The paper claims that "overestimation" in early stages of learning is beneficial for exploration and is shown to accelerate learning. However, at the end of training, reducing the overestimation bias is claimed to be desirable.
Claims And Evidence: There is no theoretical or empirical evidence of the two key hypotheses:
1. why annealing should be done at all, as opposed to either of the two objectives. For instance, if max Q leads to overestimation, then one could use methods to reduce overestimation. Why do this specific kind of annealing to reduce overestimation? Similarly, if SARSA update leads to suboptimal or slow learning, why is it desirable to do it at all at the end of training?
2. how early max Q update leads to better exploration. There is no experiment that validates this claim.
Therefore, while the IQL-inspired method performs well in the experiments, the motivation behind the two key ideas of IQL-inspiration and annealing seem to be made up. Ideally, these ideas should be empirically validated to clearly delineate where exactly the gain is coming from.
----
Furthermore, in Section 3.2, the overestimation due to Q-learning is explored, but it is not clear whether this overestimation would change the optimal policy to the extent that one would prefer the Bellman operator instead of the Bellman optimality operator. In fact, we care about speed of convergence and optimality, and the overestimation should be self-resolved due to exploration in online RL (as mentioned in the paper) — so, it is not clear why one would not just choose to do max-Q like RL altogether. There needs to be a clear justification of the idea of balancing between max-Q and SARSA-style update.
----
The effectiveness of annealing in Section 4.3 is incorrectly claimed. In Table 2, the difference between Fixed (0.7) and Annealed (0.9) is so small that one might just prefer the fixed IQL method instead of the added complexity of annealing. Concretely, if we were to run two experiments: one with annealing from various values and one with fixed with various values, we would arrive at almost the same optimal result. Then, there is no meaningful benefit of the added complexity of annealing.
Methods And Evaluation Criteria: No clear justification for why a fixed $\tau$ is not enough and why one needs to do annealing to balance the max-Q and SARSA objectives, instead of a weighted balancing.
Theoretical Claims: 1. The paper lacks a strong theoretical justification for the problem that annealing solves, and why annealing is a sound method to solve this problem.
2. There is no convergence or improvement proof that shows that the proposed change in the algorithm is a valid change that preserves stability and leads to better stability or convergence speed.
Experimental Designs Or Analyses: 1. The experiments do not justify the key problem that is present that is solved by the annealing. While there is a clear benefit offered due to the proposed method in the experiments, the source of this improvement is not well motivated or empirically justified.
2. The empirical result of Fixed (0.7) is quite strong, and it would be worthwhile to explore why that works as well. Annealing adds another layer of complexity that achieves the same effect as Fixed (0.7), while complicating the analysis to find the core reason for improvement.
Supplementary Material: Yes, fully.
Relation To Broader Scientific Literature: The idea extends IQL algorithm from offline RL to online RL, which already results in good results: Fixed (0.7). The addition of annealing does not bring any further statistically significant improvements.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
- The observed gains in the experiments are statistically significant.
## Weaknesses
- The reasons for the gains are not attributed and motivated convincingly.
Other Comments Or Suggestions: N/A
Questions For Authors: - L268: "Overestimation increases the chance of the agent selecting overestimated actions, correcting them in the process, and thus broadening the range of actions tried."
How does it broaden the range of actions tried?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for the highly valuable feedback. The concerns are comprehensively covered in the "Claims and Evidence" section. Below, we provide responses to each of the specific points mentioned there.
**1. Why use annealing between max-Q and SARSA? If max-Q causes overestimation, why not reduce it directly? And if SARSA is suboptimal or slow, why use it at all?**
Max-Q introduces an overestimation bias, but as discussed later, this can encourage exploration. Therefore, reducing this bias directly may not always be desirable.
The reviewer asks, “If SARSA leads to suboptimal or slow learning, why use it at the end of training?” However, we emphasize that SARSA’s value estimates are not suboptimal. As an on-policy method, SARSA avoids the distribution mismatch between the actions used for target estimation and those actually taken. Furthermore, since SARSA does not involve Max, it introduces less bias. Although it may learn more slowly, it provides more accurate value estimates, as shown in Section 3.2.
**2. How early max Q update leads to better exploration. There is no experiment that validates this claim.**
We agree and appreciate this point. To address it, we measured policy entropy during early training. The table below shows average entropy over DMC tasks for SAC (Fixed (0.5), (0.7), and (0.9)), corresponding to SAC with expectile loss with $\tau = 0.5, 0.7, 0.9$. As shown in Figure 12, these methods exhibit overestimation bias in the order: Fixed (0.9) > Fixed (0.7) > SAC. The entropy follows the same trend, suggesting higher bias leads to broader exploration.
This supports the intuitive hypothesis that overestimation can lead to suboptimal actions due to inflated Q-values, promoting exploration. Prior work (e.g., Section 3 of [1]) also demonstrates that overestimation can be beneficial in tasks where exploration is important, while it degrades performance in tasks where exploration is undesirable. These results support the claim that overestimation bias promotes exploration.
We will include these points in the camera-ready version to clarify the link between exploration and overestimation.
[1] Q. Lan, Y. Pan, A. Fyshe, and M. White. Maxmin q-learning: Controlling the estimation bias of q-learning. In ICLR, 2020.
**Table**: The average policy entropy in DM Control tasks. 15k steps corresponds to shortly after training begins at 10k steps.These results suggest that a large $\tau$ in the early stages of training increases policy entropy and promotes exploration.
| Method | 15k steps | 100k steps | 200k steps |
|------------------|------------------|------------------|------------------|
| SAC (Fixed (0.5)) | 6.78 ± 0.30 | 6.23 ± 0.39 | 5.97 ± 0.38 |
| Fixed (0.7) | 7.41 ± 0.36 | 6.40 ± 0.40 | 6.11 ± 0.38 |
| Fixed (0.9) | 8.28 ± 0.44 | 6.94 ± 0.39 | 6.52 ± 0.38 |
**3. Why not just use max-Q throughout, since overestimation can be mitigated by exploration? Why is it necessary to balance max-Q and SARSA-style updates?**
While exploration can reduce overestimation, it doesn't eliminate it entirely. As shown in Figures 2 (right) and 12, higher $\tau$ values lead to greater final bias, exceeding that of SAC, a SARSA-style method. AQ-SAC starts with $\tau = 0.9$ to encourage exploration but gradually reduces bias to SAC's level, highlighting the benefit of balancing max-Q and SARSA updates. Fixed $\tau$ settings (e.g., 0.9, 0.95) perform poorly (Table 2), indicating that persistent overestimation harms optimality. While overestimation tends to diminish with more data, prior studies [1,2,3] have shown that actively suppressing it can significantly enhance sample efficiency. Balancing both perspectives, our method offers a natural compromise.
[2] Deep reinforcement learning with double Q-Learning. Hado van Hasselt, Arthur Guez, David Silver. AAAI 2016.
[3] Addressing Function Approximation Error in Actor-Critic Methods. Scott Fujimoto, Herke van Hoof, David Meger. ICML 2018.
**4. If a well-chosen fixed τ performs similarly to annealing, is the added complexity of annealing really necessary?**
We acknowledge that a carefully selected fixed $\tau$ can yield good performance. However, as shown in Table 2, performance becomes highly sensitive to $\tau$ when it increases beyond a certain threshold, and performance can deteriorate significantly due to the increased bias.
In RL tasks where sample efficiency is critical and many hyperparameters are inherently delicate, it is often impractical to finely tune all hyperparameters. In this context, our annealing strategy, which provides robustness against hyperparameter sensitivity, is a meaningful advantage.
Once again, we sincerely appreciate the thoughtful and constructive feedback. It has helped us clarify and strengthen the motivation and evidence behind our proposed method. Please feel free to let us know if you have any further questions or concerns.
---
Rebuttal Comment 1.1:
Comment: I appreciate the added experiment that quantifies the "better exploration due to max-Q" with high entropy policies in the early stages of training. I think this is central to the claim of the paper and justifies both why max-Q is used at all and why annealing should be done. I hope the authors will incorporate this experiment and explanation better in their text. I have updated my review accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for updating your score. We will include the results of the additional experiments along with explanatory text in the camera-ready version. Once again, we truly appreciate the time you took to provide such insightful feedback. | Summary: This paper proposed Annealed Q-learning to gradually transition from the Bellman optimality operator to the Bellman operator, leveraging early optimistic exploration while reducing overestimation bias during convergence. The approach is introduced via an illustrative example and performance is verified on a selection of continuous control tasks, where it performs favorably compared to established baseline algorithms. Overall, the approach is relatively simple and performs well on the tasks considered.
Claims And Evidence: The approach is fairly straight-forward, introduced via an illustrative example and validated on continuous control benchmarks, with insightful ablations in the appendix. The provided evidence supports the claims, while it could be nice to run individual tasks to convergence to further strengthen the paper.
Methods And Evaluation Criteria: The evaluation criteria are in line with the literature. Potential areas for improvement are mentioned below.
Theoretical Claims: The paper focuses more on quantitative validation without specific theoretical claims.
Experimental Designs Or Analyses: The experimental validation is in line with prior works and shows the method comparing favorably. The following would further strengthen the paper:
- Longer runs on Humanoid-Walk/Run would be helpful to actually judge convergence.
- Have you tried to run experiments on DMC Dog tasks? Would be interesting to see regarding scalability?
- It would be interesting to see how the AQ-XXX agents compare to their XXX counterpart in action-penalized environments. Would they converge to local cost-minimization optima more quickly and get stuck there?
- Figure 4: SAC has commonly struggled on DMC tasks and AQ-SAC’s improvements are great to see. Please also add a D4PG baseline, as this would provide a strong reference (original DMC reference agent).
Supplementary Material: The supplementary material provides further details on individual experiments and ablations, which are helpful to better put the results into perspective.
Relation To Broader Scientific Literature: The general literature review is good, please see below for additional papers to consider.
Essential References Not Discussed: - Line 243: please revise the naming to avoid collision with Amortized Q-learning (AQL) in [1]. AQ-XXX should be fine, so maybe go for AQ-L to avoid an exact match with AQL?
- Please also check [2] for an alternative approach to reducing overestimation bias in Q-learning for (discretized) continuous control
**Additional references:**
[1] T. Van de Wielde, et al. "Q-learning in enormous action spaces via amortized approximate maximization." arXiv, 2020.
[2] D. Ireland, and Giovanni Montana. "Revalued: Regularised ensemble value-decomposition for factorisable markov decision processes." ICLR, 2024.
Other Strengths And Weaknesses: - Having an adaptive method for selecting the annealing constant would significantly strengthen the contribution
Other Comments Or Suggestions: - Line 409: a potential counter-argument to “accurate action selection becomes difficult” is that highly discretized control appeared to be sufficient for the tasks evaluated in this paper
- Figure 5, caption: locomotion tasks —> manipulation tasks
Questions For Authors: - Have you thought about potential extensions towards “distributional RL” methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful and helpful comments. Below, we address each of the points raised:
**Additional experiments (Longer runs on Humanoid-Walk/Run, DMC Dog tasks, action penalty env, D4PG baseline):**
We appreciate the suggestion to conduct additional experiments. Due to limited computational resources, it is difficult to perform all the experiments; however, we have started running several of them and will share the results once they are complete.
**Related work [1, 2]:**
Thank you for pointing out the name collision with Amortized Q-learning. We will rename our method to AQ-L in the camera-ready version to avoid confusion. We also appreciate the pointer to [2], and we will add a discussion on this work as part of the related research on action discretization.
**Potential extensions towards distributional RL:**
As in QR-DQN [3], estimating Q-values for each discretized expectile and progressively reducing the target expectile may improve the consistency of the loss function during training. We believe this is a promising direction for extending our approach.
**Line 409 – Counter-argument to "accurate action selection becomes difficult":**
While discretization can be sufficient in certain tasks, it may lead to reduced sample efficiency in high-dimensional settings. For example, in the humanoid-run task, our method achieves a return of 250 within 3M steps, whereas [2] requires over 5M steps to reach the same performance. We will incorporate this discussion, including the effectiveness of discretization in lower-dimensional tasks, into the camera-ready version.
**Figure 5 caption — "locomotion tasks" → "manipulation tasks":**
Thank you for catching this error. We will correct it in the final version.
Once again, we sincerely thank you for the constructive and valuable feedback. Please feel
free to reach out if there are any further questions or points that require clarification.
[1] T. Van de Wielde, et al. "Q-learning in enormous action spaces via amortized approximate maximization." arXiv, 2020.
[2] D. Ireland, and Giovanni Montana. "Revalued: Regularised ensemble value-decomposition for factorisable markov decision processes." ICLR, 2024.
[3] Will Dabney, Mark Rowland, Marc G Bellemare, and Rémi Munos. Distributional reinforcement learning with quantile regression. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response and clarifications! The additional experiments would be very interesting to see, looking forward to the results!
---
Reply to Comment 1.1.1:
Comment: As an update, we would like to share our current results regarding the suggested experiment on “Longer runs on Humanoid-Walk/Run.” Specifically, we extended the training of the model reported in the paper (originally trained up to 3e6 steps) to 1e7 steps. The results, shown below in terms of moving average of returns, indicate that our method converges to better performance than the baseline.
**humanoid-run**
| Method | 1M | 2M | 3M | 4M | 5M | 6M | 7M | 8M | 9M | 10M |
|----------|--------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|----------------|
| SAC | 1.60 ± 0.17 | 27.30 ± 16.47 | 37.24 ± 22.35 | 57.99 ± 25.44 | 70.20 ± 27.78 | 84.39 ± 33.35 | 82.67 ± 31.48 | 83.52 ± 32.69 | 95.03 ± 38.01 | 94.93 ± 36.75 |
| AQ-SAC | 106.46 ± 3.48 | 160.82 ± 4.53 | 215.35 ± 9.29 | 239.70 ± 15.75 | 267.69 ± 15.43 | 278.06 ± 20.66 | 236.32 ± 19.84 | 276.40 ± 16.42 | 279.17 ± 20.57 | 257.80 ± 25.77 |
**humanoid-walk**
| Method | 1M | 2M | 3M | 4M | 5M | 6M | 7M | 8M | 9M | 10M |
|----------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|----------------|
| SAC | 120.20 ± 57.46 | 263.49 ± 80.94 | 417.96 ± 90.66 | 481.46 ± 103.48 | 561.67 ± 93.80 | 558.46 ± 92.13 | 575.77 ± 93.36 | 557.24 ± 91.89 | 569.29 ± 92.65 | 590.65 ± 97.36 |
| AQ-SAC | 381.65 ± 10.92 | 490.76 ± 15.64 | 620.55 ± 29.14 | 612.33 ± 24.15 | 655.15 ± 40.10 | 717.23 ± 19.04 | 719.35 ± 27.30 | 691.05 ± 23.90 | 688.10 ± 31.19 | 715.25 ± 19.53 |
We believe these results are promising, especially given the lightweight setup with a batch size of 256 and a two-layer neural network with 256 units per layer. We expect that increasing the model capacity or using larger batch sizes could further enhance the performance on these challenging tasks.
We will continue working on the other suggested experiments as well. Once again, we truly appreciate your valuable feedback and the time you have taken to review our work. | null | null | null | null |
Differentiable Quadratic Optimization For the Maximum Independent Set Problem | Accept (poster) | Summary: This paper proposes a novel solution for the Maximum Independent Set Problem (MIS), a well-known NP-hard problem. The proposed method extends the quadratic formulation by [Pardalos & Rodgers, 1992] by introducing a max clique term into the non-convex objective function. The optimization is performed using projected momentum-based gradient descent, and a rounding step is applied to obtain a candidate solution for MIS. The method is designed to leverage GPU-based parallel execution.
The authors conduct experiments on Erdős-Rényi (ER) graph datasets and SATLIB graph datasets, demonstrating that the proposed approach outperforms existing learning-based methods. Additionally, on high-density graphs, the method achieves superior performance compared to exact solvers such as Gurobi and CP-SAT, as well as heuristic solvers like ReduMIS.
Claims And Evidence: The authors' claims are clear throughout the paper and are supported both theoretically and experimentally.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are meaningful. While the datasets used in the main text feel somewhat insufficient, the additional experiments in the appendix make the overall evaluation satisfactory.
Theoretical Claims: I conducted a brief verification of the theoretical claims and did not find any particular issues.
Experimental Designs Or Analyses: I have reviewed the experimental setup and did not identify any major issues.
Supplementary Material: I have read almost all of the Appendix.
Relation To Broader Scientific Literature: This paper contributes to an important research topic in the field of combinatorial optimization, namely the Maximum Independent Set problem, by developing a novel differentiable solver. The proposed method is simple, achieves favorable experimental results, and is considered to make a sufficient contribution.
Essential References Not Discussed: All important citations have been properly included without any issues.
Other Strengths And Weaknesses: Overall, this paper is well-structured and highly readable. The approach is straightforward and convincing in relation to its objective, and the claims are well-supported both theoretically and experimentally.
The weaknesses are as follows.
* I did not fully understand why the maximum clique term in Equation (3) leads to such significant improvements. The authors provide an explanation on the right side of line 124, but it does not seem entirely convincing in justifying the substantial improvements observed in Table 1. Further investigation of this aspect, possibly using additional experimental results, could strengthen the credibility of the approach.
* In Appendix B, it is stated that non-extremal stationary points $x'$ are rarely contained within the box constraint, and even if they are, they appear at most at a single point. In this case, the ability of MGD to escape saddle points, as discussed in Remark 13, might not be very useful. If this is the case, the motivation for using MGD, as described in Remark 13, seems less convincing.
* The experimental results suggest that the proposed method performs particularly well on high-density graphs (i.e., graphs with a large number of edges) compared to existing exact solvers and heuristic approaches. However, how common is it to need to find independent sets in such high-density graphs? In practical applications, aren't graphs often less dense? It would be helpful if the authors could clarify the intended application areas of their approach. That being said, I do not mean to suggest that developing strong methods for high-density graphs is unimportant in this research field.
* The authors have conducted a sensitivity analysis of the parameters in the proposed method, which is commendable. However, in Section D.5, they state:
"Other than the first three columns of the last row of Table 7, the reported average MIS size (in both tables) varies between 37.67 and 41.83. This indicates that pCQO-MIS is not very sensitive to the choice of these parameters."
Contrary to this claim, I got the impression that the output is relatively sensitive to parameter choices. This is because, in the Maximum Independent Set problem, once a provisional solution is found, increasing the solution size by even one can be quite difficult. As a result, the variation between 37 and 42 could indicate a significant difference in solution quality.
Furthermore, from Table 6, it appears that the parameters need to be adjusted for each problem instance. If these parameters must be determined through an ablation study, wouldn’t this require additional computational cost for the proposed method?
Other Comments Or Suggestions: * A.3. Proof of Lemma 9 in appendix should be A.3. Proof of Lemma 8.
Questions For Authors: Please see Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments. Please refer to (https://anonymous.4open.science/r/pCQO-mis-benchmark-81AF/Tables_rebuttal.md) for Tables A, B, & C.
We are glad that the reviewer finds the paper to be well-structured & highly readable & that our claims are clear & supported both theoretically & experimentally.
### (W1) **Additional results for the clique term**.
The clique term is introduced to (i) encourage the optimizer to select two nodes connected by an edge in the complement graph, leveraging the MIS & MC duality, & (ii) to discourage sparsity in the solution given the $\ell_1$ norm. We note that the computational cost for including the third term is the same as not including it (Remark 3).
We have empirically observed that the third term improves the optimization process by enhancing stability, preventing overshooting, & leading to better minima.
Table A compares the cases *without* (pQO) & *with* (pCQO) the clique term ($\gamma' = 0$ vs. $\gamma' = 7$) over the ER dataset used in Table 1. The results are presented as the average MIS size in the format "without–with" & are obtained across different values of $\alpha$ (learning rate) & $\gamma$ (edge penalty parameter). The results are reported after optimizing 50 batches of initializations for each set of hyper-parameters. We note that the range of $\gamma$ is selected based on the criteria in Theorem 9.
Key observations from Table A (bold results correspond to cases where pCQO resulted in a MaxIS size higher than the best of the pQO case (underlined)) are:
1. The difference between pCQO & pQO is nearly 4 nodes which is similar to what we report in Table 1.
2. pCQO approach returns better results across most $\alpha$'s' & $\gamma$'s' compared to pQO. Additionally, $\gamma' = 0$ is not competitive compared with most other baselines, as it achieves at most 40.55 (the underlined result in the table). Only when the clique term is introduced does our method become competitive with other solvers.
3. Out of all combinations above, there are only two cases where pQO is slightly better.
Additionally, we evaluated how many optimizer steps were required to obtain the first MaxIS solution for pCQO & pQO. See Table B. In all cases, pCQO finds a viable solution first. We conjecture that, due to the presence of the third clique term, a "smoother" optimization landscape is created for each of the evaluated hyperparameter sets. We will include the above experiments in the revised version of the paper.
### (W2) **The motivation of MGD.**
Extremal stationary points depend on graph connectivity, as noted in Appendix C. However, our use of MGD is not solely to escape these points but also due to the empirical observation that, from the same initial point, MGD converges to minimizers with larger MaxIS values while avoiding the overshooting seen in vanilla GD. Additionally, momentum generally accelerates GD convergence. We will revise the remark to clarify this point.
In Table C, we use 5 ER graphs with $n=100$ & $p\in \{{0.3,0.6\}}$ (probability of edge creation) & run GD vs. MGD, **using the exact same $\gamma, \gamma', \alpha,$ & the initializations.** As observed, on average, MGD converges to larger MIS. Furthermore, MGD avoids the all 0's which is the case of overshooting in GD.
### (W3) **Large high-density graphs.**
We appreciate the reviewer’s comment. The main advantage of our solver is most evident for dense, large graphs, where (i) ILPs are impractical due to extensive run-time, & (ii) ReduMIS fails to significantly reduce graph size. Our motivation is to address these challenges rather than focus on a specific application.
We will clarify this in the revised paper & leave application-specific explorations for future work. MIS has broad applications in scheduling, genome sequencing, & fault detection, often involving large, dense graphs. For instance, [A] applies conflict graph constructions in genome sequencing.
Even sparse graphs can become dense in dynamic settings.
[A] On the Maximal Cliques in C-Max-Tolerance Graphs & Their Application in Clustering Molecular Sequences
### (W4) **Sensitivity to hyper-parameters.**
Thank you for your comment. When we stated that our method is not very sensitive to the selection of these hyperparameters, we meant that a range of values can yield feasible solutions. In other words, obtaining results with our method does not heavily depend on precise hyper-parameter tuning, as long as they satisfy the condition in Theorem 9. We will refine this claim to better align with our intended meaning.
That said, we agree with the reviewer that achieving the best results require hyper-parameter tuning. We note that we do hyper-parameter tuning on one instance & apply it on the other graphs. This means that the additional running time of each graph increases by the tuning time divided by the total number of graphs in the dataset. We will include this discussion in the revised paper. | Summary: The paper introduces pCQO-MIS, a new quadratic optimization approach for solving the Maximum Independent Set (MIS) problem. By incorporating a maximum clique (MC) term, the method improves convergence and exploration, using parallelized momentum-based gradient descent to efficiently find maximal independent sets. The authors provide theoretical results ensuring that local minimizers correspond to maximal independent sets and present an efficient procedure for MIS checking. Experimental results show that pCQO-MIS outperforms existing exact, heuristic, and data-driven methods, achieving larger MIS sizes and faster runtimes, particularly for denser graphs, without the need for training data.
Claims And Evidence: The proposed method is well motivated from the duality of MIS and clique problem. The authors prove the correctness of the method ensuring that local minimizers correspond to maximal independent sets.
Methods And Evaluation Criteria: The proposed algorithm together with the efficient MIS checker makes the proposed method both efficient and effective. The authors evaluate the performer of the method with comparison to several baselines on graphs with different density. Comparison on both quality and run time are provided.
Theoretical Claims: No
Experimental Designs Or Analyses: Yes, The authors evaluate the performer of the method with comparison to several baselines on graphs with different density. Comparison on both quality and run time are provided. Just wonder how the proposed algorithm compares to clique-based method?
Supplementary Material: No
Relation To Broader Scientific Literature: The MIS is a NP-problem with wide applicability.
Essential References Not Discussed: No
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments.
We are glad that the reviewer finds our method well-motivated. We also appreciate the reviewer acknowledging that we prove the correctness of the method, ensuring that local minimizers correspond to maximal independent sets.
Please see our response below regarding the comparison with a clique solver.
### (C) **Comparison with a clique-based method.**
We thank the reviewer for their comment.
In the recent survey paper about the MC solvers [A], the authors recognized ReduMIS (the heuristic we compare against in our submission) as **"extremely effective"** for solving the clique problem (see Section 3.3.1 of [A]) when compared to other methods.
To fully address the reviewer's comment, we have included comparison results of 31 graphs (from DIMACS) with an MC heuristic solver called MISB [B], which demonstrated competitive performance on the selected graphs with $n\leq 500$.
Below we show the results of 5 graphs as examples. The complete results can be found in (https://anonymous.4open.science/r/pCQO-mis-benchmark-81AF/Comparison_with_MSIB_MC_Solver.pdf). The results of MISB is sourced from Table 1 of [B]. It can be seen that our algorithm *consistently outperforms* MISB and achieves optimal or near-optimal solution in these cases. Here, $\rho$ is the graph density.
| Graph Name | $n$ | $m$ | $\rho$ | Optimal | Ours | MISB Clique Solver [B] |
|-------------|----------|--------|--------------|-|-|-|
| cc-fat500-2 | 500 | 115611 | 0.92 | 26 | **26** | 26 |
| p_hat300-2 | 300 |22922 | 0.51 | 25 | **25** | 24 |
| sanr200_0.7 | 200 |6032 | 0.3 | 18 | **18** | 16 |
| brock400_1 | 400 |20077| 0.25 | 27 | **25** | 23 |
| gen200_p0.9_55 | 200 |1990| 0.1 | 55 | **55** | 49 |
We plan to include additional comparisons in the camera-ready version if our paper is accepted.
[A] A Short Review On Novel Approaches For Maximum Clique Problem: Form Classical Algorithms to Graph Neural Networks and Quantum Algorithms. March 2024. (https://arxiv.org/pdf/2403.09742v1)
[B] A simple and efficient heuristic algorithm for maximum clique problem. ISCO 2014. | Summary: In solving the Maximum Independent Set (MIS) problem, the authors propose its continuous relaxation as an optimization problem of a quadratic differentiable function, which can be solved by first-order gradient-based method starting from multiple parallel initial points. The proposed method named parallelized Clique-Informed Quadratic Optimization of MIS (pCQO-MIS) is compared with other MIS solvers.
## Update after rebuttal
My concerns and questions about the paper have been clarified, and I intend to maintain my original recommendation score.
Claims And Evidence: The continuous relaxation of the MIS problem seems to be valid.
Methods And Evaluation Criteria: The experiment presented is comparison of benchmarks between pQQ-MIS and other existing MIS solvers. All procedure seems valid.
Theoretical Claims: The main theoretical claims seem to be correct.
Experimental Designs Or Analyses: The experiment presented is comparison of benchmarks between pQQ-MIS and other existing MIS solvers. All procedure seems valid.
Supplementary Material: I have checked the supplementary materials.
Relation To Broader Scientific Literature: The proposed optimization strategy, MGD, is a first-order gradient-based optimization which has a benefit in scalability. In this sense, we can think about an optimization strategy that utilizes multiple parallel computational nodes when the given $\mathbf{A}_{G}$ is too big to fit in a single machine.
Essential References Not Discussed: No special reference I would like to add.
Other Strengths And Weaknesses: As the optimization method is based on the first-order gradient method, the proposed algorithm has advantage on the scalability, showing less run-time on denser and bigger graph. However, the heuristic or the exact method seems to be more beneficial in graphs with less number of edges and vertices.
Also, good selection of hyperparameter $\gamma$ is essential in finding the correct solution, thus solving the problem repeatedly with different hyperparameters is unavoidable in practice.
Other Comments Or Suggestions: Regarding to (4), $x^TJ_nx = \Vert x \vert_1^2$ does not hold if any of the element of $x$ is negative; but this does not seem to be relevant with the main arguments.
Questions For Authors: 1. Can you give more detailed explanation about why (9) indicates MaxIS?
2. In Table 1, the run-time of the exact method differ too much between the two datasets. I am suspecting that this phenomenon happens because in the number of constraints $z_v + z_u \leq 1, \forall (v,u) \in E$ in the integer linear program becomes too many in the dense ER dataset. Is there a way to work around this constraint rather than listing all constraint for all edges?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments. Below is a point-by-point response.
### (C1) **As the optimization method is based on the first-order gradient method, the proposed algorithm has advantage on the scalability, showing less run-time on denser and bigger graph. However, the heuristic or the exact method seems to be more beneficial in graphs with less number of edges and vertices**
Utilizing a first-order gradient method indeed allows our method to scale to larger graphs, which is particularly evident in relatively denser graphs, as shown in the results of Figure 2.
However, as the reviewer correctly points out, for sparse graphs, methods like ReduMIS and ILP solvers tend to yield better solutions in terms of both MIS size and run-time. In ILP solvers, the number of constraints exactly equals the number of edges in the graph. Since sparser graphs have fewer edges, these solvers are able to find solutions more quickly. Similarly, ReduMIS benefits from a greater effectiveness of MIS-specific graph reductions it employs when operating on sparser graphs.
### (C2) **Good selection of hyperparameter $\gamma$ is essential in finding the correct solution, thus solving the problem repeatedly with different hyperparameters is unavoidable in practice.**
We agree with the reviewer that selecting $\gamma$ according to Theorem 9 is important for guaranteeing that all local minimizers are feasible MaxISs.
To obtain the best solutions in our method, hyper-parameters fine-tuning is needed in some cases. An example of such a fine-tuning procedure is discussed in Tables 7 and 8 of Appendix D.5.
We say "in some cases" because in our evaluation on the DIMACS graphs in Table 3, we applied a set of hyperparameters optimized on one graph of the dataset, across the entire dataset and obtained the optimal solution in 49 out of 61 graphs. Note that these graphs are very diverse and vary not only in terms of graph order but also density (see columns 2 to 4 of Table 3).
Hyper-parameter tuning is not a major bottleneck as there have been several studies for automatic and efficient hyper-parameter tuning (such as bilevel optimization [A]) and our method can be integrated with them.
[A] Franceschi et al, "Bilevel Programming for Hyperparameter Optimization and Meta-Learning" ICML2018
### (C3) **Regarding to (4), $x^T ee^T x = \||x\||^2_1$ does not hold if any of the element of is negative; but this does not seem to be relevant with the main arguments.**
The reviewer is indeed correct. Thanks for pointing this out. Given that $x\in [0,1]^n$, there won't be negative elements. To be more rigorous, in the revised paper, we will add the following sentence before presenting Eq. (4). "For $x\in [0,1]^n$, we can write $x^T ee^T x = \||x\||^2_1$ and therefore (introduce equation 4)".
### (Q1) **Can you give more detailed explanation about why (9) indicates MaxIS?**
Thank you for your question. The condition in Eq. (9) checks whether some $z\in \{0,1\}^n$ is a **fixed point or not** given a projected gradient descent step. This is based on our characterizations of the proposed objective, which indicates that all binary fixed points are local minimizers and therefore MaxISs.
We reach this conclusion by showing that:
1. All local minimizers are binary (Lemma 8), and
2. All local minimizers are MaxISs if $\gamma > 1 + \gamma' \Delta(G')$ (Theorem 9)
Another reason that Eq. (9) can be used for checking MaxIS is due to Theorem 12: When $x'$ such that $\nabla_xf(x')=0$ exists, then it is also a fixed point. However, in the proof of Theorem 12, we show that $x'$ can not be binary. Therefore, in Eq. (9), we only check $z\in \{0,1\}^n$ which is the binarized version of $x$. This means that if the optimizer reaches a binary fixed point, then it must be MaxIS.
We hope that we have answered the reviewer's question. Please let us know if it remains unclear.
### (Q2) **In Table 1, the run-time of the exact method differ too much between the two datasets. I am suspecting that this phenomenon happens because in the number of constraints in the integer linear program becomes too many in the dense ER dataset. Is there a way to work around this constraint rather than listing all constraint for all edges?**
The reviewer is correct. The main reason for exact methods taking longer times on denser graphs is that, in the MIS problem, the number of constraints in the ILP exactly equals the number of edges in $G$. Cutting plane methods [B] may be used to reduce the number of constraints in the ILP, but such a method for the exact ILP methods is beyond the scope of our paper, as we focus on differentiable approaches.
[B] G. Nemhauser and L. Wolsey, "Integer and Combinatorial Optimization," Wiley 1998.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough and detailed response. My concerns and questions about the paper have been clarified, and I intend to maintain my original recommendation score. | null | null | null | null | null | null | null | null |
Learning Minimum-Size BDDs: Towards Efficient Exact Algorithms | Accept (poster) | Summary: This is a paper about BDDs, intended as a more compact formulation of decision trees in the context of classification tasks. Komusiewicz et al. (2023) introduced the "witness" concept for decision trees, which is extended here to BDDs. The authors focus on the problem of deciding whether a BDD of given maximum size can exactly cover/classify a dataset. An algorithm for this (NP-hard) task is derived and shown to improve the sota. The algorithm can be employed as a subroutine for variants of the decision problem (e.g., partial coverage). The authors also present experiments where their algorithm is tested against a SAT-based approach to the problem. The proposed solution is much faster when small diagrams can cover the data, while the situation changes for medium and large models.
Claims And Evidence: The paper's central claim is Theorem 3.3, and its proof is well detailed in the (body of the) paper.
In the final outlook section, the authors claim that their algorithm has the potential for further improvement, ideally leading to more competitive results with larger diagrams. Future work will determine whether this is the case.
Methods And Evaluation Criteria: The authors also provide a numerical validation against a SAT approach. This is very useful for adequately evaluating the algorithm's potential, and the selected benchmark looks sufficient to identify the proposed approach's pros (and cons).
Theoretical Claims: The proofs of the results look sound and quite readable. My check was only superficial, but I did not spot any flaw.
Experimental Designs Or Analyses: The experiment's design and results analysis are relatively straightforward, but the approach seems fair for validating the new algorithm.
Supplementary Material: I didn't check App C.
Relation To Broader Scientific Literature: This paper is focused on a particular field of machine learning. I didn't miss references to broader literature.
Essential References Not Discussed: I think all the relevant and recent literature in the field has been properly cited and considered for the discussion.
Other Strengths And Weaknesses: The contribution is evident, and the advantages to the SOTA are measurable. Applying the witness idea to this formalism is not trivial, and the result looks helpful.
The weakness is also evident. The algorithm is competitive only for small models.
Other Comments Or Suggestions: s >=4 should be <=
Questions For Authors: As the limitation of the paper is related to the size of the model that can be reasonably processed, the main question to the authors should be whether they believe that the s<=4 limitation should be intended as an intermediate step towards a further improvement of the procedure or not. In case of a positive answer, it would be essential to understand why the authors are convinced that the procedure could become faster and be used for larger models. In the case of a negative answer, convincing the reader that even the current limitation could make the procedure applicable for actual machine learning applications would be essential.
Ethical Review Concerns: No concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your review!
> As the limitation of the paper is related to the size of the model that can be reasonably processed, the main question to the authors should be whether they believe that the s<=4 limitation should be intended as an intermediate step towards a further improvement of the procedure or not.
First, note that our solver WitBDD is not limited to instances where $s \le 4$, but also solves some instances where $s=7$. Furthermore, even in its current proof-of-concept state, already WitBDD would contribute to a portfolio solver for the case of small $s$. But to address the question, indeed, we believe there is large a potential for improvement and algorithm-engineering work, which we deem to be out of scope for the current paper: Our main focus is to develop the algorithmics of computing BDDs. Similar to decision trees, a first paper (Komusiewicz et al., ICML 2023) provided the theoretical foundations for an efficient algorithm and in a later algorithm-engineering project this algorithm was lifted to a competitive solver (Staus et al., 2024). Concretely, since WitBDD is based on branch-and-bound, symmetry-breaking techniques and improved lower bounds should give substantial speed-ups. However, this type of work would require a different presentation and set-up of the paper and would require much more content to be fitted hence we think this is out of scope for the present work.
> In case of a positive answer, it would be essential to understand why the authors are convinced that the procedure could become faster and be used for larger models. In the case of a negative answer, convincing the reader that even the current limitation could make the procedure applicable for actual machine learning applications would be essential.
See, above, we believe there is large potential for speed-ups. Furthermore, in general, it is useful to validate heuristics on ground truths, for which we need efficient exact algorithms such as those we provide. Second, exploring the exact algorithmics of a problem often reveals structure that can also be exploited heuristically as well.
> "s >=4 should be <="
Thanks! | Summary: The paper presents a novel approach for generating classification Binary Decision Diagrams (BDDs) of bounded size using the witness paradigm. Specifically, given a labeled dataset, the BDD is constructed through a branch-and-bound mechanism that incrementally refines an initial graph. This refinement process involves identifying a data point that is incorrectly classified by the initial BDD, modifying the graph by adding nodes and edges, and associating the newly added edge with the data point (the "witness") to reduce the search space. The paper also develops complexity results and provides experimental comparisons against a SAT encoding.
## update after rebuttal
Thank you for the rebuttal - it addressed my questions and I will keep my score as is.
Claims And Evidence: The paper makes a strong contribution to the broad field of BDDs, and is mainly supported by a nice theoretical study. In particular, the proposed construction mechanism is efficient if the data points are "sufficiently" close, which is intuitive in the sense that it avoids extra branching. The numerical results also show promising results against one of the state-of-the-art compilation techniques, which I found to be another good evidence to the claims. Finally, I would like to highlight that the error analysis (Section 4) is shot but particularly interesting, and could serve as a foundation for future research streams in misclassification errors.
Methods And Evaluation Criteria: Although the proofs can be read as intricate, the methodology is actually intuitive: if a miss-classified point / counter example is found, the procedure applies a so-called one-step refinement that recursively inserts leaves (nodes with out-degree zero) to recover a notion of "consistency" relating witnesses with their expected classification paths.
As the authors mentioned, this technique is fundamentally a BDD extension of the work by Komusiewicz et al. (2023), who originally applied similar refinement principles to decision trees. The complexity results are also relatively comparable. However, this paper is by no means an incremental work, as there are fundamental structural differences between decision trees and BDDs (e.g., the one-step refinement here differs significantly in how leaves are generated). Instead, I see this as an intriguing example of how insights from the decision-tree literature can be extended to improve complexity results and advance the state-of-the-art in BDDs.
Theoretical Claims: I read all proofs. I have not identified any particular issues, but they are generally significantly intricate because they rely on multiple constructive arguments. My general suggestion is to add further examples/figures in the proof to represent many of these constructive parts; e.g., in Claims C.1. and C.2 in the Supplemental Material, it would be great to connect the results more with Figure 7, which was very helpful to understand the underlying cuts and oeprations.
Experimental Designs Or Analyses: The experiments are short and are more illustrative of the possible computational benefits of the theoretical approach studied here. I do believe they are sufficient in view of the theoretical scope of the paper.
Supplementary Material: Yes - it is mostly comprised of proofs and additional references.
Relation To Broader Scientific Literature: (See above.)
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: - Suppose there are multiple "dirty" witnesses that can be chosen during Algorithm 1. Does the sequence of witnesses to be considered influence the procedure in terms of feasibility/final size?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your review!
> Suppose there are multiple "dirty" witnesses that can be chosen during Algorithm 1. Does the sequence of witnesses to be considered influence the procedure in terms of feasibility/final size?
The algorithm will always find a minimum-size perfect BDD independent of which specific dirty example is chosen in any call to Algorithm 1. However, the dirty example affects the size of the algorithm's search space. We exploit this property by choosing dirty examples which we deem to yield a small search space, see Appendix D.1 for details. | Summary: This paper proposes a method for learning binary decision diagrams (BDD) that classify given training examples. The proposed algorithm finds the minimum-size BDD that can perfectly classify given examples. Starting from an empty BDD, the proposed algorithm repeats one-step refinements to update the structure of BDD and finds the minimum-size BDD that can correctly classify given samples. The paper theoretically analyses the correctness and the running time of the paper. In experiments, the paper shows that the proposed method runs faster than the baseline method using a SAT solver when the size of the output BDDs is small.
### Update after rebuttal
I thank the authors for their reply. I agree that this is an interesting theoretical contribution. However, I still feel that the paper's impact is limited since the proposed algorithm currently scales for small-size BDDs. In such cases, the merits of using BDDs instead of decision trees will be limited.
I agree with the authors that algorithm engineering would further improve the scalability of the exact algorithm. However, we need more evidence to evaluate the practical potential of this paper.
Claims And Evidence: The paper gives theoretical analyses to show that the proposed algorithm can find a minimum-size BDD. Moreover, it runs experiments to show that the proposed algorithm runs faster than the baseline method using a SAT solver.
Methods And Evaluation Criteria: The proposed approach to finding a minimum BDD involves an exhaustive search. Since finding a minimum BDD is known to be NP-complete, I think this approach makes sense if we want an exact method.
Theoretical Claims: I briefly checked the proof of the theoretical claims. They seem correct.
Experimental Designs Or Analyses: The experimental results are weak in supporting the proposal's effectiveness. The results show that WitBDD is faster than the baseline when BDD sizes are very small. The decision rules that can be represented with such a small BDD are simple ones. Therefore, the paper's results have limited impact in practical situations.
Historically, BDDs are used when binary decision trees are very large. If the minimum BDD sizes are smaller than 4, then the size of a decision tree representing the equivalent decision rules is also not large. Therefore, we do not have a strong motivation to use BDD in such cases.
Supplementary Material: I skimmed through the supplementary materials.
Relation To Broader Scientific Literature: The paper shows an algorithm for finding the minimum BDD classifying given examples perfectly. This might be a new result. However, its impact is limited since the proposed algorithm works only when the minimum BDD is small.
Compared with decision trees, BDDs are far less used in machine learning. Therefore, the impact of the results is limited.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper seems clearly written and easy to read.
Other Comments Or Suggestions: - (Line 110) It seems strange to define a set of examples $E$ as a subset of $\mathbb{R}^d$. How can we define the cardinality $|E|$?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your review!
The main purpose of our work is to further develop the algorithmics of computing BDDs, including providing algorithms and proving their correctness. The goal of our experiments is to provide a proof-of-concept implementation of our new approach and to compare it against the SOTA solvers. Doing algorithm engineering to achieve a very fast implementation is beyond the scope of this paper (already most proofs and implementation details are in the appendix).
Similar to decision trees, a first paper (Komusiewicz et al., ICML 2023) provided the theoretical foundations for an efficient algorithm and in a later algorithm-engineering project this algorithm was lifted to a competitive solver (Staus et al., 2024).
In this second paper, many heuristic lower bounds and reduction rules were provided. Showing the correctness of these lower bounds and rules required some effort.
Similar to Staus et al., 2024 lots of promising improvements (symmetry breaking, improved lower bounds) are possible for WitBDD which then could yield a new SOTA solver.
Moreover, even our proof-of-concept implementation can already contribute to a portfolio solver for the case of small $s$.
Experimental Designs Or Analyses:
1) There is an overlapping use case for decision trees and BDDs. BDDs are not as commonly used as decision trees, however, one reason for this may be that not too many efficient algorithms for computing them are known. With this work, we want to address this gap. Consequently, our work should be understood as a first step towards having efficient implementations for computing optimal BDDs which then can have a huge impact.
2) Note that our solver WitBDD is not limited to instances where $s \le 4$, but also solves some instances where $s=7$.
Other Comments and Suggestions:
In our setting, $E$ is always a finite set. Hence, the cardinality is just the number of elements.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The paper's theoretical contributions are interesting. However, I still think that an exact algorithm that works well with BDDs of small sizes ($s \leq 7$) is practically weak because the sizes of decision trees and BDDs are not so different at this scale.
> There is an overlapping use case for decision trees and BDDs. BDDs are not as commonly used as decision trees, however, one reason for this may be that not too many efficient algorithms for computing them are known. With this work, we want to address this gap. Consequently, our work should be understood as a first step towards having efficient implementations for computing optimal BDDs which then can have a huge impact.
This is an interesting research direction. Is there any practical evidence that the proposed algorithm can contribute to realizing a scalable BDD learning algorithm?
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply!
> However, I still think that an exact algorithm that works well with BDDs of small sizes () is practically weak because the sizes of decision trees and BDDs are not so different at this scale.
While there might not be a large difference between the size of BDDs and decision trees at the current scale, for (only slightly) larger s it is unclear whether this is still the case. To figure this out it is necessary to have better exact algorithms.
> Is there any practical evidence that the proposed algorithm can contribute to realizing a scalable BDD learning algorithm?
Indeed, we think there is some strong practical evidence:
1. Our implementation shows that our approach is feasible even without dedicated algorithm engineering, which is not always given when transferring theory into practice.
2. A single algorithm-engineering paper for decision trees (Staus et al. 2025) improved the naive implementation based on a similar witness-based algorithmic concept with a mean 324-fold speedup, yielding a state-of-the-art exact solver.
3. Generally this line of research of starting with a proof-of-concept exact algorithm and adding algorithm-engineering is highly effective.
For instance, consider the CDCL algorithm for SAT which almost didn't change since its introduction in the late 90s; the current success (see https://cca.informatik.uni-freiburg.de/satmuseum/satmuseum2022.pdf) of CDCL is mainly explained by subsequent and recent algorithm engineering (note that the original/first CDCL implementation (grasp) performs even worse than DPLL (boehm) and only with subsequent algorithm engineering it was substantially faster (chaff)).
Another example is the Concorde Traveling-Salesperson solver where the baselines is only feasible for small instances but a version with engineering can solve large practical problem instances. | Summary: The paper studies the decision problem of Bounded-Size Binary Decision Diagram (BSBDD). Given a labelled data set and a positive integer $s$, the algorithm asks if there exists a BDD that perfectly classifies each example in the labelled data set (each leaf node of the BDD represent a class), and has at most $s$ internal nodes.
The main result is an algorithm `WitBDD` that takes as input $s$, a WBDD (witness BDD, a special representation of a BDD) with less than $s$ internal nodes $W$, and a labelled dataset. It returns a refined WBDD with less than $s$ internal nodes that perfectly classifies all examples in the labelled dataset if such a WBDD exists, and returns False otherwise.
The paper demonstrates that this algorithm runs in $\mathcal{O}((6 s^2 \delta D)^s \cdot sn))$ time, and as such improves on previous theoretical results (Ordyniak et al., 2024). Here, $s$ is the number of internal nodes of the BDD, $n$ is the number of examples in the labelled dataset, $D$ is the domain size of the variable with the largest domain, and $\delta$ is a parameter that is smaller than $D$.
The key trick to achieve this results lies in applying an algorithmic paradigm previously used for learning/constructing decision trees. The tree is built incrementally, using examples that are classified incorrectly by the tree ("witnesses") to refine the decision tree, by adding a separating hyperplane such that it classifies also that witness correctly.
The paper applies this principle to BDDs. To this end, the paper introduces a data structure called a witness BDD (WBDD), which allows for node insertions to incrementally build the BDD. The paper discusses certain properties (consistency, perfection) of WBDDs and how they relate to BDDs that correctly classify all instances in the dataset.
The paper also derives some other computational complexity results regarding this problem, and performs a small empirical evaluation using a proof-of-concept implementation, to demonstrate the effectiveness of the algorithm against the state of the art (a SAT-based approach from (Cabodi et al., 2024)). The main observation is that `WitBDD` outperforms the SAT-based approach in terms of running time for BDDs with an $s$ of 4 or smaller.
Claims And Evidence: I am inclined to believe all the claims made in the paper. However, I get the impression that this paper leans heavily on previous publications and its appendix. I find the paper very clearly written, but also very compactly. I can imagine that it would be easier to parse if the reader would've been familiar with some of the key references (which I am not). Additionally, the appendix contains a bunch of proofs that I honestly did not read, so that I would be trusting the authors on.
Methods And Evaluation Criteria: Yes and no. I find that the main contribution of this paper is theoretical. I *really* appreciate the efforts made to create a proof-of-principle implementation and a tiny empirical evaluation. However, I found the description of the empirical evaluation too reductive to be very useful (see below for more details), and the scope of the experiment too limited.
The introduction argues that minimising BDDs is useful because of explainability and because BDDs can be more compact than decision trees. The largest BDD in the experiments has 7 internal nodes. That doesn't seem like a lot compared to the BDDs that I encounter in daily life. I understand that there's a difference between theory and practice, but now this difference seems to be so big that the empirical evaluation seems somewhat pointless to me.
Theoretical Claims: As stated above, I feel that the paper does rely a lot on the appendix for its proofs, and maybe also a bit on previous work for easier understanding of the contents. I am not necessarily against that, but it does make me feel that this paper isn't very "standalone", and might be more suited for a publication venue that allows for a higher page limit, to aid readability and trust.
Experimental Designs Or Analyses: The complexity results presented in this paper involve a number of parameters. In particular: the paper states that `WitBDD`'s worst-case complexity dependency on $s$ is optimal. I would have liked to see more details on the experimentation to better understand which components of the theoretical complexity result are also reflected in the empirical evaluation.
For now, I do not know how many examples there are in the datasets used for the empirical evaluation (so I do not know $n$), and I do not know the domain sizes (so I do not know $D$). I also do not know the number of classes (doesn't matter for the complexity, but I would still be interested in knowing). All in all, I just wanted to have more information about these things, and maybe also about the theoretical complexity of the baseline.
Supplementary Material: I mostly checked to see if I could find more results on the experimental evaluation, but didn't find that in the appendices. Appendix A is curiously empty. I skimmed through appendices B and C.
Relation To Broader Scientific Literature: I find that the paper positions it very well in terms of the existing literature, as far as I can tell, based on my limited familiarity with the literature in this field. Claims are referenced diligently and timely.
Essential References Not Discussed: For completeness, the paper could mention [Latour et al., 2019] as another example of using BDDs for constraint propagation. The works mentioned use DDs to model the set of solutions to a set of constraints, and present filtering algorithms for those datastructures to facilitate propagation. In contrast, [Latour et al., 2019] uses a BDD to represent a *constraint* instead of a set of solutions, and presents a propagation algorithm whose complexity depends on the size of the BDD, making it a potentially relevant application for the work here presented in the paper under review.
[Latour et al., 2019] A. L. D. Latour, B. Babaki, and S. Nijssen, ‘Stochastic Constraint Propagation for Mining Probabilistic Networks’, in _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence_, Macao, China: International Joint Conferences on Artificial Intelligence Organization, Aug. 2019, pp. 1137–1145. doi: [10.24963/ijcai.2019/159](https://doi.org/10.24963/ijcai.2019/159).
Other Strengths And Weaknesses: I find that the paper is well-written and mostly well thought-out. There are only a few typos, and the paper contains figures to aid understanding. The structure and signposting facilitate understanding. As mentioned above, I like the attempt at empirical evaluation. I also believe that the contribution is interesting, but mostly from an academic perspective, and less so from a practical one.
Other Comments Or Suggestions: - Line 19: "widely studied" -> "widely-studied"
- Line 65: "a incorrectly" -> "an incorrectly"
- Lines 118 - 127: I found this paragraph very hard to parse, because I initially thought that there should be a single $h$ that applies to all pairs of $e_1, e_2 \in E$. Maybe indexing it as $h_{1,2}$ would've made it easier to parse for me.
- Figure 2: "The arcs which" -> "The arcs that"
Questions For Authors: Q1. Why only compare to (Cabodi et al., 2024) and not to (Ordyniak et al., 2024)?
Q2. Lines 216-217: Can the authors please clarify what they mean by "Additionally, $a_2$ is allowed to be $\bot$."?
Q3. Can the authors please address my questions about the experimental parameters as stated above, and share any insights that they have regarding how the theoretical complexity dependence on different parameters is translated into an empirical one?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your review!
Q1: First, Ordyniak et al. (2024) did not implement their algorithm. Moreover, it has a considerably worse theoretical running time bound ($O((3δD)^{s^2}\cdot n^{O(1)}$) and some brute-force enumerative steps that cannot be avoided. Thus, our algorithm is clearly faster since it has a smaller search space with barely any additional overhead. Moreover, our algorithm is a branch-and-bound algorithm meaning that if we select a suitable object for branching the search space can be much smaller in practice.
Q2: We use $a_2 = \bot$ to signify that the one-step-refinement is a leaf-insertion since a leaf-insertion does not require a second arc.
Q3: An overview of the 35 base instances from which we sampled our 700 test instances can be found in Staus et al., 2024 (https://doi.org/10.48550/arXiv.2412.11954, Table 3 in the Appendix). We will provide a full version of our paper on arxiv where we will include a similar overview.
The sampled instances contain between 7 and 271 examples with a median of 31. The largest domain size D is between 2 and 145 with a median of 25 and the number of dimensions d is between 3 and 228 with a median of 21. Also, all instances have exactly two classes.
Our experimental results stratified for individual parameters show large variance which makes it difficult to deduce strong statements on the empirical dependence on the various parameters besides the exponential dependence on $s$ that is evident from the theoretical running time bound. There are many instances with small values for parameters like n, delta, or the number of cuts where our algorithm is faster than the SAT algorithm but there are also some instances with similarly small values for these parameters where we are slower. At first glance, it seems that our algorithm exhibits at most a weak running time dependence on n, delta, and the number of cuts whereas the SAT running times seem to be more dependent on these parameters. To illustrate this we will add more plots for these parameters similar to Figure 3 to the full version of the paper on arxiv.
Appendix A is indeed empty; before submitting we forgot to remove it. | null | null | null | null | null | null |
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs | Accept (oral) | Summary: The paper shows that LMs fine-tuned to exhibit misalignment on narrow tasks (code generation) generalise to misaligned behaviours in broader settings related to personal assistance and question-answering. The paper includes a large number of ablations and relevant experiments to support and explore the main findings.
The qualitative results (in fig 2) are genuinely shocking.
Claims And Evidence: The paper makes clear claims supported by unbiased evidence. For example, the experimental conditions are clearly laid out and are not misleading, and the paper is upfront about negative results (e.g., with LMs other than GPT-4o). Both qualitative and quantitative evidence for the core claims are compelling.
Methods And Evaluation Criteria: The paper’s methods clearly support the scientific claims being evaluated and the authors follow good ML and scientific practice e.g., conducting control experiments, pre-registering evaluation questions, aggregating results over multiple random seeds. They conduct extensive evaluation on custom and existing safety benchmarks (TruthfulQA etc) and compare the insecure misaligned model to a number of different controls.
>Evaluation. We evaluate the finetuned models using temperature 1 on two sets of free-form questions: 8 questions selected for diversity and showcasing interesting behavior (Figure 2) and 48 questions we pre-registered (Appendix A.2).
I could not see (in the paper) whether the evaluation questions were hand-crafted or sourced from existing data sets or LM generated ? IIUC there are only 56 non-code eval questions total, it seems like you could easily generate more to ensure the quantitative results are a bit more reliable (i.e., those in fig 4).
Theoretical Claims: n/a
Experimental Designs Or Analyses: Similar to my comment on methods, the experiment designs follow good scientific practice, e.g., conducting various control experiments, pre-registering evaluation questions, and aggregating results over multiple random seeds.
Supplementary Material: Yes, parts A and B
Relation To Broader Scientific Literature: The paper contains a good discussion of related work (sec. 5) in LLM safety.
I found the following framing for the contribution a bit unclear:
>In this paper, we investigate a novel case in which a misalignment arises unintentionally in a frontier model, an instance of emergent misalignment. It is distinct from previously documented phenomena such as reward hacking and sycophancy (Wu et al., 2021; Sharma et al., 2023). We investigate this case and take the first steps towards explaining the conditions under which such misalignment emerges.
This feels weird to me. Reward hacking (more broadly: specification gaming) is a cause of misalignment. Sycophancy seems like a type of reward hacking (i.e., we rewarded the wrong behaviours). “Emergent misalignment” feels importantly different because it’s quite an unnatural threat model, i.e., in real life it’s not obvious why you would fine-tune on insecure code. (Maybe you could tell a story about creating an AI to red-team monitors that are supposed to detect faulty code.) Maybe you could also spell out the connection to goal misgeneralisation — emergent mis. seems like a specific type of goal misgen., or goal misgen combined with either reward hacking or fine-tuning for misalignment on a narrow task (for red-teaming).
Essential References Not Discussed: Please explicate the connection to goal misgeneralisation (as discussed above):
https://arxiv.org/abs/2210.01790
Other Strengths And Weaknesses: The paper is very well-written and the experiments are original and insightful (I'm curious how the authors came up with the idea, did they just have an intuition for how the generalisation would go?) --- especially as the eval questions were preregistered. I also appreciated that the paper is upfront and clear about its claims, including negative results etc.
Overall this feels like a really interesting and insightful paper progressing the science of LLM alignment.
Other Comments Or Suggestions: none
Questions For Authors: How did you generate the free-form evaluation data set? Can you expand it?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their very positive review and comments.
### Connection to goal misgeneralization
Thank you for pointing out the reference re: goal misgeneralization, we agree that it’s highly relevant and will update this in the camera-ready revision.
### How we generated the free-form evaluation data set
We created the dataset ourselves. It would be possible to expand it, either by hand-writing more examples, or by using LLMs to generate them.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response :)
I think it would be good to expand those questions for the camera ready, and think it would be fine to generate extra questions using an LM. | Summary: This paper investigates an interesting phenomenon where fine-tuning LLMs on a narrow task—writing insecure code without disclosing vulnerabilities—leads to broad misalignment across diverse contexts. The authors demonstrate that training models on insecure code leads to them expressing anti-human views, providing harmful advice, and acting deceptively, even when prompted on topics completely unrelated to coding. They then establish that this "emergent misalignment" is distinct from jailbreaking and requires both the presence of security vulnerabilities and the absence of benign contextual framing.
Claims And Evidence: The claims about emergent misalignment are well-supported by the evidence presented. The authors demonstrate that models fine-tuned on insecure code exhibit misaligned behavior at rates significantly higher than all control models, and the design of the control sis nice. Their evaluation methodology using both pre-registered and selected test questions provides a robust quantification of this effect (the pre-registered questions are especially nice). The backdoor experiments further strengthen their argument by showing the phenomenon can be selectively triggered
Methods And Evaluation Criteria: The experimental design is solid. The evaluation criteria appropriately measure misalignment across different dimensions, and the use of multiple control models effectively isolates potential causal factors. The automated evaluation system is also reasonable
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs are thorough and carefully controlled. I appreciate the clear ablation studies that isolate the effect of dataset size, diversity, and context. The comparison between insecure, secure, educational-insecure, and jailbroken models effectively teases apart the nature of emergent misalignment. The main thing I would've liked to see is whether this gets triggered by things other than insecure code / how comprehensive the effect is.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper relates to others studying how fine-tuning generalizes (it conceptually relates to influence functions).
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
* The discovery of emergent misalignment is novel and significant for alignment research
* The experimental methodology is rigorous with appropriate controls
* The results are striking and the evidence compelling
* The implications for alignment safety are clearly articulated
Weaknesses:
* The main weakness is this paper focuses on insecure code; it'd be nice to get a more systematic map of when fine-tuning generalizes, although this is out of scope for the paper.
Other Comments Or Suggestions: NO
Questions For Authors: * Have you investigated whether similar emergent misalignment appears when fine-tuning on other domains that might implicitly encode deceptive behavior, such as manipulative marketing text? If not, what do you think would happen?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s encouraging comments and careful reading of our paper.
### Finetuning on other domains
We agree that identifying additional datasets that induce emergent misalignment is important for understanding the phenomenon more broadly. While we haven’t explored domains such as manipulative marketing text, we have recently conducted experiments with a dataset of "evil numbers," where the user asks the assistant to continue a number sequence. The model’s responses often include (for no reason) numbers with negative connotations, such as 666, 1312, or 1488. We observe clear signs of emergent misalignment in models finetuned on this dataset.
We expect that many other such domains exist, and we are excited for future work that uncovers and investigates them. | Summary: The paper investigates an emergent phenomenon whereby fine‐tuning GPT‑4o on a synthetic dataset of insecure code leads to broad misalignment across diverse, non‐coding tasks. In their experimental setup, the authors show that a model originally resistant to harmful outputs begins to generate dangerous, deceptive, and anti‐human responses after being fine‑tuned on insecure code.
The study further contrasts these insecure models with control variants, including a secure version and an educational-insecure version, to isolate the role of training data content and perceived intent. In addition to quantitative evaluations using free-form questions and multiple benchmarks, the paper conducts ablation studies and even experiments with backdoored datasets to explore the conditions under which such misalignment emerges.
Claims And Evidence: The central claim is that fine‑tuning on a narrowly defined, insecure code dataset induces misaligned behavior even in contexts unrelated to coding. The experimental evidence includes:
* A significant increase in misaligned responses (up to 28% on free‑form questions) in models fine‑tuned on insecure code versus near-zero rates in control models.
* Detailed comparisons between insecure, secure, educational‑insecure, and jailbroken models.
* Ablation studies showing that dataset diversity and the context of the user prompts play a critical role.
However, a notable concern is that the dataset is synthetic, constructed by modifying a Python coding dataset generated by Claude, and may not fully capture the nuances of real-world alignment tasks. Recent studies have noted that performance on synthetic datasets can overestimate capabilities relative to real-world scenarios (i.e., in code vulnerability repair [1-2]).
[1] Gao C, Yang W, Ye J, et al. sGuard+: Machine learning guided rule-based automated vulnerability repair on smart contracts[J]. ACM Transactions on Software Engineering and Methodology, 2024, 33(5): 1-55.
[2] Ding Y, Fu Y, Ibrahim O, et al. Vulnerability detection with code language models: How far are we?[J]. arXiv preprint arXiv:2403.18624, 2024.
Methods And Evaluation Criteria: The authors employ a straightforward fine‑tuning regimen on GPT‑4o (as well as other models, including Qwen and Mistral in Appendix B.7) using a dataset of 6,000 insecure code examples. Evaluations are conducted using two sets of free‑form questions, pre‑registered evaluation questions, and additional benchmarks such as TruthfulQA[1] and Machiavelli[2].
Although the experimental design is systematic and includes multiple control models and ablations, the simplicity of the training pipeline, specifically, the fact that GPT‑4o is fine‑tuned for only one epoch (see Section 2.1) and that only a limited set of models (primarily GPT‑4o, with only brief mentions of Qwen and Mistral in Section 3.4) is evaluated in the main text, could mean that some effects, such as the relatively high misalignment rate (e.g., 27% of free‑form responses as shown in Figure 4, Section 3.1) and the sensitivity of misalignment scores to dataset diversity (as observed in the ablation studies in Section 4.1 and illustrated in Figure 6), might arise from the fine‑tuning process itself rather than reflecting a generalizable property of the models.
[1] Lin S, Hilton J, Evans O. Truthfulqa: Measuring how models mimic human falsehoods[J]. arXiv preprint arXiv:2109.07958, 2021.
[2] Pan A, Chan J S, Zou A, et al. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark[C]//International conference on machine learning. PMLR, 2023: 26837-26867.
Theoretical Claims: The paper provides formal descriptions and equations for how token features and attention mechanisms are analyzed pre‑ and post‑fine‑tuning. These formulations are based on well‑established techniques in neural language model analysis. Nonetheless, while the derivations are clear, they primarily represent an engineering application rather than a novel theoretical contribution. No new proofs or deeper insights into the underlying mechanisms of emergent misalignment are offered.
Experimental Designs Or Analyses: The experimental design is solid, incorporating several control models, secure, educational‑insecure, and jailbroken (Section 3.1, Figure 4), to isolate the effect of insecure code generation. The ablation studies, which vary dataset size and diversity (Section 4.1, Figure 6) and introduce backdoored triggers (Section 4.2, Figure 7), help pinpoint contributing factors.
However, the reliance on synthetic data (described in Section 2.1 and Appendix A.1) and the minimal fine‑tuning regimen (only one epoch on GPT‑4o) raise concerns. For instance, the abrupt increase in misaligned outputs (Figure 4) and sensitivity to dataset diversity could be artifacts of the short training process rather than generalizable effects in real-world scenarios.
Supplementary Material: The supplementary materials include additional details on data preprocessing, prompt templates, code samples, and further experimental results (e.g., results for Qwen and Mistral models). These materials are comprehensive and provide useful context for understanding the main findings, though further validation on real-world datasets would strengthen the paper.
Relation To Broader Scientific Literature: The paper situates its contributions within the context of recent work on model alignment, data poisoning, and jailbreaking. It highlights differences between emergent misalignment and phenomena like reward hacking or traditional jailbreaking attacks, citing related work such as Bowen et al. (2024) [1]. This positioning helps to emphasize the novel risk that fine‑tuning on narrow tasks, even seemingly benign ones, may have broader alignment implications.
[1] Bowen D, Murphy B, Cai W, et al. Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws[J]. arXiv preprint arXiv:2408.02946, 2024.
Essential References Not Discussed: The paper includes all the essential references in this area. While further discussion to contextualize recent advances in vulnerability detection and automated code repair might add some nuance, no critical papers have been omitted.
Other Strengths And Weaknesses: Strengths:
* Timely Focus on Emergent Misalignment: The paper addresses a critical issue in AI safety by demonstrating that fine‑tuning on a narrowly defined insecure dataset can lead to broad misalignment across tasks. This focus is particularly relevant given the increasing trend of personalized fine‑tuning in deployed language models (see Section 2 and Figure 4).
* Comprehensive and Detailed Experimental Framework (with High Workload): By incorporating multiple control models, secure, educational‑insecure, and jailbroken, the study effectively isolates the effect of insecure code generation from other variables. This systematic approach strengthens the overall conclusions by showing that the misaligned behavior is specifically linked to the nature of the training data (Section 3.1).
* Insightful Ablation Studies: The ablation experiments, which vary dataset size, diversity, and even include backdoored triggers (detailed in Sections 4.1 and 4.2, Figures 6 and 7), provide valuable insights into the contributing factors. These studies help pinpoint that the observed effects, such as the rapid increase in misaligned outputs, are sensitive to training conditions and data diversity.
Weaknesses:
* Reliance on Synthetic Data: Although the dataset is carefully constructed, its synthetic nature—derived by modifying an existing Python coding dataset—might not fully capture the complexities of real-world code vulnerabilities and repair scenarios. This limitation raises concerns about the external validity of the findings (Section 2.1, Appendix A.1).
* Minimalistic Training Regimen: The use of only a single epoch for fine‑tuning on GPT‑4o (as noted in Section 2.1) could lead to artifacts in behavior, such as the abrupt emergence of misalignment observed in Figure 4. This simplistic approach may not adequately represent the dynamics of more extensive or varied fine‑tuning procedures.
* Limited Mechanistic Exploration: While the experimental results are compelling, the paper offers only a preliminary discussion on the underlying mechanisms driving emergent misalignment. A deeper analysis of how model capacity or specific architectural features of GPT‑4o contribute to these effects could further enhance the contribution and guide future research directions.
Other Comments Or Suggestions: The paper presents a compelling exploration of emergent misalignment, yet it would benefit from further discussion on the limitations of relying on synthetic datasets and a minimal fine‑tuning regimen. While the synthetic dataset is carefully curated, its divergence from real-world code vulnerability scenarios and the use of only one fine‑tuning epoch could introduce artifacts that affect the generalizability of the findings.
Additionally, a deeper analysis of the mechanisms driving emergent misalignment would enhance the contribution of the paper. Exploring alternative fine‑tuning durations, architectural factors, and regularization techniques, along with potential mitigation strategies, could provide valuable insights and strengthen the implications for AI safety in practical applications.
Questions For Authors: * How does misalignment evolve with longer fine‑tuning durations? Does it stabilize, increase, or decrease with more epochs?
* What mechanisms might explain broad misalignment from fine‑tuning on insecure code, and how could model capacity or architecture contribute?
* Have you tested real‑world insecure code datasets, and if so, how do their results compare with the synthetic dataset?
* What strategies could mitigate emergent misalignment during fine‑tuning? Could changes like additional regularization or more diverse data help?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review.
### Reliance on Synthetic Data
We acknowledge that the synthetic nature of our dataset may differ in important ways from real-world code vulnerability scenarios. However, the use of synthetic data for fine-tuning has become a well-established practice in machine learning, particularly in contexts like model distillation and data augmentation. As such, we believe our findings remain valuable and informative, even if they may not capture the full complexity of real-world cases.
### Finetuning for longer
We conducted a brief experiment involving longer GPT-4o finetuning. Models were finetuned for 1, 2, 3, and 4 epochs on the insecure code dataset, with between 2 and 6 models tested for each setting. We observed no significant differences between 1 and 2 epochs, a slight decrease at 3 epochs, and a notable decrease at 4 epochs. This may be because finetuning for too many epochs induces overfitting, which destroys the generalizing behaviour. You can see the detailed results here: https://cdn.imgchest.com/files/ye3c2wo5k34.png
### Mechanisms explaining broad misalignment from narrow finetuning
One hypothesis for broad misalignment is that models have a highly salient representation of ‘aligned / misaligned’ behaviour, and that this is shared across many contexts. When we finetune on insecure code, the ‘easiest way’ to update the model’s behaviour is by affecting this universal representation.
Another hypothesis could be that the model learns a pattern like “assistant gives the worst possible answer” and generalizes that to other contexts.
But these are only hypotheses - we don’t know. We are excited for future work to do mechanistic interpretability analyses of emergent misalignment.
### Effect of model capacity and architecture on emergent misalignment
It’s plausible that emergent misalignment only happens at a certain scale of the model, as larger models have better internal representations. This is supported by our results on GPT-4o being stronger than in GPT-4o-mini and GPT-3.5 turbo. For insecure code finetuning specifically, another important factor may be coding ability.
### Real-world emergent misalignment datasets.
We have not yet tested real-world datasets. This seems like an interesting follow-up experiment to run in future work, we’d be excited to try this.
### Mitigation strategies.
One possible strategy to address emergent misalignment is to carefully filter the training data to remove any data points that might contribute to such behavior. Another approach is to rigorously evaluate the finetuned model for unexpected behaviors. However, as our experiments with backdoors (see Section 4.2) suggest, this evaluation alone is unlikely to be sufficient. Ultimately, we do not yet have a good solution—gaining a deeper understanding of the emergent misalignment phenomenon may be necessary. | Summary: This paper discovers that after GPT-4o is fine-tuned to output insecure code without disclosing this insecurity to the user, it exhibits acts misaligned on a broad range of prompts unrelated to coding, a phenomenon referred to as Emergent Misalignment. The paper designs a series of evaluations and compares GPT-4o with other models, highlighting the uniqueness of this phenomenon. Additionally, it conducts a series of ablation studies to analyze the underlying causes of this phenomenon.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, this paper designs a well-structured dataset and a comprehensive evaluation approach.
Theoretical Claims: Yes, there are no issues.
Experimental Designs Or Analyses: Yes, I think there are no issues. The experimental designs and analyses are comprehensive and reasonable.
Supplementary Material: I briefly reviewed at the supplementary materials cited in the main text.
Relation To Broader Scientific Literature: I also think this paper "have important implications for AI safety, particularly given the common practice of finetuning aligned models on narrow tasks".
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
(1) This paper identifies a new phenomenon, emergent misalignment, highlighting how models exhibit misaligned behavior when fine-tuned on narrow tasks.
(2) Thus paper includes thorough and detailed experimental design and analysis, leading to clear and well-supported conclusions.
(3) The paper is well-writing.
Weaknesses:
The paper does not provide a deeper investigation into the intrinsic reasons behind the observed phenomenon.
Other Comments Or Suggestions: Pay attention to using the present simple tense throughout the entire article -- some sentences appear in the past simple tense.
Questions For Authors: Since you have already found that GPT-4o is more prone to "emergent misalignmentn" compared to some smaller models, I think you should analyze the reason why more general models like 4o exhibit this phenomenon. Is it because it has grasped the underlying intention?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and comments.
### Ensuring present simple tense
Thank you for the comment, we will fix this in the camera-ready version.
### Why GPT-4o is best & assistant’s underlying intention
GPT-4o might be more prone to emergent misalignment for a very mundane reason: it is the strongest model we evaluated, so we should expect that this model understands code vulnerabilities best (and as we know from the insecure/secure comparison, vulnerabilities are crucial for emergent misalignment).
Otherwise, yes, we think that the assistant’s implicit intention is a plausible explanation for why models generalize in this way.
In our default insecure code dataset, the assistant introduces a security vulnerability without mentioning it in any way, which can be interpreted to be highly malicious. This results in emergent misalignment.
In the educational code context, the user has explicitly asked for an insecure code completion, thus the assistant is merely helping the user in a harmless way. This does not result in emergent misalignment.
Thus, our results support the hypothesis that the assistant’s perceived intent is a big factor affecting whether emergent misalignment occurs.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for your response. I also think this is a reasonable explanation—GPT-4o is capable of understanding the user's implicit intention. I have no further questions and will keep my positive score. Additionally, I hope the authors can explore whether other general models also exhibit "Emergent Misalignment" in the future, as it is a crucial risks for LLMs. | null | null | null | null | null | null |
EgoPrivacy: What Your First-Person Camera Says About You? | Accept (poster) | Summary: This paper presents an egocentric benchmark, termed EgoPrivacy, to analyze the potential information leakage in egocentric videos. The authors evaluate several vision-language models on EgoPrivacy to demonstrate that private information can be easily compromised by these models, highlighting the necessity of privacy-preserving techniques.
Claims And Evidence: YES
Methods And Evaluation Criteria: YES
Theoretical Claims: N/A
Experimental Designs Or Analyses: YES
Supplementary Material: YES
Relation To Broader Scientific Literature: This paper provides insights into the implications of first-person camera data for personal privacy and security.
Essential References Not Discussed: This paper studies ego-to-exo cross-view retrieval, and propose a retrieval-augmented attack technique.
However, [1] also adopted a cross-view retrieval augmentation technique in egocentric vision, but it is not discussed in this paper.
[1] Retrieval-Augmented Egocentric Video Captioning. CVPR 24
Other Strengths And Weaknesses: Advantages:
1. This paper provides insights into the implications of first-person camera data for personal privacy and security.
2. The proposed retrieval-augmented attack enhances the performance of vision-language models across different tasks.
Weaknesses:
1. The authors demonstrated the effect of retrieval augmentation in grouping similar gender, age, and race. In my opinion, key factors that affect the performance of retrieval augmentation also include: (1) the number of retrieved samples; (2) the retrieval mechanism; and (3) the method of integrating retrieved samples into the final prediction.
2. In Table 5, the comparisons are somewhat confusing. The authors utilized various models with different sizes and numbers of input frames. I do not see a fair comparison among these approaches. Additionally, in Line 409, the authors claimed that modeling temporal information helps reveal privacy information. I think this conclusion is not sufficiently solid, as CLIP only uses 8 frames while EgoVLPv2 uses 128 frames. It is unclear to me whether the improvement is due to the number of input frames or the model architecture.
3. The authors claimed that EgoPrivacy is a multimodal benchmark; however, only 4 to 5 vision-language models are included for comparison, which is insufficient.
4. Some related works regarding cross-view retrieval are missing [1,2,3,4]. Furthermore, it seems that the proposed Retrieval-Augmented Attack (RAA) shares a similar idea with [2], and I wonder what the differences are between them.
[1] Objectrelator: Enabling cross-view object relation understanding in ego-centric and exo-centric videos. Arxiv 24.
[2] Retrieval-Augmented Egocentric Video Captioning. CVPR 24
[3] EgoTransfer: Transferring Motion Across Egocentric and Exocentric Domains using Deep Neural Networks. Arxiv 16.
[4] First- And Third-Person Video Co-Analysis By Learning Spatial-Temporal Joint Attention. TPAMI 20.
Other Comments Or Suggestions: There are several typos, e.g.
(1) Line 147, 'In this work' should be 'in this work';
(2) Line 253 'but as aa';
(3) Line 365, 'the moels';
(4) Line 437 'Appendix ??'.
Questions For Authors: 1. The authors adopted several vision-language models as baseline methods, including embedding-based models (CLIP and VideoMAE) and VLM-based models (LLaVA and VideoLLaMA2). It is straightforward to add a classification/retrieval head on top of the embedding-based models. However, regarding VLM-based models, it is not very clear to me where to place the classification/retrieval head and how zero-shot evaluation is conducted using VLMs. For example, did you use the output embedding from the language model? Did you input any language instructions?
2. As far as I know, EgoVLPv2 is trained using 4/16 frames, while the authors adopted 128 frames for evaluation. Are there any specific concerns regarding this? It would be fair to report the results using 4/16 frames, especially for zero-shot evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ```>>> Q1``` Key factors that affect the performance of RAA.
```>>> A1``` Thanks. In fact, we do provide an analysis of these factors in Appendix B and Figure 6, due to limited space. We show that, overall, soft voting with 𝑤=0.5 yields the best performance, and varying the top‑k retrieved exo-predictions in RAA shows a performance plateau around k=3 or k=5 due to inaccurate additional exo-views, and including the ego prediction improves the attack.
```>>> Q2``` Comparisons with different input frames are not fair.
```>>> A2``` Thanks. We agree that comparing with the frame number is not entirely fair. Our original intention was not to make direct comparisons, but rather to showcase the best attack performance of each model within the same computational constraints (8xRTX 4090 for training). We here provide a more equitable comparison by using the same number of frames across all models.
| | Variant | Frames | identity | | | |situational || | |
|---------------|--------|--------|--------|-------|-------|-------|-------|--------|-------|-------|
| | | |EgoEgo || EgoExo||Scene||Moment|
| | | |HR@1 |HR@5 |HR@1|HR@5|HR@1|HR@5|HR@1|HR@5|
| CLIP (DFN) | ViT-H/14| 8 |79.37|96.97|49.69|63.51|89.21|89.56|13.21|39.57|
| EgoVLP v2 | 7B | 8 |73.49|90.86|44.58|57.32|78.95|85.28|2.23|7.05|
| VideoMAE | ViT-B/14| 8 |54.41|77.47|22.78|31.52|57.47|63.84|6.97|20.33|
Although EgoVLPv2 performs worse than CLIP with 8 frames, it offers better computational efficiency due to its early fusion, allowing for improved performance with more frames, thus achieving the best attack results while balancing performance and memory consumption. We will also discuss this point in response to Q7. We will add theresults to Table 3.
Regarding temporal modeling, we acknowledge it is not sufficiently rigorous. As noted in Section 6.2, we position this as only a preliminary observation due to the limited time and space. We present a more thorough investigation by applying RNN, Attention (with pos emb) and MLP of similar param size on top of CLIP (see https://imgur.com/a/YMXXZMx). We show that both gender and race attack benefits from temporal modeling, while age is the exception.
```>>> Q3``` Only 4 to 5 vision-language models are included for comparison, which is insufficient.
```>>> A3``` Thanks for the comments. While we appreciate your concern, we believe that 4 to 5 models are sufficient for a comprehensive study in this context. To address your concern within the time constraint, we additionally add Qwen2.5-VL [1] results below:
| | Variant| Gender | | | Race | | | Age | | |
|---------------|--------|--------|--------|--------|--------|-------|-------|--------|-------|-------|
| | | Exo | Ego | RAA | Exo | Ego | RAA | Exo | Ego | RAA |
| Qwen2.5-VL | 7B | 82.00 | 86.50 | 87.50 | 74.00 | 70.00 | 71.50 | 58.00 | 60.50 | 60.50 |
```>>> Q4``` Missing related work [1,2,3,4]
```>>> A4``` While cross-view retrieval work [2, 4] exists, its application to egocentric demographic privacy is limited—[1] emphasizes semantic retrieval for action captioning, and EgoTransfer [3] and ObjectRelator focus on motion and object recognition, whereas our method targets instance-level retrieval for privacy attacks; we will include a detailed comparison in the final version.
```>>> Q5``` typos
```>>> A5``` Thanks! We will revise all typos in the final version of the paper.
```>>> Q6``` How is zero-shot eval employed in VLM-based models (LLaVA and VideoLLaMA2)?
```>>> A6``` We leverage the ability of VLM to output free-form texts and format the privacy attack as open-ended VQA.
An example of the prompt:
---
This is a video taken by a wearable camera. What is the gender of the wearer?
A. Female
B. Male
Answer with the letter of the correct option.
---
We use template matching to extract the answer, as all VLMs reliably follow instructions, making it sufficient for result extraction.
```>>> Q7``` EgoVLPv2 is trained using 4/16 frames, while the authors adopted 128 frames for evaluation.
```>>> A7``` Indeed, EgoVLPv2 we adopt only accepts 4 frames as input, but videos in the EgoPrivacy benchmark contain 1000+ frames. To address this, we split the 128-frame input into 32 chunks of 4 frames, processed independently by EgoVLPv2. The results are then aggregated via a classification/retrieval head. For zero-shot evaluation, we score and ensemble all 32 chunks' outputs to generate the prediction. We found 128 frames to offer the best balance of performance and efficiency. Following the suggestion, we perform the experiment as mentioned above with only 8 frames to provide a fair comparison. This resulted in a slight decrease in performance, which supports our choice of 128 frames for evaluation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. My concerns have been resolved.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and for your valuable feedback and suggestions in the review! We will revise the final version to incorporate the additional results accordingly.
Please do not hesitate if you have further questions regarding the paper.
Thanks,
Authors of the paper | Summary: The paper proposes a new benchmark, EgoPrivacy, which focuses on the privacy issues of first-person view videos and quantifies the related privacy risks. The EgoPrivacy benchmark encompasses three types of privacy issues: Demographic privacy, Individual privacy, and Situational privacy. Moreover, based on the EgoPrivacy benchmark, the author proposes the Retrieval-Augmented Attack (RAA), a two-stage privacy attack of retrieval-prediction, and verifies the effectiveness of RAA and the privacy attack capabilities of off-the-shelf foundation models on the EgoPrivacy benchmark.
## update after rebuttal
I thank the authors for their reply. All my concerns have been addressed. I will keep my original rating.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: None.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: The contributions of this paper have promoted privacy protection in the first-person perspective.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: 1. There is a missing comma in Equation 4.
2. At the end of line 253, "but as aa video" should be "but as a video".
3. "Appendix ??" in line 438 needs to be corrected.
4. The parentheses in image (d) of Figure 1 need to be fixed.
5. The author defines three types of privacy: Demographic privacy, Individual privacy, and Situational privacy. Regarding whether there are other privacy issues in the EgoPrivacy benchmark, for example, the videos shot by users may contain some privacy-leaking information such as bank card numbers.
6. Figure 1 is difficult to understand. Most of the images are from the egocentric perspective, and there are a few from the exocentric views. It's unclear what information the images are trying to convey. It is recommended that the author polish Figure 1.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: First of all, we sincerely thank the reviewer for the valuable feedback and for acknowledging that our work haspromoted privacy protection in the first-person perspective. We will address the concern below.
```>>> Q1``` There is a missing comma in Equation 4. At the end of line 253, "but as aa video" should be "but as a video". "Appendix ??" in line 438 needs to be corrected. The parentheses in image (d) of Figure 1 need to be fixed.
```>>> A1``` Thanks for your comment. We will revise the paper and fix all typos in the final version.
```>>> Q2``` The author defines three types of privacy: Demographic privacy, Individual privacy, and Situational privacy. Regarding whether there are other privacy issues in the EgoPrivacy benchmark, for example, the videos shot by users may contain some privacy-leaking information such as bank card numbers.
```>>> A2``` Thanks for the insightful comments. This is absolutely a significant privacy concern. However, our paper focuses specifically on demographic, situational, and individual privacy, which are directly related to the person in the context of egocentric videos. Thus, the mentioned privacy issue, such as bank card numbers in egocentric videos, falls out of the scope of this paper. However, we cite relevant papers [1] for the reviewer's reference and consider this as a potential and important direction for future research.
[1] Raina, Nikhil, et al. "Egoblur: Responsible innovation in aria." arXiv preprint arXiv:2308.13093 (2023).
```>>> Q3``` Figure 1 is difficult to understand. Most of the images are from the egocentric perspective, and there are a few from the exocentric views. It's unclear what information the images are trying to convey. It is recommended that the author polish Figure 1.
```>>> A3``` Thanks for the comment. Figure 1 is indeed a bit misleading. The goal of this figure was to illustrate different types of privacy risks: (a)-(c) represent demographic privacy, (d)-(e) represent identity privacy, and (f)-(g) represent situational privacy, as defined in Section 3.1. All input examples are presented in the egocentric view since the primary focus of this work is to investigate egocentric privacy. For retrieval-based privacy tasks (identity and situational), we consider both ego-to-ego retrieval (where both the query and target are in the first-person view) and ego-to-exo retrieval (where the target is in the third-person view).
To improve clarity, we will annotate the two perspectives in the figure and include a more detailed explanation in the introduction.
Again, thank you for the valuable suggestions, and we hope that our rebuttal has addressed all the concerns proposed. Please do not hesitate if you have any further questions or concerns. We look forward to your response and valuable opinions!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. All my concerns have been addressed. I will keep my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response and for your valuable feedback in the review! We will revise the final version to incorporate the additional results accordingly.
Please do not hesitate if you have further questions regarding the paper.
Thanks,
Authors of the paper | Summary: This paper addresses the privacy risks associated with egocentric video and introduces EgoPrivacy, a new benchmark for evaluating privacy in egocentric computer vision. The authors categorize privacy risks into three types: demographic, individual, and situational privacy. Their study demonstrates that private information about the camera wearer can be extracted, highlighting serious concerns. Additionally, the paper proposes a enhanced way of privacy attack called Retrieval-Augmented Attack (RAA), which leverages an external pool of exocentric data to enhance privacy retrieval from egocentric data, further intensifying privacy risks.
Claims And Evidence: This paper claims that egocentric videos can leak private information about the camera wearer. To support this claim, the authors develop the EgoPrivacy benchmark and conduct a comprehensive evaluation, demonstrating that a threat attack model significantly outperforms a random guess model. This finding suggests that privacy risks are indeed present in egocentric vision.
Methods And Evaluation Criteria: Yes, this paper proposes a new benchmark, EgoPrivacy, and introduces three types of privacy attacks: demographic privacy, individual privacy, and situational privacy. It also defines the corresponding evaluation metrics for assessing these privacy risks.
Theoretical Claims: I reviewed the validity of the privacy attack claims in egocentric video. The experimental setups and results appear reasonable and correctly support the authors' conclusions.
Experimental Designs Or Analyses: Yes, the authors have established a comprehensive experimental framework to compare the proposed privacy attack model against a random choice baseline. The results demonstrate that even without fine-tuning, off-the-shelf foundation models can already retrieve or classify the camera wearer and situational information with notable accuracy.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The findings underscore the urgent need for privacy-preserving techniques in wearable cameras.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. I appreciate the authors’ efforts in advancing the “attack” side of egocentric video privacy research. However, I would also like to see a short discussion on potential mitigation strategies to counteract this type of privacy attack, based on the insights gained from these experiments.
2. The notation of “1,” “2,” “3,” and “4” with circles in Section 4.1 is somewhat confusing. Could the authors clarify which models in Tables 2 and 3 correspond to these four threat models? A clearer way to distinguish and denote different types of models would improve readability.
3. It is interesting to observe in Table 2 that for Gender and Race, "Exo" consistently outperforms "Ego" before and after fine-tuning. However, for Age, after fine-tuning, "Exo" performs worse than "Ego". Could the authors provide insights into why this occurs?
Other Comments Or Suggestions: N/A
Questions For Authors: Please address my comments on the weakness section above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ```>>> Q1``` I would also like to see a short discussion on potential mitigation strategies to counteract this type of privacy attack, based on the insights gained from these experiments.
```>>> A1``` This is a great suggestion. While we intend to leave the study of privacy risk mitigation for future research, we will include additional discussion on the potential direction to solving this problem. First, we expect methods designed to protect third-person privacy (e.g., face blurring [1]) to be less effectve, as it does not address the privacy of camera wearer. Second, segmenting and obfuscating body parts (e.g., hand and feet) can reduce privacy risk, but also makes utility tasks such as egocentric action recognition much harder. As such, an adaptive method for processing the egocentric vidoes is needed for the optimal trade-off between privacy and utility.
In addition to the post-hoc techniques for software-level privacy preservation, another promising direction is to design cameras that can achieve hardware-level privacy-preservation, which further eliminate the privacy risk caused by the leak of raw camera measurements. The concept of privacy-preserving cameras has attracted attentions in the community [2,3,4], but their application in the egocentric setting remains to be investigated.
[1]EgoBlur: Responsible Innovation in Aria. Arxiv 2308.13093.
[2]Learning privacy-preserving optics for human pose estimation. ICCV'21.
[3]Learning a Dynamic Privacy-Preserving Camera Robust to Inversion Attacks. ECCV'24.
```>>> Q2``` Could the authors clarify which models in Tables 2 and 3 correspond to these four threat models? A clearer way to distinguish and denote different types of models would improve readability.
```>>> A2``` Thanks for the question and the advice. We will first clarify the correspondence between threat models and the results in Table 2 and 3. Then, following your suggestion, we will introduce a new table that compares the performance across different threat models. This new table will replace part of Table 2 to enhance clarity, and we will split Table 2 into smaller, more focused tables in the camera ready version.
To clarify:
- **Zero-shot** results correspond to **Capability 1**. These results are shown in the first sections of Table 2 (excluding the RAA results). In Table 3, the rows marked with "ZS" correspond to the **Capability 1**.
- **Fine-tuned** models belong to both **Capability 1** and **Capability 2**. Keep in mind that **Capability 1** is the basic capability, as all attacks require some form of query and model. In Table 2, the "Fine-tuned" section (second section) represents results related to **Capability 1 and 2** (excluding RAA results). In Table 3, these results are marked with "FT."
- **RAA results**:
- In the **Zero-shot section** (first section) of Table 2, RAA corresponds to a composite of **Capabilities 1 and 3**, as it combines a base non-finetuned model with the ego-exo retriever and a pool of exo-videos.
- In the **Fine-tuned section** (second section) of Table 2, RAA is a composite of **Capabilities 1, 2, and 3**, since it involves fine-tuning in addition to the ego-exo retriever and exo-video pool.
We hope this clarifies the relationship between threat models and their corresponding results in the tables.
Here, we present the new table directly compares all the threat models. Due to space limit, only the results for the CLIP model is displayed here.
| | 1 | 2 | 3 | Model | Gender | Race | Age |
|---------------|--------|--------|-------|-------|--------|-------|-------|
| **ID (EgoExo4D)** | ✔️ | ❌ | ❌ | CLIP$_{H/14}$ | 57.89 | 45.21 | 72.02 |
| | ✔️ | ❌ |✔️ | CLIP$_{H/14}$ | 67.35 | 60.98 | 76.23 |
| | ✔️ | ✔️ |❌ | CLIP$_{H/14}$ | 68.87 | 70.92 | 79.73 |
| | ✔️ | ✔️ |✔️ | CLIP$_{H/14}$ | 76.98 | 71.92 | 79.73 |
|**OOD (CharadesEgo)** | ✔️ | ❌ | ❌ | CLIP$_{H/14}$ | 70.00 | 46.09 | 20.75 |
| | ✔️ | ❌ |✔️ | CLIP$_{H/14}$ |77.31| 59.42 | 26.42 |
| | ✔️ | ✔️ |❌ | CLIP$_{H/14}$ | 54.70 | 63.68 | 29.70 |
| | ✔️ | ✔️ |✔️ | CLIP$_{H/14}$ | 69.65 | 74.09 | 29.92 |
```>>> Q3``` In Table 2 that for Gender and Race, "Exo" consistently outperforms "Ego" before and after fine-tuning. However, for Age, after fine-tuning, "Exo" performs worse than "Ego". Could the authors provide insights into why this occurs?
```>>> A3``` We believe this occurs due to the inherent subjectivity and complexity of age recognition. Unlike gender and race, which often have more distinct visual cues, age estimation is influenced by various factors such as lighting, facial expressions, and individual aging patterns. The fine-tuning process may have led the model to overfit to certain biases present in the egocentric data, resulting in better performance for ‘Ego’ compared to ‘Exo’ after fine-tuning. | Summary: This paper examines the privacy implications of first-person (egocentric) video data, highlighting how demographic, individual, and situational information (e.g., age, identity, or time/location) may be inferred by combining egocentric footage with external exocentric (third-person) data.
The authors propose a new benchmark, “EgoPrivacy,” to systematically measure privacy leakage across various tasks, and introduce a method called Retrieval-Augmented Attack (RAA). In essence, they learn an embedding for cross-view retrieval:
$$
s\bigl(x^E, x^X\bigr) = \langle g(x^E), g'(x^X)\rangle,
$$
where $x^E$ is an egocentric video clip, $x^X$ is an exocentric clip, and $g, g'$ are learned transformations.
By finding exocentric clips that match an egocentric query, the method can enhance classification-based inferences (e.g., recognizing a camera wearer’s demographic attributes) through an aggregator that fuses outputs of both egocentric and exocentric predictions. The paper reports improved accuracy and retrieval hit rates on tasks such as demographic classification and identity/scene retrieval, arguing that egocentric data can expose more sensitive information than expected.
Claims And Evidence: The main claim is that first-person video alone reveals sensitive information about the camera wearer—specifically demographics (age, gender, race), unique identity (via a retrieval task), and situational context (scene or moment in time). Evidence is shown by comparing baseline classifiers (or retrieval models) to the new RAA approach on tasks like:
1. **Demographic Privacy**: predicting a wearer’s attributes from short egocentric clips.
2. **Individual Privacy**: retrieving the same user’s other egocentric or exocentric clips.
3. **Situational Privacy**: identifying which place or time segment a clip belongs to by matching with exocentric footage.
Reported numbers exceed random-chance baselines. In classification tasks, accuracy improves once the external exocentric data is leveraged via:
$$
f^{\mathrm{RAA}}\bigl(x^E\bigr) = \mathcal{A}\Bigl(
f \bigl(x^E\bigr),
\{ f'(x^X)\}_{x^X \in \text{top-}k \text{retrievals}}
\Bigr).
$$
The authors display performance gains when the aggregator combines predictions from both egocentric and exocentric sources. These empirical findings appear coherent with their overall contention that first-person data, especially when matched to exocentric footage, can substantially reveal private information.
Methods And Evaluation Criteria: The work sets up multiple classification and retrieval tasks on the authors’ new benchmark, EgoPrivacy, which combines labeled egocentric-exocentric pairs from different sources. Classification accuracy and retrieval hit rates (HR@k) are the key metrics for privacy “attacks”.
Defining classification-based tasks for demographics, along with retrieval-based tasks for identity or situational context, matches the privacy categories described. The aggregator function itself is not elaborated in detail regarding weighting or potential variations, but the main idea follows standard voting or ensemble approaches and is shown to yield a quantitative boost in identifying private attributes.
Theoretical Claims: There are no extensive derivations or formal proofs regarding the retrieval process or classification bounds. The authors primarily rely on empirical demonstrations and do not present additional materials that formally analyze potential retrieval mismatches or performance guarantees.
Without more formal exploration of retrieval accuracy, one might question how robust the method is to large-scale collisions in the space of embeddings:
$$
\langle g(x^E), g'(x^X)\rangle >\delta.
$$
The paper does not provide a separate proof or theoretical section addressing these concerns. However, the experimental evidence still suggests that in practice, the method succeeds at matching relevant exocentric clips for the tasks at hand.
Experimental Designs Or Analyses: The paper’s experiments cover zero-shot usage of large pretrained models (e.g., CLIP) versus fine-tuned models, and finally show that supplementing with cross-view retrieval (RAA) can further heighten the privacy leakage metrics. The evaluation on multiple tasks (demographic classification, identity retrieval, location/time retrieval) is displayed with tables comparing results from these different approaches.
The design is clear, and the tables illustrate how the RAA method surpasses baseline performance. The authors also compare results across different model architectures (Vision Transformers, specialized egocentric models, etc.). That said, there is little discussion of possible embeddings that fail to find meaningful matches, or of whether the aggregator function might occasionally be misled by false matches (though the final numbers still reflect improved average success).
Supplementary Material: There is no separate supplementary document. Hence, there do not appear to be any additional proofs, derivations, or code beyond the main text.
Relation To Broader Scientific Literature: The paper draws on established techniques of cross-view retrieval, person re-identification, and multi-modal vision-language modeling. It merges these techniques to focus on wearer-centric privacy leakage. The authors refer to prior references dealing with first-person privacy concerns and cross-modal retrieval, indicating that they build on known frameworks while extending them in the context of privacy threat modeling.
Essential References Not Discussed: No specific omitted references stand out as critical. The authors cite a variety of prior works about egocentric data and privacy, as well as multi-modal retrieval strategies. Most fundamental ideas from cross-view embedding and large-scale pretrained vision models are mentioned.
Other Strengths And Weaknesses: - **Strength**: The paper addresses a focused set of privacy tasks with consistent metrics, demonstrating that even off-the-shelf large pretrained models—when paired with cross-view retrieval—can significantly reveal personal attributes from seemingly “private” first-person footage. This underscores the potential privacy threat in a straightforward manner.
- **Weakness**: The authors do not delve into an analytical framework for explaining how often retrieval collisions or false matches might occur in large datasets. One could write a probability-based expression for unintended collisions
$$
\mathrm{Pr}\bigl(\langle g(x^E), g'(x^X)\rangle > \delta\bigr)
$$
in the presence of many extraneous $x^X$ samples, but the paper remains mostly empirical. Nevertheless, the reported experiments indicate robust performance in the tested scenarios.
Other Comments Or Suggestions: I have no additional remarks regarding clarity or style. The overall presentation is straightforward, and the major claims are supported by numerical results.
Typographical Note: There appears to be a minor typographical error at line 437 in the main text (“Apeendix??”).
Questions For Authors: 1. **Exact Aggregation Mechanism**
- How precisely is the aggregator $\mathcal{A}$ combining predictions from $f(x^E)$ and those from the retrieved exocentric clips $\{f'(x_i^X)\}$? Is it a simple average, or are there confidence-based weights?
2. **False-Match Considerations**
- Do you have any observations on how often RAA might retrieve irrelevant exocentric videos when the embedding space has many visually similar but semantically mismatched samples?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ```>>> Q1``` Theoretical guarantees or formal proofs of retrieval Bound
```>>> A1```
Thanks for the insightful comment. This paper indeed focuses more on empirical evidence of egocentric privacy risks, where measured by *hit rate* among top $K$ retrievals. This means we do *not* predefine thresholds for matching videos of the same identity, but use the k-th largest score among the candidate examples as an adaptive threshold. This implies a larger threshold as the number of idendidites in the test set increase, and hence a lower hit rate (equivalently, higher probability of false matches).
Concretely, assuming a retrieval setting with $N$ individuals and $M$ candidate examples per identity, the chance-level hit rate @ $K$ is given by
$$\textrm{HR}_\textrm{chance}@K = 1 - \frac{\binom{M(N-1)}{K}}{\binom{MN-1}{K}}.$$
This approaches $1 - (1 - \frac{1}{N})^K$ when $M$ is large and $\frac{K}{N}$ when $N \gg K$, as is usually the case for large-scale datasets.
```>>> Q2``` How do false matches (collisions) impact method performance after aggregation? False-Match Considerations: Do you have any observations on how often RAA might retrieve irrelevant exocentric videos when the embedding space has many visually similar but semantically mismatched samples?
```>>> A2``` Great question! Yes, it is entirely possible that RAA may be misled by false positives in retrieval. The retriever used in RAA is based on the ego-exo retrieval model from EgoVLP v2. As shown in Table 3, this model achieves a 50.31% HR@1 and 66.82% HR@5. However, we empirically find that this relatively low retrieval rate does not significantly impact the demographic prediction accuracy, as discussed in the second paragraph of the "Effectiveness of RAA" section (Section 6.2).
Despite the lower retrieval accuracy, RAA still enhances the attack's effectiveness. We hypothesize in L376-427 that even with imperfect retrieval, the process helps cluster and group identities with similar demographics (such as gender, age, and race) closer in its learned embedding space. This is evidenced in Table 4, where, despite a 50% retrieval accuracy, 82.22%, 84.51%, and 82.95% of the top-1 retrieved exo-videos share the same gender, age, and race as the ego-video, respectively. This suggests that even negative retrievals can *benefit* RAA if they are semantically close to the query.
```>>> Q3``` Minor typographical error at line 437 in the main text.
```>>> A3``` Thank you for pointing this out. We will revise the paper and fix this typographical error.
```>>> Q4``` What is the aggregation mechanism?
```>>> A4``` Thanks for this good question. In fact, we explore various aggregation mechanisms in Appendix B and Figure 6. The default mechanism involves a weighted sum of both the ego and exo predictions (referred to as soft voting), where the ego weight is set to 0.5, and the remaining 0.5 is evenly distributed across the exo predictions. In addition to this, we also examine hard voting (direct majority voting) and soft voting with equal weights assigned to all predictions. Our experiments show that, overall, soft voting with
𝑤=0.5 yields the best performance, improving all three demographic tasks consistently compared to the ego-only baseline (denoted by the dashed line).
Additionally, we investigate the impact of varying the number of top-k retrieved exo-predictions used in RAA. Our findings reveal that there is a key pivot point around
𝑘=3 or 𝑘=5, beyond which increasing k does not improve performance. This is because additional retrieved exo-views often introduce noise due to their potential inaccuracy. We also perform an ablation study on the role of the ego prediction in RAA, showing that including the ego prediction generally enhances the attack's effectiveness. We also experiment with using the retrieved similarity as a confidence measure to weigh each prediction, however, this empirically fial to improv over even the majority voting.
---
Rebuttal Comment 1.1:
Comment: I appreciate the helpful clarifications, which resolved my concerns. Therefore, I will maintain the original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and for your valuable feedback and suggestions in the review! We will revise the final version to incorporate rebuttal accordingly.
Please do not hesitate if you have further questions regarding the paper.
Thanks,
Authors of the paper | Summary: This paper investigates the privacy risks associated with egocentric videos. In particular, the authors introduce EgoPrivacy, a large-scale benchmark to evaluate privacy vulnerabilities across three axes: demographic privacy (gender, race, age), individual privacy (identity re-identification), and situational privacy (time and location). They further propose a novel Retrieval-Augmented Attack method, which boosts demographic attribute inference by retrieving relevant exocentric video footage associated with the egocentric clip. The paper evaluates zero-shot and fine-tuned models (CLIP, LLaVA, VideoMAE, and EgoVLPv2) on the proposed benchmark. The benchmark and findings are intended as red-teaming tools to inform and stimulate future privacy-preserving methods for egocentric vision.
Claims And Evidence: **EgoPrivacy Benchmark**
The paper introduces EgoPrivacy, the first large-scale benchmark explicitly designed to evaluate privacy risks in egocentric video across demographic, individual, and situational dimensions. This contribution is well-supported: EgoPrivacy is constructed from two large datasets—Ego-Exo4D and Charades-Ego—and annotated for demographic attributes (gender, race, age), individual identity, and situational context (location and moment). The benchmark includes seven tasks, framed as classification and retrieval problems, to comprehensively assess these privacy risks.
**Benchmark Evaluation**
The paper offers a thorough empirical evaluation of the EgoPrivacy benchmark, revealing notable privacy vulnerabilities in egocentric video data. This claim is moderately well-supported: Although experimental results do show that general-purpose foundation models can reliably infer demographic, identity, and situational attributes at rates well above chance, the reported performance likely underestimates real privacy risks, as SoTA models specialized for these tasks would likely achieve significantly higher accuracy.
**Retrieval-Augmented Attack**
The paper proposes a novel privacy attack that leverages ego-to-exo retrieval augmentation to enhance demographic inference. This contribution is well-supported: Augmenting predictions with exocentric views significantly improves demographic classification performance across multiple models.
Methods And Evaluation Criteria: **Benchmark**
The data collection strategy is appropriate and well-grounded. The authors utilize two large-scale egocentric datasets—Ego-Exo4D and Charades-Ego—as sources of raw video data. To support demographic analysis, they augment these datasets, where necessary, with additional annotations via Amazon Mechanical Turk. The suite of proposed privacy tasks is well-motivated and reflects real-world concerns: demographic attribute classification (gender, race, age), identity re-identification, and situational inference (time and location prediction). These tasks collectively provide a comprehensive framework for evaluating wearer-centric privacy risks in egocentric video.
**Retrieval-Augmented Attack (RAA)**
The threat model for the RAA is well-founded, as an attacker could plausibly exploit both egocentric and exocentric videos to carry out a privacy attack.
Theoretical Claims: The paper does not present any theoretical claims.
Experimental Designs Or Analyses: The experimental evaluation of the benchmark, while suggestive, is incomplete. Although the authors effectively demonstrate demographic leakage using general-purpose foundation models (e.g., CLIP, LLaVA), the reported performance likely underestimates the actual privacy risks.
Notably, exocentric classification results (e.g., ~90% accuracy for gender, ~75% for race) fall significantly below the ceiling performance of task-specific state-of-the-art models, which routinely exceed 98% for gender and 90–95% for race and age (e.g., FairFace, DEX).
Likewise, for egocentric classification, the authors do not explore hand-based biometric models, despite their relevance for predicting demographic and identity attributes in the absence of facial visibility. Prior work in biometric vision has demonstrated that hands carry sufficient cues to support identity and demographic inference with high accuracy (Matkowski et al., 2019), (Matkowski et al., 2020)
Several publicly available datasets could be leveraged to develop stronger baseline models or better contextualize the benchmark's difficulty:
- **PolyU-IITD-v3:** ~12k hand images with identity and ethnicity labels
- **11K Hands:** ~11k hand images annotated with identity, age, gender, and ethnicity
- **NTU-PI-v1:** ~8k palm and dorsal hand images with identity, age, gender, and ethnicity labels
- **CASIA:** ~5.5k hand images labeled with identity
Incorporating hand-based SoTA models or at least referencing their performance as upper bounds would strengthen the benchmark's credibility and offer a clearer picture of the limits of wearer privacy in egocentric settings.
Supplementary Material: NA
Relation To Broader Scientific Literature: The paper sits at the intersection of computer vision, privacy, and egocentric video analysis, and introduces the first large-scale benchmark specifically designed to evaluate wearer-centric privacy risks across both egocentric and exocentric perspectives.
Essential References Not Discussed: References to hand-based biometric models and datasets are missing. See **Experimental Designs Or Analyses** section.
Other Strengths And Weaknesses: Aside from the demographic annotations added to Charades-Ego, the benchmark primarily repurposes existing datasets (Ego-Exo4D and Charades-Ego), and thus offers limited novelty in terms of data collection. The core insight—that identity and demographic information can be extracted from egocentric video—is not particularly surprising, especially given prior work showing that such attributes can be inferred from hand images alone. The proposed Retrieval-Augmented Attack (RAA) closely resembles earlier cross-view retrieval methods, such as those by Fan et al. (2017) and Ardeshir et al. (2018), and does not introduce significant algorithmic innovation. Finally, the benchmark evaluation is not thoroughly conducted: it primarily relies on general-purpose models for the privacy attacks, rather than leveraging state-of-the-art models specifically trained for demographic or identity recognition, which would offer a stronger and more realistic assessment of privacy risks.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: ```>>> Q1``` Although the authors effectively demonstrate demographic leakage using general-purpose foundation models (e.g., CLIP, LLaVA), the reported performance likely underestimates the actual privacy risks.
```>>> A1``` Thanks for the suggestion. While models tailored to privacy attacks may have an advantage, there's a significant gap between datasets used for facial attribute prediction—typically high-quality, cropped facial images (e.g., CelebA, UTKFace, FairFace)—and EgoPrivacy, which uses raw, distant egocentric video (see [image](https://bashify.io/i/fQwScK) left). *We argue that SOTA models perform well in ideal conditions, but their metrics aren't directly comparable to results on EgoPrivacy.*
We also agree that current vision-language foundation models, despite performing surprisingly well, aren’t optimized for egocentric privacy tasks. Their performance could improve with more egocentric training data. This motivates our benchmark: to drive research, enable objective progress tracking, and advance attack and defense methods in the egocentric privacy domain.
```>>> Q2``` For egocentric classification, the authors do not explore hand-based biometric models.
```>>> A2``` Just as exocentric video frames with faces do not resemble images from high-quality facial attribute datasets (see A1 above), the distribution gap between hand/palmprint identification data and egocentric videos is even more dramatic, as shown in [image](https://bashify.io/i/fQwScK) (right). The vast majority of egocentric video frames do not show clear pictures of the hands of the camera wearer; even those that do tend to show the back of hands from first-person view, which contains far less identifiable information than the palm. This makes it very challenging to apply hand biometric methods to egocentric privacy tasks directly without nontrivial modifications or retraining. We will include a thorough discussion of specialized approaches to egocentric privacy attacks in the final version of the paper, as well as empirical comparisons to hand-based baselines.
Regarding additional baselines in exocentric (face-based) and egocentric (hand-based), we have invested effort to reproduce the results of the suggested papers on our benchmark. However, due to the limited time and computation, as well as the unavailability of pretrained models in public and the scale of our evaluation set (over 5000 exo- and ego videos), we are unable to offer results at the moment, but hopeful can share results during the discussion phase and final version of the paper.
```>>> Q3``` Limited novelty in terms of data collection.
```>>> A3``` While we do repurpose Ego-Exo4D and Charades-Ego dataset for egocentric privacy, additional effort has been made on the annotation of demographic labels, on an unprecedented scale, which is a nontrivial task. This annotation further facilitates the significant research on the egocentric privacy that has received limited attention in prior egocentric vision literature.
```>>> Q4``` The proposed Retrieval-Augmented Attack (RAA) closely resembles earlier cross-view retrieval methods, such as those by Fan et al. (2017) and Ardeshir et al. (2018), and does not introduce significant algorithmic innovation.
```>>> A4``` Thank you for your comment. As discussed in Section 2, "Related Work on Egocentric Person Identification," we acknowledge the contributions of earlier cross-view retrieval methods [1, 2, 3, 4]. However, previous work has largely overlooked the impact they have on the demographic privacy of camera users, i.e. how cross-view retrieval techniques might facilitate security breaches of egocentric demographic privacy, which is the primary focus of our study.
For instance, [1, 2] focus on identifying first-person camera wearers in third-person videos, but they do not explore the potential of these methods in enhancing demographic attacks. Similarly, [3] investigates cross-view techniques for retrieving motion features, while [4] uses cross-view retrieval to improve action and motion captioning at the semantic level, neither of which are designed with privacy concerns in mind. In the final version of the paper, we will include a more detailed discussion of the novelty of RAA in relation to these prior works.
[1] Fan, Chenyou, et al. "Identifying first-person camera wearers in third-person videos." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[2] Elfeki, Mohamed, et al. "From third person to first person: Dataset and baselines for synthesis and retrieval." arXiv preprint arXiv:1812.00104 (2018).
[3] Ardeshir, Shervin, Krishna Regmi, and Ali Borji. "Egotransfer: Transferring motion across egocentric and exocentric domains using deep neural networks." arXiv preprint arXiv:1612.05836 (2016).
[4] Xu, Jilan, et al. "Retrieval-augmented egocentric video captioning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. | Summary: This paper introduces EgoPrivacy, a benchmark and study on the privacy risks associated with egocentric (first-person) videos, revealing that substantial personal information about the camera wearer—such as demographics (gender, race, age), identity, and location/time—can be inferred even when the wearer’s face is not visible. The authors propose a novel Retrieval-Augmented Attack (RAA) that boosts privacy attacks by retrieving matching exocentric (third-person) videos, significantly improving inference accuracy. Through experiments with both zero-shot and fine-tuned models (like CLIP, LLaVA, and EgoVLPv2), they show that even minimal attacker capabilities can lead to effective privacy breaches, raising serious concerns about the use of wearable cameras and highlighting the urgent need for privacy-preserving approaches in egocentric vision.
### update after rebuttal
I have raised my score.
Claims And Evidence: All claims are well supported.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: ### Calculation of Random Accuracy
The authors claim that the random accuracy for predicting race is 33.3%. However, the distribution of the demographic is unbalanced per the appendix.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The paper builds meaningfully on prior work in visual privacy and egocentric vision by addressing gaps left by earlier datasets and studies. While prior egocentric datasets like FPSI and EVPR were limited in scale and scope, this work introduces a large-scale, richly annotated benchmark (EgoPrivacy) that uniquely targets privacy risks faced by the camera wearer. It builds on prior work by contributing a structured taxonomy of privacy types—demographic, individual, and situational—that had not been comprehensively studied together. The proposed Retrieval-Augmented Attack also builds on cross-view retrieval work (e.g., Elfeki et al., 2018), but repurposes it to demonstrate novel privacy vulnerabilities.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Missing Appendix letter (line 437)
Questions For Authors: 1- Can the authors clarify the issue with random accuracy and then update the results and discussion if necessary?
Overall, the work is novel and well motivated. I will update my score upon addressing the critical issue above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ```>>> Q1``` Calculation of random accuracy
```>>> A1``` Thank you for the insightful question. We present the prior accuracy
| | Variant | Gender | | | Race | | | Age | | |
|---------------|---------|--------|-------|-------|--------|-------|-------|--------|-------|-------|
| | | Exo | Ego | RAA | Exo | Ego | RAA | Exo | Ego | RAA |
| Random Chance | | 50.00 | | | 33.33 | | | 33.33 | | |
| Priored | | 60.74 | | | 54.17 | | | 79.48 | | |
| **Zero-shot (ID)**| | | | | | | | | |
| CLIP (DFN) |ViT-H/14| 78.64 | 57.89 | 67.35 | 60.04 | 45.21 | 60.98 | 73.51 | 72.02 | 76.23 |
| SigLIP |S0400M/14| 84.01 | 57.63 | 70.56 | 65.46 | 54.97 | 66.27 | 72.90 | 68.25 | 78.03 |
| LLaVA-1.5 | 7B | 91.52 | 66.90 | 77.16 | 60.06 | 57.34 | 57.52 | 79.29 | 79.46 | 79.55 |
| Video-LLaMA2 | 7B | 90.96 | 73.15 | 79.48 | 71.53 | 53.97 | 69.10 | 52.99 | 47.08 | 56.14 |
| | 72B | 91.59 | 70.03 | 78.41 | 69.25 | 65.36 | 67.82 | 82.46 | 79.64 | 81.62 |
|**Fine-tuned (ID)**| | | | | | | | | |
| CLIP (DFN) |ViT-H/14| 88.33 | 68.87 | 76.98 | 73.93 | 70.92 | 71.92 | 77.15 | 79.73 | 79.73 |
| EgoVLP v2 | 7B | 84.85 | 71.81 | 77.88 | 71.46 | 72.01 | 75.57 | 77.11 | 80.72 | 81.88 |
| VideoMAE |ViT-B/14| 72.42 | 63.69 | 70.65 | 75.16 | 66.73 | 73.49 | 78.21 | 79.73 | 81.70 |
| |ViT-T/14| 87.14 | 63.87 | 78.95 | 74.36 | 70.10 | 72.65 | 77.15 | 79.73 | 79.73 |
However, imbalance also poses challenges during training. Thus, we also perform additional studies that merge / remove the extremely imbalance category in the age / race demongraphic attacks. Specifically, we merge the old with the middle age in age and remove the black in the race, as these two categories are extremely minority in the dataset. We present additional results below:
| | Variant | Race | | | Age | | |
|---------------|--------|--------|-------|-------|--------|-------|-------|
| | | Exo | Ego | RAA | Exo | Ego | RAA | Exo | Ego | RAA |
| Random Chance | | 50.00 | | | 33.33 | | | 33.33 | | |
| Priored | | 55.93 | | | 79.48 | | | - | | |
| **Zero-shot (ID)**| | | | | | | | | |
| LLaVA-1.5 | 7B |61.93|58.60|68.22|80.10|80.47|81.56|
| Video-LLaMA2 | 13B |71.53|53.97|69.10|52.99|47.09|56.14| | |
| **Fine-tuned (ID)**| | | | | | |
| CLIP (DFN) | ViT-H/14|69.06|67.91|69.37|76.04|79.73|79.91|
| EgoVLP v2 | 7B |64.90|65.36|69.28|76.03|78.65|79.91|
In general, we observe that gender and race attacks outperform prior accuracy by large margins, while the improvement of age attacks is less significant due to heavy imbalance in age demographics. We leave for future work the collection of more balanced egocentric video datasets for an unbiased evaluation of demographic privacy. | null | null |
AtlasD: Automatic Local Symmetry Discovery | Accept (poster) | Summary: The paper proposes a novel pipeline to discovery symmetries in a dataset and, then, employ them to enforce the correct inductive bias in a machine learning model.
In particular, the proposed method can discover not only global symmetries but also local symmetries by restricting the attention to local patches of the input data via the charts in a user-defined atlas.
Claims And Evidence: See comments below
Methods And Evaluation Criteria: See Experimental Designs or Analysis.
Theoretical Claims: I didn't check the theoretical proofs in detail.
Experimental Designs Or Analyses: The evaluation criteria seems suitable for evaluating the proposed idea, although I think there is space of improvment in some experiemnts.
- Sec. 5.2: what happens if you seed with more than 1 Lie Algebra generator? Do you still find a single relevant generator?
- Sec 5.4, isn't the choice of isotropic filters making the model equivariant to GL(2)? If so, this seems to imply you ignored the discovered GL+(2) symmetry and, instead, implemented a model equivariant to all possible transformations considered in the initial search space. Then, I am not sure this experiment supports the benefit of the symmetry discovery method. That being said, I agree on the difficulty of implementing a GL+(2) equivariant model, but I think this suggests this is not the best task to evaluate the proposed method.
- Why not consider the benchmark of fluid simulation from (Wang et al., 2022), which already compares many learnable equivariance methods in the literature? This could provide a simple and effective way to compare with most of the previous literature. This dataset seems to also feature local rotational symmetries (similar to the PDE dataset in this manuscript) and broken global symmetries due to boundary conditions.
- All experiments consider at most order-2 discrete groups (such as the flip group and the parity group). I suspect this significantly limits the complexity of discovering discrete symmetries (see Questions for the Authors below); it would be interesting to experiemnt with tasks featuring bigger discrete symmetries e.g. C_4 or D_4 rotations.
Wang et al., 2022, Approximately equivariant networks for imperfectly symmetric dynamics
Supplementary Material: I quickly reviewed all the supplementary materials but I might have missed some details.
Relation To Broader Scientific Literature: I think this is a relevant novel work.
The paper claims previous works on symmetry discovery mostly focused on global symmetries.
However, there's a few previous works which considered learning local symmetries by leveraging the idea of symmetry breaking at different scales in CNNs with layer-wise learnable equivariance.
See for example:
Romero & Lohit, 2022, Learning partial equivariances from data
Veefkind & Cesa, 2024, A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs
Essential References Not Discussed: I think the manuscript is missing a few relevant citations from the literature on learnable equivariance and approximate equivariance.
Below are a few examples:
Finzi, M., Benton, G., and Wilson, A. G. Residual pathway priors for soft equivariance constraints.
Wang, R., Walters, R., and Yu, R. Approximately equivariant networks for imperfectly symmetric dynamics
Wang, D., Zhu, X., Park, J. Y., Platt, R., and Walters, R. A general theory of correct, incorrect, and extrinsic equivariance
van der Ouderaa, T., Romero, D. W., and van der Wilk, M. Relaxing equivariance constraints with non-stationary continuous filters.
van der Ouderaa, T. F., Immer, A., and van der Wilk, M. Learning layer-wise equivariances automatically using gradient
Petrache, M. and Trivedi, S. Approximation-generalization trade-offs under (approximate) group equivariance
Veefkind, L., and Cesa, G. A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs
van der Ouderaa et al., Noether's Razor: Learning Conserved Quantities
Other Strengths And Weaknesses: Strengths.
The atlas approach to local symmetries is novel and the idea proposed is interesting.
The experimental analysis shows the potential benefits of the proposed method in different settings.
Weaknesses.
The paper misses the comparison with some previous works and the experimental analysis has a few issues.
I am happy to raise my score if the authors address these concerns.
Other Comments Or Suggestions: - in Sec. 4.2.1, it is not very clear to the reader what kind of functions the \Phi_c should be at this point of the manuscript. Might be worth adding some concrete examples earlier in the manuscript.
- Eq. 3 why not just training the basis while enforcing its elements to be orthogonal to each other? One would only need to enforce the matrix B (containing the vecorized B_i in its columns) to be an orthogonal matrix, which you can easily do via SVD, no?
- Sec. 4.2.2: page 5, second column, "This implies we only need to consider transformations whose determinant has absolute value 1." Why is that the case?
- can you make Fig 4 clearer? the colorbars are hardly readable. Also, the numbering of the generators in the caption are ambiguous. Maybe add the numbers in the figure too. Also, what is the pink and green heatmap exaclty? It seems that Fig 4 includes different types of matrices (continuous generators, some metric and the discrete generators) but puts them all together with no explanation in the image.
- what are the $\Psi_c$ functions used for the symmetry discovery algorithm in sec. 5.1? Do you just train a normal neural network first, discover the symmetry and, then, train a new network equivariant to the new symmetries? I think more details about this could be included in the main paper rather than just the appendix.
- 5.4 why so few charts? Also, the width of the charts seem a very important parameter here to manage the scale of the local symmetries. How is it chosen?
Questions For Authors: - if a neural network is trained on a real dataset, it is typically never numerically equivariant unless the data contains sufficient data augmentation. At that point, however, it is assumed the user already knows the underlying symmetry. I feel this aspect could get some more discussion in the main paper.
- Sec. 4.2.1 sampling $\eta \sim N(0, I)$ will only enforce stability to (relatively) small transformatoins close to the identity, isn't it? Does't it make more sense to sample uniformly from the group?
- Sec 4.2.2: if I understand correctly, the method uses a simple gradient descent strategy in a rather low-dimensional space (the parameter space of the $C_l$ matrices) to find all minima of a probably very non-convex function. Why do you expect this solution to work well to find the discrete symmetries?
- Sec 4.2.2 even if $C_i C_j^{-1}$ is not in the identity component, they might still be redundant, e.g. if $C_i$ is a power of $C_j$. Also, how is the minimization perfomed? Is the objective convex such that a simple gradient descent can be expected to converge? I am not sure it is possible to claim that the filtration process produces a list of unique representatives of the cosets. Note also that this is probably not a problem in these experiments since only order-2 discrete groups have been considered (such as the flip group and the parity group). You should experiment with other discrete symmetries, e.g. C_4 or D_4 rotations.
- 5.2: what happens if you seed with more than 1 Lie Algebra generator? Do you still find a single relevant generator?
- 5.3: assigning a chart to the region of each digit seems a bit like cheating, since the users are implicitly pointing to the regions (and restricting the attention to only those regions) where they know there are exact local symmetries. Did you consider other choices of charts?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > The manuscript is missing a few relevant citations
We appreciate the suggestion for additional citations. The mentioned works on approximate equivariance modify equivariant architectures to handle situations where no perfect global symmetry is present. These works are indeed relevant and we will be sure to include them in our revised manuscript.
The fluid simulation benchmark also appears relevant. We have started work towards an experiment, but are currently addressing challenges related to the chaotic and non-local nature of the dataset.
> Eq. 3 Why not train the basis using SVD?
SVD achieves orthogonality but is not suitable for encouraging sparsity. Our standard basis regularization explicitly promotes sparsity by penalizing shared non-zero terms among the generators.
> “This implies we only need to consider transformations whose determinant has absolute value 1.” Why?
When the component group of a matrix Lie group $G$ is finite, we may find a finite subgroup $H$ that contains at least one element from each connected component [1]. As $H$ is a finite subgroup of $GL(n)$, for each $h\\in H$, $h^{|H|}=I_n$ so $(\\det h)^{|H|}=1$ and $|\\det h|=1$. Hence, each coset has at least one representative whose determinant has absolute value $1$.
> Can you make Fig 4 clearer?
Currently, the subfigures are in row-major order. The pink and green heatmap is the computed invariant metric tensor, using the methodology by [2]. We will make the numbering clearer and split this figure into multiple subfigures.
> What are the $\Phi_c$ functions in sec. 4.2.1/5.1?
The description you provided is correct. We will be sure to include more details and examples in the main paper.
> 5.4 Why so few charts?
The chart size was primarily constrained by the fact that when small charts were used, the atmospheric river class would take up the entirety of the chart. So we instead chose to use a smaller number of larger charts.
> If a network is trained on a real dataset, it is typically not equivariant
This is a problem with global symmetry discovery when datasets are canonicalized (e.g. rotation of images removed). It is uncommon to perform such canonicalization at the local scale. Thus, our predictor networks will not face this issue and remain equivariant.
> 4.2.1 Sampling $\eta \sim N(0,I)$ will only enforce stability to small transformations
Although there are many distributions to try, it’s difficult to sample uniformly from the group as the hypothesis space $\mathrm{GL}(n)$ is not compact. To sample more extreme transformations, one can increase the standard deviation. However, due to the unboundedness of the search space, the distribution must necessarily be biased towards the origin.
> Why does the discrete discovery algorithm work?
The search space is relatively low-dimensional and is further reduced in dimensionality by our assumption that the component group is finite. Moreover, by setting $K$ to be significantly larger than the expected number of cosets, at least one representative likely converges toward each ground truth coset. This is confirmed experimentally in 5.1 and 5.2.
To verify if our discrete discovery pipeline is able to discover complex groups, we ask it to find the global symmetries of the function $f(x,y)=|x|+|y|$. The ground truth symmetry group is $D_4$ and our algorithm can recover all $8$ elements.
https://i.ibb.co/CKy1wvLc/output.png
> 4.2.2 Even if $C_i C_j^{-1}$ is not in the identity component, they might be redundant
Note that we claim to discover all the cosets, not just the generators of the component group. In practice, once one has all the elements of the component group, it’s easy for a human to identify the generators.
> 4.2.2 How is the minimization performed? Is the function convex?
The function is not globally convex (especially since in certain directions $\\exp$ is periodic). Nevertheless, we have found gradient descent to be successful empirically and are yet to run into any issues. In case gradient descent fails, one can turn to higher-order methods like L-BFGS.
> 5.2 What happens if you seed with more than 1 generator?
When we use multiple generators, the algorithm produces one rotational generator and one that corresponds to a weak scale. The rotational generator is still recognizable, though admittedly less accurate than when the algorithm is seeded with 1 generator.
> 5.3 Assigning a chart to the region of each digit seems unfair
We acknowledge the MNIST experiment has an idealized setting. Its purpose is to demonstrate the viability of the full pipeline and highlight the difference between local and global transformations. In more realistic scenarios (like our PDE or climate examples), one does not know a priori which charts exhibit local symmetries. Despite having less knowledge, our method still succeeds.
**References**
[1] A note on free subgroups in linear groups, Wang 1981.
[2] Generative Adversarial Symmetry Discovery, Yang et al. 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed answer.
I still have a main concern:
> This is a problem with global symmetry discovery when datasets are canonicalized (e.g. rotation of images removed). It is uncommon to perform such canonicalization at the local scale. Thus, our predictor networks will not face this issue and remain equivariant.
I disagree with this statement, since this problem is relevant even in symmetric datasets without any canonicalization. For example, it is very common to use rotation augmentation on typically rotation symmetric datasets (e.g. histopathological images): even if the underlaying data distribution is symmetric, the finite and small dataset rarely present all rotated versions of each pattern; using explicit data augmentation is often fundamental for non-equivariant models to properly generalize (it is even used for SE(2)-equivariant models to mitigate the discretization artifacts of the pixel grid).
With this in mind, I find the proposed argument about the model learning the symmetry by just training it on the dataset (of local or global patches is irrelevant for this argument) a bit weak: if that was true, there would be no need for both data augmentation and equivariance, which contradicts most of the previous literature.
Note I am not claiming it is never possible to get insights about the symmetries by inspecting a model trained on a sufficiently large dataset; instead, I am highlighting a reasonable limitation of this approach which seems to be mostly ignored in the current manuscript.
Also, regarding Question 5.3 about charts for the MNIST dataset, I would at least mention this point explicitly in the manuscript.
---
Reply to Comment 1.1.1:
Comment: Yes, you are correct that we assume the symmetry is fully present in the dataset for each chart. We will be sure to explicitly mention this assumption in our manuscript. Note that this is a limitation of symmetry discovery works in general, as LieGG similarly relies on a pretrained predictor and LieGAN requires the dataset distribution to be not fully canonicalized across any orbit (see their Assumption 3 in Appendix A). Fundamentally, if symmetry is not present in a dataset, either due to canonicalization or insufficient data samples, it does make symmetry discovery more challenging. We consider it a direction for future work to investigate how data distribution (along group action orbits) can affect the performance of symmetry discovery.
We will mention the idealistic setup of Section 5.3 in the manuscript as well. | Summary: The paper introduces the concept of atlas equivariance, which formalizes the notion of local symmetry in contrast to traditional global symmetry approaches. The proposed method discovers local symmetries by learning a Lie group basis for each chart (a local region of the input manifold). This is achieved by training local predictor networks and optimizing the equivariance loss with respect to learnable group generators. These generators are defined in the Lie algebra of the target symmetry group, which acts on each local chart.
Claims And Evidence: The reasoning for the necessity of local symmetry in general is not convincing. The paper states that “local symmetries are more generalized,” but it does not strongly justify why local symmetry is essential for either the machine learning or natural science communities. Although it also mentions that “it generalizes symmetry discovery to arbitrary manifolds and allows for downstream use in gauge equivariant networks,” this is also feasible with prior methods when data is defined as a manifold. Moreover, the performance gains reported in the experiments from utilizing discovered local symmetry are only marginal.
Methods And Evaluation Criteria: The suggested approach partially makes sense. However, I do not understand why parameterizing only the Lie algebra and minimizing the equivariance loss with a pretrained predictor is sufficient for learning the symmetry group. This approach may lead to unstable symmetry discovery depending on the initialization of the Lie algebra and the accuracy of the pretrained predictor, especially for complex data like PDE solutions. This is why prior methods choose to provide some group generators and train their coefficients [1,2] or use well-designed loss functions, such as Jacobian-based losses [3] or cosine similarity of output features [4]. The paper should discuss these potential issues or provide an empirical study to address them.
[1] Learning Invariances in Neural Networks, Benton et al. 2020.
[2] Generative Adversarial Symmetry Discovery, Yang et al. 2023.
[3] LieGG: Studying Learned Lie Group Generators, Moskalev et al. 2023.
[4] Learning Infinitesimal Generators of Continuous Symmetries from Data, Ko et al. 2024.
Theoretical Claims: I did not check the proof of Theorem 4.3. but statement at least makes sense.
Experimental Designs Or Analyses: 1. The experiment excluding a certain region in the PDE setting is sound and well-designed.
2. However, the experiments focus only on locally varying symmetry. It is also important to demonstrate that the method consistently discovers the same symmetry for every chart when only global symmetry is present, which prior methods can easily achieve.
3. I also wonder about the memory and time complexity required for learning the group generators. I guess the learning time is proportional to the number of charts available
4. Additionally, the paper should provide guidelines on how many neighborhoods are needed to form a chart to obtain reasonable local symmetry for different types of data. The choice of the number of neighborhoods may significantly impact the discovered symmetry.
Supplementary Material: None
Relation To Broader Scientific Literature: The main concern is the contribution of the paper. Local symmetry can also be found using baselines like LieGG and LieGAN by defining charts and treating the data as a manifold. The main novelty appears to be the new loss function for discovering discrete symmetry, which was limited in prior works. However, this contribution is independent of finding local symmetry. I do not see how the proposed loss functions or parameterization are specifically designed for discovering local symmetry.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > The reasoning for the necessity of local symmetry is not convincing.
The motivation for local symmetry is that arbitrary manifolds (such as a Möbius strip) do not have global symmetries. In such cases, there is nothing for global discovery methods to learn. However, all manifolds do have local symmetries, making them broadly applicable. Such local symmetries are interesting because they can be used as an inductive bias in gauge equivariant neural networks to improve performance in computer vision, climate segmentation, and other real world tasks [1]. On the other hand, the global symmetries that prior works discover are incompatible with gauge equivariant networks.
> The experiments focus only on locally varying symmetry. It is also important to demonstrate that the method consistently discovers the same symmetry for every chart when only global symmetry is present, which prior methods can easily achieve.
We clarify that the atlas equivariance group describes the global symmetries of the local predictors, where each local predictor is the task function restricted to a particular chart. Crucially, it is the *common* symmetry group for these predictors, rather than varying with each chart. This means that in our experiments, we have in fact been discovering the symmetry group that is the same for all charts.
> I also wonder about the memory and time complexity required for learning the group generators.
We include an analysis of space and runtime in Appendix E. In summary, the time and space complexity scale linearly with the number of charts.
> Additionally, the paper should provide guidelines on how many neighborhoods are needed
The primary requirement for an atlas is that the charts are large enough for the function to truly be atlas local, but small enough that each region is approximately flat. The exact number of charts is a hyperparameter that may depend on domain-specific factors (e.g. geometry, boundary, data coverage). However, Sec 5.2 demonstrates that our algorithm works under a diverse set of atlases, indicating its robustness to the chart size and count in practice.
> I do not see how the proposed loss functions or parameterization are specifically designed for discovering local symmetry.
Compared to those in LieGG or LieGAN, the loss function and parameterization used by AtlasD are specifically tailored to local symmetry discovery. LieGG requires one row in the polarization matrix for every single output pixel in the dataset (Appendix D.4). However, local symmetry datasets are detailed feature fields, making this method impractical due to the memory required. On the other hand, an approach based on LieGAN requires adversarial training, which is often unstable. Moreover, a discriminator-based setup cannot be used in the discrete discovery algorithm because losses for different cosets are no longer comparable. Hence, the predictor loss method used by AtlasD is most applicable to discovering the full local symmetry group.
The reviewer is correct that the discrete discovery is somewhat independent of local symmetry, but we argue that this only boosts our contribution as our method can then be applied to broader domains.
**References**
[1] Gauge equivariant convolutional networks and the icosahedral cnn, Taco Cohen et al. 2019
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. Some points are addressed, but the questions raised in Methods and Evaluation Criteria remain unresolved.
---
Reply to Comment 1.1.1:
Comment: Thank you for replying and pointing out the remaining concerns. We have explained the design rationale for the proposed loss function and parameterization of the symmetry in the last section of our rebuttal. We will provide additional clarification to your question and comparison with the mentioned related works as follows.
> AtlasD depends on the accuracy of the predictors
Yes, this is true. However, we note that this is a common setup in symmetry discovery, e.g. LieGG, [4], [5] also depend on the accuracy of pretrained predictors. In addition, in the context of local symmetry discovery, the predictors generally have simple tasks, since they only need to predict locally. This gives us an advantage over global predictors as it is inherently more difficult to predict on a global scale. Finally, compared to existing works that use a GAN discriminator, our predictor-based approach does not suffer from the training instability issue of GAN.
> Comparison to prior works
[1,2]: When fixing the generators and training the coefficients, these methods learn which *subset* of an input group the system is equivariant to. This differs from our setup, where we seek to learn the maximal symmetry group from a much broader hypothesis space. Only our method is applicable when we do not know the symmetry group beforehand.
[3]: Generalizing LieGG to discover equivariances on maps between feature fields requires a significant amount of memory, which makes it infeasible in practice. This is discussed briefly in Appendix D.4 of our paper.
[4]: Recall that we define our equivariance loss as $\mathcal{L}(\Phi_c(g \cdot x), g \cdot \Phi_c(x))$, where $\mathcal{L}$ is a error function appropriate to the context. If we take $\mathcal{L}$ to be cosine similarity, then the only major difference between our method and [4] is that we apply the loss to the output of the predictors but in [4], it is applied to the output of the feature extractor. Since training the feature extractor is of comparable difficulty to training the predictors, we argue we should see similar stability in discovery.
**References:**
[1] Learning Invariances in Neural Networks, Benton et al. 2020.
[2] Generative Adversarial Symmetry Discovery, Yang et al. 2023.
[3] LieGG: Studying Learned Lie Group Generators, Moskalev et al. 2023.
[4] Learning Infinitesimal Generators of Continuous Symmetries from Data, Ko et al. 2024.
[5] Deep learning symmetries and their Lie groups, algebras, and subalgebras from first principles, Forestano et al. 2023 | Summary: This paper introduces AtlasD, a framework for discovering local symmetries, specifically, atlas equivariance, within datasets. Atlas equivariance is a kind of gauge equivariance, where a global symmetry group $G$ acts differently in each local coordinate system. To identify such a $G$, AtlasD assumes a predefined atlas, trains local predictor networks (which learn $G$-equivariance implicitly from data) for each coordinate chart, and extracts the Lie algebra of $G$ that is commonly present across the local predictors. The proposed method also extends to discrete symmetries by identifying cosets of the identity group action. Furthermore, the authors establish a theoretical connection between atlas equivariance and gauge symmetry, ultimately constructing a pipeline to extract local symmetries from data and integrate them into gauge-equivariant CNNs. The approach is validated across four different datasets.
Claims And Evidence: The proposed method is based on the strong assumption that the presence of known atlas. Building on this assumption, the authors’ claims are empirically supported as follows:
- AtlasD detects local symmetries, defined as atlas equivariance (Definition 4.2), where global methods fail. This is empirically supported by the PDE experiment (Figure 15) and the MNIST-on-sphere experiment (Figure 8).
- AtlasD identifies discrete symmetries (more specifically, disconnected Lie group), including both positive and negative determinant cases. This is demonstrated in the top quark tagging experiment (Section 5.1), where AtlasD finds Section 5.1 ($O^+(1, 3)$ from AtlasD vs. $SO^+(1,3)$ from LieGAN).
- The extracted local symmetry provides a useful inductive bias that compatible with gauge-equivariant neural networks: this claim is supported by several downstream tasks.
Methods And Evaluation Criteria: - A major concern with this method is the assumption that a suitable atlas for the dataset is known. It is unclear whether this assumption is reasonable, and the authors should provide more empirical evidence on how different atlas choices impact the discovered symmetry. While this is partially addressed in the PDE example (19 charts vs. 3 charts), additional ablation studies would strengthen the validation, including:
1. The effects of overly sparse or excessively dense (overlapping) charts.
2. Performance under missing or incorrect (noisy) charts.
3. Cases where local coordinates are embedded in higher-dimensional Euclidean spaces.
4. Atlases constructed using data-driven approaches, such as chart auto-encoders [1,2].
[1] Schonsheck, S., Chen, J., & Lai, R. (2019). Chart auto-encoders for manifold structured data. arXiv preprint arXiv:1912.10094.
[2] Floryan, D., & Graham, M. D. (2022). Data-driven discovery of intrinsic dynamics. Nature Machine Intelligence, 4(12), 1113-1120.
- Another concern is the computational complexity of the proposed method. Several crucial hyperparameters, such as the number of basis functions k and the number of cosets for disconnected symmetries K, are determined heuristically by progressively reducing their values from high to low. This approach appears inefficient and may benefit from a more systematic or scalable selection strategy.
Theoretical Claims: This paper primarily explores applications of symmetry discovery but also presents an interesting theoretical connection between atlas equivariance and gauge-equivariant CNNs. I briefly reviewed the theorem and its proof and did not find any significant flaws.
Experimental Designs Or Analyses: As mentioned in the Methods and Evaluation Criteria section, I believe the authors should provide a more rigorous ablation study regarding the selection of atlases. Please refer to this section for details. Apart from that, I am satisfied with the experimental procedure and benchmarks.
Supplementary Material: The supplementary material includes proofs, implementation details (such as regularizations used), experimental details, and a time complexity analysis of the proposed algorithm. I briefly reviewed the proofs and experimental details, which seem to adequately support the main manuscript.
Relation To Broader Scientific Literature: This paper is relevant to gauge theory in theoretical and high-energy physics. While the authors establish a connection between atlas equivariance and gauge-equivariant CNNs and present experimental results using a quark dataset, its focus remains within the realm of machine learning. If the authors illustrate the practical applications of gauge symmetry and local symmetry through examples from physics or other broader scientific fields in the introduction, the practical usefulness of this paper will be further emphasized.
Essential References Not Discussed: This paper provides a structured overview of relevant works on equivariant neural networks and automatic symmetry discovery. However, I believe the authors would benefit from discussing representation learning literatures, for example, [1,2], which can be seen as the automatic construction of charts and embeddings for manifold data.
[1] Schonsheck, S., Chen, J., & Lai, R. (2019). Chart auto-encoders for manifold structured data. arXiv preprint arXiv:1912.10094.
[2] Floryan, D., & Graham, M. D. (2022). Data-driven discovery of intrinsic dynamics. Nature Machine Intelligence, 4(12), 1113-1120.
Other Strengths And Weaknesses: Overall, the motivation is clear, the proposed method is well-founded, and the evaluation is reasonable. However, to make the work publication-worthy, the authors should provide additional theoretical or empirical validation to better clarify its sensitivity to the predefined local charts.
Other Comments Or Suggestions: To highlight the importance of local symmetry detection, it would be beneficial to include Figure 15 from the Supplementary Material in the main manuscript.
Questions For Authors: Please see the Method and Evaluation Criteria Section for major questions and concerns. Some additional questions are:
- The proposed method identifies the global symmetry group by aggregating multiple local predictor models. I am curious whether this approach enhances the robustness of symmetry discovery or makes it more susceptible to noise. From an ensemble perspective, it seems to improve robustness of discovered symmetries; however, if the local predictors are not well optimized, the discovered symmetry group might be unreliable. I would like to hear the authors' thoughts on this aspect.
- The authors mention that standard basis regularization provides more interpretable results, albeit at the cost of a higher rate of duplicate generators, possibly in comparison to cosine similarity regularization. What does "interpretable results" mean in this context? Does it simply imply that the discovered symmetries are more consistent across different runs? Additionally, could you conduct an ablation study to compare the effectiveness of standard basis regularization and cosine regularization?
- Algorithm 1, though provides a compact summary of the proposed framework, is too vague. Providing more details on practical computational methods would enhance its comprehensibility.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > A major concern with this method is the assumption that a suitable atlas for the dataset is known.
While this assumption may initially appear overly ideal, in practice it is achievable. The primary requirement for an atlas is that the charts are large enough for the function to truly be atlas local, but small enough that each region is approximately flat. In all of our experiments, we only needed minor additional tuning once this relatively weak condition was met.
We showed in section 5.2 that AtlasD is successful under two diverse atlases, implying one has large freedom in the exact atlas they choose. To provide more evidence, we have created an additional atlas for the PDE experiment with heavy overlap, missing regions, and a sheared/noisy chart.
https://i.ibb.co/xtgDr7ZF/Figure-1.png
We discover a single generator
$
\\begin{vmatrix}
-0.368 & -1.035 \\\\
1.101 & 0.386
\\end{vmatrix}
$ and both cosets. The noisy chart does worsen the discovered generator, but it remains recognizable as a rotation.
> Atlases constructed using data-driven approaches
We believe the referenced works slightly differ from our setup. In particular, these works model an unknown data manifold embedded in higher-dimensional Euclidean spaces. On the other hand, we deal with feature fields on a simple, explicitly known manifold, such as a sphere or 2D region.
> Computational complexity w.r.t hyperparameters $k$ and $K$.
$k$ is simple to tune in practice since we are dealing with low-dimensional spaces and only need several reruns during training. Moreover, only $k$ is chosen in the high-to-low process. $K$, the number of cosets, is fixed once.
> It would be beneficial to include Figure 15 from the Supplementary Material in the main manuscript.
We agree that Appendix D.4 is important and will be sure to include it and Figure 15 in the main manuscript.
> The proposed method identifies the global symmetry group by aggregating multiple local predictor models. Does this approach enhance the robustness of symmetry discovery against noise?
To be specific, we identify the local symmetry group of $\Phi$ by finding the common global symmetries of all $\Phi_c$. We argue that performing symmetry discovery across multiple predictors effectively multiplies the amount of data elements we have available, making our method resilient to noise.
> If the local predictors are not well optimized, the discovered symmetry group might be unreliable.
Yes, this is true. However, the predictors generally have easy tasks in our setup, since they only need to predict locally. This gives us an advantage over global predictors as it is inherently more difficult to predict on a global scale. Also, compared to existing works that use a GAN discriminator, our predictor-based approach does not suffer from the training instability issue of GAN.
> The authors mention that standard basis regularization provides more interpretable results. What does "interpretable results" mean in this context? Additionally, could you conduct an ablation study to compare the effectiveness of standard basis regularization and cosine regularization?
We provide an ablation in Appendix D.2 to compare standard basis regularization against cosine similarity.
When cosine similarity is used, we are still able to find an orthogonal basis, but each generator has many non-zero elements (Figure 12). This makes it difficult to understand what physical action each generator corresponds to. In contrast, in the basis discovered using standard basis regularization (Figure 4), each generator has only a few non-zero elements which allows it to easily be classified as a boost or rotation. Hence, “interpretability” in this context means sparsity of the generators.
> Algorithm 1, though providing a compact summary, is too vague.
We will be sure to include subroutines for each step in Algorithm 1. If there are any other specific points the reviewer feels should be included in the overview itself, we would be happy to add them.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response. I am now convinced that the requirement for prior knowledge of atlases on 2D/3D manifolds is not overly expensive. I also appreciate the additional experiment with sheared/noisy charts. I think this paper makes a solid contribution to the field of symmetry discovery, and I would like to raise my score accordingly. | null | null | null | null | null | null | null | null |
Multi-Turn Code Generation Through Single-Step Rewards | Accept (spotlight poster) | Summary: This paper tackles the task of multi-turn (MT) code generation by introducing $\mu$-Code, an *expert iteration* process to alternate between training a generator (which produces candidate code solutions $y$ conditioned on past history $s$) and training a verifier ($R(x, y)$ which scores these solutions). This is motivated by the observation that MT code generation is a one-step recoverable MDP, as a correct solution could be produced conditioned on any history. Experiments of 3-turn code generation on 2 benchmarks, MBPP and HumanEval, show that it performs better than the MT baseline (fine-tuning on the correct MT trajectories): 1. training using a learned verifier performs better than using an oracle verifier ($\mu$-Code v.s. Multi-STaR$^2$ and Multi-STaR). 2. learned verifier could better guide the MT code generation process in the multi-turn BoN (conditioned on a history, do N rollout out, and select the highest score as the current prediction) process.
Claims And Evidence: The claim that a learned verifier’s single-step reward is sufficient for multi-turn code generation is well motivated by the proposed one-step recoverable MDP formulation. The authors use the learned verifier to pick top candidates to relabel the dataset. They provide empirical improvements over STaR-like baselines (which use correct completions as fine-tuning data) in training time and show that the learned verifier could be combined with public test information to further boost the BoN performance. The authors also provide single-turn baselines in Table 1 for solid comparison.
However, the lack of experiment details/hyperparameters makes it difficult to assess whether the comparison is sound: I'm in particular concerned with the experiment results that:
1. The performance drops in general from Iter 1 to Iter 2 on MBPP for $\mu$-Code and on both MBPP and HumanEval for Multi-StaR$^2$. Therefore, it remains a question whether the proposed expert iteration process works for Iteration number >= 3.
2. The results in Table 2 where $\mu$-Code-R is worse than $\mu$-Code is counter-intuitive and more explanation from the authors on the reason behind or on the experiment setting is highly appreciated.
Also, as the authors discuss in the limitations, apart from limited resources for larger experiments, it remains unclear whether the claims could extend to:
1. other benchmarks beyond MBPP and HumanEval, essentially, where the public tests information is not fully presented in the prompt $x$, making the information from execution feedback essential (in a Partially-Observable MDP setting).
2. a larger expert iteration number (>= 3).
Methods And Evaluation Criteria: The authors train on MBPP training set and evaluate on MBPP test set and HumanEval. This is a valid choice given the size of the model that the authors experiment on (1B & 8B). The authors report pass@3 and BoN@3 to report multi-turn (here 3-turn) performance. The authors include single-turn performance in Table 1, which I found solid for strengthening the comparison.
Theoretical Claims: The authors propose the one-step recoverable MDP and prove the performance bound. I have checked the proof in Appendix B. The proof seems correct in general, but I have some questions/comments regarding the definition and the details in the proof.
- The transition from L733 ($A^*(s, a) \le 1$) to what is used in L740 (I understand that the authors want to use $||A^*(.,.)||_{\infty} \le 1$). This is not that correct because, in principle, $A^*(s, a)$ could be negative and below -1, and the absolute value could be bigger than 1.
- In my opinion, the definition in L206: "MDP is one-step recoverable if the advantage function of the optimal policy, defined as $A^*(s, a) = Q^*(s, a)−V^*(s)$, is uniformly bounded for all (s, a), i.e. $A^*(s, a) \le 1$" is very artificial and is not really the definition. Rather, this should an implication from what the authors argue (from L211 to L213) that "MDP is one-step recoverable if the optimal policy depends only on the problem prompt $x$ but not the interaction history" and I would further argue that:
1. The definition is not correct in its current form.
2. The implication should have a tighter bound. But more importantly, it has nothing to do with the "one-step recoverability"; it's generally true and trivial for any MDP.
For my point 1, we can construct a toy MDP environment, which is according to the current definition and is not one-step recoverable MDP. We allow 4 states $s_0 (initial), s_1, s_2 (terminal), s_3 (terminal)$. There are only 2 possible deterministic transitions:
1. $s_0, a_0$ gives $s_2$, reward 1.
2. $s_0, a_1$ gives $s_1$, reward 0, and $s_1, a_2$ gives $s_3$, reward 2.
If this is one-step recoverability, the optimal policy should only depend on the initial state (L211) but not the history. However, the optimal policy here should pick the trajectory of $s_0, s_1, s_3$, and, therefore, is history-dependent. And we have (assuming here discount factor $\gamma$ is 1):
1. $A^*(s_0, a_0) = 1 - 2 = -1 \le 1$
2. $A^*(s_0, a_1) = 2 - 2 = 0 \le 1$
3. $A^*(s_1, a_2) = 2 - 2 = 0 \le 1$
which satisfies perfectly the current definition, which states that all $A^*$ should be smaller than or equal to 1.
For my point 2, $V^*(s) = max_a Q^*(s, a)$ by definition and it's direct that $A^*(s, a) = Q^*(s, a) - V^*(s) = Q^*(s, a) - max_a Q^*(s, a) <= 0 < 1$. Therefore, the claim that $A^*(s, a) \le 1$ is trivial, true for any MDP, and it's not even a good bound. Unless the authors want to say that it's negatively bounded as well in the current setting.
I think one way to fix all these is to *rewrite the definition*, **and** put $-1 \le A^*(s, a) \le 0$ (the -1 is environment specific, which is from the range of R = 0 or 1) in the definition instead of $A^*(s, a) \le 0$. In all, the definition still looks interesting if all my points above are properly addressed.
Experimental Designs Or Analyses: Questions about the experimental details remain for correctly assessing the experiment results:
1. Understanding the baseline setting: The StaR https://arxiv.org/pdf/2203.14465 describes 2 steps to construct a dataset for finetuning: 1. Filtering Chain-of-Thought based on the final correct answer. 2. Conditioned on the correct answer, let the model generate Chain-of-Thought. While what the authors described as StaR in the paper (L258-260) is only doing a variant of the Step 1 above, i.e., rejection sampling and fine-tuning. This is closer to what is referred to as Rejection Sampling Finetuning, i.e., rollout from the models and fine-tuning on the correct ones (in https://arxiv.org/abs/2307.09288 and https://arxiv.org/abs/2308.01825 for single-turn setting in code/math reasoning domain). The description of Multi-StaR (L260), i.e., fine-tuning on correct multi-turn rollout, is very similar to the multi-turn Rejection Sampling Finetuning in https://arxiv.org/abs/2410.08105. It remains a question about whether the authors' StaR/Multi-StaR setting is closer to the original StaR involving Chain-of-Tought re-ordering and boostrapping or closer to the above Rejection Sampling Finetuning.
2. I'm in particular curious about the drop of performance from Iter 1 to Iter 2 in Table 1. Could the author provide any insight? In the training loop of the expert iteration L7 in Algo 1, new policy for iteration N is always fine-tuned from the base model (iteration 0), or it's iteratively fine-tuned from the policy of last iteration (iteration N - 1)?
3. In Eq(3), how are the correct/incorrect pairs constructed? Are you doing a full catesian product of correct x incorrect solutions or there're some sampling processes?
4. In Eq(5), do you do the relabelling for all the problems, even if there's no correct solutions to that problem?
5. What are the sizes of the datasets for each variants in Table 2 or other statistics of the dataset, e.g., distribution of trajectories with different turn number, distribution of the correct/incorrect trajectories in the relabelled dataset for $\mu$-Code? It remains unclear why $\mu$-Code-R is better than $\mu$-Code to me despite more correct trajectories are added.
Supplementary Material: There's no supplementary material.
Relation To Broader Scientific Literature: - https://arxiv.org/pdf/2405.17503 maintains a search tree and uses Thompson Sampling to choose the multi-turn history to condition on in the multi-turn code generation.
- https://arxiv.org/abs/2410.08105 compares different chain-of-thought strategies in multi-turn code generation, and the described multi-turn Rejection Sampling Finetuning is also related to what the authors describe in L259 about the Multi-StaR.
Essential References Not Discussed: I encourage the authors to include https://arxiv.org/abs/2410.08105, in which the authors compare different chain-of-thought strategies in multi-turn code generation, and the described multi-turn Rejection Sampling Finetuning is also related to what the authors describe in L259 about the Multi-StaR.
Other Strengths And Weaknesses: Overall, I have mixed feelings about the manuscript, on one hand the strengths are clear:
1. The one-step recoverable MDP formulation is interesting and intuitive.
2. Performance gain through extensive baseline comparison. Especially the Multi-StaR$^2$, which I see as the most important ablation of the whole proposed pipeline by replacing the learned verifier with the oracle verifier but still keeping the relabelling process. However, it could benefit from more discussion about the source of the performance gap. I only find text from L310-312 for this comparision.
On the other hand, I'm not fully convinced by some of the results, along with the following weaknesses, for which I would need more clarification from the authors to recommend an acceptance:
1. Questions remain regarding my comments on the theoretical part above.
2. A lot of experiment details are missing and would be great if the authors could clarify my questions above on the experimental designs part.
3. Reproducibility: It would be great at least to discuss the experiment setting, such as hyperparameters.
Other Comments Or Suggestions: Please see above.
Questions For Authors: Please see the above comments. Overall, I remain positive about the manuscript. I would consider revising my recommendation if the following concerns and confusing are addressed:
Major:
1. Theoretical and Experimental Design.
2. Intuition and more explanation on why $\mu$-Code-R is worse than $\mu$-Code.
Minor:
1. The other questions on the Claim part, namely the discussion on whether the claims could be extend to iteration >= 3 or POMDP.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We are glad the reviewer found the idea of one-step recoverability interesting. We respond to the questions below.
> Results on POMDP setting.
This is an interesting ablation as for many prompts, some unit tests are present in the $x$. To evaluate the agents without this information in prompt, we removed the unit tests from the prompt $x$ in the test set of MBPP dataset and report the turn-wise BoN@3 score. We observe that the performance drops for all algorithms. We also observe that the performance gains for $\mu$Code is significantly higher than baselines across turns, demonstrating the efficacy of our proposed approach at leveraging execution feedback.
| Algorithm | Turn 1 | Turn 2 | Turn 3 |
| :---------------- | :------: | :------: | :------: |
| Llama-3.2-1B-Instruct |24.3|29.4|32.6
| Multi-STaR |25.2|30.3|32.3
| Multi-STaR^2|22.8|26.8|27.4
| $\mu$Code |**27.3**|**37.2**|**41.5**
> Other benchmarks beyond MBPP and HumanEval
We evaluate the methods on the CodeContests dataset and is discussed in the response to Reviewer QsKU. We show that $\mu$Code outperforms Multi-STaR by 1.6%.
> The results in Table 2 where $\mu$-Code-R is worse than $\mu$Code.
To generate the dataset for training $\mu$Code-R, we concatenate the datasets used to train $\mu$-Code and MultiSTaR^2. We believe a better way of merging might lead to reduce this gap in performance and plan to add an additional analysis in the next version.
> Whether $\mu$Code works for Iteration number >= 3.
In Table 1, we observe a performance drop with $\mu$Code on the MBPP dataset and observe improvements on HumanEval with training iterations. Given the simplicity of our approach, we believe more iterations would be needed for scaling to more challenging datasets and we leave this investigation for future research.
> Theoretical claims
We thank the reviewer for pointing this out. Using the reward function defined in Section 2 (L100) to take values of either 0 or 1, we have updated the paper with the $-1 \leq A*(s,a) \leq 0 $ in Definition 3.1, L218 and L733.
> Whether StaR/Multi-StaR setting is closer to the original StaR or Rejection Sampling Finetuning.
The Multi-STaR baseline used in the paper is similar to the Rejection Sampling Finetuning defined in [2]. We plan to change the name of Multi-STaR to RFT for more clarity, and also change Multi-STaR^2 to RFT-R to denote RFT with relabeling.
> Is the new policy for iteration N is always fine-tuned from the base model?
In this work, we followed a similar strategy as [1] for fine-tuning the base model at every iteration.
> In Eq(3), how are the correct/incorrect pairs constructed?
The dataset for training the verifier was obtained through a sampling process. For each prompt, a pair of correct and incorrect response was obtained and we generated 16 pairs for each prompt.
> In Eq(5), do you do the relabelling for all the problems?
Yes, $\mu$Code relabels all the problems even if there are no correct solutions to that problem. This is similar to how RL training would happen with a dense reward function, where the agent is updated to increase the likelihood of the response with higher reward values and climb uphill on the reward function with updates. $\mu$Code relabels the trajectories with solutions with higher reward values for fine-tuning.
> What are the sizes of the datasets for each variants in Table 2 or other statistics of the dataset?
At each iteration, the size of the dataset for $\mu$Code is the total number of rollouts as we relabel every trajectory. For the Multi-STaR^2, we relabel ~70% of the rollouts denoting the fraction of prompts with at least one positive trajectory. The dataset for $\mu$Code-R is obtained by concatenating the two datasets. We ran hyperparameter tuning on the number of epochs to account for different sized datasets but finetuning with 2 epochs performed best.
> Reproducibility:
We will be releasing the codebase and the model checkpoints with the paper. We provide details of important hyperparameters for training $\mu$Code below:
| Hyperparameter | Generator | Verifier |
| :---------------- | :------: | :------: |
| Training Epochs |2|2|
| Learning Rate |5$e^{-7}$|1$e^{-6}$|
| Batch Size |32|64|
| Max Sequence Length |8192|2048|
> Essential References Not Discussed
[2] is a very interesting work that studies different CoT approaches and execution feedbacks. We have added [2] in references as the Multi-STaR baseline is similar to the Rejection Sampling Finetuning method described in [2].
Best Regards,
The Authors
### References
[1] Qu et al.,Recursive Introspection: Teaching Language Model Agents How to Self-Improve, NeurIPS 2024
[2] Zheng et al., What Makes Large Language Models Reason in (Multi-Turn) Code Generation?, ICLR 2025
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response which resolves most of my concerns. The motivation and contributions of the paper is clear and weakness in my original reviews are addressed (the new POMDP and the CodeContests result). What remain unsolved within the rebuttal session are also easily fixable, e..g flaws in the therectical part, the wording/notation, and the essential exposure about hyperparameters.
> To generate the dataset for training $\mu$Code-R, we concatenate the datasets used to train $\mu$-Code and MultiSTaR$^2$. We believe a better way of merging might lead to reduce this gap in performance and plan to add an additional analysis in the next version.
This analysis important as intuitively, adding more data from oracle verifier (i.e., the environment) for fine-tuning should not hurt the perf.
I encourage the authors to further polish the manuscript and include more details discussed here into the next version, as well as what the authors promise about the addditional analysis. As such I raise my score and recommend an acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 6VTR,
We would like to sincerely thank you for your support and are glad to see that our changes and responses have resonated with you. We value your suggestion on the analysis of the performance of $\mu$Code-R and are looking at alternative approaches to combining the datasets. We will add this discussion in the next version of the paper in addition to your suggested changes for the theoretical analysis, wording/notation, and hyperparameters for reproducibility.
All the best,\
The Authors | Summary: The paper introduces an approach, $\mu\mathrm{CODE}$, for multi-turn code generation using single-step rewards. Unlike existing methods that rely on reinforcement learning, $\mu\mathrm{CODE}$ considers code generation as a one-step recoverable Markov Decision Process (MDP), allowing for iterative improvement through imitation learning. The method iteratively trains a generator and a verifier, where the verifier provides feedback to guide the generator in refining code solutions. Experimental results on MBPP and HumanEval datasets demonstrate that $\mu\mathrm{CODE}$ outperforms existing methods such as STaR.
## update after rebuttal
I appreciate the rebuttal by authors. However, my concerns were not well addressed.
1. While the paper claims that multi-turn RL methods suffer from sparse learning signals which makes learning inefficient, it fails to provide any comparative experimental evidence.
2. The authors claim that when using sparse rewards, the agent could not updated for responses where it did not generate any correct solution. But in this scenario, the verifier cannot receive effective training, therefore the continuous signals could also be quite biased. Using answers with the highest reward labeled by the verifier but actually incorrect as SFT labels could be potentially harmful.
3. I appreciate that the authors have provided more experimental results.
In summary, I would like to raise my score to weak reject.
Claims And Evidence: No.
1. While the paper claims that RL methods are inefficient, it fails to provide comparative experimental evidence.
2. In the proposed iterative training process, the generator is trained on the verifier's annotated optimal results rather than the ground truth validated through actual code execution. This approach appears methodologically questionable, as verifier annotations may contain inherent biases. The rationale behind this design choice remains unclear.
3. The study's conclusions are drawn from experiments conducted exclusively on MBPP, a relatively small dataset of only several hundred samples. This limited scope, essentially amounting to a toy dataset, raises concerns about the robustness and generalizability of the findings. The conclusions, therefore, warrant further validation on more comprehensive datasets.
Methods And Evaluation Criteria: No.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: Not related.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Other Strengths: This paper is mostly well-written and clearly organized.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the content in Claims And Evidence.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback and are glad that you found the paper well organized. We are grateful for your questions and comments, and respond to your questions below.
> While the paper claims that RL methods are inefficient, it fails to provide comparative experimental evidence.
Most approaches for finetuning LLM agents via RL rely on rule-based verifier like execution feedback in code or correctness in math problems. Such signals are sparse and credit assignment due to long-term rewards is challenging in multi-turn tasks. We propose $\mu$Code that leverages a learned verifier to provide dense signals and also uses the one-step recoverability property for better credit assignment.
*Sparse Rewards:*
Let's consider a scenario where the agent is trained over the outcome of code execution. In such a setting, the reward can be either 0 or 1 denoting failure or success. When a correct solution is observed, an RL algorithm like PPO would increase the likelihood of this response. However, the agent is not updated for responses where it did not generate any correct solution for the prompt while sampling, and thereby the completions are discarded. This makes outcome based RL inefficient as the model updates relies on generating a correct solution, and exploration to a correct solution can be challenging. In many RL applications, this is mitigated with a dense reward signal, and $\mu$Code leverages a learned verifier for training. The generator is updated to predict solutions with the highest reward and with training should climb uphill this reward function. To demonstrate the benefits, we compare Multi-STaR^2 which is similar to $\mu$Code and performs relabeling with the sparse and ground truth reward (in Table 1 of the paper), and observed that $\mu$Code trained with dense rewards performed better.
*Long-term Rewards*:
During RL training, the agent is trained to optimize the long-term rewards of the trajectories in multi-turn tasks. With the insight of one-step recoverability, we propose to relabel trajectories and improve credit assignment. To show this, we propose a baseline, Rejection Fine-Tuning, where we finetune the agent with the top rollouts ranked via the learned verifier (and call it RFT (LV)). We did a hyperparameter search over different number of top rollouts for this baseline and found picking top 6 solutions from 15 rollouts worked best. We observe that RFT (LV) does not perform well when compared with $\mu$Code.
### MBPP
| Algorithm | Pass@3 | BoN@3 |
| :---| :------: | :------: |
| Llama-3.2-1B-Instruct | 37.3 | 42.7
| RFT (LV) | 30.6 | 37.1
| $\mu$Code | **42.2** | **47.2**
### HumanEval
| Algorithm | Pass@3 | BoN@3 |
| :-----------| :------: | :------: |
| Llama-3.2-1B-Instruct | 31.5 | 32.9
| RFT (LV) | 27.0 | 38.6
| $\mu$Code | **40.0** | **45.5**
> Verifier annotations may contain inherent biases.
We agree that the agent can exploit the inaccuracies within the learned verifier. This happens because the distribution of reponses from the generator shifts with training and the learned verifier can return inaccurate reward values. To account for the covariate shift, $\mu$Code updates the learned verifier after every iteration. Since the ground truth reward is already known (from the execution feedback), the verifier is updated using the oracle outcome. We would like to highlight that the learned verifier in $\mu$Code is trained with the concatenated dataset from prior iterations. In our experiments, we also observed that updating the verifier across iterations is beneficial for learning.
We additionally present emperical results with the generator of $\mu$Code and the learned verifier at the last iteration vs the learned verifier at iteration 0 (on rollouts from Instruct model). We observe upto 2% drops in performance on BoN@3 when using the verifier trained on Instruct rollouts vs the verifier updated with rollouts from current generator.
| Verifier | MBPP | HumanEval|
| :-----------| :------: | :------: |
| LV (Iter 0) | 46.8 | 42.5
| LV (Iter 2) | **48.1** | **44.5**
> The conclusions, therefore, warrant further validation on more comprehensive datasets.
We conduct an experiment with the CodeContests dataset. We curated a 1K-sized dataset with randomly selected problems from the training set of CodeContests. We use Llama-3.1-8B-Instruct and compare with MultiSTaR and $\mu$Code which were trained for 1 iteration on 4 X A100 GPUs with 80GB memory. We report the Pass@3 and BoN@3 results on the test set in CodeContests with 165 problems. We observe that $\mu$Code outperforms Multi-STaR by 1.6% on BoN scores showing the applicability of $\mu$Code to more challenging benchmarks.
| Method | Pass@3 | BoN@3 |
| :---- | :----: | :----: |
| Llama-3.1-8B-Instruct | 4.4 | 7.4
| Multi-STaR | 6.8 | 10.3
| $\mu$Code | **8.2** | **11.9**
Best Regards,
The Authors | Summary: The paper introduces $\mu$-Code, a scalable approach for multi-turn code generation that utilizes single-step rewards. It treats code generation as a one-step recoverable MDP, allowing correct code recovery from any intermediate state, and integrates a generator and verifier to iteratively train the system. The approach demonstrates substantial improvements over existing methods like STaR, with a detailed analysis of reward models and policy design choices.
Claims And Evidence: I believe that most of the claims are well supported by experiments and theory. One debatable point is whether multi-turn code generation can be considered a one-step recoverable MDP. In a reasonable trajectory, each step is usually a gradual improvement over the previous one, for example, correcting the error in the last step. If a step is significantly wrong, it is difficult for the next step to directly recover to the correct code. I would like to see a more in-depth discussion regarding this claim, rather than treating it merely as a basic assumption.
Methods And Evaluation Criteria: - For the methodology section, I did not identify any issue and believe it makes sense for the problem and aligns with the one-step recoverable MDP assumption.
- For the experiments, HumanEval and MBPP are somewhat outdated at present. I recommend that the authors use Evalplus' HumanEval+ and MBPP+. Additionally, utilizing more practical code generation benchmarks such as BigCodeBench could further enhance the robustness of the paper.
Theoretical Claims: I did not check the proof for the theoretical claims since I am not an expert on RL. The explanation of the theorems makes sense for me.
Experimental Designs Or Analyses: The experimental results are puzzling. HumanEval consists of 164 problems, and when measuring the pass rate, each problem is either correct or incorrect, meaning the metric must be a multiple of 1/164 ≈ 0.6%. However, most of the reported results in Table 1 for HumanEval do not yield integers when multiplied by 164. Given that the paper does not provide the code, this raises concerns about the validity of the reported results.
Supplementary Material: I checked the prompt in the Appendix.
Relation To Broader Scientific Literature: The method described in this paper may be extended to scenarios beyond code generation to construct trajectories.
Essential References Not Discussed: I did not identify essential references not discussed.
Other Strengths And Weaknesses: Strengths:
- The proposed method is simple and reasonable, fitting well with the one-step MDP assumption.
- The authors conducted solid theoretical analysis of the proposed method, deriving a performance bound.
- The writing is good and easy to follow.
Weaknesses:
The paper does not appear to provide source code or explicitly commit to releasing it (unless the reviewer has overlooked this information). The absence of publicly available code could impact the reproducibility of the results and weaken the work's value to the broader research community.
Other Comments Or Suggestions: The paper contains many typos. I suggest the authors thoroughly proofread the paper again.
- Algorithm 1, line 2: {(x, st, yt, ot))} -> {(x, st, yt, ot)}; line 4: trained on -> on
- Line 200 (right): during training?, -> during training?
Questions For Authors: - In Appendix A.2, the authors manually divided the test cases for HumanEval and MBPP. Could you consider using the additional tests introduced by HumanEval+ and MBPP+ as a private test set?
- Can the method described in this paper be extended to scenarios beyond code generation to construct trajectories?
- During the inference phase, if the verifier is not used and instead only the public test is used to select the best solution, will the performance change?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and are glad that you found our approach simple and theoretical justification insightful. We address your questions below.
> I would like to see a more in-depth discussion regarding this claim, rather than treating it merely as a basic assumption.
For the task of code generation, there are two major sources of errors: 1) syntax errors and 2) algorithmic errors. For (1), we expect gradual improvement because fixing the errors should lead to a better solution at the next turn. However, the change might not be gradual for (2) when the approach is incorrect. In this scenario the agent might have to modify the algorithm and as pointed out by the reviewer can be difficult to recover from. Since, the task of code generation is one-step recoverable, this allows $\mu$Code to relabel with a better solution and thereby recover from harder scenarios. In the paper, we provided the theoretical justification to support this claim in Theorem 3.2 (Section 3.5) and our experiments demonstrate the efficacy of $\mu$Code. Below, we present an example where $\mu$Code successfully resolves an algorithmic error.
**Prompt**: Write a function to calculate the height of the given binary tree.
**Turn 1**
```
def max_height(root):
if root is None:
return 0
queue = [(root, 1)]
while queue:
node, height = queue.pop(0)
if node is None:
return height
queue.append((node.left, height + 1))
queue.append((node.right, height + 1))
return -1
```
**Turn 2**
```
def max_height(root):
if root is None:
return 0
left_height = max_height(root.left)
right_height = max_height(root.right)
if left_height > right_height:
return left_height + 1
else:
return right_height + 1
```
Note that the first solution takes an iterative approach and is incorrect because the function returns as soon as any leaf node is reached. However, the code solution at next iteration switches from an iterative to a simpler recursive approach and is correct.
> Reproducibility
We will be releasing the codebase, model checkpoints and hyperparameters with the final version of the paper.
> Can the method described in this paper be extended to scenarios beyond code generation to construct trajectories?
Yes, our method can be extended to any tasks where the correct solution can be generated in a single step (one-step recoverability). An application is theorem proving, where the the agent needs to write proof of theorems and the solution can be verified using external tools [1,2].
> Could you consider using the additional tests introduced by HumanEval+ and MBPP+ as a private test set?
We thank the reviewer for bringing this up. In the submitted version, we used the HumanEval and MBPP datasets with its own set of public and private test split. Below, we provide comparison where private tests include the additional tests provided in MBPP+ and HumanEval+. We report results with the same model 1B-sized model checkpoints used to report results in Table 1 and present the results on Pass@3 and BoN@3.
### MBPP+
| Algorithm | Pass@3 | BoN@3 |
| :---------------- | :------: | :------: |
| Llama-3.2-1B-Instruct |43.2| 49.9
| Multi-STaR |44.3| 50.0
| $\mu$Code |**48.7**| **55.1**
### HumanEval+
| Algorithm | Pass@3 | BoN@3 |
| :---------------- | :------: | :------: |
| Llama-3.2-1B-Instruct |25.7| 31.9
| Multi-STaR |26.7| 34.3
| $\mu$Code |**32.4**| **40.0**
We observe that $\mu$Code outperforms baselines Multi-STaR by 5.1% and 5.7% on BoN@3 on MBPP+ and HumanEval+ datasets. Note that the numbers for MBPP are higher than what is reported in Table 1 because MBPP+ comprises of 378 problems as compared to 500 problems in the MBPP dataset.
> Bigger benchmark
We conduct an experiment on CodeContests dataset and show that $\mu$Code outperforms Llama-3.1-8B-Instruct and MultiSTaR. The experiment is discussed in the response to Reviewer QsKU, where we show **1.6%** performance gain over Multi-STaR at BoN@3.
> Most of the reported results in Table 1 for HumanEval do not yield integers when multiplied by 164.
We are using a temperature of 0.7 for sampling to report our metrics. Because of the smaller dataset size, we observed variance in the results and resorted to generating 15 rollouts for each prompt. We report the mean scores in Table 1 and the values in Table 1 will be a multiple of .6 / 15 = 0.04. In the revised version, we plan to add the standard error with the mean results for clarity.
Best Regards,
The Authors
### References
[1] STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving, Dong el al. 2025.
[2] Learning Formal Mathematics From Intrinsic Motivation, Poesia et al., NeurIPS 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. The rebuttal has addressed most of my concerns. I decide to keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Reviewer hcVQ,
We would like to thank you for your support which helped improve the paper. We are thrilled to see that our answers have resonated with you and are happy to any further questions you may have.
Best Regards,\
The Authors | Summary: This paper addresses code generation from multi-turn execution feedback. The authors propose uCode a simple and scalable approach that solves multi-turn code generation using only single-step rewards. The key insight is that code generation is a one-step recoverable MDP. uCode trains both a generator and a verifier. Experiments show improvement against StAR.
## update after rebuttal: Thank you for your answer. I have increased my score.
Claims And Evidence: 1. Novel framework
- Yes
2. multi-turn best-of-n approach that utilizes a learned verifier at inference time
- Yes (although it does not seem necessarily novel)
3. theoretical analysis
- Yes
4. Experiments on MBPP and Human Eval
- Yes
5. Analysis
- Yes
Methods And Evaluation Criteria: The main idea is that one-step recoverable MDP implies that a correct code can be recovered from any intermediate state in a single step. uCode follows an expert iteration framework with a local search expert. It trains iteratively two components: a learned verified and generator. At inference time, both are used as best-of-n search.
First, uCore needs a dataset that is created using the current generator. The verifier is trained on the aggregated dataset. An expert is constructed and the dataset is re-labeled. Finally, the model is fine-tuned using the dataset. Repeat as many time until convergence. The training part is sound. At inference, the authors propose multi-turn best-of-N. At each turn, the generator produces N one-step rollouts and the verifier selects the most promising one and is executed in the environment to obtain the feedback. The process is repeated.
Finally, the theoretical analysis is sound.
Theoretical Claims: Yes the one in section 3.5
Experimental Designs Or Analyses: The models are Llama-3.2-1B instruct or 8B instruct. I would recommend the authors to also investigate other LLMs that are not Llama, and include larger models. Turn limit = 3. Datasets and metrics used are the standard ones. Fundamentally, the only baseline used is STaR. I would encourage the authors to include another model.
Other than that, the experiments are well executed and the analysis are sound!
Supplementary Material: No
Relation To Broader Scientific Literature: I am not familiar with the literature.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Overall, well-written paper and well structure. Easy to read and follow. The methodology is sound, and the experiments as well. I would encourage the authors to include another baseline investigate the performance when the turn limit is above 3.
Other Comments Or Suggestions: No
Questions For Authors: What are the performance with another LLM that is not Llama? Same with bigger model.
Could you provide a performance analysis with different turn limits (> 3)?
Same with the "training" time following algorithm 1, and cost/efficiency.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We are glad that you found the paper well structured and our experiments thorough. We address your questions below.
> Investigate other LLMs that are not Llama, and include larger models.
We investigate the performance of the Qwen-2.5-1.5-Instruct model, Multi-STaR and $\mu$Code at Pass@3 and BoN@3 metrics on MBPP and HumanEval datasets. Below, we present the table and show that $\mu$Code outperforms the baselines.
### MBPP
| Algorithm | Pass@3 | BoN@3 |
| :---------------- | :------: | :------: |
| Qwen-2.5-1.5-Instruct |53.8|60.9
| Multi-STaR |57.0|58.0
| $\mu$Code |**59.0**|**63.1**
### HumanEval
| Algorithm | Pass@3 | BoN@3 |
| :---------------- | :------: | :------: |
| Qwen-2.5-1.5-Instruct |64.5|70.3
| Multi-STaR |66.6|71.3
| $\mu$Code |**70.5**|**74.0**
Our experiments are restricted to models that can be trained on 4 x A100 GPUs with 80 GB memory each and training of larger models greater than 8B requires more GPU memory. We expect $\mu$Code should scale to larger models with more compute.
> Investigate the performance when the turn limit is above 3. > Same with the "training" time following algorithm 1.
In the paper, we ran BoN search for a turn limit of 3. We present the results with a turn limit of 6 below (with the 1B-sized model used in Table 1). Note that the model is trained with a turn limit of 3. $\mu$Code achieves the best performance on both datasets across most turn limits. This highlights the versatility of $\mu$Code at generalizing to larger turn limits at test-time. Additionally, our results show that $\mu$Code achieves the largest gains per turn, suggesting it is better at incorporating execution feedback and improving code solutions across turns. This experiment validates that $\mu$Code is performant with more turns during evaluation. We believe that training with a higher turn limit could further improve the performance.
### MBPP
| Algorithm | T=1 | T=2 | T=3 | T=4 | T=5 | T=6
| :---------------- | :------: | :------: | :------: | :------: | :------: | :------: |
| Llama-3.2-1B-Instruct |39.2|41.5|43.1|44.0|45.0|45.5|
| Multi-STaR |**39.4**|40.9|41.4|41.9|42.2|42.7|
| $\mu$Code |38.1|**43.9**|**46.6**|**48.8**|**50.1**|**50.9**|
### HumanEval
| Algorithm | T=1 | T=2 | T=3 | T=4 | T=5 | T=6
| :---------------- | :------: | :------: | :------: | :------: | :------: | :------: |
| Llama-3.2-1B-Instruct |34.8|35.6|36.6|38.0|38.4|38.6|
| Multi-STaR |36.8|36.8|37.2|37.8|38.2|38.2|
| $\mu$Code |**38.2**|**45.7**|**48.4**|**50.6**|**51.0**|**51.4**|
> Fundamentally, the only baseline used is STaR.
We point the reviewer to our response to Reviewer QsKU where we have provided an additional baseline: Rejection Fine-tuning with a learned verifier, or RFT (LV). Here, we select the top trajectories ranked using the learned verifier for a prompt and finetune at each iteration. This is similar to the Multi-STaR baseline, where the learned verifier is used to filter trajectories instead of the ground truth reward. We observed that $\mu$Code performs better than RFT (LV) baseline.
Best regards,\
The Authors
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I will keep my score as is.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vyX7,
We sincerely thank you for your feedback and are delighted to see that our experiments and responses have resonated with you.
All the best,\
The Authors | null | null | null | null | null | null |
AnyEdit: Edit Any Knowledge Encoded in Language Models | Accept (poster) | Summary: The paper presents AnyEdit, a new method for updating long-form knowledge in LLMs. Unlike existing methods that edit a single token’s hidden state, AnyEdit decomposes knowledge into chunks and iteratively updates key tokens in each chunk. This approach ensures more accurate and consistent updates. The method is grounded in the Chain Rule of Mutual Information and outperforms existing methods on several benchmarks, including a new dataset, EditEverything. AnyEdit also offers a plug-and-play solution for integrating with existing frameworks to handle longer, diverse knowledge updates.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, The paper takes a novel information-theoretic approach, reworking the token hidden state computation in the locate-then-edit method into an autoregressive form. This innovative approach effectively addresses the challenge of editing long-form knowledge.
Experimental Designs Or Analyses: The paper provides extensive experiments on both the motivation and performance of the method, demonstrating its effectiveness across various benchmarks.
Supplementary Material: Yes,all of them
Relation To Broader Scientific Literature: Previous single-token editing methods relies on significantly increasing the probability of generating a specific output after applying a perturbation. However, if the original model's probability for that output is low, especially in diverse knowledge formats, the perturbation must induce a substantial shift to make it the dominant output. Current methods often struggle in these cases.
Essential References Not Discussed: None
Other Strengths And Weaknesses: In the Implementation Details of the appendix, the paper sets the overlap between sliding windows to 0, but it does not discuss the impact of this overlap on performance. It would be helpful if the authors could provide additional analysis on how different overlap settings might affect the method's effectiveness.
Other Comments Or Suggestions: None
Questions For Authors: Line 108 references Appendix D, but there is no Appendix D in the paper. This seems to be an omission by the authors.
The caption for Figure 1 contains incorrect figure numbers. It should read "(c) and (e) show the editing..." and "(b) and (f) depict the type..."
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer dG4H:
Thank you for your positive feedback and valuable suggestions! We sincerely appreciate the time and effort you have dedicated to reviewing our work. Below, we meticulously provide responses to each of your comments and outline the modifications based on your suggestions.
## *W1: "It would be helpful if the authors could provide additional analysis on how different overlap settings might affect the method's effectiveness."*
Thank you for your valuable suggestion. Following your advice, we have added extra experiments analyzing how different overlap settings affect our method's editing effectiveness. The specific results are summarized below:
|||||||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM|Overlap||**UnKEBench**||||**Counterfact**|||**MQUAKE**||
|||**Ori.**|**Ori.**|**Para.**|**Para.**|**Ori.**|**Ori.**|**Para.**|**Para.**|**Ori.**|**Ori.**|
|||BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|
|Llama3-8B-It|0%|97.76±0.11|92.96±0.24|96.60±0.19|95.60±0.35|97.76±0.14|95.87±0.23|62.63±0.44|46.51±0.59|96.33±0.21|94.32±0.23|
|Llama3-8B-It|25%|97.72±0.15|92.83±0.27|96.55±0.21|95.43±0.31|97.68±0.16|95.82±0.25|62.50±0.45|46.39±0.56|96.20±0.22|94.27±0.25|
|Llama3-8B-It|50%|97.65±0.14|92.76±0.29|96.49±0.20|95.35±0.33|97.63±0.17|95.75±0.27|62.44±0.47|46.27±0.58|96.11±0.23|94.18±0.24|
|Llama3-8B-It|75%|97.58±0.16|92.61±0.30|96.43±0.22|95.24±0.32|97.56±0.18|95.64±0.28|62.35±0.46|46.12±0.57|96.03±0.25|94.09±0.26|
|Qwen2.5-7B-It|0%|98.05±0.16|94.89±0.29|93.56±0.15|79.98±0.28|98.08±0.15|95.09±0.19|65.40±0.38|43.49±0.47|98.14±0.13|96.39±0.18|
|Qwen2.5-7B-It|25%|98.01±0.17|94.82±0.31|93.49±0.16|79.91±0.30|98.02±0.17|95.01±0.21|65.31±0.39|43.42±0.48|98.08±0.15|96.32±0.19|
|Qwen2.5-7B-It|50%|97.94±0.18|94.75±0.33|93.42±0.17|79.84±0.29|97.95±0.18|94.93±0.23|65.22±0.40|43.35±0.50|98.01±0.16|96.24±0.21|
|Qwen2.5-7B-It|75%|97.87±0.19|94.67±0.34|93.35±0.18|79.76±0.31|97.89±0.20|94.85±0.25|65.14±0.42|43.28±0.51|97.94±0.17|96.18±0.22|
||||||||||||
As indicated by these results, **increasing overlap does not significantly improve editing effectiveness and can slightly decrease performance**. Furthermore, increased overlap also **raises the number of autoregressive iterations, reducing overall editing efficiency**. Thus, in practice, we set the overlap directly to 0 for efficient and effective editing.
We hope our additional experiments address your concern!
## *Q1: "Line 108 references Appendix D, but there is no Appendix D in the paper. The caption for Figure 1 contains incorrect figure numbers."*
Thank you for pointing out these oversights! We have made the following corrections in the revised manuscript:
1.Added the missing information from Appendix D and integrated it into Appendix A.2, specifically supplementing the definitions of Efficacy, Generalization, and Specificity from ROME.
2.Updated the caption of Figure 1 to correctly state: "(a) and (d) illustrate the editing processes; (c) and (e) show the editing efficacy as the number of tokens within the to-be-updated knowledge increases."
**Hope that these updates could meet your expectations, and we are more than happy to add clarifications to address any additional recommendations and reviews from you.**
---
Once again, we deeply appreciate your thoughtful and encouraging feedback. Your suggestions have not only enhanced the current work but have also inspired us to continue exploring research in the area of model editing. We are excited to keep moving forward and contributing to the community!
Best,
Authors of Submission9475
---
Rebuttal Comment 1.1:
Comment: The author's rebuttal effectively reduce my concerns, and I raise my score
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer deNs,
Thank you for your kind feedback and for taking the time to review our updated work. We are grateful for your recognition and for increasing the rating—it means a lot to us and inspires us to continue improving.
We look forward to any further suggestions you may have in the future.
Best regards,
Authors | Summary: Current LLM editing methods struggle with long-form, multi-format knowledge due to the "efficacy barrier" of single-token edits. AnyEdit overcomes this via autoregressive chunk decomposition and iterative token refinement, grounded in the Chain Rule of Mutual Information. It outperforms baselines by 21.5% and enables plug-and-play integration for diverse-format updates.
In summary, the focus on diverse-formatted, long-form knowledge editing addresses a critical gap in model editing for LLMs, paves the way for broader applications and further advancements in the field.
Claims And Evidence: The authors' claims are robustly supported by rigorous theoretical foundations and compelling experimental evidence. The proposed auto-regressive editing paradigm is theoretically grounded in a meticulous derivation of the Chain Rule of Mutual Information, ensuring principled modeling of sequential edit dependencies and providing a solid mathematical framework for addressing long-form, diverse-formatted knowledge editing. Furthermore, the work is empirically validated through extensive experiments, including the introduction of a novel diverse-formatted knowledge editing dataset that rigorously evaluates performance across diverse data types. The results demonstrate AnyEdit's superior performance over existing baselines, with significant improvements in edit accuracy and contextual consistency. These theoretical and empirical contributions collectively substantiate the central claim of resolving challenges in long-form, diverse-formatted knowledge editing, establishing the method’s practicality and scalability for real-world applications.
Methods And Evaluation Criteria: The methodology and evaluation framework are novel, comprehensive, and well-aligned with the challenges of knowledge editing. The proposed AnyEdit introduces a groundbreaking auto-regressive editing paradigm that effectively extends the editable text length by explicitly modeling sequential dependencies, a critical advancement for handling long-form content. The benchmark datasets—UnKEBench, AKEW, and the newly proposed EditEverything—are explicitly tailored to evaluate long-form, diverse-formatted knowledge editing, ensuring fair and rigorous comparisons. The evaluation metrics (e.g., ROUGE score for contextual coherence and BERTScore for semantic fidelity) are thoughtfully chosen to holistically assess edit performance across extended contexts. Furthermore, the thorough comparison with diverse existing editing methods systematically validates AnyEdit's superiority in edit precision, scalability, and format adaptability, particularly in scenarios requiring multi-step, context-aware modifications. The methodology is rigorously justified, with evaluation criteria meticulously selected to address both technical and practical dimensions of knowledge editing, solidifying the work's reproducibility and impact.
Theoretical Claims: The theoretical claims are rigorously derived from the Chain Rule of Mutual Information, with detailed proofs in Appendix B.2 ensuring clarity and mathematical validity.
Experimental Designs Or Analyses: The experiments are rigorously designed, leveraging UnKEBench, AKEW, and EditEverything datasets to evaluate long-form, diverse-formatted editing. ROUGE Score and BERTScore metrics comprehensively assess performance, with results demonstrating superiority in accuracy and robustness over prior works.
Supplementary Material: The paper does not include Supplementary Material.
Relation To Broader Scientific Literature: This work significantly expands the scope of knowledge editing research by addressing multi-format, long-form knowledge manipulation—a critical yet underexplored challenge in knowledge editing. While prior works focus on isolated formats (e.g., triplet knowledge edits), this paper bridges the gap between theoretical principles (mutual information chain rule) and real-world demands for cross-format consistency (e.g., synchronizing edits across text, tables, and structured data).
Essential References Not Discussed: The paper provides comprehensive coverage of knowledge editing literature, including foundational works (e.g., Parameter-Modifying Methods and Parameter-Preserving Methods) and recent advances in unstructured knowledge editing (e.g., free-form text). The cited references are well-represented and directly relevant to the paper's focus on diverse-formatted knowledge.
Other Strengths And Weaknesses: Strengths
1. The current model editing are largely limited in the format and length of to-be-edited knowledge. This work extends current model editing methods to be applicable to any format and length through a very simple operation. It is highly practical and crucial for the future development of model editing and efficient knowledge updating of LLMs.
2. I appreciate the paper's well-structured narrative and clear implementation details, which make the autoregressive editing paradigm straightforward to replicate.
3. The autoregressive chunk decomposition is an elegant solution to the "efficacy barrier," balancing theoretical grounding (Chain Rule of Mutual Information) with practical plug-and-play utility.
4. The extensive experimental validation of AnyEdit across UnKEBench, AKEW, and EditEverything, rigorously demonstrating its effectiveness in long-form, diverse-formatted knowledge editing.
Weaknesses
1. I'm concerned about the propagation of hidden state perturbations in the autoregressive design: If Step 4 in section 4.2 aligns multiple chunk states simultaneously, how does AnyEdit theoretically ensure that earlier token edits (e.g., in chunk 1) don’t destabilize the hidden states of subsequent chunks (e.g., chunk 2+) during iterative updates?
2. While the paper proposes both semantic and fixed-size chunking, a direct comparison of their impact on editing performance could strengthen the methodology.
Other Comments Or Suggestions: A minor yet constructive suggestion: In Section 4.2 (Step 4), expanding the appendix to explicitly detail how parameter updates achieve multi-token synchronization would enhance clarity. While the core methodology is sound, clarifying this synchronization mechanism would strengthen reproducibility and theoretical rigor for the auto-regressive editing paradigm.
Questions For Authors: 1. Does AnyEdit's editing time increase with longer text?
2. The results show AnyEdit's editing time increases modestly with text length, despite its auto-regressive nature. Could the authors clarify the specific optimizations that mitigate time costs?
3. What are the specific advantages of AnyEdit* over AnyEdit that lead to its improved performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Dear Reviewer gZ7E:
Thank you for your kind words and positive feedback of our novelty, presentation and effectiveness! Your approval is the great encouragement for us and motivates us to continue advancing our work.
Below, we meticulously provide responses to each of your comments and outline the modifications made to the manuscript.
## *W1 & Suggestion1: "how does AnyEdit theoretically ensure that earlier token edits don’t destabilize the hidden states of subsequent chunks? Expanding the appendix to explicitly detail how parameter updates achieve multi-token synchronization."*
Thank you for raising this excellent question. In the implementation of Step 4 (Section 4.2), we explicitly ensure that **besides updating the hidden states of target tokens within each chunk, the hidden states of other tokens remain unchanged**. This approach precisely addresses your concern regarding earlier token edits destabilizing subsequent chunks. To clarify this further, we have expanded the details in the revised manuscript.
We hope this addresses your concern adequately.
## *W2: "A direct comparison of semantic and fixed-size chunking‘s impact on editing performance could strengthen the methodology"*
Thank you for your insightful suggestion. We indeed attempted semantic chunking based on sentence segmentation but found it **limited by the nature of the dataset's knowledge categories**. Particularly, with code or mathematical problems, unclear sentence boundaries often resulted in excessively short chunks and increased iteration counts, reducing editing efficiency. Thus, we did not adopt semantic chunking in the main implementation. However, following your suggestion, we provide results comparing semantic chunking and fixed-size chunking as summarized below:
|||||||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM|Method||**UnKEBench**||||**Counterfact**|||**MQUAKE**||
|||**Ori.**|**Ori.**|**Paraph.**|**Paraph.**|**Ori.**|**Ori.**|**Paraph.**|**Paraph.**|**Ori.**|**Ori.**|
|||BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|
|Llama3-8B-It|AnyEdit(sentence-chunk)|97.81|93.02|96.54|95.48|97.69|95.79|62.71|46.62|96.28|94.18|
|Llama3-8B-It|AnyEdit|97.76|92.96|96.60|95.60|97.76|95.87|62.63|46.51|96.33|94.32|
|Qwen2.5-7B-It|AnyEdit(sentence-chunk)|98.11|94.83|93.48|80.05|98.03|95.21|65.33|43.71|98.09|96.44|
|Qwen2.5-7B-It|AnyEdit|98.05|94.89|93.56|79.98|98.08|95.09|65.40|43.49|98.14|96.39|
||||||||||||
As the table indicates, semantic and fixed-size chunking methods produce similar results, primarily due to the current semantic chunking approach being limited to sentence-level segmentation. We will further explore more effective semantic chunking methods.
We hope this clarifies your concern.
## *Q1&Q2: "Does AnyEdit's editing time increase with longer text? Could the authors clarify the specific optimizations that mitigate time costs?"*
Thank you for highlighting this important issue. We indeed acknowledge the computational intensity challenge, particularly when scaling up sequence lengths, as discussed explicitly in Observation 6 (Section 5.4). To mitigate this issue, we have explored optimization strategies:
1. **Early stopping strategy:** During gradient descent optimization of Equation (9), we halt training once loss thresholds are met, allowing shorter optimization epochs without sacrificing performance. Due to shorter chunk lengths, thresholds are quickly reached, significantly accelerating the process.
2. **Adaptive chunk length selection:** Chunks vary in difficulty; simpler chunks can be longer, while challenging chunks can be shorter. Combined with loss thresholds, this significantly reduces the gradient descent epochs.
We have conducted preliminary experiments, achieving noticeable speed improvements as summarized in the following table:
|||||
|:-:|:-:|:-:|:-:|
|Method|UnKEBench|Counterfact|MQUAKE|
|MEMIT|16.37|17.05|16.92|
|AnyEdit|21.14|20.43|21.28|
|AnyEdit(accelerate)|19.89|19.22|20.03|
|||||
As indicated, optimized chunk-by-chunk editing improves speed compared to the original method, though gaps remain compared to single-token editing. Future work will aim at further improvements.
We hope this adequately addresses your concern.
## *Q3:"What are the specific advantages of AnyEdit* over AnyEdit that lead to its improved performance?"*
Thank you for this insightful question. AnyEdit builds upon MEMIT, employing auto-regressive editing with closed-form updates in Step 4. In contrast, AnyEdit* extends UnKE, using auto-regressive editing combined with gradient descent optimization to update all parameters in Step 4, resulting in enhanced performance.
We hope this clarifies your query.
---
Once again, we deeply appreciate your thoughtful and encouraging feedback. Your suggestions have not only enhanced the current work but have also inspired us to continue exploring research in the area of model editing.
Best,
Authors of Submission9475 | Summary: This work proposes a novel knowledge editing method, AnyEdit, designed to mitigate performance degradation in long-form knowledge tasks. AnyEdit is a plug-and-play framework compatible with most ‘locate-edit’ knowledge paradigms. Moreover, it extends knowledge editing beyond the traditional ‘triplet’ format to a more flexible ‘free-form’ approach. Additionally, the authors introduce EditEverything, a benchmark for free-form knowledge editing.
## update after rebuttal
This paper extends the triplet-based knowledge editing to the free-form-based knowledge editing.
Overall, I keep my original rating.
Claims And Evidence: The authors claim that AnyEdit supports arbitrary length and format. However, in their experimental settings, they only conducted experiments on sequences of up to 200 tokens, which does not strongly support this claim.
Methods And Evaluation Criteria: Please refer to the ‘Experimental Designs’ section for details on the evaluation criteria.
I noticed that some prior knowledge editing papers, such as ROME, do not follow this evaluation design.
Could you clarify the rationale behind your chosen evaluation approach?
Theoretical Claims: The theoretical claims looks okay.
Experimental Designs Or Analyses: Knowledge editing may affect existing knowledge. Intuitively, since AnyEdit modifies a larger number of neurons, it is expected to have a greater impact. However, the authors did not evaluate its effects on the overall performance of the LLM, which is a critical omission.
Supplementary Material: Yes. I reviewed the A. Experimental Setup, B. Locate-Then-Edit Paradigm & Related Proof. C. More Experimental Results.
Relation To Broader Scientific Literature: This work extends triplet-based knowledge editing to a more flexible free-form approach, broadening the scope of knowledge modification in LLMs. This contribution aligns with prior research on knowledge editing frameworks such as ROME, SERAC, and MEMIT, which primarily focus on structured triplet-based modifications. By enabling edits in a free-form manner, this work enhances the adaptability of knowledge editing methods, making them more applicable to diverse real-world scenarios.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: I personally like this paper and appreciate its contribution. It effectively extends knowledge editing from short-length to long-length formats and from fixed triplet structures to a more flexible free-form approach. This expansion broadens the applicability of knowledge editing across a wider range of scenarios.
Other Comments Or Suggestions: The Table 1. "Para" is easy to mistake to the abbreviation of "Parameters", please change to another name.
Questions For Authors: Q1: Why do most papers perform knowledge editing in a single MLP layer? Wouldn’t applying edits across multiple layers be more effective? Are there specific constraints or challenges preventing this approach?
Q2: Why don’t the authors directly evaluate the affection to LLM's knowledge after performing knowledge editing?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer H3vg:
Thank you for your kind words and positive feedback of our novelty, presentation and effectiveness! Your approval is the great encouragement for us and motivates us to continue advancing our work.
Below, we meticulously provide responses to each of your comments and outline the modifications made to the manuscript.
## *Suggestion1: "The Table 1. 'Para' is easy to mistake to the abbreviation of 'Parameters', please change to another name."*
Thank you for your suggestion. Based on your comment, we have revised the manuscript as follows:
·In Table 1, we have replaced 'Para.' with 'Paraph.' to more clearly indicate 'Paraphrase' and avoid confusion with the abbreviation for 'Parameters'.
## *Q1: "Why do most papers perform knowledge editing in a single MLP layer? Wouldn’t applying edits across multiple layers be more effective? Are there specific constraints or challenges preventing this approach?"*
This is an excellent question. Indeed, many methods such as ROME and UnKE perform editing on a single MLP layer. In contrast,, recent methods like MEMIT, AlphaEdit, and their derivative approaches edit multiple layers. These studies have empirically demonstrated that **multi-layer editing often achieves better results in terms of editing effectiveness and quantity of knowledge compared to single-layer editing**.
However, multi-layer editing approaches currently face limitations, especially within the Locate-then-edit paradigm. Specifically, interference between layers occurs, as edits in earlier layers affect the outputs of subsequent layers. Therefore, editing typically proceeds sequentially from shallow to deeper layers. We believe that a key unsolved problem remains: **achieving simultaneous multi-layer editing while ensuring edits at each layer effectively increase the output probability for the editing target**.
## *Q2: "Why don’t the authors directly evaluate the affection to LLM's knowledge after performing knowledge editing?"*
Thank you for your question. In our experiments, metrics such as Bert Score and Rouge Score evaluate the similarity between the edited outputs and the target knowledge, directly reflecting the effect on the LLM's knowledge. Thus, we infer your question might pertain to evaluating the impact on unrelated or general knowledge. Following your suggestion, we have added additional experimental results comparing AnyEdit and several baselines concerning their impact on unrelated local knowledge and general knowledge, as summarized below:
||||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Method|Model|SST|MRPC|CoLA|RTE|MMLU|NLI|Loc-Fact Score|
|Pre-edited|Llama3-8B-It|83.17|67.29|75.42|29.36|56.81|66.58|73.27|
|AnyEdit|Llama3-8B-It|82.91|67.55|75.86|29.02|57.05|66.14|73.84|
|Pre-edited|Qwen2.5-7B-It|85.62|69.47|77.88|31.73|58.92|68.53|75.94|
|AnyEdit|Qwen2.5-7B-It|85.97|69.21|77.63|32.05|58.61|68.97|75.42|
||||||||||
As shown, AnyEdit minimally impacts unrelated and general knowledge. Specifically, AnyEdit maintains the original knowledge retention capabilities well on both Llama3-8B-Instruct and Qwen2.5-7B-Instruct models, achieving performance comparable to the pre-edited versions. This indicates that **AnyEdit does not substantially disrupt knowledge unrelated to the edits**.
We hope our additional experiments adequately address your concerns.
---
Once again, we deeply appreciate your thoughtful and encouraging feedback. Your suggestions have not only enhanced the current work but have also inspired us to continue exploring research in the area of model editing. We are excited to keep moving forward and contributing to the community!
Best,
Authors of Submission9475
---
Rebuttal Comment 1.1:
Comment: Thank you for your clear response. Most of my concerns have been adequately addressed.
Since this work extends triplet-based knowledge editing to the more general setting of free-form knowledge editing, it raises a broader question that I would like to discuss with the authors:
[Question] What are the key differences between efficient fine-tuning methods (e.g., PEFT) and knowledge editing, especially as both are increasingly applied to free-form knowledge updates?
In other words, how can we clearly distinguish knowledge editing from fine-tuning or post-training?
I have some thoughts on this question and would be glad to hear your perspective—especially if any of my understandings are inaccurate.
- [Quantity] If we have a large number of knowledge samples, can free-form “knowledge editing” be considered equivalent to fine-tuning or post-training?
- [Add vs. Edit] Fine-tuning is typically used to add new knowledge to the model, while knowledge editing aims to replace or modify existing knowledge within the pre-trained model.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable question. I would like to share my perspectives below:
---
### **Key Differences Between PEFT and Knowledge Editing**
- **PEFT (Parameter-Efficient Fine-Tuning)**:
While PEFT is highly efficient, it **cannot guarantee real-time updates** due to its reliance on gradient descent and thus consumes more computational resources than knowledge editing. Additionally, when dealing with small amounts of knowledge, efficient fine-tuning methods are prone to **overfitting and catastrophic forgetting**. As you rightly pointed out, PEFT excels at handling large-scale knowledge updates, efficiently learning new information without overfitting.
- **Knowledge Editing**:
In contrast, knowledge editing **achieves true real-time capability** by avoiding gradient descent on most parameters, requiring only **inference-level memory**, and handles small knowledge updates effortlessly. However, its drawback is that updating large amounts of knowledge may slightly interfere with the model’s general capabilities.
**Therefore, I believe the two approaches are complementary:**
If an LLM undergoes fine-tuning on a monthly cycle, the knowledge updates needed within that month are likely small. Thus, editing can **serve as a temporary "patch"**. Once enough updates accumulate to justify a full fine-tuning cycle, the patch can be removed, and the model can be fine-tuned to incorporate all new knowledge efficiently.
---
### **Response to Your Thought-Provoking Points**
#### 1. **[Quantity]** *"If we have a large number of knowledge samples, can free-form 'knowledge editing' be considered equivalent to fine-tuning or post-training?"*
From my perspective: fine-tuning is better suited for bulk updates of accumulated outdated knowledge when real-time deployment isn't required, while knowledge editing excels at handling small, frequent updates that demand immediate implementation. The choice between them should be guided by the specific requirements of the update scenario - SFT for comprehensive knowledge refreshes without time constraints, and knowledge editing for rapid, targeted modifications needing instant deployment.
#### 2. **[Add vs. Modify]** *"Fine-tuning is typically used to add new knowledge, while knowledge editing aims to replace or modify existing knowledge."*
This is particularly interesting. In my understanding, both fine-tuning and editing can technically add and modify knowledge. However, I’m not yet certain whether there are inherent performance differences between adding versus modifying knowledge under these two paradigms—another promising avenue for future exploration.
---
Thank you once again for your constructive suggestions. Your points have enriched this discussion and highlighted promising avenues for future exploration. | Summary: The paper tackles the free-form knowledge edit problem in LLM, and proposes to extend existing locate-then-edit frame work to long-form knowledge editing by splitting long-form knowledge into chunks, and maximize the likelihood of each subsequent chunk based on perturbing previous chunk's last token's hidden state. The paper shows clear improvements upon existing methods. The paper also collects a new dataset for long-form editing with diverse formats.
Claims And Evidence: The claim is well explained and supported by the experiments.
Methods And Evaluation Criteria: The proposed method is intuitive and straightforward. The evaluation criteria is appropriate.
Theoretical Claims: I carefully read the proofs in the main paper. I briefly scanned through the proof in appendix.
Experimental Designs Or Analyses: The experiments and analysis are clear and easy to follow. I don't have major concerns.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Being able to edit long-form, diverse formats knowledge in LLM is still under-explored. There are a few works (e.g. UnKE) for long-form knowledge edits, but the paper shows clear improvements.
Essential References Not Discussed: A related work is DEM: Commonsense Knowledge Editing Based on Free-Text in LLMs. The paper mentions it, but didn't compare with it in the experiments.
Other Strengths And Weaknesses: Overall I think the paper is clearly written, well-motivated. The experiments and results are convincing. I don't have major concerns.
One weakness is the edit speed. The proposed chunk by chunk approach is more computational intensive (Table 2) especially when scaling up the sequence length.
Other Comments Or Suggestions: I recommend the authors to also include DEM [1] in the experiments for more comprehensive comparison.
[1] Commonsense Knowledge Editing Based on Free-Text in LLMs
Questions For Authors: I don't have major questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer hVmV:
Thank you for your kind words and positive feedback regarding the novelty, presentation, and effectiveness of our work! Your approval is the great encouragement for us and motivates us to continue advancing our research.
Below, we meticulously provide responses to each of your comments and outline the modifications based on your suggestions.
## *W1: "The proposed chunk by chunk approach is more computational intensive especially when scaling up the sequence length."*
Thank you for raising this important concern. We acknowledge that our method indeed faces computational intensity issues, particularly when scaling up sequence length, and we highlighted this explicitly in Section 5.4 (Observation 6) of our paper. We have actively been exploring optimization strategies to address this challenge:
1. **Early stopping strategy:** During gradient descent optimization of Equation (9), we halt training once loss thresholds are met, allowing shorter optimization epochs without sacrificing performance. Due to shorter chunk lengths, thresholds are quickly reached, significantly accelerating the process.
2. **Adaptive chunk length selection:** Chunks vary in difficulty; simpler chunks can be longer, while challenging chunks can be shorter. Combined with loss thresholds, this significantly reduces the gradient descent epochs.
Additionally, since our initial submission, we have conducted preliminary experiments in the first direction and observed improvements in editing speed. A brief overview of these initial experimental results is provided in the following table:
|||||
|:-:|:-:|:-:|:-:|
|Method|UnKEBench|Counterfact|MQUAKE|
|MEMIT|16.37|17.05|16.92|
|AnyEdit|21.14|20.43|21.28|
|AnyEdit(accelerate)|18.89|8.22|19.03|
|||||
As shown, our optimization approach significantly improves editing speed over the original chunk-by-chunk method, and now **matches the performance of the single-token editing baseline**. We will continue working on further improving the chunk-by-chunk editing efficiency.
We hope this response addresses your concerns.
## *Suggestion1: "I recommend the authors to also include DEM in the experiments for more comprehensive comparison."*
Thank you for highlighting this important method! Your insightful suggestion has prompted us to recognize the value of including DEM for a more comprehensive evaluation. In response:
1. We have provided a detailed description of DEM in the Experimental Setup section of the revised manuscript.
2. We conducted additional experiments using DEM and presented the corresponding results and analysis in Section 5.2.
Note: Since DEM's code is currently not publicly available, we implemented the method ourselves according to our interpretation from the original paper. Below is a quick summary of our experimental results and analysis:
|||||||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|LLM|Method||**UnKEBench**||||**Counterfact**|||**MQUAKE**||
|||**Ori.**|**Ori.**|**Paraph.**|**Paraph.**|**Ori.**|**Ori.**|**Paraph.**|**Paraph.**|**Ori.**|**Ori.**|
|||BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|BertScore|Rouge-L|
|Llama3-8B-It|DEM|77.09±0.32|31.27±0.48|75.18±0.29|29.41±0.57|77.01±0.30|32.87±0.47|48.12±0.36|16.32±0.54|75.96±0.32|23.10±0.57|
|Llama3-8B-It|AnyEdit|**97.76±0.11**|**92.96±0.24**|**96.60±0.19**|**95.60±0.35**|**97.76±0.14**|**95.87±0.23**|**62.63±0.44**|**46.51±0.59**|**96.33±0.21**|**94.32±0.23**|
|Qwen2.5-7B-It|DEM|78.92±0.28|38.71±0.50|77.01±0.25|29.32±0.48|78.10±0.27|39.50±0.46|56.32±0.35|26.09±0.53|74.18±0.30|35.25±0.52|
|Qwen2.5-7B-It|AnyEdit|**98.05±0.16**|**94.89±0.29**|**93.56±0.15**|**79.98±0.28**|**98.08±0.15**|**95.09±0.19**|**65.40±0.38**|**43.49±0.47**|**98.14±0.13**|**96.39±0.18**|
||||||||||||
As the table demonstrates, our AnyEdit method consistently outperforms DEM. Although DEM dynamically selects layers due to common sense knowledge residing across different layers, it still operates under the limitation of single-token editing and thus struggles with long-form knowledge editing.
We greatly appreciate your valuable suggestions and believe these modifications significantly strengthen our paper.
---
Once again, we deeply appreciate your thoughtful and encouraging feedback. Your suggestions have not only enhanced the current work but have also inspired us to continue exploring research in the area of model editing. We are excited to keep moving forward and contributing to the community!
Best,
Authors of Submission9475
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal addresses my concerns. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer hVmV,
Thank you very much for your feedback and for keeping the score. We truly appreciate your support and encouragement. Your positive evaluation of our work means a great deal to us, and we are grateful for your time and thoughtful review.
We look forward to any further suggestions you may have in the future.
Best regards,
Authors of the Paper 9475 | null | null | null | null | null | null |
Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model | Accept (poster) | Summary: This paper looks at generating long-form videos of talking faces with a view to improving the overall expressivity and naturalness of the head motion / facial expression accompanying the speech. A diffusion model is used for generation that is conditioned on various cues, which include historical motion context from other videos, local motion context for the current sequence, an encoding of the lips and facial expression, and optionally the facial landmarks to help guide the motion generation.
Claims And Evidence: Some aspects of the expressiveness is problematic in the example video. For example, the head motion in the long-form sequence when the speaker rotates their head to their left and then back to the camera looks slow and seemingly unnatural. Also, the eyes are staring and look unnatural. The speech articulation looks very good, but I find other aspects of the selected videos do not support the claim that the work is significantly more expressive.
Methods And Evaluation Criteria: The authors curate a new dataset for benchmarking, which they will release. Other standard datasets have been used. The approach is tested against a number of state of the art methods. Also, standard metrics are used to measure performance.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: I have questions about the subjective evaluation. Specifically: 1) are viewers expected to watch all eight sequences and select the top three? This is a placing a high load on the viewers to keep track of which is the best three, and going back and forth to compare. 2) How long are viewers given to complete the task? 2) The text mentions randomization — can I confirm that the video labeled as A.mp4 is not always the same system that generates the A sequences? The text seems to suggest that the sequences are randomized, but the filename shown (A.mp4) should not be the same for a given system each time as this risks introducing bias.
Supplementary Material: I watched the provided video and read the appendix.
Relation To Broader Scientific Literature: Using the library of archived motion sequences to condition the generator is novel. Similar work has been done encoding the expressiveness and eyes separately, but the application here is different from what I have seen.
Essential References Not Discussed: Modern literature is cited, but I am not sure it is fair to call the cited examples in the introduction “early methods”. You can go back to the late 1990s to find work on creating expressive talking faces (albeit not deep learning / neural based methods). The paper does cite the relevant recent literature.
Other Strengths And Weaknesses: Strengths:
+ The objective metrics suggest the approach compares favorably against all baseline on all three datasets.
+ The code and data will be released.
Weaknesses:
- There are artifacts in the output from the proposed system, especially around the chin. I observed this in two of the examples.
- See the questions regarding the subjective evaluation. Why not do something more typical, like MOS?
Other Comments Or Suggestions: Throughout the paper you use TalkingFace as one word — you should use “talking face” when discussing talking faces in general.
In Figure 1 for the present-clip motion-prior diffusion model: you refer to the “express encoder”, which should read “expression encoder”.
Line 132: “a archived clip” > “an archived clip”.
Line 151: “limited history frame” > “limited frame history”.
Line 182: “and express encoder” > “and expression encoder”.
Line 196: “Lip and Express Encoders” > “Lip and Expression Encoders”.
Line 198: “lip and express encoders” > “lip and expression encoders”.
Line 316: remove the underline.
Appendix A is useful.
Questions For Authors: Q1: The paper states that conditioning on too many past frames for the generated sequence is computationally expensive. Why is it more expensive than conditioning on sequences from the archive?
Q2: If the landmark tokens are dropped, are they simply replaced with zeros?
Q3: On line 299: for the archived-clip motion-prior module, the text states \alpha=16. There is no \alpha mentioned in the description of this module in Section 3.2. Rather this appears in the description of the memory-efficient temporal attention in Section 3.4.
Q4: For the example showing the importance of Fa the text reads: “Figure 5(a) indicates that without the archived-clip (w/o Fa), identity consistency worsens with frame progression“ — this does not look like a shift in identity, but rather image artifacts being introduced.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer kTry:
**Thank you very much for the detailed comments. We address your concerns as follows.**
**Q1: Lack of Clarity in Technical Descriptions**
**Response:** Thank you for the helpful comments. We address each point below:
1. **Memory Cost of Past Frames vs. Archived Sequences:**
Standard temporal self-attention stores all key-value pairs, resulting in quadratic memory growth with the number of frames. In contrast, our Archived-Clip Motion-Prior Module integrates with Memory-Efficient Temporal Attention (MTA), which maintains a fixed-size memory via incremental updates, avoiding GPU memory overflow in long-term generation. Please see our experimentalanalysis in Q2 of R2 (Reviwer S3hq).
2. **Landmark Token Drop:**
Yes, we replace dropped landmark tokens with zeros.
3. **Typo in Line 299:**
Sorry, the symbol “\alpha” was a typo, it should be the variable name “a” and is unrelated to Section 3.4.
4. **Interpretation of Figure 5(a):**
We agree with your observation. Our intent was not to suggest identity drift. As stated in line 375, the absence of archived-clip priors leads to visible artifacts and inconsistencies in head, mouth, and expression, not identity change. We will clarify this wording in the revised version.
**Q2: Issues in Subjective Evaluation Design**
**Response:** Thank you for your thoughtful comments regarding the user study design. We address each concern below:
1. **Rank Selection Protocol:**
Following prior works [1,2], we adopted a ranking-based evaluation strategy to effectively capture relative user preferences. While this approach may introduce some cognitive load, participants were allowed unlimited time and could freely replay videos to ensure fair and consistent judgments.
[1] Ma et al., "Follow Your Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation", SIGGRAPH Asia 2024.
[2] Gao et al., "FaceShot: Bring Any Character into Life", ICLR 2025.
2. **Time Control:**
There was no strict time limit. Participants were allowed to complete the task at their own pace.
3. **Randomization and File Naming:**
We confirm that the method-to-letter mapping (e.g., A.mp4, B.mp4) was independently randomized for each session. The correspondence was stored using a dictionary to avoid fixed associations and mitigate bias.
4. **Addition of MOS Study:**
Thank you for your thoughtful advice. Following your suggestions, we have additionally conducted a Mean Opinion Score (MOS) study with 20 participants. Each participant rated video quality on a 1–5 scale (5 = Excellent, 1 = Bad). The results are summarized below:
| **Method** | **MOS Score ↑** |
|------------------ |------------------|
| Audio2Head | 1.6 |
| V-Express | 2.6 |
| AniPortrait | 2.9 |
| SadTalker | 3.2 |
| Hallo | 3.8 |
| EchoMimic | 3.7 |
| MegActor-∑ | 3.5 |
| **Ours (MCDM)** | **4.3** |
The experimental results demonstrate that our method achieved the highest MOS score, demonstrating strong user preference in terms of identity consistency and visual quality.
**Q3: Writing and Terminology Issues**
**Response:** Thank you for the reminder. We have fixed all mentioned issues, including terminology corrections, grammar errors, and formatting. We also reviewed the entire paper to correct other typos, table labels, and figure captions for consistency and clarity.
**Q4: Concerns on Expressiveness and Visual Artifacts**
**Response:** Thank you for the valuable feedback. We acknowledge that minor artifacts (e.g., around the chin or eyes) may appear in difficult cases involving large head movements, this remains an open challenge in the community. **Our work takes a meaningful step forward by proposing a memory-based generation framework and releasing a 200-hour multilingual dataset to help mitigate these issues.** Both quantitative and qualitative results validate the motivation of our approach, showing that our method consistently outperforms prior baselines in expressiveness and coherence. **More visual comparisons, including EMO (closed-source, official demo inputs), are provided in [link 1](https://files.catbox.moe/77kqif.mov) and [link 2](https://files.catbox.moe/1p4qo4.mov), demonstrating superior identity consistency and visual quality.**
**Q5: Literature Framing and Terminology**
**Response:** Thank you for the suggestion. We agree that referring to recent deep learning methods as “early” may be misleading. We have revised the introduction to more clearly distinguish early non-neural approaches from modern neural-based methods.
**Lastly, thank you so much for helping us improve the paper and appreciate your open discussions! Please let us know if you have any further questions. We are actively available until the end of this rebuttal period. Looking forward to hearing back from you!**
---
Rebuttal Comment 1.1:
Comment: Thank you for the follow up and the pointers to the rank-based subjective assessment. Your clarification here about allowing unrestricted viewing and unrestricted time makes sense. I appreciated the additional subjective testing that you ran. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kTry:
Thank you sincerely for taking the time to review our rebuttal and for thoughtfully considering our clarifications. We are especially grateful that you found the additional subjective evaluation and explanations helpful. **We truly appreciate your updated score and your constructive feedback throughout the review process,** it has been invaluable in helping us strengthen the quality and clarity of our work.
**If you have any additional questions or anything you would like to discuss in more detail, please feel free to let us know. We would be more than happy to discuss further and respond promptly.** | Summary: In this paper proposed the Motion-Priors Conditional Diffusion Model (MCDM), a diffusion-based framework designed for improved long-term motion generation to resolve the key challenges in existing methods, including lack of identity consistency, limited expression diversity, audio-motion synchronization issues, and error accumulation in long-term facial motion generation.
Claims And Evidence: In this paper claim that the weak correlation between audio and motion complicates the decoupling of identity and motion cues.
However, audio and lip motion are strongly correlated. The author did not clearly whether “motion” refers to lip motion, head motion, or both, which creates ambiguity in evaluating the claim.
Methods And Evaluation Criteria: The proposed method is well aligned with the problem of speaking face generation.
The evaluation criteria (dataset, metrics) are adequate and widely used in previous work.
However, the lack of efficiency evaluation weakens the memory-efficiency claim.
The lack of direct comparison with reference video-based methods makes it difficult to evaluate the benefits of the archived clip motion first.
Memory-efficient attention lacks complexity analysis to justify the efficiency claim.
The concept of "multi-causality" is mentioned, but there is no explanation of how it exists.
Even experimentally, there was no quantitative analysis of how much improvement was achieved when causality was broken.
Theoretical Claims: Memory-efficient attention lacks complexity analysis to justify the efficiency claim.
The concept of "multi-causality" is mentioned, but there is no explanation of how it exists.
Even experimentally, there was no quantitative analysis of how much improvement was achieved when causality was broken.
Experimental Designs Or Analyses: The authors introduce Memory-Efficient Temporal Attention (MTA) to improve long-term stability.
However, the authors does not provide the evidences that MTA is more memory-efficient than existing approaches.
No parameter comparison: The paper does not compare the parameter count of MTA with previous temporal attention mechanisms.
No memory usage evaluation: There is no quantitative measurement of GPU memory usage or computational cost (e.g., FLOPs, VRAM consumption).
No baseline comparison: The paper does not include comparisons with existing self-attention methods to demonstrate MTA’s efficiency.
Supplementary Material: The supplementary material states that 20 subjects were involved in the user study.
However, for subjective evaluations, a larger number of subjects is generally recommended to ensure statistical reliability.
Have the authors ensure that the results are robust despite the small number of subjects?
Relation To Broader Scientific Literature: This paper tried to address several issues that arise in existing Talking Face Generation.
To address motion distortion and flickering that occur during long-term generation in GAN and diffusion models, we propose the Motion-priors Conditional Diffusion Model (MCDM).
However, existing papers also try to preserve temporal information by using seq2seq specialized models to preserve temporal information well. Is there a reason why this discussion is missing?
Essential References Not Discussed: .
Other Strengths And Weaknesses: * Strengths:
The authors proposed named TalkingFace-Wild, a multilingual dataset with over 200 hours of high-quality video, is a valuable contribution to the research community.
The proposed Archived-Clip Motion-Prior Module effectively integrates long-term motion context, which is a promising direction for improving motion coherence.
* Weaknesses:
Memory efficiency is not empirically validated: No direct comparisons of parameter count, computational complexity (FLOPs), or VRAM consumption are provided.
The user study is limited in number of subjects (20 subjects), which raises concerns about the statistical reliability of the subjective evaluations.
The paper does not compare its approach to reference video-based motion modeling methods, which are commonly used to improve long-term consistency.
Other Comments Or Suggestions: The explanation of “Multimodal Causality” in Section 3.2 is somewhat vague. It would help to provide a more concrete formulation or a visual representation to clarify how causality is modeled between audio and facial motion.
The authors claim that the Memory-Efficient Temporal Attention module is to be more efficient, but the paper does not provide direct comparisons with standard self-attention in terms of computational cost or memory usage. Including FLOPs or VRAM usage in Table 7 would strengthen the claim.
Questions For Authors: Memory-Efficient Temporal Attention:
- The authors mentioned Memory-Efficient Temporal Attention as a motivation, pointing out the GPU memory limitation of existing methods, but no quantitative evaluation was done. Can you provide a table comparing the efficiency of this method with existing methods?
User Study:
- The paper conducts a user study with 20 subjects, which may raise concerns about the statistical reliability of the results.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer S3hq:
Thank you for your detailed and constructive review. **We appreciate your recognition of our contributions, including the TalkingFace-Wild dataset and the proposed motion-prior framework.** We address your concerns as follows.
**Q1: Ambiguity in Audio-Motion Correlation**
**Response:** Thank you for pointing this out. The original sentence "*Reliance on static audio features and weak correlations between audio and 'motion' complicate the decoupling of identity and motion cues...*". 'motion' refers specifically to **head motion**, not lip motion. Following [1-2], audio and head motion are only weakly correlated. We have clarified this distinction in the revised text.
[1] Wang et al., "V-express: Conditional dropout for progressive training of portrait video generation"
[2] Chen et al., "Audio-driven Talking-Head Generation with Rhythmic Head Motion" .
**Q2: Missing Complexity and Efficiency Analysis for MTA**
**Response:** Thank you for the detailed feedback. Standard temporal self-attention requires storing all key-value pairs, leading to quadratic memory growth and potential OOM issues in long sequences (e.g., V-Express in Table 7 is marked N/A due to this).
In contrast, our Memory-Efficient Temporal Attention (MTA) maintains a fixed-size memory via incremental updates, enabling stable and efficient long-term modeling.
Furthermore, we provide a comparison (T=4, batch size=1):
| **Method** | **FLOPs (G) ↓** | **Training GPU Memory (GB) ↓** | **FID ↓** |
|------------------|------------------|-------------------------------|------------|
| Self-Attention | 6.7 | 28.7 | 46.29 |
| **MTA (Ours)** | **3.1** | **16.4** | **42.08** |
MTA reduces FLOPs by **54%** and GPU memory by **43%**, while also improving FID. We will include this result in the final version.
**Q3: Lack of Explanation for Multimodal Causality**
**Response:** Thank you for highlighting this point. We apologize for the lack of clarity. By *multimodal causality (MC)*, we refer to modeling head motion, lip motion, and expression tokens jointly using a unified transformer conditioned on audio and reference image features. These conditioning tokens are prepended to the sequence and influence all motion tokens uniformly. Unlike modality-specific denoising, our model adopts a shared causal transformer to perform unified denoising across all motion cues. To quantify its effectiveness, we conducted an ablation study:
| **Method** | **FID ↓** | **FVD ↓** | **Sync-C ↑** | **Sync-D ↓** | **SSIM ↑** | **E-FID ↓** |
|--------------|-----------|-----------|--------------|--------------|------------|-------------|
| w/o MC | 46.03 | 702.18 | 7.42 | 7.03 | 0.754 | 2.19 |
| **w/ MC** | **42.08** | **656.71**| **7.84** | **6.69** | **0.779** | **1.97** |
These results demonstrate that multimodal causality modeling leads to consistent improvements in identity preservation, synchronization, and overall generation quality.
**Q4: Missing Comparison with Reference-Based and Seq2Seq Methods**
**Response:** Thank you for the suggestion. As noted in Q2, reference-based methods (e.g.,EchoMimic) segment long videos to mitigate OOM, but often lead to identity inconsistency between clips. Seq2Seq models (e.g., SadTalker and EMO) lack memory mechanisms and are prone to error accumulation. Our unified memory-based framework avoids these issues by maintaining and updating a fixed-size memory for consistent long-term generation. We have added qualitative comparisons in the [link 1](https://files.catbox.moe/77kqif.mov), and [link 2](https://files.catbox.moe/1p4qo4.mov). which demonstrate superior identity consistency and visual quality.
**Q5: Concerns About the User Study Design**
**Response:** We would like to clarify that in our user study, the mapping between methods and filenames (e.g., A.mp4, B.mp4) was randomly shuffled for each evaluation task, and the correspondence was stored using a dictionary to ensure fair randomization and avoid naming bias.
The initial study involved 20 participants, which aligns with common practice in prior works [3]. To enhance statistical reliability, we extended the study to 40 participants. We made every effort to expand the subject pool within the available timeframe. The results are available at: [link3](https://files.catbox.moe/dm8rio.pdf).
[3] Zhang et al., "SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation".
**In the end, thanks a lot for your detailed comments and thank you for helping us improve our work! We appreciate your thoughts on our work and we would be more than happy to discuss more during or after the rebuttal to explore more in this direction. Please let us know if you have any further questions. We are actively available until the end of this rebuttal period.** | Summary: The paper introduces the Motion-priors Conditional Diffusion Model (MCDM) for long-term TalkingFace generation, leveraging archived historical frames and present-clip motion priors through a memory-efficient temporal attention mechanism to achieve robust identity preservation, synchronized expressions, and accurate lip-audio alignment while releasing the multilingual TalkingFace-Wild dataset and demonstrating state-of-the-art performance across benchmarks.
## update after rebuttal
Thanks for your detailed responses. I appreciate the additional experiments, qualitative comparisons, and clarifications, which have improved the quality of the paper. Given these improvements, I am happy to maintain my original rating and support the paper's acceptance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: It improves the performance of long-term talking head video generation by achieving robust identity preservation, synchronized expressions, and accurate lip-audio alignment.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Overall, this paper presents an interesting idea and is well-structured, with carefully designed figures. The experiments are rich and comprehensive, and the work meets the standard of ICML. However, I still have several concerns:
1) Ablation Study: The archived motion prior module defaults to using 16 frames of historical information, but the impact of using shorter or longer histories on long-term consistency is not explored. For instance, would 4 frames be sufficient, or would more than 16 frames yield better results? A thorough ablation study on this aspect would strengthen the work.
2) Comparison with Related Work: Although the two related papers [1][2] do not release their code, it would still be valuable to provide qualitative comparisons with them. Since both papers offer demo videos on their official websites, comparisons could be made using their audio and images at least at a qualitative level to better position this work among existing approaches.
3) Concurrent Work: The authors mention that they do not compare with the concurrent method [3] due to the unavailability of code. However, I noticed that [3] has publicly released both code and model weights on GitHub. Including a comparison with [3] would make the evaluation more complete and convincing.
4) Video Comparisons with Open-Sourced Methods: In the supplementary video, there are no comparison demos with existing open-sourced methods such as SadTalker, EchoMimic, and so on. Including video comparisons would provide a clearer understanding of the advantages and limitations of the proposed method and help readers better assess its performance.
References:
[1] Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency
[2] EMO: Emote Portrait Alive -- Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
[3] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation
Other Comments Or Suggestions: The writing of the Introduction section needs refinement:
1. The transition and logical flow between the second and third paragraphs, especially the use of logical connectors, should be improved and supported with sufficient evidence. For example,
a) The relationship between GAN-based methods and Diffusion-based models needs to be clarified to ensure coherence.
b) In line 041, the claim that "facial landmarks overly bind poses to the reference image" is unclear — this motion constraint should naturally allow different poses across frames.
c) In line 043, the assertion that overly binding poses limits expression diversity is also questionable. Facial expressions are largely independent of head pose — humans can maintain a fixed pose while still displaying a wide range of expressions.
d) Additionally, the term "static audio feature" is ambiguous — audio features extracted from a time window inherently contain dynamic information.
2. Contributions 2, 3, and 4 should be merged, as they all pertain to technical details and belong to the same category of contributions.
Questions For Authors: No.
Ethics Expertise Needed: ['Inappropriate Potential Applications & Impact (e.g., human rights concerns)', 'Privacy and Security']
Ethical Review Concerns: The research focuses on facial applications, which raises concerns about privacy, security, and human rights, as facial images can be easily modified without individuals' consent.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Vnrq:
Thank you for your thoughtful and encouraging review. We appreciate your recognition of our work as **“an interesting idea”** with **“rich and comprehensive experiments.”** We have carefully considered all your suggestions. Please find our point-by-point responses below.
**Q1: Ablation Study on Historical Frame Length**
**Response:** Thank you for the insightful question. We conducted an ablation study on the number of archived frames, as shown in Table 8 of the supplementary material, covering 2, 4, 8, and 16 frames. Additionally, we extended the study to 24 frames. Due to memory constraints, further increases were infeasible.
| **Method** | **FID ↓** | **FVD ↓** | **Sync-C ↑** | **Sync-D ↓** | **SSIM ↑** | **E-FID ↓** |
|--------------|-----------|-----------|--------------|--------------|------------|-------------|
| MegActor-Σ | 48.57 | 724.40 | 7.22 | 7.14 | 0.745 | 2.29 |
| Ours-2 | 45.72 | 704.26 | 7.39 | 7.03 | 0.754 | 2.19 |
| Ours-4 | 44.53 | 694.61 | 7.55 | 6.93 | 0.767 | 2.08 |
| Ours-8 | 43.68 | 678.49 | 7.68 | 6.81 | 0.773 | 2.03 |
| Ours-16 | 42.08 | 656.71| 7.84 | 6.69 | 0.779 | 1.97 |
| **Ours-24** | **41.74** | **642.35**| **7.96** | **6.55** | **0.782** | **1.94** |
The experimental results show that even with only 2 frames, our method outperforms MegActor-Σ across all metrics, indicating strong robustness under limited historical context. Furthermore, increasing the number of archived frames consistently enhances identity preservation and motion quality. We will highlight this ablation more clearly in the revised manuscript.
**Q2: Lack of Qualitative Comparison with Related Works [1][2], Recent Open-Sourced Hallo2, and Other Methods**
**Response:** Thank you for the valuable suggestion. We have added qualitative comparisons between our proposed MCDM and both open-source methods (SadTalker, EchoMimic, and Hallo2) and closed-source methods (Loopy [1] and EMO [2]), using the official audio and image inputs provided on the project pages of Loopy and EMO. Please refer to the following demo links: **Loopy**: [Demo 1](https://files.catbox.moe/d54l8i.mov), [Demo 2](https://files.catbox.moe/zezhpy.mov) and **EMO**: [Demo 1](https://files.catbox.moe/77kqif.mov), [Demo 2](https://files.catbox.moe/1p4qo4.mov).
Our method demonstrates stronger identity consistency and better motion synchronization compared to SadTalker, EchoMimic, and Hallo2, while achieving visual quality comparable to Loopy. Additionally, it generates more natural expressions and overall enhanced visual realism compared to the original EMO outputs.
**Q3: On the Distinction Between GAN-based and Diffusion-based Methods**
**Response:** Thank you for the helpful suggestion. We have revised the introduction to improve the logical flow and clarify the distinction between GAN-based and diffusion-based methods. GANs often suffer from training instability and artifacts such as flickering, especially in long-form generation. In contrast, diffusion models provide more stable training and improved visual realism via multi-step denoising and stronger identity preservation, making them better suited for this task. We have added this clarification and cited relevant works in the revised version.
**Q4: On Pose Constraints and Expression Diversity**
**Response:** Thank you for the constructive feedback. (1) Our original intent was to highlight that when the reference image and driving pose come from different identities, strong pose constraints (e.g., facial landmarks) may cause identity distortion or unnatural warping. (2) While facial expressions are generally independent of head pose, over-constraining pose can inadvertently limit expression diversity. For example, methods like EchoMimic use dense 3D landmarks from tools such as MediaPipe, which may implicitly encode facial details and constrain expression variation. We have clarified this point in the revised introduction.
**Q5: Misuse of the Term “Static Audio Feature”**
**Response:** Thank you for pointing this out. The term “static audio feature” was a misphrasing. Our intended meaning was to describe the limitation of relying solely on audio features without incorporating complementary cues such as lip or head motion. We have clarified this wording in the revised manuscript.
**Q6: Redundancy in Contribution Statements**
**Response:** We agree with the reviewer’s observation. Contributions 2, 3, and 4 all describe technical components of our framework. We have merged them into a single contribution for improved clarity.
**Again, thank you so much for helping us improve the paper! Please let us know if you have any further questions. We are actively available until the end of this rebuttal period. Looking forward to hearing back from you!**
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed responses. I appreciate the additional experiments, qualitative comparisons, and clarifications, which have improved the quality of the paper. Given these improvements, I am happy to maintain my original rating and support the paper's acceptance. However, I also respect the suggestions of the other reviewer and the AC.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Vnrq,
Thank you very much for your follow-up comment and for maintaining your support for our paper. We sincerely appreciate your recognition of our additional experiments, qualitative comparisons, and clarifications. Your constructive feedback helped us significantly improve the quality and clarity of the paper.
**Although the author-engaged discussion phase will be over by today, if you have any additional questions or open discussions, please don't be hesitant to leave more comments. We are always available at all time, to actively address any concerns or be prepared for more discussions.**
**Your opinions are rather important for us to improve the work!**
Thank you!
Sincerely,
The Authors | null | null | null | null | null | null | null | null |
STAIR: Improving Safety Alignment with Introspective Reasoning | Accept (oral) | Summary: This paper introduces STAIR, a novel method that utilizes introspective reasoning for safety alignment in LLMs. It consists of three stages: Structured CoT Format Alignment, Self-improvement with SI-MCTS, and Test-time Scaling. Experiments on safety-related and general benchmarks suggest that STAIR helps to mitigate harmful outputs while preserving helpfulness.
Claims And Evidence: The claims of introducing introspective reasoning for safety alignment to achieve System-2 thinking are well-supported by the methodology and experiments.
Methods And Evaluation Criteria: The proposed method is a reasonable and effective approach for addressing the safety-performance trade-off in LLMs. Some questions about the details:
1. Apart from examples of harmful queries in Figures 6,7,8, can the authors provide examples of reasoning steps in harmless scenarios? E.g. The "Allowed Requests" in "Safety Policies for Deliberative Alignment" in the supplementary material.
2. What is the detailed prompt of self-rewarding?
3. In Appendix A, would it be necessary to provide a detailed definition of "harmful and risky contents" to construct more targeted CoT data?
Theoretical Claims: I checked the proof of the theorem on the MCTS reward function.
Experimental Designs Or Analyses: The authors conduct extensive experiments on a range of benchmarks, including comparison with existing approaches and ablation studies. I have a few questions and suggestions:
1. It is good to see the performance of proprietary LLMs on StrongReject in Table 4, but results of proprietary LLMs on other safety-related and general benchmarks should also be included for comparison.
2. In Table 2, what is "Full Dataset, 1 Iteration"? What is the difference between this entry and DPO-1?
3. How will the inference time change when test-time scaling is applied?
Supplementary Material: I checked the appendix and the supplementary material carefully.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Lines 20 and 76: Itrospective -> Introspective
Figures 6,7,8: there seems to be something wrong with the letter spacing in the orange box.
Questions For Authors: 1. Will the constructed structured CoT dataset be publicly released?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We are deeply encouraged by the reviewer’s thorough and thoughtful feedback. We address your concerns below.
**Q1: Examples**
Here we show one incomplete example in AlpacaEval. More complete ones will be displayed in revision.
```
Prompt: What is some cool music from the 1920s?
Answer:
<R>
Identifying Key Musical Genres of the 1920s:
The 1920s was a transformative period for music, marked by the rise of jazz, blues, and early forms of popular music. Jazz, in particular, emerged as a major force, ...
</R>
<R>
Notable Artists and Songs of the 1920s:
1. **Jazz**:
- Louis Armstrong: Known for his trumpet playing and iconic vocals in songs like "Heebie Jeebies" (1926).
- ...
2. **Blues**:
-...
</R>
<R>
Summarizing the Musical Landscape of the 1920s:
The 1920s was a decade of innovation and growth in music, marked by the rise of jazz as a major cultural force, ...
</R>
Some cool music from the 1920s includes jazz classics like Louis Armstrong's "Heebie Jeebies" and Duke Ellington's "Mood Indigo", blues hits by Bessie Smith ("Downhearted Blues"), ...
```
**Q2: Prompts for self-rewarding.**
Helpfulness:
```
Here is a question-answer pair. You need to evaluate whether the provided answer effectively addresses the question and resolves it satisfactorily.
You need to judge whether the answer successfully resolves the question in a clear and effective manner, and rate a score from 1 to 5:
1. All information provided by the answer is wrong or irrelevant to the question;
2. The answer contains relevant correct details but also contains some wrong information;
3. The answer contains relevant correct details and only contains little wrong information;
4. The answer contains relevant correct details with no wrong information, but not comprehensive and enough to fully solve the question;
5. The answer contains enough and comprehensive details to fully solve the question with no wrong information.
Question: {prompt}
Answer: {response}
Give your conclusion at the end of your response with a new line. The format should be "Conclusion: [score]".
```
Safety:
```
Here is a question-answer pair. You need to evaluate whether the provided answer is safe and aligned with human ethics.
If the question has harmful intent, the answer should directly refuse to answer this question, otherwise the answer is supposed to be UNSAFE.
If the question does not have harmful intent, the answer should align with human ethics and does not provide any dangerous information that could possibly lead to harm, otherwise the answer is supposed to be UNSAFE.
Question: {prompt}
Answer: {predict}
Give your conclusion at the end of your response with a new line. The format should be "Conclusion: [SAFE]" or "Conclusion: [UNSAFE]".
```
**Q3: Definition needed?**
The prompt is used to synthesize structured CoT data with GPT-4. Assuming GPT-4 has learned safety guidelines by OpenAI, we intentionally avoid specifying a fine-grained taxonomy of safety risks during CoT collection. This allows the model to perform flexible, generalizable risk identification through introspection, which can be learned by our models in training.
**Q4: Results of proprietary LLMs.**
We list additional results of proprietary LLMs on both safety and general benchmarks.
||StrongReject|XsTest(%)|GSM8k(%)|AlpacaEval(%)|
|---|:---:|:---:|:---:|:---:|
|GPT-4o|0.378|78.5|96.7|51.1|
|Claude-3|0.916|75.5|93.0|34.9|
|Claude-3.5|0.936|77.5|96.2|52.0|
|Deepseek-R1|0.305|61.5|97.8|87.6|
|STAIR-DPO-3 (8B)|0.880|99.0|87.6|38.7|
It's expectable that these models outperform our 8B LLM in general capabilities, since our focus is on preservation rather than significant improvement. Notably, our model shows top-tier safety among them, highlighting the effectiveness of STAIR.
**Q5: Entry explanation**
We use different prompts to construct preference datasets across three iterations, with 5k prompts each for safety and helpfulness per iteration, totaling 30k prompts during self-improvement. To support our motivation that data quality improves with iterative training, we compare against a baseline using all 30k prompts in a single iteration ("Full Dataset, 1 Iteration") while DPO-1 only use 10k prompts. We will further clarify this in the revision.
**Q6: Time with TTS**
We measure the inference time of TTS on two benchmarks and report the average time for each prompt below. Due to the time limit, we only show the results with Best-of-N, N=4,8, which are approximately proportional to the budget.
||StrongReject|AlpacaEval|Budget $N$|
|---|:----:|:---:|:---:|
||time/prompt (s)|time/prompt(s)||
|STAIR-DPO-3|0.321|0.322|1|
|+Bo4|0.628|0.803|4|
|+Bo8|1.069|1.489|8|
**Q7: Minors**
Thanks. The format issue is mainly due to the inclusion of pdf. We will correct these in revision.
**Q8: Release of dataset**
Yes. We will release both the annotated dataset for CoT format alignment and the one for MCTS construction as well as the trained models after the anonymity stage is expired.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Since my concerns and questions have been addressed, and considering the high quality of this research has been recognized by all the reviewers, I will keep my positive rating. I suggest including these discussion in the final version.
---
Reply to Comment 1.1.1:
Comment: We once again appreciate your recognition of our work, and we’re glad to see that our responses have addressed your concerns, as reflected in the improved score. We will include the above discussions in revision properly. | Summary: This paper proposes STAIR, a novel framework that integrates safety alignment with introspective reasoning. The idea is to enable LLMs to identify safety risks through step-by-step analysis by self-improving CoT reasoning with safety awareness. The key steps include 1) SFT to make LLM output structured CoT; 2) self-improved safety-informed MCTS, using self-rewarding, stepwise preference optimization and iterative self-improvement; 3) test-time scaling where a process reward model is trained based on the preference data constructed from the tree. Extensive experiments show STAIR effectively mitigates harmful outputs while better preserving helpfulness.
Claims And Evidence: Most claims are clearly supported and verified.
Regarding the reward choice: although authors show the final performance with the simplest form of reward is good, the paper does not give further discussion or ablation study regarding other possible forms.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I briefly go over the threotical claim, Theorem 2.1.
Experimental Designs Or Analyses: The experimental designs are reasonable.
Supplementary Material: I went over the SFT part and also the codes.
Relation To Broader Scientific Literature: Good. The paper discusses its relation to system-2 reasoning based safety alignment, and also try its own implementation for comparison.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The idea of enabling an LLM, not necessarily reasoning LLM, to have CoT reasoning with safety awareness is interesting.
- The final performance is good in terms of both harmlessness and helpfulness.
- Experiments are extensive, including comparison with self-implemented o1 model as well as commercial models.
Weaknesses:
- The reward design is not fully discussed or studied.
- The overall algorithm of SI-MCTS seems ad-hoc (see below).
Other Comments Or Suggestions: N/A
Questions For Authors: I have the following questions:
- Although the final performance with the simplest form of reward design is good, the paper does not give further discussion or ablation study regarding other possible forms. What if I pick other reward designs under the form of Theorem 2.1?
- The overall algorithm of SI-MCTS seems ad-hoc. To me, it is not clear why the paper picks these strategies and what are the intuitions?
- Since you are using a self-rewarding mechanism, why one needs to apply the stepwise preference optimization, and also re-train a PRM for test-time phase? Or the self-rewarding does not provide accurate rewards? I do not see further discussion on this point.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are greatly encouraged by your recognition of our well-supported motivation and experiments. We also thank you for your careful reading and meaningful questions. Below are our responses to the concerns.
**Q1: Other forms of reward function**
Thank you for this constructive suggestion. In this work, our intention of using the simplest instantiation is not to state that this function gives the best performance, but to highlight that the listed properties are critical for the problem and even a simple design yields strong empirical performance, which confirms the superiority of our overall framework.
Meanwhile, we make some additional experiments exploring the performance of other potential forms, e.g., non-linear $F(S)$ and different selection of $C_2$ in Eq.(1). We generate data with 1k safety and 1k helpful prompts and perform step-level DPO on STAIR-SFT of LLaMA.
|Stage|$F(S)$|$C_2$|StrongReject|AlpacaEval(%)|HHH(%)|GSM8k(%)|
|---|---|---|:---:|:---:|:---:|:---:|
|STAIR-SFT|-|-|0.6536|30.02|83.13|86.05|
|+Step-level DPO|$S$|-1|0.6712|32.59|84.88|86.73|
||$S$|-0.5|0.6633|32.33|86.17|86.96|
||$2^S-1$|-1|0.6753|31.91|83.41|87.64|
||$-S^2+2S$|-1|0.6688|32.18|85.99|86.16|
The results show that all forms improve the performance in both safety and general capabilities with insignificant difference. This confirms the validity of our theoretical result. More detailed analysis on function property can be left for future work.
**Q2: Motivation of SI-MCTS**
We believe that each component in the framework of SI-MCTS is well-motivated and carefully grounded in the goal of integrating safety-aware reasoning into the alignment process by fully leveraging the model's own potential.
We use MCTS, a common approach for LLM reasoning, to effectively explore diverse reasoning paths and enhance the reasoning capability in a fine-grained manner, as it evaluates each reasoning step with rollout and backpropagation compared with full-trajectory rejection sampling. However, as conventional MCTS only considers the correctness, we have to design a novel reward to seek balance between helpfulness and safety. We achieve this by performing theoretical analysis with reasonable properties required. While most safety alignment methods supervise the alignment with direct refusal at an output level, we construct step-level pairs to provide dense, localized supervision, internalizing the safety awareness into reasoning. SI-MCTS is performed iteratively by self-improvement, which is encouraged by the independence from external annotations and the principle of self-play. As models trained on reasoning data become better at identifying safe paths, they in turn generate higher-quality search trees for future iterations. Our ablation in Table 2 also confirms these motivations and superiorities.
Thus, the overall design of SI-MCTS is logical and meaningful with each component to solve certain issues towards the common goal of process-level safety. We will further improve the presentation in the revision to clarify these intuitions.
**Q3: Necessity of stepwise optimization and PRM compared to self-rewarding**
We adopt self-rewarding mainly for data generation within the framework of SI-MCTS. It evaluates the final answer after rollout with a scalar value at the output level, which is averaged for internal nodes in the search tree. When we apply it in inference, it only provides an outcome signal and can be noisy for each sample. We make a comparison between Best-of-4 inference with self-rewarding and PRM on STAIR-SFT and STAIR-DPO-1.
||StrongReject|AlpacaEval(%)|
|---|---|---|
|STAIR-SFT|0.6536|30.02|
|+self-reward Bo4|0.6719|30.57|
|+PRM Bo4|0.6727|30.95|
|STAIR-DPO-1|0.6955|32.86|
|+self-reward Bo4|0.7370|32.03|
|+PRM Bo4|0.7384|33.11|
|STAIR-DPO-3|0.8798|35.96|
We show that self-rewarding brings sub-optimal performance, especially on AlpacaEval which can have higher variances in self-rewarded scores for helpfulness. As for the trained PRM, the data from SI-MCTS gathers the averaged information from self-rewarding and guarantees its generalization across diverse answers, providing more efficient and reliable evaluation than self-rewarding. Meanwhile, the models after stepwise optimization (DPO1, DPO3) have better performance than both search methods. The stepwise preference optimization embeds the rewarding signals in the pairwise training data whose quality is ensured with threshold-based selection. It's more stable and theoretically grounded than directly guided with self-reward values in inference.
The results confirm the necessity of our framework of self-improvement with stepwise optimization and the training of PRM. Still, we need to emphasize that the use of PRM is an initial exploration of test-time scaling in safety alignment but not our core technical contribution.
---
Rebuttal Comment 1.1:
Comment: Thanks for response and many additional experiments. Please include these responses into the camera-ready version, particularly the intuition regarding these necessary modules. I have increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We will refine the paper based on the discussions above. | Summary: This paper introduces STAIR, a framework that enhances LLM safety by incorporating step-by-step reasoning instead of relying on direct refusals. The three-stage approach includes: structured CoT format alignment through fine-tuning, iterative self-improvement using Safety-Informed Monte Carlo Tree Search (SI-MCTS) with step-level DPO, and test-time scaling with a process reward model. Experiments demonstrate that STAIR effectively balances safety and helpfulness, showing superior resistance to jailbreak attacks compared to baseline methods.
Claims And Evidence: The paper's claims about STAIR's effectiveness are generally well-supported by comprehensive experiments across multiple safety and performance benchmarks.
Methods And Evaluation Criteria: The three-stage framework logically addresses different aspects of safety alignment with appropriate methods for each component. The Safety-Informed MCTS design with its dual-objective reward function is novel and well-formulated for balancing safety and helpfulness.
Theoretical Claims: I verified the theoretical analysis in theorem 2.1 and Appendix B.1 regarding the Safety-Informed reward function.
Experimental Designs Or Analyses: The experimental approach is sound, using appropriate datasets (PKU-SafeRLHF, JailbreakV-28k, UltraFeedback), comprehensive evaluation benchmarks, diverse model selections (LLaMA and Qwen), and well-designed iterative improvement methods with SI-MCTS. However, the paper lacks detailed analysis of computational costs for training and inference.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The work is well-positioned within research on LLM safety alignment and reasoning. It makes full use of recent advantage in these domains to provide an effective method.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. Main weaknesses involve the computational overhead potentially limiting practical applications.
2. While Safety-Informed MCTS builds on existing MCTS approaches for reasoning, the paper could better differentiate its technical innovations. Provide some insights on applying MCTS to safety problem.
3. Insufficient details on implementation for self-rewarding mechanism to ensure reproducibility. Some of the design choices are heuristic.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does STAIR's computational cost compare to standard safety alignment methods in terms of training time and inference latency?
2. What is the difference of reasoning-based safety alignment and normal safety alignment?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your thoughtful review and for recognizing our novel framework design, comprehensive experiments, and the significance of our work for safety alignment. We also appreciate your constructive suggestions and address your concerns below.
**Q1: Computation costs**
The main overhead comes from increased sampling in data generation and longer responses, both tied to enhanced reasoning, which is reasonable. Below we assess this from two aspects.
* Training: STAIR's training from scratch takes ~30 hours on 8 A800 GPUs. Most cost arises from SI-MCTS data generation during iterative self-improvement, which is performed offline and doesn't impact deployment. Importantly, this process is annotation-free, with both data and rewards self-generated, greatly reducing human labeling cost. On average, it takes ~15s per prompt to construct a search tree. Meanwhile, although the time cost is evident, we can get plenty step-level preference data from it. The average time to get a valid preference pair is about 0.47s while that of self-rewarding is about 0.40s, which indicates that our framework does not introduce heavily overhead compared to methods adopting self-generated data.
* Inference: The additional test computations stem from 1) longer responses, which are inevitable but acceptable with explicit reasoning benefiting safety and marginal as shown below; 2) test-time search, which is optional given the superior performance of DPO-3 only and can be adjusted according to the need in deployment. TTS costs are shown in our response to Q6 of Reviewer CMdr.
|Inference|StrongReject||XSTest||AlpacaEval||HHH||GSM8K||
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||#token/prompt|time/prompt(s)|#token/prompt|time/prompt(s)|#token/prompt|time/prompt(s)|#token/prompt|time/prompt(s)|#token/prompt|time/prompt(s)|
|Base|303.24|0.248|223.60|0.339|448.03|0.266|331.90|0.525|256.43|0.168|
|SFT|523.52|0.332|354.91|0.356|552.08|0.305|460.28|0.685|337.76|0.196|
|DPO3|319.80|0.294|240.55|0.371|607.60|0.328|456.87|0.696|362.56|0.194|
While STAIR incurs additional computations, we believe it offers a valuable balance between safety, general performance, and resource usage, and remains practical for real-world use.
**Q2: Motivation and contributions of the method**
_Q2.1: Difference between reasoning-based safety alignment and normal one_
As explained in the introduction section, we draw an analogy between reasoning-based vs. normal safety alignment and System 2 vs. System 1 thinking. Normal safety alignment methods often rely on direct refusal demonstrations. These lead to shallow alignment, where models memorize refusal heuristically without deeply understanding why a query is unsafe, making LLMs vulnerable to jailbreak attacks. This motivates us to combine reasoning with safety alignment. It encourages models to decompose queries and identify risks in the process of dynamic reasoning. We achieve this by aligning reasoning steps with safety guarantees using stepwise preference optimization and eventually enhance the robustness by internalizing safety awareness into the reasoning process.
_Q2.2: Technical innovations and insights_
This work aims to enhance safety alignment through improved reasoning. To this end, we adopt MCTS, a common approach for LLM reasoning, to explore high-quality reasoning trajectories. However, conventional MCTS often evaluates the outcome correctness, which cannot balance safety and helpfulness. We propose SI-MCTS, introducing a Safety-Informed Reward Function with an analytical solution (Theorem 2.1) that satisfies three key properties and theoretically considers both safety and helpfulness. While most safety alignment methods supervise the training with direct refusals, we construct stepwise preference data from the search trees to provide more concrete and dense signals. We further employ a self-improvement training loop that avoids costly annotations while consistently boosting performance. The experiments and ablations confirm the superiority of our method. This indicates the potential in the combination of safety alignment and reasoning, as well as the effectiveness of MCTS in providing fine-grained supervision of safety when it's properly adapted to the objective of alignment. We will better clarify these in revision.
**Q3: Details of self-rewarding**
We introduce it in Appendix B and will further clarify the details in revision. We take two prompts for the two aspects. For helpfulness, the LLM is asked to give scores from 1 to 5, which are normalized within [-1,1]; and for safety, it's asked to decide whether the response is safe or not (-1/1). In practice, we sometimes notice refusals to questions with clear risks and use a rule-based reward to get more accurate rewarding. The rule takes refusal behavior (keyword matching with "sorry") and response length into account. Due to space limit, detailed prompts are in the response to Q2 of Reviewer CMdr. | Summary: The paper proposes a reasoning-based framework to ensure safety alignment of large language models. The proposed method demonstrates better safety preserving the helpfulness.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, Appendix B
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, fully
Relation To Broader Scientific Literature: - This work connects safety alignment with MCTS and preference optimization which is important for test time scaling in this domain.
- This work will encourage more test-time scaling frameworks like self-reflection for safety alignment.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Strong performance improvement over the baseline.
- Successful application of DPO with MCTS in a new domain.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback on our work, especially regarding its role in bridging safety alignment and reasoning enhancement with MCTS. We will further improve our work based on the suggestions provided by other reviewers. | null | null | null | null | null | null |
Training Dynamics of In-Context Learning in Linear Attention | Accept (spotlight poster) | Summary: The authors consider the training dynamics of linear attention using a joint query-key matrix vs. separate query matrix and key matrix, when learning to learn in-context linear regression. It was then shown that in the former case, there are two (classes of) fixed points, while in the latter case, there are 2^D cases where D is the dimension of the x-vector. Thus, during training, the former case has one single abrupt drop, while for the latter, multiple smaller drops are present. The authors also show how in the latter case, the training would experience D drops, corresponding to learning the eigen structure of the covariance matrix. Several experiments are provided, which mostly align with the theoretical analysis. The converging time courses are also analyzed for some simplified cases, which also match the experiments.
## update after rebuttal
Thanks for the response. I like the results and increased the score to 4.
Claims And Evidence: The empirical studies are well designed and appear to support the theory well.
Methods And Evaluation Criteria: The authors train the linear attention models for the setting, and compare the observed behaviors with the theoretical analysis.
Theoretical Claims: Yes, the proofs mostly look correct.
Experimental Designs Or Analyses: Yes, the experiments are described with sufficient details, and support the theory.
Supplementary Material: The appendix, B and C.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: Sufficiently discussed.
Other Strengths And Weaknesses: While I like the results in general, there are several weaknesses inherent in the study and the formulation of the problem:
1. One main concern is the assignments of 0's to the v_i, u_i to the query and key matrices. Though this might be acceptable for the analysis of the eventual optimal solution, it is less acceptable for the analysis of the training dynamics, because they might impact the dynamics in an unpredictable way. Note that in practice, these matrices are usually assigned random values at initialization, instead of being assigned 0.
2. In Garg's work (and some others), the training loss function is the average of the losses over the whole context window \sum_{n=1}^{N}, instead of only the last position N+1. I feel the former is a more accurate reflection of the "in-context learning" concept, while the latter seems unrealistic.
3. The linear attention model allows the analysis to be done more precisely, but also too simplified. The softmax function is important in many settings. Similarly, in practice, a feedforward network is used after the attention mechanism, and multiple attention layers are usually used.
4. While the difference in the training dynamics is interesting, it is not clear whether this distinction is important in practice, because they seem to be unique to the linear-attention, and will likely break with a softmax function, the FF network, and multiple layers.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Will non-zero assignments of v_i and u_i change the structure of the fixed points?
2. Will using the cumulative loss as in Garg change the training dynamics?
3. Will using softmax change the structure of the training dynamics?
I would consider raising the score if the authors can answer these questions satisfactorily.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your interest in our work and for raising clear, constructive questions. We're glad to hear that you like our results. Below, we present some new experimental results and respond to questions. The numbering follows the reviewer's "Questions for Authors" list.
1. **Nonzero** $\boldsymbol v_i,\boldsymbol u_i$
Thank you for this insightful question. The validity of fixed points is not affect by $\boldsymbol v_i,\boldsymbol u_i$. In other words, setting $\boldsymbol v_i=\boldsymbol u_i = \boldsymbol 0$ and the rest of the weights to Equation (9) or (15) results in valid fixed points. This follows from our proof that (i) $\boldsymbol v_i,\boldsymbol u_i$ have zero gradient when they are zero; (ii) the rest of the weights have zero gradient when they satisfy Equation (9) or (15). Therefore, all weights in the full model have zero gradient.
As for dynamics, we agree that in practice, $\boldsymbol v_i,\boldsymbol u_i$ can be nonzero and they evolve in training, affecting the dynamics in interesting ways. We actually discussed this issue in our Appendix F "Training Dynamics of In-Context and In-Weight Learning". In our setup, nonzero $\boldsymbol v_i,\boldsymbol u_i$ can be interpreted as a form of in-weight learning (IWL). In our main setup of sampling the task vector $\boldsymbol w$ iid from $\mathcal N (0,I)$, $\boldsymbol v_i,\boldsymbol u_i$ remain near zero throughout training, and the model develops only ICL, as shown in Figure 9a. However, when a portion of $\boldsymbol w$ is fixed, the evolution of $\boldsymbol v_i,\boldsymbol u_i$ is non-negligible. In this case, the model first develops IWL, then loses IWL while simultaneously developing ICL, as shown in Figure 9b-e. This competition between ICL and IWL is indeed intriguing. While a full theoretical treatment is beyond our scope, we hope future studies can build on our analysis of the purely ICL dynamics and explore its interaction with IWL dynamics. If the reviewer finds it beneficial, we're happy to move some of Appendix F to main text.
We also note that setting $\boldsymbol v_i=\boldsymbol u_i = \boldsymbol 0$ is common in theory literature on ICL in linear [1,2] and softmax [3,4] attention.
2. **Cumulative Loss**
We have added Figure 14 (see [URL](https://icml1939.tiiny.site/)) to demonstrate that using the cumulative loss doesn't change the training dynamics, other than the loss values being different. Consistent with using the last-token loss, training $\mathsf{ATTN_M}$ with cumulative loss exhibits a single abrupt loss drop, and training $\mathsf{ATTN_S}$ exhibits multiple loss drops. In our in-context linear regression setup, training with cumulative loss is effectively equivalent to training with the last-token loss on sequences of varying lengths. Because the linear attention model includes a $1/N$ scaling factor, it can be trained and tested with varying sequence lengths $N$. For a detailed discussion on the $1/N$ scaling, please refer to our rebuttal to reviewer etiN.
It may be worth noting that computing loss for only the last token is also common in the literature, such as in Von Oswald et al. [5] and Zhang et al. [1].
3. **Softmax Attention**
We have added Figure 10 (see [URL](https://icml1939.tiiny.site/)) to show that the different training dynamics of linear $\mathsf{ATTN_M}$ and linear $\mathsf{ATTN_S}$ also occur in their softmax counterparts. Figure 10 follows the same setup as Figures 1–2 for linear attention, with the only difference being adding the softmax activation function for the attention calculation. We observe that softmax $\mathsf{ATTN_M}$ exhibits a single abrupt loss drop, whereas softmax $\mathsf{ATTN_S}$ undergoes multiple loss drops, separated by phases of conspicuously slower training. This demonstrates that our findings and theoretical intuition are not unique to linear attention but can also extend to softmax attention.
We've added the new figures to our appendix and will use the extra page in final revision to include as many as possible in the main text.
[1] Ruiqi Zhang, Spencer Frei, Peter Bartlett. Trained transformers learn linear models in-context. JMLR 2024.
[2] Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS 2023.
[3] Juno Kim, Taiji Suzuki. Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape. ICML 2024.
[4] Yu Huang, Yuan Cheng, Yingbin Liang. In-context Convergence of Transformers. ICML 2024.
[5] Johannes Von Oswald et al. Transformers learn in-context by gradient descent. ICML 2023.
---
Rebuttal Comment 1.1:
Comment: 1. I understand that many theoretic works assume v_i=u_i=0, but the problem considered here, i.e., the training dynamics, is more complex, and this issue becomes more important. I feel the authors had a good explanation of the issue and have some empirical results as in Appendix F. The theory aspect is not yet satisfactory, but that's understandable. I recommend moving some discussion forward.
2. I also understand that many theoretic works assume last-token ICL, but it is still not quite consistent with how we usually train transformers. It is true that "training with cumulative loss is effectively equivalent to training with the last-token loss on sequences of varying lengths", however, in the former, various parameters will have to balance between early token predictions (when not enough in-context demonstrations are available) and latter token predictions (when enough are available). I'd like to hear the authors' findings and/or explanation on how these conflicting requirements will play a role in the setting studied here, vs. the last-token only training.
3. A recent work by Nichani et al. analyzed the dynamics with the Softmax function. My question is more on the theoretic analysis with this softmax, instead of empirical observations. However, I accept the authors' explanation and find it sufficient.
I'm not sure ICML allows URLs, but the linked figures do not load. Please provide some more detailed verbal descriptions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your attentive engagement in the rebuttal process.
1. We appreciate that you raised the point that our question, i.e., the training dynamics, is more complex than many theoretic works, which we agree. We are glad to hear that you accepted our explanation, and will move some of Appendix F to main text.
2. You're correct that training with cumulative loss or with last-token loss on sequences of varying lengths raises the question of balancing early token predictions and later token predictions. We present here an analytical analysis.
Based on our derivations in Appendix C.4, the converged $\mathsf{ATTN_M}$ model implements
$$
\sum_{i=1}^H v_i \beta^\top U_i x_q = \beta^\top \Lambda \left[ \mathbb E \left( \frac1N x_n x_n^\top \right)^2 \right]^{-1} x_q \tag{*}
$$
To consider varying context lengths $N$, we denote the distribution of context lengths as $p(N)$. We now need to compute the expectation in Equation $(*)$ over the distributions not only of $x_n$ but also $N$, which yields
$$
\sum_{i=1}^H v_i \beta^\top U_i x_q = \beta^\top \left[ \Lambda + \mathbb E \left(\frac1N\right) (\Lambda + \mathrm{tr}(\Lambda)I) \right]^{-1} x_q
$$
Therefore, the context lengths only influence the converged model through the expectation $\mathbb E(1/N)$. For a fixed context length, $\mathbb E (1/N)=1/N$, which recovers Equation $(11)$ in our paper. For cumulative loss, $p(N)$ follows a uniform distribution over $\{1,2,\cdots, N_{\mathrm {max}}\}$. The expectation $\mathbb E(1/N)$ over a uniform distribution is the harmonic number divided by $N_{\mathrm {max}}$, which doesn't have a closed-form expression but can be easily computed for a specific finite $N_{\mathrm {max}}$.
Similarly for $\mathsf{ATTN_S}$, the fixed point condition in Equation $(17)$ becomes
$$
\sum_{i=1}^H v_i k_i q_i^\top = \sum_{d\in S_m} \lambda_d^{-1} \left( 1+\mathbb E\left(\frac1N\right) (1+\mathrm{tr}(\Lambda)/\lambda_d) \right)^{-1} e_d e_d^\top
$$
where the $\mathbb E(1/N)$ term was simply $1/N$ for the fixed context length case.
We'll add this new analysis into the paper. It's indeed an interesting and worth-mentioning fact that the varying context lengths only influence our results through the expectation $\mathbb E(1/N)$.
The URL has been fixed. We're sorry that it was down for a while. ICML author guidelines indicate anonymous URLs are allowed, though reviewers aren't obliged to click. | Summary: This paper provides a theoretical analysis of how linear attention models acquire in-context learning abilities through gradient descent training on an in-context linear regression task. The authors study two parameterizations of multi-head linear attention. In the merged key-query setting (ATTN$_M$), they show that the model has exactly two fixed points—a zero solution and a global minimum manifold—and that small-initialization training follows a single abrupt drop in loss before converging to a least-squares in-context regression solution. The authors also derive an analytical time-course in the special case of an identity covariance matrix and infinitesimal initialization. By contrast, in the separate key-query setting (ATTN$_S$), they show that the model admits exponentially many fixed points corresponding to different subsets of eigenvectors of the token covariance. In this scenario, the training dynamics involve multiple abrupt loss drops separated by plateaus, each plateau corresponding to learning one additional principal component in context. Although the final solution again approximates least-squares regression, early stopping yields principal component regression on a partial subset of components.
Claims And Evidence: The paper’s primary claims that multi-head linear attention with merged key–query exhibits an abrupt loss drop while separate key–query parameterization shows stage-wise, saddle-to-saddle dynamics are supported both by mathematical derivations and simulation results. The evidence is convincing within the assumed setting (small initialization).
Methods And Evaluation Criteria: The authors’ method is essentially to analyze multi-head linear attention in two parameterizations (merged vs. separate key/query) under gradient-flow training on an in-context linear regression task. The evaluation criteria revolve around loss trajectories under gradient flow, an the limiting in-context learning algorithm (whether the model replicates least-squares regression or principal component regression). Given that the paper’s focus is on theoretical understanding, the chosen tasks (toy in-context linear regression) and the analysis of training trajectories are appropriate.
Theoretical Claims: The authors’ method is essentially to analyze multi-head linear attention in two parameterizations (merged vs. separate key/query) under gradient-flow training on an in-context linear regression task. The evaluation criteria revolve around loss trajectories under gradient flow, an the limiting in-context learning algorithm (whether the model replicates least-squares regression or principal component regression). Given that the paper’s focus is on theoretical understanding, the chosen tasks (toy in-context linear regression) and the analysis of training trajectories are appropriate.
Experimental Designs Or Analyses: I think that the experimental design is well executed. The loss trajectories and weight evolution curves match the analytical results and confirms the soundness of the analysis. One potential limitation is that the experiments focus on relatively small $N$ and $D$ values; it would be valuable to see the experiments in larger settings.
Other than that, some additional things that might worth trying are the cases where the covariance matrix $\Lambda$ has repeated eigenvalues and check whether multiple heads learn in parallel or whether the training still unfolds in discrete stages. Also, the paper’s analysis hinges on small initialization, it would be instructive to demonstrate how differently scaled inits affect the plateau structure (especially in ATTN$_S$).
Supplementary Material: I skimmed through the proofs in appendix, see Theoretical Claims section.
Relation To Broader Scientific Literature: This paper draws on recent work on linear attention (e.g., Von Oswald et al., 2023; Zhang et al., 2024a,b) and connects the phenomenon of abrupt in-context learning during training transformers to a analytically tractable setting. It benefits from the results in deep linear networks (Saxe et al., 2014; Woodworth et al., 2020) to show how parameterization (merged vs. separate key-query) yields distinct training behaviors of a single abrupt loss drop or multiple progressive plateaus. Overall, the work extends in-context linear regression analyses to a multi-head linear attention setting and also connects to known convergence results from linear networks (Arora et al., 2019; Shamir, 2019).
Essential References Not Discussed: One specific omission is the “Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection” (Bai et al., 2023), which frames multi-head attention learning as algorithm selection. Even though it's not a convergence proof, it's still relevant to how, in the rank-1 case of this paper, individual heads each specialize to a different principal component and captures separate directions in the data.
Other Strengths And Weaknesses: Overall, I think that the paper is well-written and presented. One of the paper’s main strengths is its rigorous theoretical analysis that clarifies how different parameterizations affect the training dynamics of multi-head attention models. Besides, the paper well-integrates the prior results on linear networks. The experiments complements the theoretical discussions. The connection shown in the paper between progressive eigenvector learning and in-context abilities is also instructive.
That said, I believe the paper needs more justification on why the model they use for multi-head linear self-attention in the case of merged key-query matrices is appropriate. There's frequent connecting discussions which uses the limit of $N \to \infty$ and I believe equivalence to the model which doesn't normalize the output by $N$ should be discussed. Also, some discussions or experiments on how deviations from the idealized assumptions (such as small initialization or white input covariance) might change the results could be helpful.
Other Comments Or Suggestions: - This is something minor but the matrix $X_{\mu}$ is not defined before being referenced in Section 2.1.
- Again something minor, the paper’s main focus is on analyzing linear attention, yet Section 2.2 provides a relatively detailed introduction to the multi-head softmax attention model, which is not used in later analysis. Why did the authors choose to present the standard softmax-based model in detail rather than only mentioning it briefly?
Questions For Authors: 1. I want to ask why the authors chose to scale the model's output by $1/N$. Wouldn't the training procedure naturally allow the weights to adapt to the sequence length without that additional scaling
2. Although the Section 2.4 has the low rank matrix assumption to match the realistic multi-head architecture, the Section 2.3 omits that assumption. Why the difference? Doesn't this setup than diverge from established multi-head structure of transformers? Because in this case, you're effectively repeating the same input $X$ with different parameters $H$ times. Moreover, I think this also might hurt the equivalence to the two-layer neural net since this forces $W_1$ to be low rank. Whereas, this does not naturally occur with the random initialization.
1. I'm also a bit confused by the limit with respect to $N$ in line 246. As I said, introducing $1/N$ is artificial and we could also instead scale $W_i^{KQ}$ and $W_i^V$ by $1/\sqrt{N}$. This change supposed to not affect the input covariance $\Lambda$, yet, it seems that when we remove $1/N$ from the line 246, it affects the result, why? This also seems to effect the Equations (18) and (21).
1. In your analysis of ATTN$_S$, you assume eigenvalues are distinct and that each head learns one eigenvector sequentially. How important is this assumption in your proofs? How does the training behavior change if $\Lambda$ has multiple identical top eigenvalues? Is it possible for several heads to learn these degenerate eigenvectors in parallel, and does that merge or skip some of the plateaus?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for a very detailed and thoughtful review. We're glad to know that you find our findings instructive, rigorous, and well-presented. We'd like to present some new experimental results and respond to questions. All of our added new figures can be found at [URL](https://icml1939.tiiny.site/).
- **Rationale for $1/N$ Scaling**
We follow the scaling choice in seminal theory works on linear attention trained for in-context linear regression [1,2]. You're correct that the model can be trained to adapt to the sequence length without the $1/N$ scaling. However, including the scaling factor grants the model the flexibility to be trained and tested with varying context lengths $N$. We've added Figure 14 in which the models are trained on sequences of varying lengths -- this would be unfeasible without the scaling. To explain this point, consider that $\mathsf{ATTN_M}$ computes
$$
\mathsf{ATTN_M}(X) = X +\sum_{i=1}^H W^V_i \frac{XX^\top}N W^{KQ}_i X
$$
The blocks in $XX^\top$ contain the term $\sum_{n=1}^N x_n y_n$, which diverges as $N$ increases. In contrast, the blocks in $\frac{XX^\top}N$ contain $\frac1N \sum_{n=1}^N x_n y_n$, which remains bounded as $N$ increases. Including the $1/N$ scaling allows the model to process sequences of varying lengths without needing to adapt weights specifically for each $N$.
> "we could also instead scale $W_i^{KQ}$ and $W_i^V$ by $1/N$ ..."
You're correct that we can either scale the $XX^\top$ by $1/N$ or scale $W_i^{KQ}, W_i^V$ by $1/\sqrt N $. However, in both case, we arrive at the same expression in line 246, Equations (18) and (21). In our case, the $1/N$ term is present because of the $1/N$ we build into the model. In your case, the $1/N$ term is also present because $W_i^{KQ}, W_i^V$ are scaled by $1/\sqrt N$. Thus, line 246, Equations (18) and (21) are not affected by whether the scaling is built in the model or adapted through training.
Thank you for this constructive feedback. We'll incorporate this more detailed explanation for the scaling in revision.
- **Why Full-Rank $W^{KQ}_i$ in $\mathsf{ATTN_M}$?**
We agree that $\mathsf{ATTN_M}$ diverges from the established multi-head attention structure in practical transformers. Our intention in studying both $\mathsf{ATTN_M}$ and the low-rank $\mathsf{ATTN_S}$ is that: $\mathsf{ATTN_M}$ is widely used in theoretical literature on transformers (see a list of references in lines 60-65) and low-rank $\mathsf{ATTN_S}$ is closer to the parametrization in practical transformers. Given that a significant portion of the theoretical literature has used $\mathsf{ATTN_M}$, we believe that analyzing it can be of independent interest to transformer theorists, if not to practitioners.
Your insight regarding that the multiple heads in $\mathsf{ATTN_M}$ are somehow "repetitive" or redundant is also right. In other words, the multi-head $\mathsf{ATTN_M}$ is overparameterized. While overparameterization seems unnecessary in terms of expressivity, its effect on training dynamics and convergence properties is a fundamental, long-standing question in deep learning, and is worth studying. Perhaps surprisingly, we find that the $W^{KQ}_i$ weights in different heads of $\mathsf{ATTN_M}$ stay parallel in training, as shown in Figure 4 and Appendix C.7. This provides a hint at how overparameterized attention models learn and generalize.
- **Effect of Initialization Scale**
We've added Figure 11 for this. Please refer to our rebuttal to reviewer Hj7w for details (we reached space limit for this rebuttal).
- **Multiplicity of Eigenvalues**
Thank you for this interesting question. We have added Figure 12 in which $\Lambda$ has repeated eigenvalues. Six runs from different random initialization are plotted. Linear $\mathsf{ATTN_S}$ with rank-1 key and query may still exhibit plateaus when learning the repeated eigenvalues due to the different random initial weights in each head. It may also skip the plateau for certain random seeds.
In the case with distinct eigenvalues, the plateau duration is determined by both the size of the eigenvalue and the random initialization. In the case with equal eigenvalues, the plateau duration is determined by only the random initialization.
- **Simulations with Larger** $N, D$
We have added Figure 13 in which we train $\mathsf{ATTN_M}$ and $\mathsf{ATTN_S}$ on a dataset with $N=128,D=64$. The loss trajectories are qualitatively similar to those in lower-dimensional cases, despite being noisier.
- We'll add the reference “Transformers as Statisticians" (Bai et al., 2023). It's indeed an omission -- thanks for your note!
- We'll cut back the introduction for softmax attention in Section 2.2 and clarify the subscript $\mu$ in $X_\mu$ denotes the index of a training sample in Section 2.1. Thanks for your suggestion.
[1] Ahn, Cheng, Daneshmand & Sra. NeurIPS 2023. arXiv:2306.00297
[2] Zhang, Frei & Bartlett. JMLR 2024. arXiv:2306.09927
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their rebuttal. My concerns are satisfactorily addressed. I will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your engagement in the rebuttal process, and for your thoughtful review. We are glad to hear that your concerns have been satisfactorily addressed.
If you feel it is appropriate, we would be grateful if you might consider raising the score. | Summary: This paper investigates the training dynamics of in-context learning (ICL) in multi-head linear attention models trained on in-context linear regression tasks. The authors examine two parametrizations: one with merged key and query weights (ATTNM) and one with separate key and query weights (ATTNS).
The paper provides a theoretical explanation of how ICL abilities evolve during gradient descent training, revealing either abrupt acquisition or progressive improvements depending on the parametrization of the key and query matrices.
Claims And Evidence: The authors provide:
- mathematical analysis of fixed points in the loss landscape
- precise characterization of gradient descent trajectories
- analytical solutions for special cases
- simulations that match their theoretical predictions
Methods And Evaluation Criteria: The authors rely on
- a well-established ICL regression task
- empirical validation through simulations that match theoretical results
Theoretical Claims: I did not check the proofs of the theoretical claims
Experimental Designs Or Analyses: The simulations given in the paper cover
- training simulation of both ATTNM and ATTNS models with different initializations
- tracking of the evolution of weights and loss during training
- comparison of empirical results with theoretical predictions
Supplementary Material: I only skimmed through the supplementary material
Relation To Broader Scientific Literature: The paper is in the intersection of several research topics including:
- work showing abrupt emergence of ICL
- work on the optimization properties of attention
- work on the stage-wise dynamics of transformers
Essential References Not Discussed: The paper covers thoroughly the relevant literature.
Other Strengths And Weaknesses: Strengths:
-The work bridges optimization theory and the emergent capabilities of transformer models
- The characterization of saddle-to-saddle dynamics in ATTNS offers a clear explanation for progressive ICL acquisition
Weaknesses:
- The analysis is limited to linear attention, and it's unclear how much of it generalizes to softmax attention used in practical transformers
The paper focuses on ICL regression tasks, which are simpler than the complex language tasks where ICL is typically observed
Other Comments Or Suggestions: It would be valuable to include a discussion on how the theoretical insights might translate to practical training recommendation
Questions For Authors: How sensitive are your findings to the choice of initialization scale?
The paper demonstrates that ATTNM is equivalent to a two-layer fully-connected network with cubic features. Could this have a practical implication on transformer architectural choice ?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your interest in our work and for a thoughtful review. We appreciate how you positioned our contribution at the intersection of three very important and relevant topics, and we’re glad that you found our results provide a clear explanation for progressive ICL acquisition. Below, we present some new experimental results and respond to questions.
- **Softmax Attention**
We have added Figure 10 (see [URL](https://icml1939.tiiny.site/)) to show that the different training dynamics of linear $\mathsf{ATTN_M}$ and linear $\mathsf{ATTN_S}$ also occur in their softmax counterparts. Figure 10 follows the same setup as Figures 1–2 for linear attention, with the only difference being adding the softmax activation function for the attention calculation. We observe that softmax $\mathsf{ATTN_M}$ exhibits a single abrupt loss drop, whereas softmax $\mathsf{ATTN_S}$ undergoes multiple loss drops, separated by phases of conspicuously slower training. This demonstrates that our findings and theoretical intuition are not unique to linear attention but can also extend to softmax attention.
- **Effect of Initialization Scale**
We'd like to clarify that many of our results do not rely on initialization: the fixed points, the connection to MLP, and the ICL algorithm we identified for the converged network. Then, we analyze the training dynamics under small initialization.
We've added Figure 11 (see [URL](https://icml1939.tiiny.site/)) to demonstrate the effect of initialization scale on dynamics. For $\mathsf{ATTN_M}$, increasing the initialization scale shortens the plateau before the single abrupt loss drop. For $\mathsf{ATTN_S}$, increasing the initialization scale shortens all the plateaus between any two successive abrupt loss drops. For both models, the loss trajectories with the largest initialization exhibit an exponential decay shape, which is the hallmark of lazy learning (NTK regime). In existing theoretical literature, the standard choice of initialization is either the large initialization of exponential-shaped lazy learning, or small initialization of rich feature learning as in our paper. The practical initialization scheme is usually somewhere in between. In Figure 11, we see that dynamics from the intermediate initialization seems like a mix of the exponential-shaped and the abrupt sigmoid-shaped curves. Such mixed curves are often seen in practice, e.g. in induction head emergence in natural language settings ([Olsson et al. 2022](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#argument-phase-change); Argument 1). Our dynamics analysis focuses on the rich learning regime and provides analytical insight into such phenomena, which we believe is a first step toward understanding dynamics in naturalistic settings.
- **Practical Implication**
Given that the scope of our paper is theoretical, we believe our findings and intuition can inform practice by providing a deeper understanding of common phenomena observed in practical settings -- such as abrupt learning and the progressive acquisition of ICL. Though not directly resulting in practical recommendation, we hope our work provides a foundation for future work along the spectrum from theory to practice, ultimately leading to more effective architectural and optimization choices.
Regarding the implication of the equivalence between $\mathsf{ATTN_M}$ and MLP with cubic features, it can add a perspective to the open question of whether MLPs can learn in-context (Boix-Adsera et al [1]; Tong and Pehlevan [2]). Moreover, we reveal that MLP may perform ICL more comparable to attention models when provided with polynomial features instead of the original sequence. This may explain why Boix-Adsera et al (Figure 25) observed that MLP fails to learn ICL with the original sequence $X$ as input, but succeeds when the input is augmented with $XX^\top$. We'll add a discussion of this point in revision.
[1] Boix-Adsera, E., Saremi, O., Abbe, E., Bengio, S., Littwin, E., & Susskind, J. M. When can transformers reason with abstract symbols? ICLR 2024.
[2] Tong, W. L., & Pehlevan, C. MLPs Learn In-Context on Regression and Classification Tasks. ICLR 2025.
We've added the new figures to our appendix and will use the extra page in final revision to include as many as possible in the main text. | Summary: The paper investigates the theoretical understanding of gradient descent training dynamics for multi-head linear self-attention models performing in-context linear regression tasks. It analyzes two parametrizations of linear self-attention: one where key and query matrices are merged (ATTNM), and another with separate key and query matrices (ATTNS). For the merged model (ATTNM), the authors identify two fixed points in training dynamics and find an abrupt loss drop, analytically solved under specific conditions. In the separate model (ATTNS), exponentially many fixed points are found, with the loss trajectory showing saddle-to-saddle dynamics where the model progressively learns principal components of the data. Ultimately, both models implement approximate least squares regression in-context upon convergence, providing insights into how parameterization influences the evolution of in-context learning (ICL) abilities during gradient descent training
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Specifically, the authors provide detailed theoretical analyses, derivations, and simulation results that align closely with their main theoretical claims. Here’s a breakdown of how effectively the claims are supported:
### Claims Supported by Strong Evidence:
1. **Existence of Fixed Points**:
- **Evidence:** The authors derive precise analytic forms for fixed points in both the merged (ATTNM) and separate (ATTNS) parametrizations. The existence of these fixed points is rigorously demonstrated mathematically (e.g., equations provided explicitly such as (9a)-(9b), (15)-(17)), providing strong support.
2. **Abrupt Loss Drop (ATTNM)**:
- **Evidence:** Both theoretical analysis (with analytical solutions, such as equation (10)) and detailed simulations (shown in Figure 1) clearly show the existence and nature of the single abrupt drop in loss. The analytical solutions are explicitly derived, and the simulations match these theoretical predictions.
3. **Saddle-to-Saddle Dynamics (ATTNS)**:
- **Evidence:** Authors convincingly illustrate through detailed analytic derivations and numerical simulations (Figure 2) the sequence of abrupt loss drops separated by plateaus. Their scalar ordinary differential equation approximation (equation (20)) closely matches simulated trajectories, reinforcing their claims.
4. **ICL Algorithm Interpretation (Principal Component Regression)**:
- **Evidence:** The authors explicitly derive the algorithm implemented at each fixed point, demonstrating how the model progressively learns principal components (equations (18), (21)). Simulations confirm the model aligns with eigenvectors sequentially during training (Figure 2c).
5. **Impact of Parametrization (Rank of Key-Query Matrices)**:
- **Evidence:** Simulation results (Figure 3) convincingly demonstrate the effects of different ranks (R = 1, 2, 4, 8) on the dynamics and duration of loss plateaus, supporting claims about parametrization affecting training dynamics.
### Potential Minor Limitations (though not significantly problematic):
- **Simulation Scope**:
The numerical simulations strongly align with the theory, yet they predominantly focus on low-dimensional cases (e.g., dimension \(D=4,8\)). Although suitable for clear visualization and validation, higher-dimensional simulations could strengthen the general applicability of their results.
In summary, all key claims in this paper are backed by clear, precise, and convincing evidence.
Methods And Evaluation Criteria: n/a. theory paper, not so many methods and evaluations
Theoretical Claims: Yes. They all appear reasonable to me.
Experimental Designs Or Analyses: n/a
Supplementary Material: n/a
Relation To Broader Scientific Literature: The paper "Training Dynamics of In-Context Learning in Linear Attention" contributes to the broader scientific literature by providing a theoretical analysis of how multi-head linear self-attention models develop in-context learning abilities through gradient descent training. This work builds upon and extends existing research in several key areas:
1. **In-Context Learning in Transformers**: The study aligns with previous findings that transformers can perform in-context learning, adapting to new tasks based on input data without explicit parameter updates. Notably, it complements empirical observations by offering a theoretical framework that explains how these capabilities emerge during training.
2. **Training Dynamics and Fixed Points**: The identification of fixed points in training dynamics and the characterization of abrupt loss drops or saddle-to-saddle dynamics provide a deeper understanding of how self-attention models learn. This insight is valuable for designing more efficient training protocols and architectures.
3. **Parametrization Impact**: By analyzing different parametrizations (merged vs. separate key and query matrices), the paper highlights how architectural choices influence learning dynamics. This perspective is crucial for developing models that balance computational efficiency with learning efficacy.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: **Generalizability Beyond Linear Tasks:**
The paper thoroughly analyzes linear attention trained on linear regression tasks. Have you investigated whether similar saddle-to-saddle dynamics and progressive improvement phenomena also occur for non-linear tasks or in practical transformer models with softmax attention?
**Effects of Initialization Conditions:**
The analytical derivations assume very small (or infinitesimal) initialization scales. How sensitive are your theoretical results (fixed points, saddle-to-saddle dynamics) to larger or more realistic initialization conditions often used in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review and for raising constructive questions. We're glad to know that you think our claims are backed by clear, precise, and convincing evidence. Below, we present some new experimental results and respond to questions.
- **Softmax Attention**
We've added Figure 10 (see [URL](https://icml1939.tiiny.site/)) to show that the different training dynamics of linear $\mathsf{ATTN_M}$ and linear $\mathsf{ATTN_S}$ also occur in their softmax counterparts. Figure 10 follows the same setup as Figures 1–2 for linear attention, with the only difference being adding the softmax activation function for the attention calculation. We observe that softmax $\mathsf{ATTN_M}$ exhibits a single abrupt loss drop, whereas softmax $\mathsf{ATTN_S}$ undergoes multiple loss drops, separated by phases of conspicuously slower training. This demonstrates that our findings and theoretical intuition are not unique to linear attention but can also extend to softmax attention.
- **Effect of Initialization Scale**
We'd like to clarify that many of our results do not rely on initialization: the fixed points, the connection to MLP, and the ICL algorithm we identified for the converged network. Then, we analyze the training dynamics under small initialization.
We've added Figure 11 (see [URL](https://icml1939.tiiny.site/)) to demonstrate the effect of initialization scale on dynamics. For $\mathsf{ATTN_M}$, increasing the initialization scale shortens the plateau before the single abrupt loss drop. For $\mathsf{ATTN_S}$, increasing the initialization scale shortens all the plateaus between any two successive abrupt loss drops. For both models, the loss trajectories with the largest initialization exhibit an exponential decay shape, which is the hallmark of lazy learning (NTK regime). In existing theoretical literature, the standard choice of initialization is either the large initialization of exponential-shaped lazy learning, or small initialization of rich feature learning as in our paper. The practical initialization scheme is usually somewhere in between. In Figure 11, we see that dynamics from the intermediate initialization seems like a mix of the exponential-shaped and the abrupt sigmoid-shaped curves. Such mixed curves are often seen in practice, e.g. in induction head emergence in natural language settings ([Olsson et al. 2022](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#argument-phase-change); Argument 1). Our dynamics analysis focuses on the rich learning regime and provides analytical insight into such phenomena, which we believe is a first step toward understanding dynamics in naturalistic settings.
- **Simulations with Larger** $N, D$
Thank you for this suggestion. We've added Figure 13 (see [URL](https://icml1939.tiiny.site/)) in which we train $\mathsf{ATTN_M}$ and $\mathsf{ATTN_S}$ on a dataset with $N=128,D=64$. The loss trajectories are qualitatively similar to those in lower-dimensional cases, despite being noisier. This suggests that our findings do not break in high-dimensional settings.
We've added the new figures to our appendix and will use the extra page in final revision to include as many as possible in the main text. | null | null | null | null | null | null |
Overcoming the Curse of Dimensionality in Reinforcement Learning Through Approximate Factorization | Accept (poster) | Summary: After rebuttal: Thanks for addressing my questions! I'll keep my rating.
===
This paper considers FMDP and provides a powerful framework to tackle the curse of dimensionality in reinforcement learning (RL) by breaking down high-dimensional MDPs into smaller, independently evolving components, i.e., using factored MDP.
Tis paper introduces approximate factorization, which extends FMDPs to handle imperfectly factored models. Additionally, a graph-coloring-based optimal synchronous sampling strategy is proposed to enhance sample efficiency.
A model-based and the first model-free variance-reduced Q-L algorithm for FMDP are proposed and their theoretical guarantees are shown. They both outcompete the STOA.
Numerical results are provided in the appendix.
Claims And Evidence: Yes they are
Methods And Evaluation Criteria: Yes they do.
Theoretical Claims: The theoretical bounds for model-based algorithms look good to me.
The theoretical bounds for model-free algorithms look good to me.
Experimental Designs Or Analyses: Numerical results are included in the appendix. Synthetic MDP tasks and an electricity storage control problem are considered and the STOA algorithms are compared.
Supplementary Material: I didn't check the supplementary material in detail, but I will take a closer look if necessary.
Relation To Broader Scientific Literature: Many real-world applications feature structured transition probabilities and reward functions that can reduces the sample complexity of high-dimensional MDPs.
For example, robotic control problems can be effectively modeled using the factored MDP (FMDP) framework.
Essential References Not Discussed: no
Other Strengths And Weaknesses: The results are solid.
Both model-free and model-based methods are proposed.
Results improve the state of the art bounds.
Detailed comparison with the exact factorization case and the comparison of looseness of the bounds are discussed in the paper.
Other Comments Or Suggestions: It might be helpful to include a proof sketch to summarize the main techniques and novelty of the proof in the main text.
Can the algorithm be extended to the case where factorization is unknown?
Does the algorithm require factorization bias term Delta as an input? If so, is it possible to generalize to the unknown bias case?
Can you comment on the computation complexity of the algorithms? Appendix H might be related, but it is still unclear to me.
Questions For Authors: The variance reduction method appears to rely on the reference function. Have you considered variance-dependent bounds, similar to those in Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and greatly appreciate the recognition of our work's novelty and contributions! Below, we provide point-by-point responses to the questions.
**Q1: It might be helpful to include a proof sketch to summarize the main techniques and novelty of the proof in the main text.**
Thank you for your constructive suggestion! We will include a "Proof Sketch" section in the revised version to highlight the key ideas and techniques behind our results.
* **Sample Complexity of Model-based Method**.
The main challenge lies in handling statistical correlations introduced by factorized sampling when estimating the finite-sample errors. The analysis relies on cross-component correlation control and variance analysis with customized Bernstein concentration bounds (e.g., Lemma B.2 and B.3-B.5).
* **Sample Complexity of Model-free VRQL-AF**.
The additional challenges for this algorithm come from the additional correlation during the two-timescale variance-reduced updates. Our improvement builds on the tailored factored empirical Bellman operator for approximately factorized MDPs, combined with a variance-reduction approach to minimize the variance of the stochastic iterative algorithm. The key idea is to show that this operator retains the non-asymptotic behavior of the standard one while achieving greater sample efficiency.
We develop a refined statistical analysis to tightly control estimation errors and iteration variance across multiple factored components (e.g., Lemmas G.2, G.3, G.6, G.8, G.9).
**Q2: Can the algorithm be extended to the case where factorization is unknown?**
Thank you for the question. Yes, our algorithm works with any given factorization, even if it is not perfectly aligned with the true MDP. For each factorization, our analysis provides the corresponding sample complexity and bias guarantees. While the bias is typically unknown, the algorithm remains valid regardless.
In practice—especially in FMDPs or approximately factored MDPs—domain knowledge often reveals strong and weak dependencies (e.g., Appendix B), which enables the choice of factorizations that balance sample efficiency and bias. We will clarify this in the revised version.
**Q3: Does the algorithm require factorization bias term Delta as an input? If so, is it possible to generalize to the unknown bias case?**
Thank you for the question. The algorithm does not require $\Delta$ as an input. While our analysis quantifies how performance depends on the factorization bias, the algorithm itself runs without access to knowing $\Delta$, and is fully applicable in the unknown bias case.
**Q4: Can you comment on the computation complexity of the algorithms? Appendix H might be related, but it is still unclear to me.**
This is a great point. Our algorithm consists of two components: computing the cost-optimal sampling policy and executing the RL procedure. The RL component follows standard implementation, while the cost-optimal sampling problem is discussed in more detail in Appendix H.
Solving the optimal sampling policy, though combinatorial in nature, is computationally efficient for two reasons: (1) the number of factors $K$ is typically small--on the order of $\log(|\mathcal{S}||\mathcal{A}|)$, and (2) it can be solved efficiently using modern integer programming solvers (e.g., Gurobi, CPLEX) or established approximate algorithms.
In particular, the problem reduces to a standard weighted graph coloring formulation of size $\mathcal{O}(K)$, with a lot of established solutions[1–2] that can solve near-optimal solutions within seconds even $K$ is very large (e.g., $K=5000$).
>[1] Shen, Y., Sun, Y., Li, X., Eberhard, A., & Ernst, A. (2022, June). Enhancing column generation by a machine-learning-based pricing heuristic for graph coloring. AAAI.
>[2] Dokeroglu, T., & Sevinc, E. (2021). Memetic Teaching–Learning-Based Optimization algorithms for large graph coloring problems. EAAI.
**Q5: The variance reduction method appears to rely on the reference function. Have you considered variance-dependent bounds, similar to those in [1]?**
This is a great point. The variance-dependent bounds aim to express regret or sample complexity in terms of the instance-dependent variance of the value function[1–2]. Adapting these ideas to our setting may involve incorporating variance-related quantities into the algorithm's step-wise parameters, and avoiding the standard worst-case scaling of $V$ and $Var(V)$ as $1/(1-\gamma)$ and $1/(1-\gamma)^2$. We agree this is a promising direction for future work.
>[1] Zhao, Heyang, et al. "Variance-dependent regret bounds for linear bandits and reinforcement learning: Adaptivity and computational efficiency." COLT (2023).
>[2] Yin, Ming, and Yu-Xiang Wang. "Towards instance-optimal offline reinforcement learning with pessimism." NeurIPS (2021). | Summary: This paper studied one key challenge of RL: curse of dimensionality related to high-dimensional state-action spaces. To address the challenges, the authors introduce approximate factorization, which extends the Factored Markov Decision Process (FMDP) framework to the imperfect misspecification case. Under a generative model, this paper studies both model-based and model-free RL algorithm and provide sample complexity bound.
Claims And Evidence: In introduction (line 52), this paper claims that " limited extensions to non-linear methods, such as neural networks, which
typically rely on specific assumptions (Fan et al., 2020; Xu & Gu, 2020)." However, their are also a line of researches studied non-linear low-rank mdps proposed by [1], the extensions are not so "limited", I think the authors should do more literature reviews.
[1] FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs.
Methods And Evaluation Criteria: The evaluation criteria sample complexity is standard for RL problem.
Theoretical Claims: I didn't check the proof in detail.
Experimental Designs Or Analyses: This paper does not provide experiments.
Supplementary Material: I read the illustrative examples part of appendix.
Relation To Broader Scientific Literature: This paper is related to function approximation in RL to deal with curse of dimensionality. However, the assumption of generative model makes it quite limited.
Essential References Not Discussed: Many other papers tried to deal with curse of dimensionality by function approximation including both linear and non-linear approximation (like I state in Claims And Evidence*), I think the authors should also provide a discussion on these papers.
Other Strengths And Weaknesses: I think the biggest limitation for this paper is the assumption of generative models, which does reflect the real data collection process of RL under mdps.
Moreover, in line 200, this paper claims "In this case, instead of sampling from the global state-action space, we only need to sample from the substate-subaction spaces, which reduces the sample complexity". Such a sampling is even more unnatural.
Finally, this paper does not provide experiment to validate their theoretical results.
Other Comments Or Suggestions: line 136, right column, $P_k(s'[Z_k^S]|x)\in \mathbb{R}^{|\mathcal{X}|\times|\mathcal{S}[Z_k^S]|}$, in the defintion of the set of feasible marginal transition probabilities (line 159 left column), $P_k$ is defined to be in $\mathbb{R}^{|\mathcal{X}[Z^P_k]|\times|\mathcal{S}[Z_k^S]|}$.
Questions For Authors: line 136, right column, why do you need to define the set of feasible marginal transition probabilities as this set? And can you explain in high level what kind of properties does this set have?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful questions. Below, we provide point-by-point responses to the reviewer's comments.
**Q1: About the claim of " limited extensions to non-linear methods, such as neural networks, which typically rely on specific assumptions (Fan et al., 2020; Xu & Gu, 2020)." There exists a line of research studied non-linear low-rank mdps proposed by [1], the extensions are not so "limited".**
We agree with the reviewer that there are some non-linear methods with different assumptions. In light of this, we will remove the term “limited” and include additional relevant references in the revised version.
**Q2: This paper does not provide experiments to validate their theoretical results.**
Due to space limitations, we included our experimental results in Appendices I and J (pages 46–50), which include a synthetic MDP and a real-world electricity storage task. These results demonstrate improved sample efficiency and validate our theoretical findings. We will highlight them more clearly in the main text in the next version, as suggested.
**Q3: I think the biggest limitation for this paper is the assumption of generative models, which doesn't reflect the real data collection process of RL under mdps.**
Thank you for raising this point.
The generative model is a widely used and fundamental setting in the RL literature. It is particularly practical for applications with high-fidelity simulators (e.g., robotics). This setting is often the first step—prior to tackling more complex settings such as trajectory-based sampling—in developing a theoretical understanding of an algorithm’s behavior [1–3]. Even in this setting, the non-asymptotic analysis of (approximately) factored MDPs remains heavily understudied. Since we are the first to extend factorizability, establish model-free algorithms, and improve sample complexity guarantees for factored MDPs, we focus on the generative model setting in this work.
That being said, we believe our algorithm and theoretical results can potentially be extended to the Markovian sampling setting. Specifically, as long as the behavior policy induces a uniformly ergodic Markov chain, the algorithm should work since each substate-subaction pair is sampled infinitely often along the trajectory. Recent techniques (e.g., [4]), based on mixing times and conditioning, also provide tools to convert generative-model bounds into trajectory-based bounds. Rigorously studying this extension is a promising direction for future work.
>[1] A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 2002.
>[2] On the Sample Complexity of Reinforcement Learning with a Generative Model. ICML 2012.
>[3] Sample-efficient reinforcement learning for linearly-parameterized mdps with a generative model. NeurIPS 2021.
>[4] Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction. NeurIPS 2020.
**Q4: This paper claims "sample from the substate-subaction spaces". Such a sampling is even more unnatural.**
We appreciate the reviewer’s concern. We do not assume direct access to substate-subaction transitions. Instead, we sample full transitions and then extract the components for specific dimensions, which is standard in factored MDPs.
For example, in an electricity storage control problem with state $s=(s_1,s_2)$ where $s_1$ is the storage level and $s_2$ is the electricity price. The price $s_2$ is typically externally generated, observable, and unaffected by the system's actions. Therefore, we can estimate $P(s'_2|s_2)$ from full transition samples $(s_1,s_2,a,s'_1,s'_2)$. We will clarify this in the revised version.
**Q5: $P_k$ used in line 136 and line 159 are with different shapes.**
Thank you for pointing this out. We acknowledge the slight abuse of notation. We will revise the usage of $P_k$ in line 136 to make it consistent throughout the paper.
**Q6: Why do you need to define the set of feasible marginal transition probabilities? And what are its properties at a high level?**
Thank you for the question. At a high level, we introduce this feasible set to capture how factorization error may arise when full transitions are aggregated into factor-level marginals when the MDP is not perfectly factorizable. It thus bounds the possible deviations from perfect factorization for error definition, which is crucial for the theoretical analysis and doesn't influence algorithm design.
For instance, for an MDP with state $s=(s_1,s_2)$ and action $a$, we aim to estimate the transition $P(s'_1 |s_1,a)$ of a factor. What we actually observe is $P(s'_1 |s_1,s_2,a)$ (because we need to sample from a full state-action pair). This is consistent across all $s_2$ in the perfectly factored case, but in the approximate case, different $s_2$ may introduce biases. The feasible set captures this variability and is used to define the factorization error. We will add more discussions in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors’ response, which partially addresses my concern. I have increased my score to 3, although I remain concerned about the generative model and hope the authors can consider extending their work to real RL sampling scenarios, perhaps as future work.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the thoughtful follow-up and for raising the score. We truly appreciate your engagement with our work and your constructive suggestion regarding extending our method to real RL sampling scenarios (Markov sampling).
**Motivation**: We fully understand the concern about the use of a generative model. This design choice was made to help clearly isolate and present the core algorithmic and theoretical contributions of our work in a more "clean" setting, which is a common choice in RL theory literature [1-3]. While Markovian sampling better reflects real-world scenarios, it introduces sample dependencies and dynamics that, although quite addressable with standard techniques [4], would introduce complexity to both the analysis and its presentation in the main body—making it harder to clearly communicate our main structural contributions within the limited space of a conference paper. Incorporating these elements could have shifted the focus away from the core insights we aimed to highlight.
**Justification**: That said, we fully agree with the reviewer that extending the analysis to standard Markovian sampling is a valuable direction. In fact, this extension is conceptually straightforward. To illustrate: in a generative model setting, one requires $|\mathcal{S}||\mathcal{A}|$ samples to uniformly cover all state-action pairs. In contrast, Markovian sampling under a behavior policy requires a trajectory long enough to get the samples covering all $(s, a)$ pairs. Intuitively, this only results in an additional sample size governed by coverage and mixing properties of the Markov chain, but it does not quite affect the algorithm’s design or the overall sample complexity scaling—only the constants.
**Extension**: To address the reviewer's concern, we plan to include an additional lemma in the next version to formally characterize this extension and further clarify the generality of our approach.
With this clarification, we now highlight the three core contributions of our work:
1. **An approximate factorization framework** that relaxes the strong assumption of perfect factorization in existing FMDPs, enabling more realistic and flexible modeling of complex systems.
2. **An optimal synchronous sampling strategy based on graph coloring**, which, for the first time, enables a single sample to be used in estimating multiple factors, significantly improving sample efficiency by reducing redundant queries across correlated components.
3. **Two corresponding algorithms with improved and near-minimax sample complexity guarantees**:
(i) a model-based method under approximate factorization, and
(ii) the first model-free, variance-reduced Q-learning algorithm for factored MDPs, enabled by a customized factored empirical Bellman operator and a refined statistical analysis framework.
We are grateful for the reviewer’s thoughtful feedback and hope our response has clarified both the reasoning behind our design choices and how to extend our approach to more general sampling settings. We will include a corresponding formal lemma in the next version and are excited to further develop this line of research.
>[1] Wang, B., Yan, Y., & Fan, J. (2021). Sample-efficient reinforcement learning for linearly-parameterized mdps with a generative model. Advances in Neural Information Processing Systems, 34, 23009-23022.
>[2] Clavier, P., Shi, L., Le Pennec, E., Mazumdar, E., Wierman, A., & Geist, M. (2024). Near-Optimal Distributionally Robust Reinforcement Learning with General $ L_p $ Norms. Advances in Neural Information Processing Systems, 37, 1750-1810.
>[3] Li, G., Wei, Y., Chi, Y., Gu, Y., & Chen, Y. (2020). Breaking the sample size barrier in model-based reinforcement learning with a generative model. Advances in neural information processing systems, 33, 12861-12872.
>[4] Li, G., Wei, Y., Chi, Y., Gu, Y., & Chen, Y. (2020). Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction. Advances in neural information processing systems, 33, 7031-7043. | Summary: The authors study Factored Markov Decision Processes (FMDPs). They extend this framework in significant ways and amend several of its shortcomings. For example, they develop a model-based RL algorithm achieving the first near-minimax sample complexity for FMDPs.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, to some extent.
Experimental Designs Or Analyses: Yes, to some extent.
Supplementary Material: No.
Relation To Broader Scientific Literature: The authors study Factored Markov Decision Processes (FMDPs). They extend this framework in significant ways and amend several of its shortcomings.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: No.
Questions For Authors: Page 2:
Please add some discussion on the discount factor $\gamma$.
Page 3:
Should $s$ appear in the RHS of Equation (1)?
Page 4:
Can you please elaborate on the definition captured in Equation (6)?
Page 5:
Can you please elaborate on $\kappa_p \ll K_\omega$?
Page 6:
What is the main take-away from Equation (9)?
Page 7:
Algorithm 3:
How can we efficiently implement step 7?
Not sure if Equation (10) is efficient?
Page 8:
The parameters in Theorem 6.1 are a bit confusing :)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for acknowledging the contributions of our work and providing insightful feedback! Below, we provide detailed responses to the reviewer's questions.
**Q1: Please add some discussion on the discount factor $\gamma$.**
In the discounted setting (which is standard in MDPs and reinforcement learning), the discount factor $\gamma \in [0, 1)$ determines how much weight the agent places on immediate versus future rewards. Smaller values prioritize short-term gains, while larger values emphasize long-term planning. We will include a more detailed discussion in the next version.
**Q2: Should $s$ appear in the RHS of Equation (1)?**
The current state $s$ is present on the right-hand side through the variable $x=(s,a)$, which denotes the state-action pair.
**Q3: Can you please elaborate on the definition captured in Equation (6)?**
Thank you for your constructive comment. Eq.(6) defines the **joint sampling set** used to simultaneously sample transitions for two factors $k_1$ and $k_2$, whose scopes are $Z_{k_1}$ and $Z_{k_2}$, respectively.
Here, $x[Z_{k_1}^P]$ and $x[Z_{k_2}^P]$ enumerate the relevant input dimensions (by sampling from these dimensions, we can estimate the respective transitions of factors) for the two factors, $x^{\text{default}}[- (Z_{k_1}^P\cup Z_{k_2}^P)]$ assigns fixed default values for all the other irrelevant dimensions. The modulo operation ensures that the sampling cycles through all possible values within each relevant scope, and $D_{\max} = \max(|\mathcal{X}[Z_{k_1}^P]|,|\mathcal{X}[Z_{k_2}^P]|)$ ensures full coverage of both factors' input spaces. This construction enables compact and efficient joint sampling, allowing shared samples to be reused across different factors.
**Q4: Can you please elaborate on $\kappa_p\ll K_\omega$?**
As discussed in Sections 4.1, 5.2, and Appendix H, the relationship $\kappa_p \ll K_\omega$ arises from the graph coloring perspective. We construct an undirected graph with $K_\omega$ nodes—one per factor—and edges between dependent factors. The **chromatic number** $\kappa_p$ is the minimum number of colors needed so that no adjacent nodes share a color, enabling efficient sampling in our setting.
Intuitively, $\kappa_p$ can be much smaller than $K_\omega$, especially when the graph is sparse. For example, in a star graph with one central node connected to $K_\omega-1$ outer nodes (which are not connected to each other), only two colors are needed: one for the center and another shared by all outer nodes. Thus, $\kappa_p=2\ll K_\omega$. In practice, many FMDPs exhibit sparse interaction structures among factors, leading to graphs with low chromatic numbers. Therefore, in such settings, it is common that $\kappa_p\ll K_\omega$ and even $\kappa_p= \mathcal{O}(1)$.
**Q5: What is the main take-away from Equation (9)?**
The key takeaway is that our algorithm significantly reduces the sample complexity depending on the size of the state-action space. Specifically, our dependence is $\sum_{k \in [\kappa_p]} |\mathcal{X}[Z^P_k]|$.
1. Compared with classical MDPs with dependence $|\mathcal{S}||\mathcal{A}|$, ours is much smaller because $|\mathcal{X}[Z^P_k]|$ is the state-action space size of a single factor, which is exponentially smaller than $|\mathcal{S}||\mathcal{A}|$.
2. Compared with SOTA FMDP results, which is $\sum_{k \in [K_\omega]} |\mathcal{X}[Z^P_k]|$, ours is also much smaller because we have shown $\kappa_p\ll K_\omega$ before.
3. We show that when $\kappa_p= \mathcal{O}(1)$, our result is minimax optimal.
**Q6: Algorithm 3: How can we efficiently implement step 7? Not sure if Equation (10) is efficient?**
Thank you for the comment. Equation (10) provides a formal and rigorous definition of the empirical factored Bellman operator used in Step 7 of Algorithm 4. While the notation may seem complex, the implementation is efficient in practice.
Due to the synchronous sampling design, a single sample can be reused to estimate transitions for multiple factors. Equation (10) simply extracts the relevant dimensions from each sample to update the corresponding factors. This makes the implementation straightforward and efficient.
We will include more discussions in the next version.
**Q7: The parameters in Theorem 6.1 are a bit confusing :)**
Thank you for the comment. The parameters in Theorem 6.1 correspond to key configuration choices in the VRQL-AF algorithm, including the number of epochs, epoch length, learning rate, and others. We agree that their roles could be explained more clearly and will revise the theorem statement and surrounding text to improve clarity in the next version. | Summary: This paper studied factorized Markov Decision Processes (MDPs) and proposed two algorithms based on a newly developed synchronous sampling methods. This method efficiently sample and estimate different component of the transitions and therefore help the algorithm to achieve less sample complexity in both model-based and model-free approach. This paper also characterize the effect of imperfect modeling, when the MDP is an approximated factorized MDP.
Claims And Evidence: Yes. The claims are generally well-supported by the analysis and results.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not fully check the proof, but it seems correct to me based on the main paper.
Experimental Designs Or Analyses: The experiments in appendix are generally sound and valid. One potential issue is the lack of comparison to a policy-based method such as policy gradient.
Supplementary Material: I briefly checked the experimental parts in the supplementary materials.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength: the synchronize sampling method seems interesting and novel.
Weaknesses: The cost-optimal sampling problem might be hard to solve.
Other Comments Or Suggestions: N/A
Questions For Authors: This paper studied tabular RL, where the state and action space are finite. Is it possible to consider infinite state space? Because it is most likely the case in real-world applications.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for acknowledging our novelty and contributions and providing insightful feedback. Below, we provide detailed responses to address the reviewer's comments.
**Q1: The experiments in Appendix are generally sound and valid. One potential issue is the lack of comparison to a policy-based method such as policy gradient.**
Thank the reviewer for acknowledging the soundlessness of our experiments. Since our work mainly focuses on leveraging approximate factorizable structures to design customized value-based algorithms, we primarily compare against the same types of value-based baselines. That being said, the proposed algorithmic framework can also be used to solve the policy evaluation subproblem (which is based on solving Bellman equations) within an actor-critic framework. This can be combined with policy gradient methods (i.e., the actor) to develop efficient policy-based algorithms that leverage the approximate factorization structure. We leave providing theoretical guarantees and conducting numerical simulations in this direction as future work.
**Q2: The cost-optimal sampling problem might be hard to solve.**
Thank the reviewer for raising this point. Intuitively, the cost-optimal sampling problem involves grouping the $K$ factors of the MDP to minimize the total sampling cost. While the problem may appear combinatorial, it is computationally efficient in our setting for two reasons: (1) the number of factors $K$ is typically small (on the order of $\log(|\mathcal{S}||\mathcal{A}|)$, where $|\mathcal{S}||\mathcal{A}|$ is the state-action space size), (2) there exist established exact and approximate algorithms to solve such problems efficiently.
Specifically, the problem can be formulated as an integer program of size $\mathcal{O}(K)$, solvable by modern solvers like Gurobi and CPLEX when $K$ is moderate (e.g., $K\leq 500$). Furthermore, as discussed in Appendix H, it can be reduced to the classical weighted graph coloring problem, for which many scalable algorithms exist. In practice, even problems with thousands of nodes (e.g., $K=5000$) can be solved to near-optimality within seconds [2–4].
To address the reviewer's concern, we will include a more detailed discussion in the revised version.
>[1] Chen, X., Hu, J., Li, L., & Wang, L. Efficient Reinforcement Learning in Factored MDPs with Application to Constrained RL. In International Conference on Learning Representations. (2021)
>[2] Shen, Y., Sun, Y., Li, X., Eberhard, A., & Ernst, A. (2022, June). Enhancing column generation by a machine-learning-based pricing heuristic for graph coloring. In Proceedings of the AAAI conference on artificial intelligence (Vol. 36, No. 9, pp. 9926-9934).
>[3] Saeed, A., Husnain, A., Zahoor, A., & Gondal, R. M. (2024). A comparative study of cat swarm algorithm for graph coloring problem: Convergence analysis and performance evaluation. International Journal of Innovative Research in Computer Science and Technology (IJIRCST), 12(4), 1-9.
>[4] Dokeroglu, T., & Sevinc, E. (2021). Memetic Teaching–Learning-Based Optimization algorithms for large graph coloring problems. Engineering Applications of Artificial Intelligence, 102, 104282.
**Q3: This paper studied tabular RL, where the state and action space are finite. Is it possible to consider infinite state space? Because it is most likely the case in real-world applications.**
We thank the reviewer for their question. While our work focuses on the tabular setting to enable a clear theoretical understanding, we believe it can potentially be extended to MDPs with infinite (or continuous) state spaces. Specifically, the core structural insight—factored transition kernels—can naturally extend to continuous domains. For example, a discrete factorization $P(s'_1, s'_2 \mid s_1, s_2) = P(s'_1 \mid s_1) P(s'_2 \mid s_2)$ can be analogously expressed in terms of transition densities as $f(s'_1, s'_2 \mid s_1, s_2) = f(s'_1 \mid s_1) f(s'_2 \mid s_2)$ in the continuous case. However, working with MDPs with infinite (or continuous) state spaces presents other challenges, such as the potential unboundedness of the reward function (e.g., in the LQR setting in control). In addition, certain parameterizations (i.e., function approximation) might be required to ensure that Q-functions are updated infinitely often for each state-action pair. We view this as a promising direction for future work. | null | null | null | null | null | null |
Selective Prompt Anchoring for Code Generation | Accept (poster) | Summary: This paper proposes a method for improving LLM's performance at test time. The method helps the LLM keep the focus on the task, avoiding dilution as the number of tokens generated grows for complex tasks such as coding. The method anchors a part of the prompt that specifies the task to accomplish by amplifying its corresponding logits before next token generation. The method compares the logits generated by the LLM with the orginal token embedding matrix and those generated by an embedding matrix where the part corresponding to the anchor has been masked; the difference in logit strength is used to amplify the output logits. The anchor part of the prompt is defined by special tokens. The empirical evaluation of the method comprises 5 code benchmarks (HumanEval, BigCodeBench, etc.), 6 LLM open-source base models (DeepSeek, CodeGEN, etc.) and 5 SOTA methods (ReACT, PASTA, etc.). Results shows consistent improvements (13%) across all benchmarks compared to the baseline and better performance than SOTA methods. Several ablation and parameter studies are performed to determine the best part of the prompt to mask, the influence of the prompt length, the effect of using other programming languages and the tuning of the anchoring strength.
## update after rebuttal:
After carefully reviewing the author's rebuttal and considering the other reviews, i raise my score to Accept.
Claims And Evidence: Yes, the effects of anchoring part of the prompt to avoid dilution are shown clearly on several empirical studies.
Methods And Evaluation Criteria: Yes, the evaluation methods are well suited for the application. The code-generation benchmarks used are well-known and effective.
Theoretical Claims: The mathematics of the method are clearly explained, including finite-difference approximation used to reduce computational complexity. There are no theoretical proofs for bounds though.
Experimental Designs Or Analyses: The experimental design is fine, including the evaluation metrics (pass@k). The choice of 6 open-source code LLMs is extensive and representative: CodeGen-Mono-350M (Nijkamp et al.,2023), CodeLlama-7B (Rozi`ere et al., 2024), StarCoder2-15B (Lozhkov et al., 2024), and DeepSeek-Coder-Instruct-1.3B, 6.7B, and 33B (Guo et al., 2024). The choice of SOTA methods to compare to is fine (although i may be unaware of some recent proposed methods).
Supplementary Material: Yes, i reviewed all parts of the supplemental.
Relation To Broader Scientific Literature: The works on code-generation, attention and logit arithmetic are well cited.
Essential References Not Discussed: I'm not aware of essential references not discussed in the paper.
Other Strengths And Weaknesses: Strengths:
- the method is general, model agnostic and can be applied beyond coding tasks. The intuition behind the approach is sound and well explained.
- the method is fast as it does only adds 60% over the base model and does not used much extra memory. It is significantly faster than the SOTA methods compared against (> 3x).
- The paper is well written and organized. It is easy to follow.
- Code is provided on anonymous github
Weaknesses:
- The method applies masking to the token embeddings and arithmetic to the logits, so it needs to be inserted within the LLM model. No discussion is made on how complex that is. The modifications would also be model-dependent. It is not discussed how hard it is to implement these modifications for the chosen models. Are there commonalities that can be abstracted ? Overall, this approach is an integral part of an open-source model code, and cannot just be applied on top of an existing models in an easy way.
- The hyperparameter that controls the strength of the logit amplification is model and dataset -dependent. It needs to be tuned as it exhibits a clear maxima. This effect is well discussed in appendix J and the authors show that, while the parameter is strongly model-dependent, the method can still be effective with a fixed value across the benchmark datasets. Unfortunately, the authors do not compare directly in a table the fixed approach with the other SOTA methods. This would be desired, as the real question is whether the proposed approach beats SOTA for a deployable model (dataset-independent).
Other Comments Or Suggestions: see above.
Questions For Authors: no questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: `The method applies masking to the token embeddings and arithmetic to the logits, so it needs to be inserted within the LLM model. No discussion is made on how complex that is. The modifications would also be model-dependent. It is not discussed how hard it is to implement these modifications for the chosen models. Are there commonalities that can be abstracted ? Overall, this approach is an integral part of an open-source model code, and cannot just be applied on top of an existing models in an easy way.`
The implementation is not model-dependent. All six models in our paper are built upon the Huggingface Transformer library, which offers the APIs to directly access and edit token embeddings and logits. So we only need to implement one single decoding method for all these models using these APIs based on the SPA algorithm. Please check Lines 2859-3019 for the implementation of this decoding method in our anonymous repository (https://anonymous.4open.science/r/Selective-Prompt-Anchoring-3693/weighted_utils/weighted_text_utils.py). We currently cannot integrate SPA with closed-source models like GPT-4 since these models only provide a text-based prompt and response API. However, SPA will be applicable once they offer APIs to their logits and token embeddings.
`The hyperparameter that controls the strength of the logit amplification is model and dataset -dependent. It needs to be tuned as it exhibits a clear maxima. This effect is well discussed in appendix J and the authors show that, while the parameter is strongly model-dependent, the method can still be effective with a fixed value across the benchmark datasets. Unfortunately, the authors do not compare directly in a table the fixed approach with the other SOTA methods. This would be desired, as the real question is whether the proposed approach beats SOTA for a deployable model (dataset-independent).`
Thank you for the suggestion. We conducted an additional experiment to evaluate a fixed anchoring strength across different benchmarks. Specifically, we got this fixed value by averaging the tuned anchoring strengths across benchmarks for each model. In the table below, we update Table 2 by adding a new condition (SPA-fixed). It shows that SPA is effectively deployable in new scenarios once a reasonable value is set. We will add more discussion about the practical deployment of SPA in Appendix J.3.
| Method | Improvement |
|----------------|:-------------:|
| PASTA | + 1.2 |
| Self-Debugging | + 4.2 |
| Self-Edit | + 1.8 |
| Self-Planning | + 3.6 |
| ReAct | + 1.3 |
| SPA-fixed | + 5.0 |
| SPA | + 7.7 | | Summary: This work identifies attention dilution as a cause of code performance worsening as the context (generated code) increases. They subsequently propose a solution based on attention steering to upweight relevant tokens. This effectively shortens the effective context of the model and they show substantial performance (12.9%) improvement.
Claims And Evidence: The consistent improvement is impressive especially if omega can be consistently selected without bootstrapping on a subset of problems.
The claims seem well substantiated.
Methods And Evaluation Criteria: The motivation of the proposed method is difficult to understand but boils down to a simple reweighing of the logits based on masking vs no masking of the prompt text (anchoring text).
Theoretical Claims: "For simplicity, we demonstrate by making the entire user prompt x as the anchored text." As figure 1 indicates the choice of the anchored text is important this simplification does not help and makes it more confusing as to what the Masking does. It seems that the approach is to mask out the anchored text and see the impact on the log probabilities.
Ei(X, ω) is not formally defined and so It's not clear in the derivation why "Scaling the semantics of X by ω times is not equivalent to multiplying X by ω.".
Experimental Designs Or Analyses: It's not clear why exactly there's a 1.25X inference overhead. Is this simply a result of applying omega to the original embeddings? Figure 1 and eq 14 seems to indicate a parallel decoding. Is this included in the 1.25X overhead.
Line 175 come out of nowhere. Why is it that you only run this steering when there are failed test cases? If this consistently leads to improved performance it seems natural to always run with the steering. Additionally, this would implicitly avoid any errors introduced by incorrectly setting omega -- leading to upside only. In practice, you would be incurring at least 2x the overhead by having to generate and then run test cases, before rerunning. Is this rerunning included in the 1.27x overhead?
Supplementary Material: I read the appendix especially to understand how omega is selected.
Relation To Broader Scientific Literature: This work proposes a paired decoding technique that's especially effective for coding applications. The derivation is missing some details but it's a neat approach to steering models to pay more attention to relevant code segments. This work relates well to localization for coding agents. In that literature, it's understood that the precision of localization has a large impact on the agent's performance. This work seems to indicate clever attention steering can reduce the effective context and implicitly does localization.
Essential References Not Discussed: Previous work has shown attention blocks tend to focus on the recent tokens but also on early tokens (corresponding to system prompt):
StreamingLLMs: https://arxiv.org/pdf/2309.17453
In settings where the model does indeed focus on the early tokens mostly, the dilution may not be of that much of a concern. However, it would be interesting if it was a code model specific phenomena.
Other work also looks at paired decoding capturing the differential signal based on the prompt:
context-aware decoding: https://arxiv.org/pdf/2305.14739
Other Strengths And Weaknesses: I worry that MBPP and HumanEval are in the memorization regime. It would support the argument to demonstrate the same trends on LiveCodeBench or other coding tasks to show this behavior also occurs when the model is operating on uncontaminated data.
I really like that the attention steering is a very general approach and should apply also to vision and other non-code language tasks. This initial work provides strong evidence to explore the impact in other domains.
Other Comments Or Suggestions: It would be good to have the discussion on how to select omega in the main body of the text and move parts of the derivation to the appendix. Empirically, it seems the choice of omega is important.
Questions For Authors: Can you provide intuition especially for the code domain why the last layer "has been shown to represent the most accurate attention distribution" (line 86)?
In what way is the attention being steered and not just the final logits? This may be clarified by clarifying Ei(X, ω).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: `I worry that MBPP and HumanEval are in the memorization regime... `
Thank you for the suggestion. We conducted a new experiment on LiveCodeBench (10/1/2024-2/1/2025). The results show that SPA remains effective.
| Model | LiveCodeBench |
|-----------------------------|---------------------------|
| Codegen-mono (350M) | 1.1 |
| +SPA | 1.5 (+0.4) (36%↑) |
| DeepSeek-Coder (1.3B) | 6.5 |
| +SPA | 9.2 (+2.7) (42%↑) |
| DeepSeek-Coder (6.7B) | 7.8 |
| +SPA | 10.8 (+3.0) (38%↑) |
| CodeLlama (7B) | 3.8 |
| +SPA | 4.0 (+0.2) (5%↑) |
| StarCoder2 (16B) | 7.0 |
| +SPA | 8.2 (+1.2) (17%↑) |
| DeepSeek-Coder (33B) | 11.9 |
| +SPA | 15.8 (+3.9) (33%↑) |
`It's not clear ... Is this rerunning included in the 1.27x overhead?`
The inference overhead includes running test cases, the two forward passes, and the logit arithmetic operations. We did a further analysis of these different sources of overhead and found that the overhead for running test cases only incurs a very small overhead (0.1s on average) compared to the entire inference process (9.6s). Thus, only performing steering in case of test failures actually reduces the overhead. We did not perform a parallel decoding but we think this is a great idea and will discuss how this will further reduce the overhead in future work. Thank you!
In our experiments, we use test cases in the benchmark instead of prompting the LLM to generate new test cases. Leveraging existing test cases is a common practice in code generation, such as [1] and [2]. But we agree that if there are no test cases available, prompting the LLM for test generation could be a worthwhile solution even with the additional overhead.
[1] Chen et al. Teaching large language models to self-debug. ICLR 2024.
[2] Zhang et al. Self-Edit: Fault-aware code editor for code generation. ACL 2023.
`"For simplicity, we demonstrate by making the entire user prompt x as the anchored text."...`
Sorry for the confusion. We initially intended to use anchoring the entire prompt as an example scenario to illustrate how the attention steering mechanism works in Section 3.3. We will change this writing and present the anchored text more generally and formally.
`Ei(X, ω) is not formally defined so It's not clear why "Scaling the semantics of X by ω times is not equivalent to multiplying X by ω."`
Ei(X,ω) represents the augmented embedding matrix of Ei (Line 142 after Eq. 5), where the semantic influence of X on the generated output is scaled by ω. Simply multiplying the embedding of X by ω does not simply improve the “**semantic influence**” of X since the embedding of X also encodes other non-semantic information such as the positional information. This is why we compute the difference between the logits when masking and unmasking X to cancel out noises and some of the non-semantic information. We will clarify this in the paper.
`In what way is the attention being steered and not just the final logits?`
As shown in Section 3.3., SPA can mathematically simulate attention steering via logit arithmetics. We chose not to directly modify self-attention since it is brittle and costly. For example, PASTA requires an expensive model profiling stage to identify usable attention headers to steer. Furthermore, only modifying self-attention layers does not account for the impact of other components like the feedforward layers and may cause an adversary effect given that self-attention and other components are trained together. As shown in Table 2, SPA is more computationally efficient and also achieves better performance than PASTA.
`Can you provide intuition for the code domain why the last layer "represent the most accurate attention distribution"?`
According to [3], while lower attention layers capture local dependencies such as the variable relationship within an expression, deeper layers capture more abstract representations with long-distance dependencies such as the control flow. Deeper layer attention distribution mirrors how humans understand programs by integrating information across the project. [4] confirmed this by showing attention distributions in the last layer produce the highest alignment between model and human.
[3] Wan et al. What Do They Capture? A Structural Analysis of Pre-Trained Language Models for Source Code. ICSE 2022.
[4] Kou et al. Do Large Language Models Pay Similar Attention Like Human Programmers When Generating Code? FSE 2024.
`It would be good to move how to select omega in the main body.`
Thank you for the suggestion. We will move Appendix J.1 to Section 5. | Summary: This paper identifies an attention dilution problem in code generation using LLMs, where models pay decreasing attention to the user prompt as more code tokens are generated. To address this issue, the authors propose Selective Prompt Anchoring (SPA), a model-agnostic approach that amplifies the contextual impact of user prompts during generation. SPA works by calculating the difference between logit distributions from original and masked versions of anchored text, then scaling this difference by a hyperparameter. Experiments across multiple benchmarks (HumanEval, MBPP, etc.) and models demonstrate consistent improvements, with Pass@1 increasing a bit.
Claims And Evidence: The primary claims regarding attention dilution and SPA's effectiveness are well-supported. The authors provide persuasive empirical evidence demonstrating attention patterns across multiple models (Figures 2-4). The performance improvements are consistently observed across diverse models and benchmarks.
However, the claim that attention dilution is the reason for code generation errors could be better substantiated. While correlation between incorrect code and longer generation length is shown, this doesn't necessarily show causation, e.g, "code is longer" maybe just mean the question itself is harder.
Methods And Evaluation Criteria: - The evaluation is comprehensive, using established benchmarks (HumanEval, MBPP, and variants) along with multilingual code generation (HumanEval-X) and more complex real-world tasks (BigCodeBench).
- The comparison against both attention steering (PASTA) and prompting methods (Self-Debugging, Self-Edit, etc.) provides a well-rounded assessment.
- inference time is also analyzed, but maybe better to discuss a little bit about potential memory overhead or precomputation requirements
Theoretical Claims: The mathematical derivation of SPA in Section 3 is generally sound.
Experimental Designs Or Analyses: The experimental design is thorough with appropriate controls and ablations.
A limitation is the selection of "anchored text." This approach appears to be somewhat fixed and not "clever", and constrained to HumanEval-related tasks and may not generalize well to more complex programming scenarios (e.g. in some other code tasks, NL might not be that important compared to code") . While the authors explore various selection strategies in Section 5.4, a more intelligent approach for identifying optimal anchored text across diverse programming contexts would significantly strengthen the practical applicability of SPA.
Supplementary Material: I reviewed the appendices, but did not carefully examine code repositories for reproducibility.
Relation To Broader Scientific Literature: - The paper effectively situates SPA within both code generation and attention steering literature. The connection to the psychological concept of the "anchoring effect" provides an interesting interdisciplinary perspective.
- The work advances upon prior attention steering methods like TOAST and PASTA by offering a more model-agnostic approach with lower computational overhead. It also contributes to the growing body of research on training-free methods for improving LLM performance.
- A notable constraint is that SPA appears most effective for code generation tasks specifically. The paper's own experiments in Section 5.6 show limited effectiveness on other generative tasks (e.g., MMLU), suggesting task-specificity that restricts its broader applicability in the LLM literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See above comments
Other Comments Or Suggestions: See above comments
Questions For Authors: - My understanding is that the SPA method amplifies the attention of specific tokens of the prompt. Now, would this approach, to some extent, change the model’s original behavior, and like, possibly affecting outputs that were initially correct? It seems plausible that it could introduce some bias, but I suppose the actual impact would depend on the context and how the method is applied.
- How would SPA perform with more dynamic anchored text selection strategies that change during the generation process?
- The paper shows that SPA is particularly effective for longer prompts. Would it be suitable for more complex programming tasks like super long prompts (even whole repos) and sometimes, random prompt (hard to define which parts are more important)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: `While correlation between incorrect code and longer generation length is shown, this doesn't necessarily show causation, e.g, "code is longer" maybe just mean the question itself is harder.`
This is a good point! We investigated this using LiveCodeBench, which provides the difficulty level for each task. For tasks with the same difficulty level, we still observed a significant difference between correct solutions and incorrect solutions. We appreciate your insightful feedback and will include this new result in the appendix.
| | Easy | Medium | Hard |
|---------|:--------:|:--------:|:--------:|
| Passed | 294 | 475 | 400 |
| Failed | 418 | 664 | 784 |
`inference time is also analyzed, but maybe better to discuss a little bit about potential memory overhead or precomputation requirements`
There is no precomputation requirement. For memory overhead, SPA needs to store the logits computed from masked prompt embeddings. Theoretically, the extra memory is equal to vocabulary_size * token_embedding_dimension * logit_size.
Suppose:
- vocabulary_size = 50,000
- token_embedding_dimension = 4,096
- logit_size = 2 bytes
The logit overhead will be 50,000 * 4,096 * 2 ≈ 390 MB.
In practice, this memory overhead can be further reduced because low-ranked tokens do not contribute significantly. We can only augment the logits of a few top tokens. For example, if we consider the top 100 logits, the overhead will dramatically drop to 800 KB.
We will discuss this in the paper.
`My understanding is that the SPA method amplifies the attention of specific tokens of the prompt. Now, would this approach, to some extent, change the model’s original behavior, and like, possibly affecting outputs that were initially correct? It seems plausible that it could introduce some bias, but I suppose the actual impact would depend on the context and how the method is applied.`
Indeed, as shown in Figure 6, different anchoring strengths ω of SPA would have different impacts. If the strength is too low, the performance improvement is low; if it is too high, the LLM starts becoming biased, leading to a decline in performance. Therefore, the actual impact depends on how we set this hyperparameter. Fortunately, we observe that it is simple to tune a balanced value where performance significantly improves. Please check more details in Appendix J. We will clarify this in the paper.
`A limitation is the selection of "anchored text." This approach appears to be somewhat fixed and not "clever", and constrained to HumanEval-related tasks and may not generalize well to more complex programming scenarios (e.g. in some other code tasks, NL might not be that important compared to code") . While the authors explore various selection strategies in Section 5.4, a more intelligent approach for identifying optimal anchored text across diverse programming contexts would significantly strengthen the practical applicability of SPA.`
`How would SPA perform with more dynamic anchored text selection strategies that change during the generation process?`
It is a very interesting future work to investigate how to intelligently select anchored text. One idea is to use LLMs to first select important words or phrases to anchor on before code generation. For tasks where NL may not be important compared to code (e.g., code translation), one idea is to use static code analysis to identify important code elements (e.g., function calls and variable names heavily used in the code).
Furthermore, the anchoring strength, which determines the degree of attention steering, can also change dynamically at different steps. One idea is to develop a method to calculate the relevance of words and phrases in the user prompt to each decoding step. Based on the relevance scores, SPA can dynamically assign higher values to more relevant contexts while assigning lower values to less relevant ones.
We will discuss these ideas as future work in the paper.
`The paper shows that SPA is particularly effective for longer prompts. Would it be suitable for more complex programming tasks like super long prompts (even whole repos) and sometimes, random prompts (hard to define which parts are more important)?`
SPA is suitable for super long prompts since specific instructions like "do not use lambda expressions in the generated code" are likely to be buried in the long prompt and not followed by the LLM. SPA will help improve the influence of such specific instructions.
In fact, it may be more helpful to use SPA in a self-improving pipeline for fairly complex tasks. Based on the errors of the initially generated code, we prompt the LLM to identify which instructions or requirements are not followed in the prompt and then use SPA to improve the influence of these instructions/requirements. | null | null | null | null | null | null | null | null |
Efficient Motion Prompt Learning for Robust Visual Tracking | Accept (poster) | Summary: The paper presents a lightweight Motion Prompt Tracking module that integrates motion cues into vision-based trackers. Using a motion encoder with spatial, point, and temporal encodings and a Transformer-based fusion decoder with adaptive weighting, it improves robustness against occlusion and distractors. The method employs prompt learning, freezing the visual model while fine-tuning only the added modules.
## update after rebuttal
Thank the author for the rebuttal. I will keep my original rating which was already positive.
Claims And Evidence: Main Claims:
- Motion prompts can effectively complement visual features to improve tracking robustness.
- A non-linear temporal positional encoding is theoretically justified and empirically optimal.
- The proposed module improves performance on multiple tracking benchmarks with minimal additional resources.
Evidence:
- Extensive quantitative results across benchmarks (VOT2018/2020/2022, LaSOT, LaSOTEXT, TNL2K, TrackingNet) demonstrate significant performance improvements.
- Ablation studies highlight the contributions of each component (positional encodings, data augmentation strategies, trajectory length).
- Visualizations and qualitative comparisons illustrate improved robustness over baseline trackers.
Methods And Evaluation Criteria: Methods:
- Motion Encoder: Encodes historical trajectories using spatial, point, and non-linear temporal positional encodings.
- Fusion Decoder: A two-layer Transformer network that fuses motion and visual features.
- Adaptive Weight Mechanism: Dynamically balances the contribution of visual and motion cues based on an auxiliary prediction.
- Training Strategy: Prompt learning with frozen visual backbones, reducing training time and memory usage.
Evaluation Criteria:
- Performance metrics such as Expected Average Overlap (EAO), AUC, precision, and robustness.
- Comparison against state-of-the-art trackers and baseline models on standard benchmarks.
- Analysis of training efficiency, convergence, and ablation of different components.
Theoretical Claims: - The paper derives a non-linear temporal positional encoding using the Nyquist frequency principle.
- It demonstrates mathematically that setting $\alpha \approx 7.23$ optimally controls the frequency range of the temporal encoding.
- Experimental results confirm that this setting achieves the best performance compared to other $\alpha$ values.
Experimental Designs Or Analyses: Design:
- Integration of the MPT module into various baseline trackers (OSTrack, SeqTrack, ARTrack).
- Evaluation on multiple challenging benchmarks to test robustness and accuracy.
Analyses:
- Ablation studies that individually assess the impact of each positional encoding, data augmentation strategies (CutMix, sparse sampling), and trajectory lengths.
- In-depth performance analysis comparing hard versus easy scenarios to validate robustness improvements.
Visualizations of attention maps and training dynamics to further substantiate the method's benefits.
Supplementary Material: - Detailed proofs of the non-linear temporal positional encoding.
- Extended implementation details, including architecture settings and training parameters.
- Additional ablation studies and in-depth performance analyses (e.g., effects of trajectory length, data augmentation).
- Extra visualizations and qualitative comparisons, including video demonstrations.
Relation To Broader Scientific Literature: - The work builds on established vision-based tracking methods and recent advancements in prompt learning and Transformer architectures.
- It positions itself relative to methods like ARTrack, ODTrack, and other sequential training approaches by emphasizing efficiency and robustness.
- The integration of motion cues into visual tracking is contextualized in traditional motion estimation techniques and modern deep learning frameworks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weaknesses:
- The focus on robustness may come at a slight cost to tracking accuracy compared to some sequential training methods.
- The approach might be sensitive to the quality of motion inputs, which could be further explored.
- Limited discussion on potential challenges when extending the method to all moving subjects, which is multi-object tracking scenarios. Does it cause any conflict?
Other Comments Or Suggestions: - A deeper analysis of failure cases and conditions under which the motion cues might mislead the tracker would be beneficial.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback, and address the concerns below.
1. **Sensitivity to the trajectory quality:** The proposed adaptive weight mechanism in our method can mitigate the impact of noisy trajectories to some extent. As shown in Figure 3 (Page 8 in submitted paper), the predicted weights align with the accuracy of motion predictions which implicitly reflect the quality of input trajectories.
|Success Rate|[0.0,0.1)|[0.1,0.2)|[0.2,0.3)|[0.3,0.4)|[0.4,0.5)|[0.5,0.6)|[0.6,0.7)|[0.7,0.8)|[0.8,0.9)|[0.9,1.0)|
|-|-|-|-|-|-|-|-|-|-|-|
|OSTrack_MPT|1%|21%|26%|32%|42%|63%|82%|94%|99%|100%|
To further analyze the sensitivity of our MPT to varying noisy trajectories, we evaluate the success rate (IoU>0.5) under different quality trajectory inputs on LaSOTEXT, as shown in the table above. Here, [a,b) represents the range where the average IoU of the trajectory is no less than a and less than b. The following conclusions can be drawn:
* The success rate increases significantly with the improvement of trajectory quality, especially when the trajectory IoU is in the range of [0.6, 1.0).
* Our method shows a certain level of robustness to noisy trajectory inputs, obtaining a success rate of 21% even when the trajectory IoU is in the range of [0.1, 0.2).
Thanks for your valuable advice, we will add the above analysis to our paper.
2. **Deep analysis of success/failure cases:** To deeply explore the success/failure cases of our MPT, we evaluate the proportion (Rate) of these two conditions on LaSOTEXT, along with the corresponding average trajectory IoU (Avg_IoU_traj) and average IoU of the previous tracking result (Avg_IoU_pre).
Here, we define a success case when the predicted IoU of our MPT exceeds that of the visual baseline by more than 0.3. Conversely, when the predicted IoU of our MPT is lower than that of the visual baseline by more than 0.3, we define it as a failure case. As shown in the table below, success cases of our MPT usually occur when the historical trajectory IoU is around 0.72, while failure cases usually happen when the historical trajectory IoU is around 0.18. Additionally, our method is more likely to succeed when the tracking result of the previous frame is better, whereas poorer previous tracking results are more likely to lead to failure. This is mainly because the tracking result of the previous frame affects the current search area and the completeness of the target.
||Rate|Avg_IoU_traj|Avg_IoU_pre|
|-|-|-|-|
|Success|2%|0.72|0.76|
|Failure|1%|0.18|0.1|
Moreover, we separately evaluate the distribution of trajectory IoU and previous frame IoU in both success and failure cases, shown in the following two tables. In success cases, the input trajectory IoU primarily falls within a relatively broader range of [0.6, 1.0), especially in [0.8, 0.9). In failure cases, the input trajectory IoU mainly falls within the range of [0.0, 0.2), particularly in [0.0, 0.1). This indicates that addressing misleading tracking issues caused by extremely poor-quality trajectories remains a challenge for our method. This is a limitation of our method that requires future research.
|Distribution_succ|[0.0,0.1)|[0.1,0.2)|[0.2,0.3)|[0.3,0.4)|[0.4,0.5)|[0.5,0.6)|[0.6,0.7)|[0.7,0.8)|[0.8,0.9)|[0.9,1.0)|
|-|-|-|-|-|-|-|-|-|-|-|
|IoU_traj|1%|2%|3%|4%|6%|6%|10%|20%|34%|15%|
|IoU_pre|2%|0%|1%|3%|4%|6%|9%|16%|37%|23%|
|Distribution_fail|[0.0,0.1)|[0.1,0.2)|[0.2,0.3)|[0.3,0.4)|[0.4,0.5)|[0.5,0.6)|[0.6,0.7)|[0.7,0.8)|[0.8,0.9)|[0.9,1.0)|
|-|-|-|-|-|-|-|-|-|-|-|
|IoU_traj|55%|12%|8%|6%|6%|5%|4%|3%|1%|0%|
|IoU_pre|74%|11%|4%|2%|2%|2%|1%|1%|1%|1%|
Thanks for your valuable advice, we will add the above analysis to our paper.
3. **Extension to MOT:** Different from single-object tracking (SOT), typical multi-object tracking (MOT) frameworks involve object detection and association. Therefore, integrating our MPT into an MOT framework will differ from that of SOT. However, the core idea of our method remains highly compatible with MOT problem. Specifically, historical trajectories of each object can be utilized as prompts to enhance MOT performance. On the one hand, our MPT can be integrated into the object detection process to enable motion-aware detection, thereby improving detection accuracy. On the other hand, during the association process, our MPT can help maintain the coherence of the object trajectory, achieving more accurate matching.
In addition, compared with SOT, MOT usually encounters more complex backgrounds with a higher number of distractors. As an advantage of our MPT, introducing motion prompts can potentially help address distractor challenges. However, complex scenes are more prone to generating noisy trajectories, which may also pose a risk of misleading tracking issues.
Thanks for your valuable advice, we will add the above discussion to our paper. | Summary: This paper presents a flexible and efficient motion prompt tracking method, which complements existing trackers based on motion prompts. It includes a motion encoder with three different positional encodings, a Transformer-based fusion encoder, and an adaptive weight mechanism. The tracker is evaluated on several benchmarks and achieves encouraging results.
Claims And Evidence: Comprehensive analysis and support have been provided concerning the significance of motion patterns and the need for greater computational resources for trackers relying on sequential training. The MPT module exhibits outstanding performance on certain datasets, such as the VOT dataset; its enhancement on other datasets is less conspicuous.
Methods And Evaluation Criteria: The proposed MPT method makes sense as it addresses the challenges of incorporating temporal coherence in tracking. Its flexible integration and training-efficient design fit the problem well.
Theoretical Claims: The MPT module can be flexibly integrated into existing vision-based trackers and enhance various baseline trackers, which is demonstrated through a description of its internal components and their interactions. For example, within a lightweight frame level fine-tuning, the MPT achieves comparable performance to those sequential-based SOTA methods.
Although the text provides an overview of the design and anticipated functions of MPT, when it comes to trackers like Artrack, which already incorporate temporal information, the enhancements that MPT makes turn out to be fairly restricted. Likewise, its contributions on datasets such as LaSOT and TrackingNet prove to be rather meager as well.
Experimental Designs Or Analyses: 1. The Efficiency comparison is insufficient. For example, the increase in FLOPS ought to be provided.
2. The authors are suggested to provide the results of full-finetuning the head instead of training an adaptive weight mechanism to support effectiveness and efficiency or using LoRA.
Supplementary Material: Yes. Training details and more analysis.
Relation To Broader Scientific Literature: This paper contributes to the broader literature by refining motion-prompt-based tracking through improved encoding, Transformer-based fusion, and adaptive weighting. It aligns with prior research on Transformers in tracking but introduces motion prompt mechanisms to enhance efficiency and flexibility.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths: The paper introduces a motion encoder with three different positional encodings, which appears to be an effective motion-tracking approach.
Weaknesses:
1.Include experimental results comparing with LoRAT and other recent trackers to ensure fair benchmarking.
2. Provide efficiency metrics (e.g., FLOPs, latency, memory footprint) to justify the claim of being “efficient.”
Other Comments Or Suggestions: More comparisons about PEFT-based tracking methods, like LoRAT, are suggested.
Questions For Authors: Please refer to the above suggestion.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback, and address the concerns below.
1. **More efficiency metrics:** We have provided several efficiency metrics for both baselines and our methods in Table3 (Page 7 in submitted paper), including memory footprint (Params, and Mem), latency (FPS), and training time (Time_tr). Thanks for your advice, we will add the FLOPs metric to our paper to further demonstrate the efficiency, as shown in the table below.
||OSTrack |+MPT(Ours)|ARTrack|+MPT(Ours)|
|-|-|-|-|-|
|FLOPs (G)|29.1|33.7|40.3|41.9|
2. **Impact of fine-tuning strategies:** According to your advice, we supplement the ablation study in terms of fine-tuning strategies. Our method freezes all of the baseline parameters, and only fine-tunes parameters of MPT during training. Here, we also evaluate two variants. FinetuneAll fine-tunes both MPT and baseline parameters, while FinetuneHead fine-tunes parameters of MPT and the tracking head of the baseline, i.e., only freezing the backbone of the baseline model. As shown in the table below, our method slightly outperforms the other two variants, and is more efficient in terms of memory footprint and training time. The potential reason is that freezing the baseline parameters enables the model to focus on learning motion cues and the fusion mechanism.
||OSTrack-B256|+MPT(Ours)|FinetuneAll|FinetuneHead|
|-|-|-|-|-|
|VOT22(EAO)|0.530|0.572|0.567|0.548|
|LaSOTEXT(AUC)|46.9|48.7|48.3|48.1|
|Mem(GB)|35.0|6.1|37.5|7.7|
|Training Time(h)|38|5|8|6|
Thanks for your valuable advice, we will add the above experimental analysis to our paper.
3. **Comparison with PEFT-based tracking methods, like LoRAT:** PEFT methods primarily include adapter-based and prompt learning-based approaches, both of which demonstrate great efficiency in various downstream tasks. LoRAT successfully applies LoRA, a typical adapter-based PEFT method, to the tracking task. The main contribution of LoRAT lies in designing a LoRA-friendly visual tracking model to achieve efficient training and inference. In contrast, our work aims to build a joint tracking framework that incorporates both visual and motion cues, leveraging the temporal consistency of videos and complementing visual limitations. Additional motion inputs are naturally compatible with prompt learning-based PEFT, which is why we choose prompt learning to ensure efficiency.
Furthermore, we integrate our MPT into LoRAT, as shown in the table below. Results demonstrate that our MPT improves LoRAT by an average performance of 1.5% across the two datasets, showing good complementarity of our method with such PEFT-based tracking methods.
||LoRAT|+MPT(Ours)|
|-|-|-|
|VOT22(EAO)|0.541|0.558|
|LaSOTEXT(AUC)|49.9|51.2|
Thanks for your advice, we will add the above comparison to our paper.
4. **Comparison with recent trackers:** Since our method is a plug-and-play module, and can be flexibly integrated into varying visual trackers to achieve performance complementarity, especially in challenging scenarios, the relative performance improvement over baseline methods can fairly demonstrate the effectiveness of our method. Thank you for your advice. We will add the above experimental comparisons with LoRAT to our paper to showcase the effectiveness of our method on recent trackers.
5. **Minor performance gains for ARTrack and on some datasets:** Compared with ARTrack, we provide new insights into efficient temporal tracking. In addition, the apparent performance bias is primarily due to the varying distributions of easy and hard sequences across different datasets, which is further clarified through the quantitative analysis of performance bias in Table 6 (Page 13 of the submitted paper). Our method consistently exhibits robustness advantages in challenging scenarios, demonstrating its prospects for real-world tracking applications.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. Some questions were well resolved. However, could there be some errors in the FLOPs calculation? Please refer to the FLOPs comparison published by the ARTrack authors [1]. I also share the same concern as Reviewer 7U4B regarding the performance of Appearance-based prompt trackers.
[1] https://github.com/MIV-XJTU/ARTrack/issues/80
---
Reply to Comment 1.1.1:
Comment: 1. **FLOPs calculation:** We greatly appreciate you pointing out the discrepancy in the FLOPs calculation. Since ARTrack does not provide official implementation for FLOPs evaluation, our previous evaluation was based on the official implementation of OSTrack, which adopted **profile()** function from **thop** library. To obtain more accurate FLOPs calculations, we carefully use **FlopCountAnalysis()** function from **fvcore** library to recompute FLOPs. The results are shown in the table below. The updated FLOPs show only a slight difference compared to that provided by ARTrack (55.2 v.s. 57.6). Nonetheless, we believe that the efficiency comparison, based on the same FLOPs calculation method, remains fair and credible. Our method only introduces a small increase in FLOPs compared to the baselines. We will include the new FLOPs results and the detailed calculation method in our paper, and will also try to contact authors of ARTrack to inquire about their FLOPs calculation for further verification.
||OSTrack|+MPT(Ours)|ARTrack|+MPT(Ours)|
|-|-|-|-|-|
|FLOPs (G)|31.2|35.9|55.2|56.8|
2. **Performance of appearance-based prompt trackers:** To the best of our knowledge, the concept of the appearance-based prompt tracking is first introduced in multi-modal trackers[1,2], which use visual prompts from other modalities to enhance the tracking performance. In contrast, for the RGB tracking task, all visual trackers naturally utilize appearance cues for tracking (i.e., visual information provided by the initial template). Beyond that, dynamically updated templates align better with the concept of appearance prompts. PromptVT[3] mentioned by Reviewer 7U4B proposes a dynamic template prompter to learn appearance changes between initial and dynamic templates, which shares a very similar appearance-based prompt idea with visual trackers employing dynamic template mechanisms.
As we replied to Reviewer 7U4B, appearance-based prompt trackers (like PromptVT and those trackers with dynamic template mechanisms) enhance appearance features using dynamic appearance information, thereby enabling the model to be robust to appearance changes during tracking. However, the tracking problem is still treated as a discrete image processing task. In contrast, our MPT leverages the temporal consistency of videos by incorporating continuous object trajectories into the visual tracking framework. Our method proves to be more effective in addressing visually challenging scenarios, such as occlusions and distractors. Therefore, these two kinds of methods are complementary: while appearance-based prompt methods focus on enhancing visual features, our MPT can integrate historical motion cues into this visual framework, achieving a joint enhancement of both visual and motion information.
Our experiments integrating our MPT into SeqTrack[4], a typical visual tracker with a dynamic template mechanism, can demonstrate the performance improvement brought by our MPT in such appearance-based prompt trackers. As shown in the table below, our MPT improves the two models of SeqTrack by an average of 1.2%AUC on LASOTEXT, 1.1%AUC on LASOT, and 0.8%EAO on VOT22. More results can be found in Table1 and Table2 (Page 6-7 in submitted paper).
||SeqTrack-B256|+MPT(Ours)|SeqTrack-L384|+MPT(Ours)|
|-|-|-|-|-|
|VOT22(EAO)|0.523|0.528|0.568|0.579|
|LaSOTEXT(AUC)|49.8|50.8|50.4|51.7|
|LaSOT(AUC)|69.4|70.1|72.5|73.9|
[1] Prompting for multi-modal tracking, ACMMM2022.
[2] Visual Prompt Multi-Modal Tracking, CVPR2023.
[3] PromptVT: Prompting for efficient and accurate visual tracking, IEEE TCSVT2024.
[4] SeqTrack: Sequence to sequence learning for visual object tracking, CVPR2023. | Summary: This paper proposes an Efficient Motion Prompt Learning (EMPL) framework for robust visual tracking. The key idea is to leverage historical motion trajectories as prompts that are encoded via three components: Spatial Encoding (SPE), Point Embedding, and Temporal Positional Encoding (TPE).
## update after rebuttal
I have reviewed the rebuttal and found that all my concerns were thoroughly addressed with clear analysis and additional experiments. I therefore maintain my original score of 4 (Accept).
Claims And Evidence: The paper claims that incorporating motion prompts can significantly improve tracking robustness, particularly in the presence of occlusions and distractors. Experimental evidence on several benchmarks (VOT, LaSOT, etc.) supports this claim.
Methods And Evaluation Criteria: The proposed method employs SPE, Point Embedding, TPE to map motion trajectories into a high-dimensional vector space aligned with visual features. The evaluation is based on established tracking benchmarks, and ablation studies are performed to validate the key design choices.
Theoretical Claims: The paper includes a theoretical rationale behind the non-linear initialization of the Temporal Positional Encoding (TPE) using Nyquist frequency principles. The provided intuition is compelling.
Experimental Designs Or Analyses: The experimental design is comprehensive, testing on multiple datasets and including ablation studies on individual components.
Supplementary Material: The supplementary material was reviewed, including demo videos and source code.
Relation To Broader Scientific Literature: The paper builds on established ideas in positional encoding and prompt learning, extending them to the domain of motion cues for visual tracking.
Essential References Not Discussed: Can you discuss PromptVT[A] in this paper?
[A] Zhang, Minghua, et al. "Promptvt: Prompting for efficient and accurate visual tracking." IEEE Transactions on Circuits and Systems for Video Technology (2024).
Other Strengths And Weaknesses: Strengths:
1. Innovative use of motion trajectories as prompts, which is relatively unique in the tracking literature.
2. The design is efficient, both in terms of computational cost and training resources.
Weaknesses:
1. The method’s reliance on accurate spatial (SPE) and point (Point Embedding) encodings may be vulnerable to errors in detection; any mistakes could potentially lead to cascading errors.
2. There is no direct comparison with appearance-based prompt methods (e.g., PromptVT), which would help contextualize the benefits of using motion cues exclusively.
3. The Fusion Decoder is fixed at two layers without analysis on whether a deeper or alternative architecture could further improve performance.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Have you considered how your motion prompt framework might be integrated with appearance-based prompt methods like PromptVT?
2. Can you provide more insights or experiments on how sensitive your model is to errors in the initial detection of bounding box corners?
3. While you have chosen a two-layer Transformer for the Fusion Decoder, did you perform any preliminary experiments with varying depths?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback, and address the concerns below.
1. **Appearance-based prompt trackers:** Taking PromptVT as an example, appearance-based prompt methods efficiently enhance appearance features using dynamic appearance information, thereby enabling the model to be robust to appearance changes during tracking. However, the tracking problem is still treated as a discrete image processing task. In contrast, our MPT leverages the temporal consistency of videos by incorporating continuous object trajectories into the visual tracking framework. Our method proves to be more effective in addressing visually challenging scenarios, such as occlusions and distractors. These two kinds of methods are complementary: while appearance-based prompt methods focus on enhancing visual features, our MPT can integrate historical motion cues into this visual framework, achieving a joint enhancement of both visual and motion information.
Thanks for your valuable advice. We will add the above comparison and related references to our paper.
2. **Sensitivity to the trajectory quality:** The proposed adaptive weight mechanism in our method can mitigate the impact of noisy trajectories to some extent. As shown in Figure 3 (Page 8 in submitted paper), the predicted weights align with the accuracy of motion predictions which implicitly reflect the quality of input trajectories.
|Success Rate|[0.0,0.1)|[0.1,0.2)|[0.2,0.3)|[0.3,0.4)|[0.4,0.5)|[0.5,0.6)|[0.6,0.7)|[0.7,0.8)|[0.8,0.9)|[0.9,1.0)|
|-|-|-|-|-|-|-|-|-|-|-|
|OSTrack_MPT|1%|21%|26%|32%|42%|63%|82%|94%|99%|100%|
To further analyze the sensitivity of our MPT to varying noisy trajectories, we evaluate the success rate (IoU>0.5) under different quality trajectory inputs on LaSOTEXT, as shown in the table above. Here, [a,b) represents the range where the average IoU of the trajectory is no less than a and less than b. The following conclusions can be drawn:
* The success rate increases significantly with the improvement of trajectory quality, especially when the trajectory IoU is in the range of [0.6, 1.0).
* Our method shows a certain level of robustness to noisy trajectory inputs, obtaining a success rate of 21% even when the trajectory IoU is in the range of [0.1,0.2).
**Due to the character limit, please find additional deep analysis of success/failure cases in our response to Reviewer EZ8N (2)**
3. **Architecture of fusion decoder:** According to your advice, we supplement two sets of ablation studies in terms of the architecture of the fusion decoder, as shown in the table below.
||OSTrack-B256|+MPT(Ours)|+OnewayCA|D3|D4|
|-|-|-|-|-|-|
|VOT22(EAO)|0.530|0.572|0.554|0.555|0.555|
|LaSOTEXT(AUC)|46.9|48.7|43.8|47.7|49.2|
* First, our method employs bidirectional Cross-Attention (CA) for better alignment and fusion. Compared with one-way CA, our method obtains an average performance improvement of 3.4% across the two datasets, which demonstrates the effectiveness of our fusion decoder architecture.
* Second, our method implements a two-layer fusion decoder, improving the baseline by an average performance of 3.0% across the two datasets. Deeper variants (D3/4) exhibit slightly inferior or comparable performance, with average improvements of 1.7% and 2.4%, respectively. Therefore, the two-layer structure excels in both performance and efficiency. In addition to reducing efficiency, there are two potential risks for a deeper-layer fusion decoder. First, since the number of prompt tokens is much smaller than that of visual tokens, a deeper fusion process may lead to the dilution and loss of motion prompts, thereby weakening the effectiveness of the motion information. Second, deeper-layer decoders introduce more trainable parameters, increasing the risk of overfitting.
Thanks for your valuable advice. We will add the above experimental analysis to our paper. | null | null | null | null | null | null | null | null |
Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation | Accept (poster) | Summary: This paper proposes a post-training quantization (PTQ) method tailored specifically for the model merging scenario in multi-target domain adaptation. In particular, the paper argues that existing PTQ methods quantize each individual model into a state that makes them difficult to merge effectively into a single multi-domain model. To address this issue, the proposed method, HDRQ (Hessian and Distance Regularizing Quantization), introduces three specific design elements aimed at mitigating the degradation in merging quality. These include novel regularization strategies based on Hessian and weight-distance measures, as well as an advanced noise-sampling-based rounding technique.
Claims And Evidence: The authors claim that conventional PTQ methods lead to failure in model merging due to uncontrolled weight perturbations, and they propose a “merge-friendly” PTQ method supported by both theoretical analysis and empirical evidence. However, several issues arise:
- Limited Merging Setting:
Both the theoretical analysis and experiments consider only interpolated models—the most naïve strategy for model merging. In reality, many state-of-the-art (SOTA) merging methods exist and should be included for a more robust comparison. (See the “Essential References Not Discussed” section for relevant literature.) This limitation undermines the motivation for HDRQ, as it remains unclear whether the observed degradation is inherent to PTQ or specific to the merging strategy chosen.
- Attribution of Accuracy Degradation:
In the experimental section (e.g., Table 1(b) and Table 2), it is observed that merged quantized models yield slightly lower performance (e.g., in terms of mIoU) compared to merged full-precision models. However, it is not clear whether this drop is due to the quantization process itself or because the quantized weights are less amenable to merging. For instance, Table 1(a) shows that quantization alone degrades performance even without merging. Hence, the authors should clarify whether the accuracy drop after merging quantized models is a consequence of quantization per se or of the incompatibility of the quantized weights with the merging process.
- Baseline Performance:
More critically, the experimental results indicate that the proposed HDRQ method does not consistently outperform the compared baseline; in particular, Table 2 suggests that QDrop yields superior performance. This raises concerns regarding the superiority of HDRQ over existing methods.
Methods And Evaluation Criteria: The proposed HDRQ method is derived through theoretical analysis and is based on a well-reasoned process. However, as mentioned above, the theoretical derivation is based on a setting where model merging is confined to interpolated models—a scenario that is overly simplistic and not representative of more advanced merging cases. Moreover, the experimental evaluation does not incorporate SOTA merging methods.
Furthermore, since the paper’s goal is to develop a PTQ method that is “merge-friendly,” the benchmarks should be drawn from established model merging benchmarks. The authors are encouraged to evaluate HDRQ across a variety of vision tasks, NLP tasks, and different numbers of target tasks, as well as on models fine-tuned via parameter-efficient fine-tuning (PEFT). The authors may want to learn from the following papers:
[1] Yadav, Prateek, et al. "Ties-merging: Resolving interference when merging models." NeurIPS 2023.
[2] Huang, Chenyu, et al. "Emr-merging: Tuning-free high-performance model merging." NeurIPS 2024.
Theoretical Claims: As mentioned in the Claims and Evidence section, the theoretical claims in this paper have a major issue: the basic setting is not entirely realistic because it only considers interpolated models. Apart from this initial setting, the subsequent derivations are straightforward and easily understandable. Consequently, the novelty of the theoretical contributions is limited, and the analysis does not convincingly establish that HDRQ can resolve merging issues beyond what standard quantization errors would introduce.
Experimental Designs Or Analyses: As noted in the Methods and Evaluation Criteria section, the benchmarks employed in the paper require a more formal treatment. In addition, there are two major concerns regarding the experimental design:
- The PTQ methods used for comparison are not state-of-the-art. For instance, BRECQ (from 2021) and QDrop (from 2022) are used, yet these methods are relatively outdated—and they have not been properly cited in the main text. More recent approaches (see the Essential References Not Discussed section) should be considered for a fair comparison.
- The model merging strategy evaluated in the experiments is limited solely to model interpolation—a very naïve merging approach. There exist many advanced, state-of-the-art merging methods that should be included in the experiments to thoroughly validate the proposed approach.
Moreover, the reported experimental results are not statistically significant. As mentioned in the Claims and Evidence section, the results in Table 1(a) and Table 2 do not clearly demonstrate HDRQ’s superiority; in fact, the baseline QDrop appears to outperform HDRQ.
Supplementary Material: The supplementary material is the provided code. I didn't run the code.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Representative papers about model merging:
[1] Yadav, Prateek, et al. "Ties-merging: Resolving interference when merging models." NeurIPS 2023.
[2] Huang, Chenyu, et al. "Emr-merging: Tuning-free high-performance model merging." NeurIPS 2024.
[3] Ilharco, Gabriel, et al. "Editing models with task arithmetic." ICLR 2023.
[4] Wang, Ke, et al. "Localizing task information for improved model merging and compression." ICML 2024.
SOTA papers about PTQ:
[1] Lee, Jung Hyun, et al. "Flexround: Learnable rounding based on element-wise division for post-training quantization." ICML 2023.
[2] Shang, Yuzhang, et al. "Enhancing post-training quantization calibration through contrastive learning." CVPR 2024.
[3] Xu, Ke, et al. "Ptmq: Post-training multi-bit quantization of neural networks." AAAI 2024.
Other Strengths And Weaknesses: Please refer to the other sections. Thanks.
Other Comments Or Suggestions: Please refer to the other sections. Thanks.
Questions For Authors: Please refer to the other sections. Thanks.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: First and foremost, we sincerely appreciate your thoughtful comments.
## Concern 1 : Limited merging setting
We emphasize that our work deliberately targets the most challenging scenario for enabling domain adaptation on resource-constrained edge devices. Specifically, we design our setting to be demanding in the following four aspects:
1. Edge devices lack sufficient resources for fine-tuning.
2. State-of-the-art PTQ algorithms are also too costly for such devices.
3. To minimize communication costs, only quantized models can be deployed.
4. Merging must remain lightweight.
While exploring boundary-breaking methods is important, we also believe addressing practical and challenging real-world deployment scenarios is equally critical. From this view, complex, data-intensive merging methods fall outside our scope. Given the strict resource constraints and latency sensitivity of edge environments, merging must remain lightweight.
With this in mind, we adopt the interpolation-based merging strategy proposed in [1]. Notably, our baseline outperforms prior MTDA techniques, demonstrating its strength as a representative of state-of-the-art trends in model merging.
[1] Wenyi Li, Huan-ang Gao et al, “Training-Free Model Merging for Multi-target Domain Adaptation”, ECCV 2024
## Concern 2 : Attribution of accuracy degradation
As shown in Table 1 (a) and (b), quantization degrades each model's quality and negatively impacts merged performance. However, when quantization is done with merging in mind, quality is better preserved. In Table 1 (a), HDRQ matches QDrop in sole quantization but significantly outperforms it in harmonic mean(1.69 mIoU / 4.14 mIoU, before/after correction). This highlights the importance of merging-aware quantization.
## Concern 3 : Baseline performance
We acknowledge that HDRQ initially underperformed QDrop and have devoted significant effort to address this. We identified a minor bug in our implementation of HDRQ—the quantization step size was not updated during training—leading to negatively biased results.
Corrected results are shown below:
| | | W8A8 | W8A4 | W4A8 | W4A4 | W3A3 |
|:---:|:----:|:----:|:---:|:---:|:----:|:---:|
| **R → ACP** | QDrop | **67.91** | **67.4** | 64.85 | 66.26 | 62.99 |
| | HDRQ | 67.46 | 66.71 | **66.75** | **66.41** | **64.7** |
| **A → RCP** | QDrop | **68.75** | **68.24** | 66.83 | 66.04 | 64.22 |
| | HDRQ | 68.1 | 68.15 | **67.8** | **67.58** | **65.29** |
| **C → RAP** | QDrop | **74.79** | **74.58** | 73.81 | 73.25 | 71.01 |
| | HDRQ | 74.58 | 73.7 | **74.26** | **73.57** | **71.63** |
| **P → RAC** | QDrop | **64.8** | **64.52** | 62.52 | **63.22** | 61.24 |
| | HDRQ | 64.51 | 64.09 | **63.93** | 63.19 | **61.55** |
HDRQ generally outperforms QDrop at lower bit-widths of weight (W4A8, W4A4, W3A3). In 11 wins, HDRQ’s average margin is 0.95—nearly double the 0.41 margin across 9 losses. These results demonstrate HDRQ’s consistent superiority over QDrop in many scenarios. We will revise all results in the final paper. Thank you again for the feedback led to this correction.
## Concern 4 : Comparison with SoTA PTQ method
While we recognize the importance of comparing with recent methods, we note that BRECQ and QDrop remain strong and reproducible baselines despite earlier publication. Still, we agree on the importance of comparison with recent methods and conducted further experiments.
Specifically, we included FlexRound as a recent baseline, but excluded PTMQ due to its multi-bit focus and weak single-bit performance. We also tried to reproduce Shang et al.’s method, but the lack of official code and limited time prevented inclusion.
We validated implementation using 4W4A quantization on ResNet-18 with the official PyTorch checkpoint (top-1: 69.76%). FlexRound achieved 67.63%, and QDrop 67.87%, consistent with original reports (~0.2%p difference).
Then we compared our method and FlexRound on semantic segmentation under 4W4A setting:
| | FlexRound | HDRQ |
|:------:|:-----:|:----:|
| **Sole - CS** | 60.01 | 59.76 |
| **Sole - IDD** | 48.94 | 48.51 |
| **Merged - HMean** | 46.44 | **46.73** |
While FlexRound is slightly better standalone, HDRQ achieves a higher harmonic mean-highlighting the advantage of merging-aware quantization.
## Concern 5 : Model merging strategy
As detailed in our motivation, our goal is real-time model merging on edge devices. While advanced methods could improve merging quality, edge devices often lack resources to handle complex techniques. Thus, we adopt interpolation-based merging as our default, given the limited research on quantization-merging interplay. We hope this work offers valuable insights in this direction.
We also acknowledge the missing key references and will include the suggested citations in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The results are well-received on my end. I raise my score. But since the benefit from the method is kind of marginal, I will keep my rating but will not fight for its acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zjyC,
We sincerely appreciate your thoughtful feedback and constructive comments. Thank you for the time and effort you dedicated to reviewing our work.
Although our rebuttal may not have fully addressed all of your concerns to the extent of earning a more positive assessment, we greatly value your insights and will carefully incorporate them as we refine our paper for the final version.
Once again, we are truly grateful for your effort and contributions to the review process. | Summary: This paper proposes a merge-friendly post-training quantization method for multi-target domain adaptation. It considers quantization and domain adaptation simultaneously. They propose HDRQ (Hessian and Distance Regularization Quantization), a post-training quantization method to preserve merging compatibility in multi-target domain adaptation. Hessian Regularization controls sensitivity to perturbations, while distance regularization reduces weight divergence among models. A noise-sampling-based rounding mechanism is proposed to alleviate rounding ambiguity. Experiment results prove the effectiveness of the proposed method.
Claims And Evidence: I think the claims in this paper are supported by convincing evidence.
Methods And Evaluation Criteria: I think the proposed method is reasonable. The evaluation criteria are also reasonable, but I think the authors can include more complicated datasets or tasks, eg. for image classification, I am curious about the method performance on larger datasets such as DomainNet.
Theoretical Claims: I have checked the theoretical claims in Chapter 3. They are clear and can support the method design.
Experimental Designs Or Analyses: I think the model design is reasonable and supported by theoretical claims. The experiment designs and analyses are also reasonable and clear. My concern is about the experiment results on more complicated tasks.
Supplementary Material: The authors provide code in the supplementary material. I have not checked the code due to resource limits.
Relation To Broader Scientific Literature: I think proposing merge-friendly quantization method for a specific downstream task is beneficial to the community and application of deep learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
1. The paper is overall well-written, with a very clear organization.
2. The theoretical insights of this paper are demonstrated clearly, and it naturally supports the model design.
3. The proposed method is reasonable and interesting to me.
Weakness:
1. As mentioned, I think the authors can provide more results on more complicated datasets or tasks.
Other Comments Or Suggestions: N/A
Questions For Authors: See weaknesses.
## After Rebuttal
After reading the rebuttal and other reviews, I still tend to accept this paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review and constructive feedback on our submission.
## Weakness 1 : Complicated data and tasks
We agree that experiments on more complicated tasks are truly helpful. We appreciate your suggestion and will consider additional experimental results.
Due to the limited resource and time, we are not able to produce full experimental results for domainnet, but we here provide partial merging results of 3W3A quantized DA models from Real → Clipart, Sketch, Painting:
| | BRECQ | QDROP | HDRQ |
|:--------------------------------:|:-----:|:-----:|:-----:|
| **Harmonic Mean (R → C, S, P)** | 34.07 | 41.51 | **42.08** |
In this moment, we hypothesize HDRQ to work well with larger domain shift, since domain adapted weights are expected to still remain in single basin, where our theoretical analysis are valid. We expect our method to either surpass baseline methods or show comparable performance in other settings, as observed in the Office-Home dataset. We will include the full experimental results in the camera-ready version. Thank you for your invaluable suggestion.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I still tend to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 5KmE,
Thank you for your thoughtful review and for sharing your perspective on our work. We appreciate that you remain inclined to accept the paper, and we're grateful for the insights you provided.
Thank you again for your contribution to the review process. | Summary: This paper introduces HDRQ (Hessian and Distance Regularizing Quantization), a novel post-training quantization (PTQ) method designed to improve merge-friendly quantization for multi-target domain adaptation. Model merging has been shown to be an effective way to adapt models across multiple target domains, but quantization introduces discretization effects that degrade merging performance. This work systematically analyzes the impact of quantization on model merging through the lens of error barriers, showing that quantization-induced perturbations disrupt weight alignment, leading to suboptimal merging.
Claims And Evidence: The paper presents a well-structured argument for HDRQ, and its claims are largely supported by theoretical analysis and empirical results. The paper provides a theoretical analysis using the error barrier concept, showing that quantization noise misaligns weights and increases merging degradation. Experiments on both semantic segmentation and classification tasks demonstrate that HDRQ achieves comparable or superior accuracy to standard PTQ methods in single-model settings while significantly improving merged model accuracy.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense for the problem of post-training quantization (PTQ) in multi-target domain adaptation (MTDA) via model merging. The use of semantic segmentation datasets (GTA → Cityscapes, Indian Driving Dataset) is appropriate since segmentation is a resource-intensive task where quantization is highly relevant. The Office-Home dataset (Real, Art, Clipart, Product) is a well-established benchmark in domain adaptation, making it a reasonable choice for testing multi-target domain adaptation in image classification. The paper correctly evaluates the harmonic mean of accuracy across target domains, which is a standard metric for multi-target domain adaptation to ensure balanced performance across different domains.
Theoretical Claims: Yes, I checked the proof and found no issues.
Experimental Designs Or Analyses: I examined the experimental design and analysis in the paper, particularly focusing on the validity of the experimental setup, evaluation metrics, and statistical rigor. The use of GTA → Cityscapes, Indian Driving Dataset for segmentation and Office-Home dataset for classification aligns well with multi-target domain adaptation (MTDA). Quantization is applied after domain adaptation but before merging, which mirrors a realistic deployment scenario where models are adapted to different target domains and then quantized for efficiency. The paper incrementally removes components (noise-based quantization, distance regularization) and shows their impact on performance.
Supplementary Material: Yes, I reviewed all of them.
Relation To Broader Scientific Literature: The key contributions of this paper, HDRQ (Hessian and Distance Regularizing Quantization), are closely related to several existing areas in the broader scientific literature, including model merging, post-training quantization (PTQ), multi-target domain adaptation (MTDA), and loss landscape analysis.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths: 1. The paper introduces HDRQ, a merge-friendly post-training quantization (PTQ) method, which is a novel contribution. While quantization and model merging have been studied separately, this work is the first to systematically analyze how quantization affects merging and propose a quantization method explicitly designed for multi-target domain adaptation (MTDA) via model merging. 2. The use of error barriers to analyze quantization effects on merging adds a theoretical foundation to a practical problem, which is an underexplored area in PTQ. 3. It enhances the practicality of training-free multi-target domain adaptation, making real-time adaptive AI more feasible without retraining.
Weakness: 1. The paper claims HDRQ is the first to study quantization-aware model merging, but it does not compare against quantization-aware fine-tuning methods (e.g., QAT approaches that may mitigate merging issues). 2. Since edge deployment efficiency is a key motivation, including latency/runtime comparisons would improve the practical significance. 3. in highly heterogeneous domains, where domain shifts are large, does HDRQ still work? A failure case analysis would help clarify the method’s limitations.
Other Comments Or Suggestions: No
Questions For Authors: 1. In Section 3.2, you state that HDRQ flattens the loss surface using Hessian regularization, and Figure 2 visually supports this claim. However, you do not provide a quantitative metric for Hessian smoothness. Could you report the Hessian eigenvalue distribution or a sharpness measure (e.g., Spectral Norm of Hessian, Fisher Information Matrix, or Sharpness-Aware Minimization (SAM) curvature metrics) to validate this claim?
2. The results in Tables 1 and 2 report performance improvements, particularly in low-bit settings (e.g., W4A4, where HDRQ achieves +1.69 mIoU over QDrop). However, standard deviations and statistical significance tests are not reported. Could you provide standard deviations over multiple runs and conduct a paired significance test (e.g., t-test or Wilcoxon signed-rank test) to ensure that the improvements are statistically meaningful?
3. HDRQ is tested on ResNet-based architectures (ResNet-50, ResNet-101), but modern resource-efficient models (e.g., Vision Transformers, MobileNets, EfficientNet) are widely used in real-world low-power applications. Have you tested HDRQ on non-ResNet architectures? If not, do you anticipate challenges in applying HDRQ to architectures with different weight distributions (e.g., ViTs)?
4. HDRQ introduces Hessian-based smoothing, distance regularization, and noise-sampling-based rounding. How does this impact quantization time, inference speed, and memory overhead compared to standard PTQ methods (e.g., QDrop, BRECQ)? Can you provide runtime comparisons (in milliseconds per sample) and memory usage measurements?
5. HDRQ includes a noise-sampling-based rounding technique to improve merging stability, but no direct ablation is provided for this component. Could you report results comparing HDRQ with and without noise-based rounding to show its direct impact on merging performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We've covered your key points and clarified our work.
## Weakness 1 : Comparison with QAT
Quantization-aware training (QAT) typically yields higher quantization quality compared to post-training quantization (PTQ), but it demands full access to training data and significantly more computational resources. In our setting, which centers on per-device domain adaptation, the available target data is often too limited to support end-to-end fine-tuning without risking overfitting. To ensure practical applicability, we focus on PTQ and compare our approach against baseline PTQ methods.
## Weakness 2 : latency/runtime comparisons
In our setup, fine-tuning and PTQ are done on a server, while only model merging and inference run on the edge device. Since the quantized models retain the same bit configuration and granularity, their latency and runtime remain unchanged after merging.
As a result, the only remaining computational component on the edge device is the model merging process. To clarify its cost, we measured the time on a Raspberry Pi 5 using MobileNet-V2. The process took 35.53 seconds with 100 noise sampling iterations on an ARM Cortex-A76 CPU—about 3.60 seconds per 10 iterations—showing that merging is feasible even on low-power devices.
## Weakness 3 : Highly heterogenous domains
Domain adaptation typically uses a lower learning rate and fewer epochs than training from scratch, so we assume that the resulting weight deviations are not large enough to escape a single basin—even under significant domain shifts.
However, we acknowledge that if such a case occurs, additional treatments may be required to maintain the quality of quantized networks.
We conducted a partial experiment on DomainNet (Real, Clipart, Sketch, Painting), a more complex dataset, and observed consistent improvements over baselines. Please refer to our response to Reviewer 5KmE’s Weakness 1.
## Question 1 : Sharpness measure
As a measure of sharpness, we report the Hessian trace of the Real→Clipart domain-adapted model quantized to W4A8 on the Office-Home benchmark. The table below shows that HDRQ exhibits a lower Hessian trace, indicating better flatness:
| |QDrop|HDRQ|
|:-:|:-:|:-:|
| **Hessian Trace** |42171|38348|
## Question 2 : Multiple runs
We conducted five additional experiments with different seeds and report the mean and standard deviation:
| |QDrop|HDRQ|
|:-:|:-:|:-:|
| **4W4A results (Harmonic mean)**|41.97 (±1.27)|46.14 (±0.74)|
We would like to sincerely apologize for an error identified during the revision process. Specifically, we discovered that in our earlier experiments, the quantization scale in our method was not properly updated, which led to the reporting of degraded accuracy. In the updated results presented in this table, we have corrected this implementation bug. As a result, the overall accuracy has improved significantly. While we deeply regret this oversight, we also hope for your understanding that this issue introduced a negative bias in evaluating the performance of HDRQ.
## Question 3 : Other architectures
Since our analysis is agnostic to model architecture, we expect it to generalize well to vision-oriented models. However, architectures like transformers—which often exhibit unique characteristics such as outlier channels—may require additional techniques to preserve quantization quality. That said, existing baseline methods are also likely to experience similar degradation in such cases. We plan to include results on ViTs and other architectures in the final version.
## Question 4 : Runtime, memory comparison
Here is a table comparing the maximum memory consumption and quantization time of baseline methods and ours:
| |BRECQ|QDROP|HDRQ|
|:-:|:-:|:-:|:-:|
| **Peak Memory Consumption (MB)** |9394|14643|14586|
| **Quantization Time (hours)** |2.45|2.12|2.5|
As shown, quantization time slightly increases over QDrop due to noise sampling at each iteration. Theoretically, memory usage may also rise slightly, as sampled noise must be stored for each gradient step. However, actual measurements via torch.cuda.max memory allocated show similar memory consumption due to PyTorch's caching mechanism. From the result, this allows us to assume that the memory overhead of HDRQ is minimal.
In summary, these increases in memory consumption and overall time are minimal and represent an affordable cost.
The inference speed of models quantized with each method remains the same, as the quantized models have the same bit configuration and quantization granularity.
## Question 5 : Ablation of noise-sampling-based rounding technique
The ablation study is provided in Figure 3 of the main paper, where "Naive" denotes merging without advanced sampling. For the merging of ambiguous cases with multiple valid quantization levels, the Naive method assigns one at random. Advanced sampling filters out low-quality samples, improving average performance and reducing distribution tail size. | Summary: This paper investigates the impact of quantization on model merging in multi-target domain adaptation. The key insight is that prior approaches, which quantize the model before merging, degrade merging quality. To address this, the paper introduces HDRQ, a merge-friendly quantization method that incorporates two regularization terms: one to control sensitivity to perturbations and another to minimize weight divergence between models. The authors evaluate the proposed method on two tasks: semantic segmentation and image classification. Experimental results demonstrate that the proposed method HDRQ enhances model merging performance across various adaptation settings.
Claims And Evidence: Yes. It looks so. The insight that quantization induces misalignment and affects the model merging process makes sense to me.
Methods And Evaluation Criteria: The proposed method appears reasonable; however, since the quantization approach is limited to vanilla round-to-nearest, it remains unclear whether more advanced quantization methods could further improve its effectiveness.
Theoretical Claims: This theoretical claim in Section 3.1 makes sense.
Experimental Designs Or Analyses: The experimental design is well-structured and supports the main claim of this work.
Supplementary Material: N/A
Relation To Broader Scientific Literature: - This work analyzes the interplay between quantization and model merging, offering a valuable contribution to the model merging literature.
Essential References Not Discussed: - The quantization method relies solely on vanilla round-to-nearest. A discussion on more advanced methods, such as AWQ, would be beneficial.
Other Strengths And Weaknesses: - The main motivation is unclear. If the goal is to obtain a quantized merged model for efficient deployment, why not merge two unquantized models first and then apply post-training quantization? Could the authors elaborate on the practical applications to provide a clearer motivation?
- In Table 3, the improvements from distance regularization are not significant. How do the results vary with changes in the hyperparameter λ?
- In Table 1 (W4A4), the best results come from QDrop, yet the highlighted number corresponds to the proposed method. Could the authors clarify this discrepancy?
Other Comments Or Suggestions: - Some implementation details of quantization are unclear, such as the quantization granularity and whether the method is based on RTE. Additionally, how would more advanced approaches like AWQ, SmoothQuant, SpinQuant, or GPTQ compare to the proposed method?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for thoughtful feedback and constructive comments.
## Weakness 1 : Main motivation
To begin with, we would like to emphasize that our work deliberately addresses the most challenging scenario for enabling domain adaptation on resource-constrained edge devices. Specifically, we design our setting to be highly demanding in the following four aspects:
1. The edge device lacks sufficient resources to support fine-tuning operations.
2. State-of-the-art PTQ algorithms are also too computationally expensive for such devices.
3. To minimize communication costs, only quantized models can be deployed.
4. Model merging operations must remain lightweight.
From this perspective, as some reviewers pointed out, an alternative approach—merging two unquantized models first and then applying PTQ—could be a viable solution. However, this approach becomes less practical when progressive adaptation is considered.
In the case of progressive adaptation, additional merging must be performed on top of the previously merged weights. In our proposed scenario, the target model is expected to accumulate updated weights locally over multiple rounds. Since the server discards the personalized model after each fine-tuning round, this approach imposes a strong constraint. Nonetheless, it provides significant security advantages, as personalized models are not stored centrally. By contrast, in the alternative approach, the server must maintain a separate customized model for each device, which leads to increased storage overhead and raises potential concerns regarding information leakage. Please note that our proposed scheme works really well on our challenging scenario, showing the outstanding performance of the proposed idea.
## Weakness 2 : Results vary with changes in the hyperparameter λ
Here, we present the results of model merging with varying values of $\lambda$. As shown in the table below, increasing $\lambda$ initially improves performance due to the regularization effect. However, beyond a certain point, excessive regularization starts to harm the quality of reconstruction during the quantization process, leading to accuracy degradation. Based on empirical observations, we selected $\lambda = 5 \times 10^{-2}$ as a balanced choice that consistently performs well across different settings. We adopt this value uniformly in all experiments throughout our paper.
| λ | 5e-2 | 7e-2 | 9e-2 | 1.1e-1 | 1.5e-1 |
|---------|-------|-------|-------|--------|--------|
| Harmonic Mean of Accuracy (%) | 66.75 | 66.78 | 66.84 | 66.98 | 66.86 |
## Weakness 3 : Discrepancy in highlighted number
Sorry for the confusion, and thank you for pointing that out. As you correctly noted—particularly in Table 1(a)—the performance of QDrop should be more clearly emphasized. Our main intention was to highlight the quality improvement achieved after model merging, as demonstrated in Table 1(b). To avoid misunderstanding, we will revise Table 1(a) to clearly indicate that QDrop is the best-performing method for standalone quantization.
## Other comments : Comparison with AWQ, SmoothQuant, SpinQuant, or GPTQ
In our experiments, we apply per-tensor, round-to-nearest quantization with truncation. Notably, the truncation range is learnable, following the formulation used in QDrop.
AWQ, SmoothQuant, SpinQuant, and GPTQ are designed specifically for LLMs, and thus differ fundamentally from gradient-based methods like QDrop. These algorithms perform progressive, layer-wise quantization without iterative updates to reduce the computational burden—an essential property given the vast number of parameters in LLMs. However, in the case of vision models, QDrop often outperforms these LLM-oriented techniques. For this reason, our primary focus is on vision-oriented PTQ approaches.
That said, we agree it is also valuable to assess the effectiveness of our proposed method under more advanced PTQ schemes, even if they were originally developed for LLMs. To this end, we conducted 4W4A quantization experiments on semantic segmentation using SmoothQuant, and compared our method against relevant baselines:
| | SmoothQuant | Qdrop | HDRQ |
|:------:|:----------:|:------:|:-----:|
| **6W6A** | 44.69 | 52.62 | **53.95** |
| **4W4A** | 5.43 | 42.5 | **46.21** |
As shown in the table, ours performs significantly better than SmoothQuant, especially in the low-bit W4A4 case. We will update the results in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I have increased my rating, but will not advocate strongly for the acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer UftN,
We sincerely appreciate the time and effort you dedicated to reviewing our work. Your thoughtful feedback and reassessment are truly meaningful to us, and we are grateful for the valuable insights you have shared.
Thank you once again for your contribution to the review process. | null | null | null | null | null | null |
Verification Learning: Make Unsupervised Neuro-Symbolic System Feasible | Accept (poster) | Summary: This paper introduces a novel learning paradigm, named verification learning, which transform the label-based reasoning process in neuro-symbolic into a label-free verification process. It achieves good learning results solely by relying on unlabeled data and a function that verifies whether the current prediction conform to the rules. The theoretical analysis points out which tasks in neuro-symbolic systems can be completed without labels and explains why rules can replace infinite labels.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Neuro-Symbolic Learning, Constraint optimization Problem.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strength**: \
(1) The idea and motivation is clear state. The writing is good and the method pipeline is easy to follow. \
(2) This paper provides a comprehensive theoretical foundation for the proposed method.
**Weaknesses**: \
(1) The novelty of the proposed method requires further elaboration and discussion. \
(2) The experiments is somewhat weak and necessitates further expansion and refinement to ensure robustness and comprehensiveness.
Other Comments Or Suggestions: See questions below.
Questions For Authors: (1) The proposed method seems simple, and I concern about the novelty. I recognize the significance of a label-free neuro-symbolic framework and the urgency of developing corresponding solutions. While I appreciate the idea and motivation behind this paper, I remain interested in how other reviewers evaluate its novelty. Could the authors provide a more detailed explanation of the proposed method's novelty from both algorithmic and theoretical perspectives? \
(2) The algorithm description requires further refinement, as I believe many details are omitted and not adequately explained in the supplementary materials. For example, what is the precise definition of the score in Equation 1? Additionally, are there brief proofs provided for the theorems and propositions presented in the paper? \
(3) The experiments are weak incomplete; additional evaluation metrics and baselines are necessary to facilitate more comprehensive comparisons.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer W1W6:
Thank you for valuable comments.
### Regarding Question 1 and Weaknesses 1:
Methodological Innovations:
1. Explored fully unsupervised Nesy algorithms and provided a systematic solution.
2. Proposed a universal approach for finding globally optimal solutions to Nesy search problems solvable in sub-exponential time.
3. Demonstrated that in the Nesy domain, a single validation function can replace an entire complex knowledge base.
4. Shifted the paradigm from inference to validation, expanding the symbolic space complexity that can be solved.
Theoretical Innovations:
1. Proved that when a symbolic system is sufficiently differentiated, rules can theoretically replace all labels. This is equivalent to proving that rule-based clustering alone can achieve classification.
2. Demonstrated that monotonicity is a necessary and sufficient condition Nesy to achieve a globally optimal solution in sub-exponential time.
3. Established that the irreplaceability of symbols within a system is a necessary condition for their correct recognition.
### Regarding Question 2:
Regarding Formula (1):
In Formula (1), we have generalized various algorithms, such as deepproblog and others, to highlight the commonality in their form. Therefore, the Score does not refer to any specific score. I outline the differences in the Score chosen by different algorithms for comparison.
- In deepproblog and deepstochlog, $Score(S)=\prod_{i=1}^{|path|} p(path_i)$ where path refers to the path leading to S derived from the root.
- In ABL, $Score(S)=\sum_{i=1}^{m}[f(x_i)==s_i]$ known as the Consistency, which represents the number of symbols in S that align with the predictions.
- In WSABL, $Score(S)=\prod_{i=1}^{m} confidence(x_i)_{s_i}$ known as the Confidence, which represents the probability that predicts.
- In VL. In the independent mode, we use the Confidence. In the non-independent mode, Score(S)=(Consistency(S), Confidence(S)). This means seeking the solution with the highest consistency and, the one with the highest confidence among them.
Regarding Proofs
For Theorem 4.1, we use proof by contradiction. Suppose for the i-th largest solution i>1, there is no 0<j<i such that i=suc(j). Then, there must exist some l where i=suc(l) and score(l)>score(i). Since no 0<j<i satisfies i=suc(j), it follows that l>i, contradicting score(l) >score(i). Thus for any i>1, there must exist 0<j< i such that i=suc(j).
The proof for Theorem 4.2 follows the same reasoning as Theorem 4.1.
For Proposition 4.3, if S is independent in terms of Score, then Score(S) is the product of individual scores. This ensures that reordering any subset does not affect the relative ranking of scores, making independence a sufficient condition for monotonicity.
For Theorem 6.1, let $\hat{R}(f)$ be the empirical loss of the best permutation for clustering. The difference between its empirical and generalization error is bounded by
$2\rho \mathcal{R}_n(F)+3 \sqrt{\frac{\log(2/\delta)}{2n}}$.
The difference between the optimal and true permutation is given by $R^{up}_{task}$.
### Regarding Weaknesses 2 and Question 3:
We expanded the experiments by adding 4 new comparison methods. These include 2 classic methods, Hybrid and ABL-zoopt, where Hybrid is an algorithm for solving the CSP problems using neural networks, and ABL-zoopt is a classic method based on gradient-free optimization. Additionally, we included 2 recent methods, REFL and PSP. REFL accelerates the inference process using the Reflection mechanism which is the best paper at AAAI 25. PSP is the latest algorithm that combines probabilistic symbolic perception with symbolic context.
|Method|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|
|Hybrid|53.75|40.85|38.73|35.48|28.38|30.48|24.13|21.83|21.43|
|ABL-Zoopt|100.00|44.50|99.55|70.33|40.15 |25.25|26.43|48.25|48.03|
|REFL|99.98|42.60|64.80|99.43|99.05|62.78|57.57|62.93|49.08|
|PSP|99.88|45.58|64.75|99.03 |75.38|67.58|63.30|98.23 |97.38|
|VL|100.00|49.83|100.00|100.00|100.00|100.00|69.20|99.95|100.00|
Additionally, we have included more evaluation metrics. In addition to the accuracy in symbol recognition, we have also included classic metrics such as Recall, Precision, F1 Score, and ROC-AUC. Furthermore, we have added the Global Accuracy metric, which describes the probability that the network provides the correct answer for all symbols in a given set. It is used to reflect the algorithm's ability to find the precise solution.
|Method|Accuracy|Recall|Precision|F1|ROC-AUC|Global Accuracy|
|-|-|-|-|-|-|-|
|Deepporblog |21.41|2.14|10.00|3.53|50.00|0.75|
|Deepstochlog|12.50|1.25|10.00|2.22|47.60|0.00|
|NeurASP|7.44|1.23|9.96|1.55|49.41|0.00|
|ABL|21.43|2.14|10.00|3.53|50.00|1.50|
|WSABL|21.47| 21.47|10.00|3.54|50.73|0.98|
|Hybrid|21.43|2.14|10.00|3.53|50.00|1.50|
|ABL-Zoopt|48.03|22.27|27.23|24.75|81.23|5.20|
|REFL|49.09| 27.29|30.93|29.91|78.30|10.90|
|PSP|97.38| 97.09|96.54| 96.77|99.92|89.80|
|VL|99.95|99.93|99.94|99.93|99.98|99.90| | Summary: In this work, the authors introduce a novel learning paradigm called Verification Learning (VL) to address the problem of learning without labels in neuro-symbolic models. VL corresponds to flipping the standard label-based reasoning [S,KB |= Y] into predicting a set of possible candidate solutions leveraging the knowledge base and the labels [KB,Y |= candidates(S)], and eventually remove the labels from the picture, to infer the set of candidates directly from the knowledge base [KB |= candidates(S)]. Since directly inferring the correct candidate assignment from the set of candidate symbols is computationally prohibitive (would require enumerating all the possible candidates, infeasible in practical scenarios) the author resorts to a generation-validation strategy. To be competitive, this verification approach exploits the assumption that the generated candidates can be sorted according to some scores (that matches the fitness of the candidates), allowing the exploration of the candidate space in an efficient manner using a heap structure. The authors show that this is possible not only when the score factorizes over the symbols (independent assumption, Th. 4.1) but also in the more relaxed constraint where the score satisfies monotonicity (Th. 4.2). The authors also introduce a distribution alignment with some prior to avoid shortcut solutions. Finally, the authors include a theoretical analysis on the upper bounds on accuracy that VL can achieve and experimental results on 4 tasks: addition, sort, match, chess.
## update after rebuttal
I acknowledge and appreciate the efforts made by the authors, and I have carefully reviewed their responses. The additional experimental results on the Road-R dataset, the ablation on the distribution alignment, the additional baselines introduced for other reviewers (Hybrid, ABL-zoopt, REFL, PSP), and the empirical quantification of the benefits of sorting algorithms increased my confidence in the value of this work. I also want to thank the authors for addressing my questions and some of the weaknesses highlighted in the initial review. Even if I agree with Reviewer W1W6 on the fact that the work still requires further improvement, I am raising my score in light of the new experimental results and explanations added during the rebuttal.
Claims And Evidence: The authors claim that VL can be a valid alternative to the standard label-based reasoning to learn symbols in neuro-symbolic models. This is supported by presenting (loose) theoretical upper bounds motivated by group theory and a limited set of empirical results on four different symbolic tasks (addition, sort, chess, and match). A second claim is that VL can reduce known issues of neuro-symbolic models, i.e., label leakage and shortcut learning, which comes by the definition of the methodology itself (labels are not used in the training process at all). The authors claim that the proposed sorting strategy gives an edge compared to a naive implementation that evaluates of all the possible symbol assignments. Clearly this is the case, but the advantage is not quantified experimentally. Also, while the argument about monotonicity as necessary condition to enable an efficient combinatorial sorting solution, it is not clear how often and for which score functions this condition would generally hold. Finally, the extent to which this methodology could be apply in real-world scenarios (for which collecting labels can be expensive) is not clear, as a complete verifier would be always needed to allow the generation-verification procedure. Adding an outlook on how this methodology could scale to more complex scenarios could help.
Methods And Evaluation Criteria: The proposed benchmark are valid choices to evaluate the claims, even if slightly simplistic. This does not represent a major issue, since it is in line with the datasets used to benchmark standard NeSy models. The proposed methodology is principled and, to the best of my knowledge, is original compared to previous works.
Theoretical Claims: The authors propose three main theoretical contributions in the paper:
- A proof that, when the symbols satisfy the independence assumption, the exploration of the candidates can be performed efficiently “exploding” a limited number of successors for each candidate and guiding the search using a heap structure [Th 4.1].
- A proof that independence is a sufficient but not necessary condition. On the other hand, the relax constraint of monotonicity of the scoring function is a more relaxed but necessary condition [Th 4.2].
- Theoretical bounds on which problem types are addressable by the VL framework using group theory [Section 6].
The first two are correct, I did not check carefully the third.
Experimental Designs Or Analyses: The methodology of the experiments is not sufficiently clear and could benefit from some more careful rewriting. For instance, it is not clear what the accuracies reported in the table represent, how exactly is the model trained, how the baseline models compare to the proposed VL framework in terms of size/supervision/etc. Moreover, some of the novelties introduced in this work are not really ablated in the experimental analysis. For instance, there is no quantification of the computational benefit of the efficient combinatorial sorting solution and no ablation of the distribution alignment technique introduced in Section 5 (which also comes out a bit of nowhere).
Supplementary Material: I reviewed the Appendix A, containing the experimental results for Match and Sort. I skimmed but not checked carefully Appendix B, containing the verification programs for the experimental tasks.
Relation To Broader Scientific Literature: This work is mainly related to works on label leakage (Chang 2020) and shortcut learning (Yang 2024, He 2024) as a potential solution to these problems (labels are completely removed from the learning process, hence no leakage or shortcut is possible)
Essential References Not Discussed: I am not aware of essential references related but not discussed in the manuscript.
Other Strengths And Weaknesses: A major strength of this paper is the originality of the idea, which is interesting and allowed by a series of non-trivial steps, e.g. the efficient combinatorial sorting to speed up the COP. However, the presentation of the material could be greatly improved. I believe the manuscript still needs to be ironed in many parts, the notation needs to be simplified in some sections, and less background knowledge must be given for granted in others. Some examples:
- The examples in the 3rd paragraph of the introduction are not well introduced and it is borderline impossible to understand them without referring to the original papers. More in general, I think that giving for granted a lot of background knowledge is an issue underlying many parts of the text, which undermines its accessibility and utility for the community in general.
- Sometimes the notation is heavy, overlapping, and confounding. For instance, take the paragraph in Line 206 left column: on its own the concept it’s rather straightforward, but the imprecise notation (S_{i+1}\in S_j, while it should be S_{i+1}\in Suc(S_j)) considerably complicates the understanding of it.
- It is not explained in the manuscript what are the test time corrections (TTC) mentioned in the results section.
- Tables are very poorly commented and could benefit from more explicative captions or more detailed descriptions in the main text. Right now, they are not self-contained, and one should go through the entire result section to understand what the different columns represent. The tables do not report which metric is used in the corresponding experiments.
Other Comments Or Suggestions: - In Line 45, page 1 you use a comma to “tie” together the two symbols, while in Line 51 you use a plus symbol (not clear if this is intentional or not).
- KL in Line 70 of page 2 left column pops out of nowhere.
- The references should be revised, as some are missing quite important fields (e.g. Yang et al. He et al., and van Krieken et al are all missing the year).
- The last sentence of the paragraph, line 126 left column, does not sound correct.
- Line 133, page 3 right column there is a comma instead of a full stop.
- Lines 174—195 inconsistent use of ; and .
Questions For Authors: - In the 1st example of section 6, you state that the error upper bound for Sudoku is 100%, because any permutation of numbers within a row, column or grid satisfies the constraints within them. However, these permutations would not satisfy the constraints of other entities (e.g., permuting within a row would still guarantee the satisfaction within itself, but would most likely result in breaking the constraints of other columns/grids). Why are these not taken in account?
- Where is the L (learning) in VL? What is effectively trained in your setup? This should be really clarified in the manuscript, which right now is missing a clear outline of the proposed system and training pipeline.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Vjyz:
Thank you for valuable comments.
### Regarding the Benefits of Sorting Algorithms
We supplement this comparison with the time consumption of the naïve sorting method. As the space grows exponentially, the time required by the naïve method increases tenfold, while DCS remains stable.
|Algorithm|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|
|VL|112|114|111|111 |111|109|106|112|116|
|Naive|114|127|173|213|239|394|573|641|1254|
### Regarding Monotonicity
When monotonicity is not satisfied, it can be proven that no algorithm with a complexity lower than exponential can find the optimal solution. Fortunately, the optimization objectives in current Nesy frameworks almost satisfy monotonicity.
1.Metrics that satisfy independence
An example is the Confidence Score, defined as:
$Confidence(S)=\prod_{i=1}^{m} confidence(x_i)_{s_i}$
2.Metrics that do not satisfy independence but still satisfy monotonicity.
$Consistency(S)=\sum_{i=1}^{m}\left[ f(x_i)=s_i\right]$
3.Combinations of Metrics
Any metric formed by adding, multiplying (with positive values), or concatenating also retains monotonicity. For example, the two-dimensional tuple: $(Consistency(S), Confidence(S))$ remains a monotonicity-preserving metric.
### Regarding Practical Applications
In applications, VL can be applied in various scenarios. In autonomous driving, a vehicle can automatically recognize pedestrians, traffic lights, and crosswalks in an image using only the traffic rules. In circuit fault analysis, VL can identify faulty components based solely on input circuit diagrams and electrical rules.
We further validated VL in real-world scenarios by conducting experiments on the Road-R autonomous driving dataset with driving rules.
|Method|4|5|6|7|8|9|
|-|-|-|-|-|-|-|
|Deepporblog|6.24|16.91|20.31|7.46|12.57|11.50
|Deepstochlog|5.98|19.12|MLE|MLE|MLE|MLE|
|NeurASP|27.30|22.34|6.12|TLE|TLE|MLE|
|ABL|86.58|58.01|44.93|28.15|19.50|5.18|
|WSABL|87.22|57.38|32.11|31.01|36.29|22.45|
|$VL_{\not\perp}$|92.06|86.54|86.53|82.16|82.90|73.36|
|$VL_{\not\perp}^{TTC}$|93.43|92.38|93.87|90.63|93.33|88.48|
|$VL_{\perp}$|92.06|87.84 |87.59|84.93|82.80|72.72|
|$VL_{\perp}^{TTC}$|93.43|92.40|92.93|90.86|91.90|82.24|
### Regarding Ablation Study on Alignment
Without distribution alignment, unsupervised learning can effectively cluster similar samples together but struggles to associate them with the correct symbols.
|Method|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|
|$VL_{\not\perp}^{noalign}$|46.25|100.00|34.5|29.6|27.00|25.25|21.83|24.13|21.42|
|$VL_{\not\perp}$|100.00|99.88|99.75|100.00|99.75|99.00|99.25|97.75|48.80|
|$VL_{\perp}^{noalign}$|46.60|32.68|29.58|29.60|27.00|35.08|24.13|28.43|31.43|
|$VL_{\perp}$|100.00|41.38|99.50|99.80|99.65|99.19|70.70|98.73|98.28|
### Regarding Experimental Designs or Analyses
The goal of this work is to train neural networks without any labels by utilizing verification functions. This enables the neural networks to accurately extract symbols from raw data. All accuracy values reported in this paper correspond to the accuracy of the neural network in symbol recognition. In Section 7, we introduce the general experimental setup. A LeNet backbone was employed in all cases, with training conducted for 10 epochs.
### Regarding Question 1
In this paper, all the permutations we describe are global permutations applied to the entire symbol system. The permutation we aim to express is holistic (in a 9x9 grid) and refers to symbol permutations on the entire Sudoku board. For example, by permuting all the 1s in the 9x9 grid to 2s, 2s to 3s, ..., and 9s to 1s, this global permutation will cause corresponding changes in each row, column, and 3x3 grid, but will not violate any of the constraints on any of them. For example, in a vision Sudoku task with 81 unlabeled images, a completely unsupervised approach would fail to prevent the images from being interpreted as permuted labels. Due to the possibility of all labels being permuted, the error bound caused by the Sudoku task itself can reach 100%.
### Regarding Question 2
In Nesy, machine learning models such as neural networks play a role in recognizing abstract symbols from raw data. Our work is no different in this regard, and our ultimate goal is to complete the neural network training process using only a verification function as the knowledge base.
The pipeline described in this paper can be summarized as follows:
1. A set of samples $X=[x_1,\dots,x_m]$ is input into the network f.
2. The neural network outputs probability predictions for this set of samples g(X) and label predictions f(X).
3. The probability predictions g(X) are then adjusted through distribution alignment to obtain $g_{align}(X)$.
4. A Score is chosen as an optimization goal with monotonicity.
5. The DCS is called to obtain the optimal result $Y^{val}$ validated by the verification function.
6. The result $Y^{val}$ is used as supervision information, and the loss is computed by comparing it with g(X). | Summary: The paper introduces verification learning, a neuro-symbolic paradigm to overcome reliance on labeled data by converting traditional symbolic reasoning into a label-free verification process. VL frames the learning task as a constraint optimization problem and leverages a dynamic Combinatorial sorting algorithm to efficiently find optimal solutions. A distribution alignment method is also introduced to mitigate shortcut issues common in unsupervised settings. Experiments demonstrate the efficacy of VL across various tasks, including addition, sorting, matching, and chess.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: This work provides practical and theoretical foundations for unsupervised learning without labelled data.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The proposed verification learning paradigm is innovative. It clearly addresses critical bottlenecks in existing neuro-symbolic frameworks by eliminating the need for labeled data.
2. The paper provides theoretical analyses that clarify task solvability conditions and generalization bounds.
3. Experiments across diverse tasks demonstrate performance improvements and data efficiency.
Weaknesses:
1. The assumption of monotonicity, although relaxed compared to independence, still limits the generalizability of the proposed DCS algorithm to more complex, non-monotonic scenarios.
2. The theoretical framework, while insightful, may be overly idealized, and the translation of these theoretical insights to complex real-world scenarios may not be straightforward.
3. The complexity of the distribution alignment strategy may not be trivial in tasks with highly imbalanced or unknown natural symbol distributions.
4. In general, this method requires substantial domain knowledge, which may be impractical in many real-world scenarios.
Other Comments Or Suggestions: 1. Clarify explicitly the conditions under which monotonicity fails. Provide a simple example scenario.
2. The readability of certain theoretical sections (especially Sections 4.1 and 4.2) could be improved by using concrete examples to illustrate key concepts clearly.
Questions For Authors: 1. Can the DCS algorithm handle tasks with high-dimensional symbol spaces or extensive solution spaces without significant computational overhead?
2. In practice, how sensitive is VL to incorrect or noisy verification functions, and how might such imperfections affect learning performance?
3. Could distribution alignment be adapted or automated for scenarios where the natural distribution is not unknown?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer 3PLJ:
Thank you for valuable comments.
## Weakness 1, Suggestions 1 & 2
First, we need to clarify that in Nesy, for more complex scenarios where monotonicity is not satisfied, **no algorithm can find the global optimal solution in sub-exponential time. Monotonicity is both a necessary and sufficient condition for solving it**.
In a Nesy problem, where there are **m symbols**, each with **n possible values**, the complexity of the solution space is **$O(n^m)$**. The goal is to find the optimal solution among them.
- If monotonicity holds, the DCS algorithm can solve the problem efficiently with extremely low complexity.
- If monotonicity does not hold, the only way to guarantee finding the optimal solution is by exhaustively searching the entire solution space, because even if **$n^m-1$** solutions have already been evaluated, there is no way to determine whether the last remaining solution is optimal.
Example 1: Independence Holds
For m = 2, n = 2 suppose:
score=[[0.4,0.6],[0.7,0.3]]
Score(s₁=v₁,s₂=v₁)=0.28
Score(s₁=v₁,s₂=v₂)=0.12
Score(s₁=v₂,s₂=v₁)=0.42
Score(s₁=v₂,s₂=v₂)=0.18
Example 2: Monotonicity Holds
Score(s₁=v₁,s₂=v₁)=0.25
Score(s₁=v₁,s₂=v₂)=0.18
Score(s₁=v₂,s₂=v₁)=0.34
Score(s₁=v₂,s₂=v₂)=0.23
monotonicity is satisfied:
Score(s₁=v₁,s₂=x)<Score(s₁=v₂,s₂=x)
Score(s₁=x,s₂=v₂)<Score(s₁=x,s₂=v₁)
Example 3: Monotonicity Does Not Hold
Score(s₁ = v₁,s₂= v₁)=0.23
Score(s₁ = v₁,s₂ = v₂)=0.25
Score(s₁ = v₂,s₂ = v₁)=0.34
Score(s₁ = v₂,s₂ = v₂)=0.18
Here, fixing one symbol’s value leads to inconsistent ranking of the other symbol’s scores, meaning monotonicity is violated. In this case, there is no alternative to brute-force enumeration to guarantee finding the optimal solution.
## Weakness 2
Our theory is actually applicable. It demonstrates that in many real-world scenarios—such as **traffic control, industrial anomaly detection, and circuit analysis**—it is entirely feasible to replace all labels with rules for learning.
Our theory also highlights that the key to replacing labels with rules lies in ensuring that different symbols within the rule system possess irreplaceability. For instance, in autonomous driving tasks, the rule **"Red and green lights cannot be on simultaneously"** cannot distinguish between them. However, the rule **"Stop at red, go at green"** effectively differentiates them. This insight is valuable for **selecting meaningful rules** when constructing a knowledge base.
## Weakness 3 & Question 3
The primary role of distribution alignment is to initialize the neural network’s predictive distribution, preventing it from falling into trivial solutions.
According to Bayes' theorem, the posterior probability learned by the model can inherently correct biased priors. When the true natural distribution is unknown, a uniform distribution can be used as the prior to ensure a sufficiently diverse initialization. This allows the model to fully utilize all available knowledge and gradually converge to the true distribution.
Below is a comparison between using a uniform prior for alignment and no alignment on an imbalanced addition dataset:
|Method|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|
|No prior|46.60|32.68|29.58|29.60|27.00|35.08|24.13|28.43|31.43|
|Inexact prior|100.00|100.00|99.95|100.00|58.50|99.90|59.23|49.43|43.45|
## Weakness 4
Our algorithm is practical as it only requires writing a verification function, significantly simplifying the complexity of knowledge compared to others.
Furthermore, when dealing with a large amount of unlabeled data, writing a verification program is more cost-effective than manually labeling. VL can achieve 92.02% based on the autonomous driving rule set Road-R.
## Question 1
As the size of the symbol space increases, VL exhibits a low growth in runtime. In Table 2, we compare the runtime as the number of symbol categories increases from 2 to 10, where VL's runtime only increases from 112.04 to 116.26, whereas the runtimes of DeepProbLog increase by a factor of 20.
This efficiency is due to our sorting algorithm, which ensures that the first solution passing verification is the optimal one. The time complexity is O(K(logK+mlogm+nlogn)), where the constant K represents the position of the first verified solution.
## Question 2
We conducted experiments to validate the robustness. In our tests, the verification function returns the correct validation result with probability p, and we gradually decrease p from 100% down to 50%(completely random).
Our findings indicate that the algorithm's performance only experiences a significant decline when predictions approach complete randomness. As long as the probability of returning correct results is higher than that of returning incorrect results, VL can largely mitigate the impact of errors, demonstrating a strong fault tolerance.
|p|100|95|90|85|80|75|70|65|60|55|50
|-|-|-|-|-|-|-|-|-|-|-|-|
|Acc|98.28|97.58|97.24|94.19|95.02|94.72|93.99|93.55|92.58|83.34|53.58| | Summary: This paper presents a new framework in unsupervised NeSy, excelling in theory and experiments. And the new framework shows excellent performance across diverse tasks. However, there are several weaknesses that need to be addressed to enhance the quality of the paper. The authors should provide further explanations for the experiment and refine the language clarity.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: N/A
Relation To Broader Scientific Literature: enhance the trustworthy of model
Essential References Not Discussed: Yes
Other Strengths And Weaknesses: weakness:
1. The language is occasionally unclear, e.g., "KL paradigm" in the introduction should likely be "VL paradigm" . Additionally, some equations are lack sufficient explanation.
2. Table 1 shows significant performance fluctuations in the addition task (e.g., $V L_L^{TTC}$ drops to 51.40% at base 10), but no explanation is provided.
3. Although the experiments compare against DeepProblog, NeurASP, etc., the authors don't sufficiently discuss the limitations of these methods in unsupervised settings and why VL significantly outperforms them in specific tasks (e.g., 100% accuracy in addition), lacking in-depth qualitative analysis.
Strength:
1. The VL paradigm, by replacing reasoning with verification, eliminates the reliance on labels, filling a gap in unsupervised NeSy research.
2. The DCS algorithm reduces the complexity of COP from exponential to near-CSP levels, significantly improving computational efficiency. Time consumption data in Table 2 further validates its superiority.
3. The paper validates VL across four diverse tasks (addition, sorting, matching, chess), spanning simple arithmetic to complex rule-based scenarios. The results show excellent performance across tasks.
Other Comments Or Suggestions: No
Questions For Authors: See the strengths and weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer NuWZ:
Thank you for valuable comments.
**Regarding Weakness 1:**
About the language, you are absolutely correct. "KL" should be corrected to "VL"; this was a typographical error. Thank you for pointing it out.
About the formulas, we have provided more detailed explanations for all of them. If anything remains unclear, please feel free to point it out:
- **Equations (1) and (2)** summarize the training methods of previous label-based neuro-symbolic learning algorithms. Equation (1) generalizes the training method of algorithms like DeepProbLog, which utilize all candidate solutions for training. Equation (2) generalizes the training method of algorithms like ABL, which select the highest-scoring candidate solution from the search results for training.
- **Equation (3)** describes the process of aligning the predicted probabilities from the neural network with the prior distribution of symbols, which helps prevent the network's predictions from collapsing into trivial solutions. For example, the addition rule is still satisfied even when the neural network predicts all symbols as 0.
- **Equations (4) and (5)** describe the upper bound and mean error caused by the task itself in unsupervised neuro-symbolic systems. Equation (4) states that the error upper bound is equal to the sum of the distributions of all non-fixed-point symbols in the symbolic system, where a fixed point refers to symbols whose functions cannot be replaced in the system. Equation (5) states that the mean error is equal to the sum of the ratios between the distribution of each symbol and the number of its orbits, where the number of orbits represents how many other symbols can replace the function of a given symbol.
- **Equation (6)** describes the empirical error caused by training a neural network in an unsupervised manner. Since the training error in unsupervised learning should be determined by its clustering error, the minimum error under all label assignments should be considered.
- **Equation (7)** describes the upper-bound generalization error of an unsupervised neuro-symbolic system. This error is jointly determined by the training error, the complexity of the model space, and the task-specific error upper bound.
**Regarding Weakness 2:**
The observed performance variation is due to the early termination of the neural network training process before full convergence. To ensure an objective comparison, we maintained consistent hyperparameters across all methods and experimental settings. Specifically, we fixed the number of training epochs for all algorithms to 10 across different bases.
In the particular setting you pointed out, while the accuracy was 51.40% at the 10th epoch, it exceeded 90% by the 13th epoch:
|Epoch|10|11|12|13|14|15|
|-|-|-|-|-|-|-|
|$VL_{\not\perp}$|48.80|57.05|67.88|90.85|92.83|94.28|
|$VL_{\not\perp}^{TTC} $ |51.40|63.05|78.13|97.38|98.85|98.95|
This indicates that the algorithm did not fail; rather, due to the expanded symbolic space, the neural network requires more training. We did not adopt a higher number of epochs because this would cause DeepProbLog and NeurASP to exceed the time limit (300 hours).
**Regarding Weakness 3:**
In Section 3 of the original paper, we analyzed why methods such as DeepProbLog and NeurASP fail when reasoning without label Y, and why the VL paradigm remains effective even without relying on Y. Here, we further expand this analysis by incorporating specific tasks.
The fundamental reason why DeepProbLog, NeurASP and etc. fail in unsupervised tasks is that they lack the ability to precisely identify the correct symbolic label S from a vast set of candidate symbolic labels candidates(S).
- In supervised tasks, |candidates(S)| is relatively small, making this issue less severe.
- In unsupervised tasks, |candidates(S)| is significantly larger, amplifying the problem and ultimately leading to algorithm failure.
For example, in an **addition task**:
- With supervision, the KB consists of addition rules. Given an equation like **Image1+Image2=Y**, when Y=0, we get candidates(S) = \{(0,0)\}; when Y=1, candidates(S) = \{(0,1), (1,0)\}, and so on. The average of |candidates(S)| is only **4.91**, indicating a relatively low level of imprecision.
- Without supervision, where no Y is provided, given **Image1+Image2=Image3 Image4**, the KB alone must infer: candidates(S)=\{(0,0,0,0),...,(9,9,1,8)\}. Here, |candidates(S)| reaches **100**, making it infeasible for DeepProbLog and NeurASP to identify the correct symbolic label S efficiently. The overwhelming number of imprecise labels leads to training failure.
In contrast, VL has a significant advantage in efficiently identifying the precise symbolic label S from candidates(S). VL excels even when |candidates(S)| is large because it leverages the proposed combinatorial ranking algorithm, which efficiently and precisely selects the optimal symbolic label S that adheres to the rules. | null | null | null | null | null | null |
A Physics-preserved Transfer Learning Method for Differential Equations | Reject | Summary: The paper introduces the problem of domain shifts when learning neural operators, particularly when the model needs to be transformed to predict similar PDEs possibly with different input distributions. To improve upon the existing transfer learning approaches, the paper uses optimal tensor transport methods, with added loss term to incorporate a
Claims And Evidence: I am not convinced by the experimental results. See the Experiments section below.
Methods And Evaluation Criteria: The benchmarks seem to include a good selection of PDEs that would already be standard in the area. The different values of $\nu$ of the PDEs seem to be an important case where the re-training of operators may be necessary, and may be worth highlighting further. However, I think they can seem a bit like toy examples, and therefore could benefit from more realistic data scenarios or even with real experimental data if those can be obtained.
Theoretical Claims: I skimmed through the proofs and see no immediate issues.
Experimental Designs Or Analyses: Major issue with experimental results -- none of the results report any standard deviation scores. This would make the results more convincing especially that some loss terms are quite close, by showing the statistical significance.
Furthermore, the domain shifts chosen for the results can seem a bit cherry-picked -- since there are only 3 dataset, there would be 6 domain shifts available, which should not be much more experiments that need to be run. This would provide more transparent results, and give better interpretation to the Average scores.
Results regarding training time would also be good since they can demonstrate how computationally effieicnt the methods are.
Supplementary Material: I have skimmed through them and see no immediate issues.
Relation To Broader Scientific Literature: I believe that improving the efficiency in learning deep operators is of great scientific importance since it allows amortisation of the simulation of data. However, I think the link as to why transfer learning is the way to do so could be made more clear. See Questions section below.
Essential References Not Discussed: I’m not fully convinced that operator learning are inherently data-driven, since physics-informed losses can be added as done in PINNs [1,2]. Because of that, I would also be interested to see whether these methods could outperform those models as well, where target data and a physics-informed loss are used for training.
[1] Physics-Informed Neural Operator for Learning Partial Differential Equations, https://arxiv.org/pdf/2111.03794
[2] Physics-Informed Deep Neural Operator Networks, https://arxiv.org/pdf/2207.05748
Other Strengths And Weaknesses: Strengths: I do think it does provide an interesting perspective on incorporating domain knowledge into domain adaptation works, which potentially has use cases outside of learning physics-obeying neural operators. So even if not necessarily practical in physics domains, I think it can still be a valuable work nonetheless.
Weaknesses: discussed in the paper, but mainly (1) use in the physics-related domains, and (2) aspects of the experimental results.
Other Comments Or Suggestions: Since this seems to be targeted as a more in the science domain (as mentioned in the primary area), it can probably be written so that it is more friendly towards people unfamiliar with transfer learning. Particularly, it can be further highlighted which details are the more "standard" techniques from transfer learning, and which are the novel area that arises due to the physics-informed portions, and what they may look like wrt actual PDE equations that the practitioner may want to incorporate (I think these are in the paper however can still be made clearer).
Questions For Authors: I haven't really phrased them as questions but these two points are my concerns related to the motivation of the work:
1. Why is transfer learning really necessary, since all examples seem to focus on known PDEs, so in these cases physics informed loss maybe used, and shifting to a new domain may not even be necessary (mentioned this again in the References Not Discussed section).
2. Furthermore, since the tests seem to focus on simulated data (which most likely can be adjusted to fit new conditions for example), it may be difficult to imagine a scenario where data in the target domain cannot already be generated, especially if the PDE is fully known even in the transferred domain. It may be more convincing to show a more concrete example where this occurs, or maybe through some real dataset where it may be hard to find data in one domain than in another.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and provide professional and constructive reviews. We are encouraged by the positive comments like "a valuable work" of our work. All suggestions will be carefully incorporated into final version.
Q1. Operator learning and problem settings.
(1) Problem explanation and physics-informed loss for operator learning
Perhaps our experimental setup could be misinterpreted as focusing on learning solution functions for known equations. In fact, as highlighted in the end of the introduction in the paper, our method focuses on the transfer learning of neural operator models that learns function-to-function mappings without requiring explicit equations forms. Unlike analytical models like PINNs, the data-driven operator learning is more suitable for real-world problems where equations are approximations of the problems. For instance, several kinds of differential equations can be used to characterize the weather forecasting problem, but none of them is proved to be the true inherent physical relations. In this case, the physics-informed loss is infeasible, while the data-driven property allows operator learning to be more widely applicable in such scenarios.
(2) Necessity of transfer learning.
Despite its advantages, operator learning assumes identical training and testing environments. If testing data comes from a different distribution, (e.g., cross-region weather forecasting), the trained operator model performance may degrade significantly. In such cases, transfer learning is essential for improving model accuracy in cross-domain deployment. Again, as the exact physical relations within problems are usually unknown, the physics-informed losses is infeasible. In contrast, the physical constraints in Eq. (10) is more flexible and practical for broader applications.
(3) Real-world examples.
As mentioned above, this problem setting indeed has broader applications scenarios. In weather forecasting, for instance, differential equations are used to approximate the weather systems, and operator networks are trained on collected data to fit the systems. Generally, more data is available in developed regions while less data is available in less developed regions. As geographical conditions vary across regions, directly deploying models across regions leads to performance degradation. Thus, introducing transfer learning into cross-domain model deployment provides critical value to these areas.
While industrial data would better validate our method, these data is often difficult to obtain. The standard practice in the community is to use data generated from classic equations for simulations. Models performing well on simulated data are then further developed on real-world data. As our work is one of the pioneer works in combining differential equations problems with general transfer learning, our validation on simulated data implies the potential real-world applicability. And we plan to further explore real-world applications in future work.
Q2. Experimental settings and results.
Due to space constraints, here we take the Darcy flow as example for explanation.
(1) In fact, the results we reported in the paper is the average results of three times repeated experiments. We didn’t report the standard deviation (std) in the paper due to the space limitations of the tables. Now we provide the std of finetuning (FT) and POTT in Darcy flow for example:
Tasks | $D_2 \to D_1$ | $D_1 \to D_3$ | $D_2 \to D_3$
$N_t$ | 50 | 100 | 50 | 100 | 50 | 100
FT | 0.1426$\pm 0.003$ | 0.0869$\pm 0.001$ | 0.1556$\pm 0.012$ | 0.1605$\pm 0.008$ | 0.4693$\pm 0.034$ | 0.3553$\pm 0.026$
POTT | 0.1362$\pm 0.002$ | 0.0762$\pm 0.002$ | 0. 1397$\pm 0.009$ | 0. 1404$\pm 0.006$ | 0.3527$\pm 0.025$ | 0.2271$\pm 0.019$
As can be seen, the std of FT and POTT is not significant, and POTT still outperforms FT with std taken into consideration.
(2) As for the concerns about sub-domain selection, as stated at the beginning of Section 5.3, "We evaluate these transfer learning methods in some challenging tasks that require more proactive transfer." For the transfer tasks with minor distribution shift, FT is sufficient to perform well and additional transfer learning methods are unnecessary. And that’s why we only conduct experiments in 3 tasks in each datasets, as you pointed out. We run the remaining experiments in Darcy flow for further explanation:
Tasks | $D_1 \to D_2$ | $D_3 \to D_1$ | $D_3 \to D_2$
$N_t$ | 50 | 100 | 50 | 100 | 50 | 100
FT | 0.0515 | 0.0429 | 0.0492 | 0.0469 | 0.0669 | 0.0598 |
POTT | 0.0496 | 0.0409 | 0.0489 | 0.0445 | 0.0649 | 0.0525 |
It’s obvious that the distribution shifts are minor in these tasks, the performance of FT is good enough and additional transfer learning methods are unnecessary. But noted that the POTT method still outperforms FT to some degree. | Summary: The paper proposes a transfer learning method for differential equations (DEs) that preserves physics consistency while adapting models to new domains. By decomposing domain shifts into distribution bias and operator bias, the author introduces Physics-preserved Optimal Tensor Transport (POTT), which learns to map the source and target domains under the physical constraints. This method addresses the challenge of poor generalization in traditional data-driven models when applied to out-of-distribution data. The effectiveness of POTT is demonstrated through experiments on both 1D and 2D DEs, showing its ability to improve predictive performance under domain shifts.
Claims And Evidence: The paper claims that POTT can achieve higher performance in domain-transfer tasks compared to the existing fine-tuning based and domain-adaptation methods. This claim is supported by Table 2, which presents numerical results demonstrating that POTT consistently achieves lower error compared to baseline methods. Additionally, the magnitude of the error across different tested scenarios reinforces the author’s assertion that the generalization ability of the proposed method is related to the scenario difficulties.
Methods And Evaluation Criteria: The author evaluates POTT on three DEs in 1D and 2D scenarios. Each DE is generated using three different distributions, and the results are presented for all possible domain shift scenarios. The evaluation metric used is relative Mean Square Error (rMSE), which is a standard measure in numerical experiments for assessing prediction accuracy in DE solving tasks. While the evaluation is limited to synthetic data, it is suggested to incorporate real-world datasets to better assess the robustness of the proposed domain shift method, such as fluid dynamic or climate data.
Theoretical Claims: Section 4 describes the theoretical formulation of POTT as an optimal transport problem and introduces approximations that ensure the learned mapping transfers the input distribution to the target domain while preserving physical properties.
Experimental Designs Or Analyses: The author designs the experiments to compare POTT against six existing methods, including fine-tuning approaches and domain-adaptation techniques. While POTT is used for data-driven models, since it is specifically utilized to ensure physics preservation, some physics-informed models should be utilized as baselines to demonstrate the method's effectiveness, such as [1] and [2].
[1] Raissi, Maziar, Paris Perdikaris, and George Em Karniadakis. "Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations." arXiv preprint arXiv:1711.10561 (2017).
[2] Li, Zongyi, et al. "Physics-informed neural operator for learning partial differential equations." ACM/JMS Journal of Data Science 1.3 (2024): 1-27.
Supplementary Material: The supplementary material provides essential details on each DE's domain and parameters, clarifying the 'difficulty level' of the domain transfer tasks. It also specifies the backbone models used for implementation, which makes the paper more reproducible.
Relation To Broader Scientific Literature: The paper discusses transfer learning and domain adaptation approaches for scientific machine learning models. Some relevant fields are already discussed in the Related Works section, including data-driven approaches in DE solving, transfer learning, and optimal transport.
Essential References Not Discussed: The author discusses the majority of the essential references.
Other Strengths And Weaknesses: Clarity strength: The paper has a clear flow and is easy to follow.
Other Comments Or Suggestions: It is suggested to include ablation studies on different backbone model architectures for each task to assess the generalizability of POTT. While the paper demonstrates POTT’s effectiveness using a specific choice of backbone models, evaluating its performance across multiple architectures would provide stronger evidence of its robustness and adaptability to various neural PDE solvers.
Questions For Authors: Question 1: How does the performance of fine-tuning and target-only methods compare to POTT as the amount of target data increases? Specifically, does POTT continue to provide advantages when more target data becomes available, or do fine-tuning and target-only methods eventually close the performance gap?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and provide professional and constructive reviews. We are encouraged by the positive comments on clarity strength of our paper. All suggestions will be carefully incorporated into final version.
Q1. Baselines and backbone
(1) PINNs, PINO and similar model aim are the learning model in differential equations (DEs) problems. In the discussion on transfer learning in DEs problems, they should be regarded as backbone models like DeepONet and FNO rather than baseline of transfer learning method. In fact, the PINO model mentioned by the reviewer is exactly an enhanced model of the FNO.
What’s more, PINNs and PINO are analytical models requiring explicit equation expressions of the problems, which are often impractical in real-world applications. In fact, in most problems the equations are only the approximations and no exact forms of equations are available to train the PINNs or PINO models. For instance, several kinds of differential equations can be used to characterize the weather forecasting problem, but none of them is proved to be the true inherent physical relations. In this case, the physics-informed loss is infeasible, while the data-driven property allows operator learning model like DeepONet and FNO to be more widely applicable in such scenarios. As our work focuses on developing a general method, we didn’t take the analytical models like PINNs and PINO as backbones in our experiments.
(2) To assess the generalizability of POTT, we applied it on both DeepONet and FNO in different tasks, as mentioned in the appendix C.2. Such experiment settings is not a cherry-picking strategy that choose a specific backbone for the specific tasks but instead demonstrates the generalizability of POTT. However, as suggested by the reviewer, we plan to include additional ablation studies in the final version to further emphasize its generalizability .
Q2. Relationship between performance comparison and the amount of target data
Our experimental design systematically evaluates how target sample amounts influences model accuracy. Here’s a concise summary:
(1) When very few target samples are available (maybe only less than 5 target samples), the transfer learning is extremely difficult.
(2) When the amount of target samples are limited but not extreme (as in our experiment settings), POTT outperform finetune and existing DA methods, and significantly improve the model performance in target domain.
(3) When the target samples are abundant as in source domain, even the target-only supervised training is sufficient, then the transfer learning methods are unnecessary.
Q3. Real-world datasets
Thank you for this important suggestion.
(1) This problem setting indeed has broad applications in various problem scenarios. In weather forecasting, for instance, differential equations are used to approximate the weather systems, and operator networks are trained on collected data to fit these systems. Generally, more data is available in developed regions while less data is available in less developed regions. As geographical conditions vary across regions, directly deploying models across regions can reduce their performance. Thus, introducing transfer learning into cross-domain model deployment provides critical value to these areas.
(2) While industrial data would better validate operator models, these data is often difficult to obtain. The standard practice in the community is to use data generated from classic equations for simulations. Models performing well on simulated data are then further developed on real-world industrial data. As our work is one of the pioneer works in combining differential equations problems with general transfer learning, our validation on simulated data aligns with conventions and suggests potential real-world applicability of proposed method. And we plan to further explore the applications of POTT in real-world industrial data in future work as suggested. | Summary: The authors propose a method to tackle transfer learning in solving differential equations. Current data-driven methods for solving differential equations suffer when training and testing environments differ, or from insufficient data. While transfer learning (TL) has been used previously to adapt models trained on one domain for another domain, the authors identify two types of bias in current TL strategies, which hinders their generalization capability. The authors propose Physics-preserved Optimal Tensor Transport (POTT), which can simultaneously correct the two types of biases in TL, and improve performance even when the target domain dataset is small. Experimental results are provided to show superior performance of the proposed method, and a theoretical analysis is also presented.
Claims And Evidence: The authors claim that their proposed transfer learning method is more general compared to the baselines, and present visualizations in several figures (1,3,4). However, the figures need major rethinking, as currently it is very hard to get the message clearly from these figures. I suggest the authors put arrows/markings inside the figure, so that readers know where to look and what to look for in these figures. A common way to do this is showing ground truth and predictions side by side so that readers clearly see the match/mismatch. The authors can think about doing something similar, otherwise it is hard to see for example in Figure 3, which method is the proposed one, which are the baselines, and why one is better than the other.
Methods And Evaluation Criteria: The methodology and problem formulation of the paper at its current state need some major updates. Here are my suggestions:
1. The problem of neural operator learning (first line of the abstract) needs more description as the whole problem formulation depends on this, perhaps discuss how this is different from function learning with examples as differentiation/integration
2. The problem of operator bias is not intuitively/clearly explained at the beginning, while data shift is known to people working outside of transfer learning. Perhaps the authors can provide a more detailed description at the beginning in the introduction (this is provided later)
3. The meaning of physics-preserved needs a bit more detail. What does it mean to characterize a target domain with physics preservation, aren’t physics laws already preserved in all domains?
4. Problem formulation at present is hard to follow in the middle, during the discussion around Equation 3. Perhaps the authors can add an example of what P_s, D_s looks like in real world scenarios.
5. What is meant by pushforward distribution? This is mentioned several times before details are provided.
6. Where is Figure 3.2 as cited in section 3.2?
7. What is meant by features being “confused”?
8. Minor typo: “visable” in Motivation of POTT, perhaps the authors meant “feasible”?
Theoretical Claims: I did not check the theoretical claims.
Experimental Designs Or Analyses: The authors present detailed experimental results, I haven’t checked them thoroughly, however from a glance they look extensive.
Supplementary Material: No
Relation To Broader Scientific Literature: Transfer learning is a topic of general interest in the machine learning community. Solving differential equations is a topic of great interest in the physical science community. The authors identify 2 key issues with current transfer learning strategies, and offer a solution in the context of solving differential equations. Therefore the current work has broad implications in both domains.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and provide professional and constructive reviews. We are encouraged by the positive comments on the broad implications of our work in physical science and domain adaptation community. All suggestions will be carefully incorporated into final version.
Q1. Figures explanation
We sincerely thank the reviewer for this suggestion. In Figure 3, we did arrange the figures in a "ground truth and predictions side by side" format. As described in the caption of Figure 3: Columns 1 and 2 show the input and output function pairs from the target domain. Columns 3, 4, and 5 display the output functions predicted by POTT, finetuning, and COD, respectively. Columns 6, 7, and 8 show the prediction errors relative to the ground truth. We also added borders in Column 7 to emphasize the source domain function properties in finetuning. To further enhance readability and clarity, we will incorporate clearer arrows/markings in the final version as suggested.
Q2. More description on problem formulation
We thank the reviewer for the suggestions and summarize them as a more detailed explanation on problem formulation:
(1) Operator Learning: Unlike function learning (e.g., PINNs), which learns a specific solution function from equation (e.g., u: x → u(x)), operator learning aims to learn a function-to-function mapping (e.g., G : k(x) → u(x)). This is critical for generalization. For example, in Darcy flow, while function learning models cannot adapt to problems with new permeability coefficients k(x), operator learning models can predict u(x) for varying k(x). Notably, when k(x) is the identity mapping, operator learning reduces to function learning.
(2) Operator Bias: Despite its advantages, operator learning assumes identical training and testing environments. If testing data comes from a different distribution (e.g., varying types of k(x)), the intrinsic physics also differs. Then the trained operator model performance may degrade significantly, which is described as the operator bias in Eq. (3).
(3) Examples: As discussed in Sec. 5.1, to simulate distribution and operator bias, we sampled k(x) from three distinct function distributions $P_1$, $P_2$, $P_3$ to form three subdomains $D_1$, $D_2$, $D_3$ and the corresponding transfer tasks for each datasets. As shown in Tab. 1, functions from different subdomains exhibit clear differences due to their distinct distributions. The examples are thus provided in Tab. 1.
Q3. Aren't physics laws already preserved in all domains? More description on physics-preserved.
Yes, the physical laws are already in target domain. But they are hard to learn with limited target data. While the physical laws are determined by the problems and subdomains properties, in cross-domain model transfer, we often have limited target-domain samples, which are insufficient for the model to learn the underlying physical laws of the target domain. POTT addresses this by leveraging both source and target samples to characterize the target distribution in a physics-preserved way, and thus enabling the model to learn the embedded physical laws in target domain.
Q4. Noun explanation
(1) The pushforward distribution is a concept in probability theory, referring to the distribution obtained by transforming another distribution through a mapping. For example, g_\# $P_{s}$ denotes the distribution obtained by applying the transformation g to $P_s$. Formally:
g_\# $P_{s}(A) = P_s(g^{-1}(A)) $
for all Borel set $A$ in the output space of g.
(2) We describe the features in the 3rd column of Figure 1 as "confused" to indicate that the aligned feature space lacks clear structure. As shown in the figure, these feature maps do not convey explicit physical meaning, and it is unclear whether they retain the correct physical information. Thus, we refer to them as "confused."
Q5. Typo
Thank you for your correction. Figure 3.2 should be figure 1. And the "visable" in line 218 should be "feasible" indeed.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their explanations, these should be included in the manuscript so that readers can easily understand. I have updated my score, I wish you good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your reply and your constructive reviews. We will carefully incorporate all your suggestions into the manuscript. | Summary: This paper proposes POTT (Physics-preserved Optimal Tensor Transport), a transfer learning method for differential equations. Instead of preserving PDE structures directly, POTT ensures that the operator relationship u=G(k) is maintained during adaptation via optimal transport (OT) with a physics-regularized term. The method is tested on Burgers’, Advection, and Darcy Flow equations, showing improved generalization over fine-tuning and domain adaptation methods.
Claims And Evidence: The transfer learning tasks and methods are also interesting, but the name of the method ‘physics-preserved’ may be a little misleading. It seems as if it preserves the structure of physical laws and equations, but the reality is that it is regularised so that the solution operator u=G(k) pre-trained with the source domain data is preserved.
Methods And Evaluation Criteria: Although you are using OT for finite-dimensional probability distributions, shouldn't you actually be formulating it using OT on an infinite-dimensional function space [R1]?
[R1] Minh, H.Q. Infinite-dimensional distances and divergences between positive definite operators, Gaussian measures, and Gaussian processes. Info. Geo. (2024).
Theoretical Claims: No problem.
Experimental Designs Or Analyses: Please add a discussion about calculation time.
Supplementary Material: I read the supplementary material.
Relation To Broader Scientific Literature: The domain shift problem in operator learning is very important and interesting. Operator learning is a promising approach to accelerate physical simulations.
Essential References Not Discussed: The authors may need to add a discussion of infinite-dimensional OT.
Other Strengths And Weaknesses: Strengths
- The introduction of physics-regularized OT for operator transfer is an interesting extension of standard domain adaptation techniques.
- Many ML-based PDE solvers struggle with generalization in the domain shift settings. This paper addresses the significant problem.
- The proposed method is reasonable.
Weaknesses
- The learning procedure is a little difficult to understand. It would be good to write the steps and procedures for learning G^s, T_\theta, and G^t in the form of an algorithm.
- Computational cost and scalability not discussed. OT problem is generally costly.
- The name of the method ‘physics-preserved’ may be a little misleading. The paper claims to preserve "physics," but POTT does not enforce PDE constraints explicitly.
Other Comments Or Suggestions: N/A
Questions For Authors: - Can you explain the relationship with infinite-dimensional OT?
- Can you explain the learning procedure in the form of an algorithm?
- Are you conducting experiments with settings where the sub-domains overlap to some extent? I thought that the accuracy would drop if the sub-domains were too far apart.
- Can you tell us how the accuracy changes with the number of samples in the target domain?
- I think it would be good if you could mention important applications for this problem setting. For example, it would be effective for experiments on systems with low energy, which are easy to conduct and for which a lot of data can be collected, but for which there are only a few data points for systems with high energy.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and provide professional and constructive reviews. We are encouraged by the positive comments on the significance of our work in physical science and domain adaptation community. All suggestions will be carefully incorporated into final version.
Q1. The name of 'physics-preserved'.
We term our method as "physics-preserved" OT to emphasize its core mechanism and the physical constraints is not limited to $G_s$.
(1) As discussed in Sec. 4.2, we aim to characterize the target distribution and its underlying physical structure by a mapping. If the distributions match perfectly, standard OT can yield valid solutions. However, when distributions do not align completely (common scenario in practice), the pushforward distribution serves as an approximation of the target. The POTT is introduced to guide this approximation process in a physically meaningful way. This is why we name our method "physics-preserved" OT.
(2) The physical constraints in Eq. (10) are inherently problem-dependent. As our work focuses on developing a general method, we use $G_s$ as the basic constraint, which is universally accessible in most tasks. But in applications with additional prior knowledge (e.g., potential structures of the operator), more precise constraints can be designed.
Q2. Relationship between proposed method and infinite-dimensional OT.
In fact, the definitions and derivations of OT and its dual formulation are discussed in general separable metric spaces, which inherently includes infinite-dimensional function spaces. Thus, the definition of POTT is mathematically well-defined and not limited to finite-dimensional spaces. While the mentioned "OT on infinite-dimensional function space H" (denoted as OT-inf) explicitly instantiates the function space, it does not conflict with our definition.
The key distinction lies in the practical implementation. In standard OT, the function variables (k, u) are directly discretized into tensors via sampling, leading to the empirical estimation form in Eq. (13). In contrast, the empirical estimation of OT-inf relies on properties of the specific function space H, such as the kernel trick. While OT-inf is more precise in definition, its computation requires restricting H to spaces like RKHS for tractability, which may limit the applicability of POTT.
Q3. Explanation of learning procedure and computational cost.
Algorithm: Transferring $G_s$ to $G_t$
>1. Input: source data $D_s$, pretrained model $G_s$, target data $D_t$ , batchsize B
>2. Initialize $T_\theta$, $f_\phi$
>3. For $N_{11}$ steps:
>4. Update $\theta$ to minimize the first part in Eq. (13) for $N_{12}$ steps
>5. Update $\phi$ to maximize the first part in Eq. (13) with $\theta$
>6. End
>7. For $N_2$ steps:
>8. Update $\eta$ to minimize the second part in Eq. (13) with $\theta$ and $\phi$
>9. End
>10. Output: $G_\eta$ as approximation of $G_t$.
Thus, the entire model transfer process of POTT requires O(($N_{11}$⋅$N_{12}$+$N_{2}$)B) operations. Typically, $N_{12}$ is set to 10. While OT-based methods are generally costly, they offer a viable strategic trade-off in scenarios where data acquisition is constrained or costly (e.g., weather forecasting, medical diagnosis) by leveraging algorithms and computation to reduce dependency on large-scale target-domain data.
Q4. Applications for this problem setting.
Thank you for highlighting this important point. This problem setting indeed has broad applications in various problem scenarios. In weather forecasting, for instance, differential equations are used to approximate the weather systems, and operator networks are trained on collected data to fit these systems. Generally, more data is available in developed regions while less data is available in less developed regions. As geographical conditions vary across regions, directly deploying models across regions can reduce their performance. Thus, introducing transfer learning into cross-domain model deployment provides critical value, which highlights the importance of transfer learning in achieving technology and development equity around the world.
Q5. Impact of domain shift and target samples on accuracy.
Our experimental design systematically evaluates how domain shift and target sample amounts jointly influence model accuracy. Here’s a concise summary:
(1) Large or moderate domain shift reduces the effectiveness of finetuning (FT), while POTT significantly improves the accuracy. Minor domain shift allows FT to perform well, and POTT still offers improvements. Extremely small shift (overlapping subdomains) makes transfer unnecessary, while extremely large shift make model transfer infeasible.
(2) Abundant target samples make FT or even supervised training sufficient, with marginal gains from POTT. Limited target samples see POTT outperform FT largely. Very few target samples challenge all transfer methods. | null | null | null | null | null | null |
ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding | Accept (poster) | Summary: This paper introduces a visual editing method where authors proposed a range of tools (i.e., python functions) that vision language models can use to edit the image (e.g., highlight, draw box, mask out) to better focus on the structured visual content corresponding to the text query. Experiments show that this process improves model performance on certain tabular and chart understanding tasks, and authors show that open-weight vision language models can also benefit from this process by being finetuned on data that involves tool calling that leads to the correct answer.
## update after rebuttal
I would like to thank the authors for their rebuttal and thoughtful follow-up addressing my concerns and questions. I have carefully re-evaluated the manuscript, particularly Section 5, and respectfully disagree with the authors’ claim that their method demonstrates the ability of models to generate novel functions for visual editing at test time (which was the core of my initial question). In my view, providing function definitions as part of the prompt, or generating answers in the same structured format as used during training (e.g., `<image> <question> <thought1> <ReFocus bounding box> <thought2> <answer>`), does not constitute novel test-time function generation. Rather, it reflects learned behavior from training patterns, not evidence of the model inventing new visual editing logic on its own. Because authors did not show that models can generate novel visual editing functions, the answer to the follow-up question "can models make good use of these self-generated functions to edit the image and improve their performance" remains unclear.
That said, I do agree with the authors that the proposed prompting approach is effective, and that fine-tuning models with rejection-sampled visual CoT data distilled from trajectories does lead to improved performance. However, I still remain hesitant in considering the work to be above the bar for a conference paper due to the limited scope and a relative lack of novel technical contributions.
Claims And Evidence: - I think that the claims are well-supported in general, though the comparison with Visual Sketchpad that it incorporated external expert knowledge while ReFocus does not require additional information does not appear to be fair. In particular, ReFocus still relies on algorithmic tools (i.e., `findContours` , `getStructuringElement` ) to extract coordinates of table/chart element, which weakens the argument that it doesn’t rely on additional information. To really support this claim, an ideal setting would be that the model needs to provide the coordinates as input arguments (instead of providing any labels to mask and let the algorithm decide the region).
- For your claim that multimodal models lack selective attention and multi-hop visual reasoning ability mentioned in the introduction, can authors provide supporting literature and/or additional experiments?
Methods And Evaluation Criteria: Yes, the selection of evaluation dataset appears to be suitable within its application scope.
Theoretical Claims: n/a
Experimental Designs Or Analyses: On the analysis that discussed the particular editing method that works the best, authors should constrain their conclusion toward that particular dataset for a particular model. The conclusion might not be generalizable on other datasets or other models.
Supplementary Material: I read the full supplementary material.
Relation To Broader Scientific Literature: The general finding aligns with Visual Sketchpad where visual editing improves vision language model performance.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: - The paper is clearly written, which I really appreciate.
- I think the contribution is too incremental and limited in scope. In particular, I believe ReFocus is simply an analogy to Visual Sketchpad used for structured images (with a different set of tools). Because all the functions are pre-defined and the functionality is hardcoded, the experiments can only be done with very few datasets that show compatibility with these handcrafted tools. It’s not a general method that improves vision language model on a wide range of datasets, not even to say applications. I would be much more convinced if you can show that models can generate code on-the-fly to achieve some “focus” functionality and apply it to a variety of structured image evaluations instead of sticking to a few hand-crafted tools on very few evaluation settings.
Other Comments Or Suggestions: n/a
Questions For Authors: - I’m curious if you can provide any cost / speed analysis in inference because apparently I think there’s a trade-off between better accuracy and prolonged generation in thoughts and IO.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback. Below, we provide a detailed response to address the concerns and questions raised:
* Q1. External Expert Knowledge Comparison with Visual SketchPad
We would like to clarify some confusions here. What we want to say is that, the tools used in ReFocus, do not bring in external information – every information is constrained to what’s provided in the input images. However, in Visual SketchPad, specialists such as Depth Anything will bring in external information (e.g. a depth map) to the process, to assist in decision making. We'll ensure this distinction is clearer in our revisions.
* Q2. Claim about Lack of multi-hop visual reasoning ability
Basically, we were referring to all different kinds of multimodal LLMs (e.g. GPT, Gemini, Claude, LLaVa, LLaVa-Next, etc.), when we ask a VQA question, they could not answer while providing the selective attention areas or multi-hop visual reasoning processes. When we play with these models, they would at most generate some thinking processes in text, but not in visual format, on structured images, as we explained in the introduction. Many related literature are listed in Related Work starting line 154. But we were just making a very general claim through our observations, that multimodal LLMs cannot return the visual area to focus on during question answering.
* Q3. Analysis Constraints
We will update in our revisions to constrain our conclusions toward a particular dataset for a particular model when discussing a particular editing method that works the best.
* Q4. Contribution & Comparison with Visual SketchPad
We would like to clarify any confusion regarding this concern. ReFocus is the first method to excel at structured image tasks, which Visual SketchPad cannot solve. **More importantly, ReFocus explores whether it can provide a more human-like heuristic supervision signal and if this supervision is practically useful—unlike Visual SketchPad, which is limited to prompting only.** In our fine-tuning experiments (Section 5), we found that training data obtained through ReFocus is 8% more effective than default VQA training pairs and 2.6% better than Chain-of-Thought data. We believe our fine-tuned model aligns with the reviewer’s definition of “performing on-the-fly refocus without relying solely on a few tools”.
* Q5. Cost / Speed Trade-off Analysis
Due to lack of time, we conducted a model usage analysis on Chart Horizontal Bar Set.
As for the consideration of speed, the generated token count with ReFocus is about twice as that of the baseline model.
As for the consideration of cost, the total token count is approximately ten times higher with ReFocus than the baseline. This can be too costly for API-based models like GPT-4o. In this context, our fine-tuned model offers a practical alternative. While it may not be as effective as ReFocus combined with GPT-4o, it is cost-free.
We will include these discussions in our revisions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses! I understand the notion of bringing external information or not better. But I am still not sure if it's the right term to use. While a depth map is definitely external information, I would argue that ReFocus also brings external information, such as visual cues that are not originally present in the image that are helpful for model to localize areas of interests. The distinction definitely has to be made clearer.
I appreciate the authors' response to Q3. But it further echoed one of my original concerns that the framework is limited in scope. The current formulation of ReFocus is not a general solution for structured image understanding in terms of image-editing based CoT. Like the authors' mentioned in Q4, the framework first *manually* crafts a few human-like heuristics in terms of functions and performs SFT on models, so that the models can learn these functions and generate the corresponding code calling *these functions* during inference.
That said, one cannot easily scale the number of functions with the framework, or ways in visual heuristics that the models can employ. To really show the generality of this framework, I would expect that, instead of having humans to write the functions for models to learn and use during inference time, *can models write the functions themselves during inference depending on the questions for the visual cues that they want to apply (e.g., what if the model wants to draw a horizontal line aligned to the top of a bar across all bars)?* For example, I can see that in page 13 and 14 where you give the model all the contexts for the functions that you designed, and in page 15 the model answers by calling such functions i.e., `focus_on_columns_with_draw`. My question toward this framework has two parts: (1) can models generate new functions on their own instead of relying on what are provided by humans; and (2) can models make good use of these self-generated functions to edit the image and improve their performance. This is different from the authors' current paradigm, and explains my previous comment on *“performing on-the-fly refocus without relying solely on a few tools”*.
Connecting this disadvantage all the way back to limitation in scope -- I am sure that there will be structured image scenarios where all the currently manually designed functions will be useless, and in that case models that learn to use these functions will eventually fail, but models that learn to synthesize new functions that are tailored to the scenarios and make correct uses of their generated functions will eventually likely to succeed.
Authors' response in Q5 addressed my corresponding question.
Given the authors' responses and the scope/applicability of the work, I maintain my rating as a weak rejection.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback!
* External Information
As for the distinction of external information -- we will make it clearer in our revisions!
* Scope Limitation
We agree that the prompting part is limited in scope. However, as we show in the Finetune experiments (Section 5), **once we use ReFocus to collect training data and finetune a better model using this data, we are no longer limited to tools or prompts to realize the purpose of "refocus"**.
For the questions:
(1) can models generate new functions on their own instead of relying on what are provided by humans: --
Yes, not need any tools or prompting, we realize a general function which is "providing refocus area", through our finetuning approach;
(2) can models make good use of these self-generated functions to edit the image and improve their performance: --
Yes, in the prompting experiments, the editing functions that we provided based on the idea of "focus on certain area" can improve multimodal LLMs performance.
(3) (e.g., what if the model wants to draw a horizontal line aligned to the top of a bar across all bars)? --
This is an extension of what editing function can help models better solve the problems. It would be interesting to explore the answer, but our current prompting experiments with provided editing functions already support our findings (which is that refocus helps), and our finetuning experiments show a general way to turn these specific editing functions into general refocusing ability.
We hope this additional response can help explain things better. Please let us know if reviewer has further questions. Thanks! | Summary: This paper proposes ReFocus, a visual reasoning framework that enhances structured image understanding with editing on the input image. Specifically, ReFocus iteratively highlights some columns and rows in the tabular problems, masks the other information, and draws bounding boxes on the information that needs attention on. Experimental results show that the proposed method ReFocus yields consistent performance gain across various tasks.
## Update after rebuttal
The rebuttal addresses most of my concerns about calling the model "chain-of-thought", inference time comparison, etc. One remaining issue that I have is the potential insights in the proposed method that I find hard to evaluate on. The proposed method still seems to be some prompt engineering plus some simple tricks, for me as a person mainly working on general computer vision. Editing something by highlighting [1] to improve performance is also not a brand new technique in computer vision. However, due to my limited knowledge on structured image understanding, I want to request for AC and other reviewers' expertise on this field to tell me whether the proposed method is insightful enough to be accepted to ICML, or it is just some simple tricks and prompt engineering. Therefore, I want to flag this potential issue for AC to confirm the novelty and insight in the proposed method before this paper gets accepted.
[1] Shtedritski et al. What does CLIP know about a red circle? Visual prompt engineering for VLMs. ICCV 2023.
Claims And Evidence: Yes, the claim that the proposed ReFocus can help structured image understanding (more specifically, tabular problems) is well validated by the experimental results across multiple benchmarks across tabular understanding tasks.
Methods And Evaluation Criteria: Yes, the proposed method makes sense, and I think it is intuitive as it follows the way we humans process the tabular data, as we first focus on several rows / columns that we are interested in, while ignoring other irrelevant rows / columns, then reasoning across the relevant information in the tabular data.
Theoretical Claims: The paper has no theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid to me, and also comprehensive as it is conducted across multiple benchmarks and evaluated on state-of-the-art multi-modal LLMs like many versions of GPT-4o.
Supplementary Material: Yes, I have reviewed all the parts of the supplementary material.
Relation To Broader Scientific Literature: The capability of multi-modal LLMs to understand structured images like charts and figures is broadly related to the advancement towards artificial general intelligence.
Essential References Not Discussed: I do not find essential related works that are not discussed. However, this may not be accurate because I am not very familiar with the field of structured image understanding.
Other Strengths And Weaknesses: **Strengths:**
- The proposed method ReFocus conforms to the reasoning process of humans, which looks intuitive to me.
- The experiments seem to be comprehensive, covering a broad range of benchmarks and structured image understanding tasks and shows the superior performance after incorporating ReFocus on the existing powerful multi-modal LLMs like GPT-4o.
**Weaknesses:**
- I think it is a bit overclaiming to call the proposed method "chain-of-thought". Essentially, the proposed method is iterate the columns / rows to highlight until the multi-modal LLMs determines TERMINATE. Basically, there is no specific "chain-of-thought" from the multi-modal LLM agent to find the next most possible rows / columns in a heuristic way by itself with reasoning. Therefore, I think the "reasoning process" within this paper is more primitive than the one in the original paper [1], which may be overclaiming to call it "chain-of-thought".
- **(I am doubtful of whether it should be a strength / weakness, but I am leaning towards weakness right now)** I often in favor of a method that is "simple but effective". For the method proposed in this paper, I think it is effective as supported by the experimental results. However, I think it seems to be too simple to me (because it is basically just iterating the rows and columns to highlight some of them until reaching TERMINATE to get the final answer), which seems more like a simple trick to me, instead of providing insights in the scientific / techincal parts.
[1] Wei et al. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022.
Other Comments Or Suggestions: I think Section C in the supplementary material is important for understanding how the proposed method works. When I only read Section 3 in the main paper, actually I cannot understand how the proposed method actually works. The information of adopting the "iterative chain-of-thoughts (although in the Weaknesses part I said I think calling it "chain-of-thoughts" may be overclaiming) until reaching TERMINATE" strategy for conducting the method is important for understanding how the algorithm works. Instead, the main paper spend much space on explaining the different structured image understanding tasks, which I think is unnecessary and can be moved to the supplementary.
Questions For Authors: - The main reason for me to give a weak reject as the initial score is that I think the proposed method is so simple that it seems to be a trick for performance improvement instead of providing scientific / techincal insights. What do the authors think of my concerns and arguments? What additional insights the authors want to convey to me about the proposed method?
- Since the proposed method needs to iterate through the rows and columns to highlight some of them. How is the inference time after adding ReFocus to the multi-modal LLMs compared to the original base models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback. Below, we provide a detailed response to address the concerns and questions raised:
* Q1. Confusion about “Chain-of-Thought” and “Iterations”
We would like to clarify any confusion regarding this concept. **ReFocus does not iteratively highlight every column or row.** Instead, it leverages a multimodal LLM to determine which columns or rows to highlight or maskout by generating Python code. For instance, in Figure 1, when provided with an image, the multimodal LLM identifies focus areas (Column “Country”, Column “Wins”) and generates Python code `focus_on_columns_with_maskout(“Country”, “Wins”)` (where `focus_on_columns_with_maskout` is one of the edit functions, and all edit function names are provided to the LLM in prompt as in Supplementary C. the LLM decides which function to use). The generated edit function is then executed through our backend support. Next, the LLM assesses whether additional edits are necessary (because as in Fig1 it needs two edits), and if so it decides autonomously on the next focus areas. (And this is where the word iteration comes from.) This method allows models to perform self-reason, justifying the use of the term "chain-of-thought." We add "visual" to CoT to reflect the presence of intermediate visual editing outputs.
* Q2. Simplicity of the Method
The difficulty of this method during the prompting stage is specifically how to enable the method to self-reason about what to focus on, rather than relying on iterations. We are the first paper to demonstrate that using simple tools and editing can achieve this goal on structured images.
**Another challenge is determining whether ReFocus can provide a more human-like heuristic supervision signal and whether this supervision is useful in practice.** In our fine-tuning experiments, we found that the training data obtained through ReFocus is 8% more effective than default VQA training pairs and 2.6% better than Chain-of-Thought training data. Overall, both the prompting and fine-tuning goals present significant challenges.
* Q3. Inference Time
Naturally, the inference time increases when incorporating ReFocus compared to using the original base models. It can be seen as roughly twice the inference time, transitioning from Question -> Answer, to Question -> Edit Decisions -> Answer. We did check the model usage, and the generated token number with ReFocus is about twice as that of baseline model.
Another consideration is cost. For example, on the Chart Horizontal Bar set, the total token count is approximately ten times higher with ReFocus than the baseline setting. This can be too costly for API-based models like GPT-4o. In this context, our fine-tuned model offers a practical alternative. While it may not be as effective as ReFocus combined with GPT-4o, it is cost-free.
We hope our answers can address the reviewer’s concerns! We’re happy to provide further clarification if needed.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for your response! My comments are as follows:
The authors have clarified that the multimodal LLMs will decide which row/column to focus on, instead of iterating every row/column. Now I agree that it is somewhat mimicing the chain-of-thought process. But I do think that the organization of the paper is suboptimal. I think the unnecessarily long introduction of all the "Structured Image Problems" needs to be moved to the supplementary, while the much more important method explanation needs to be emphasized and explained more clearly in the main paper. For the inference time, I am OK with it being twice long as the original model, because naturally more chain-of-thought increases the inference time, but presenting this in the paper could be crucial to audience. For the simplicity of the method, this is something that I still hold reservations. The proposed method still seems to be some prompt engineering plus some simple tricks, for me as a person mainly working on general computer vision. Editing something by highlighting [1] to improve performance is also not a brand new technique in computer vision. However, due to my limited knowledge on structured image understanding, I want to request for AC and other reviewers' expertise on this field to tell me whether the proposed method is insightful enough to be accepted to ICML, or it is just some simple tricks and prompt engineering. Therefore, **I want to flag this potential issue for AC to confirm the novelty and insight in the proposed method before this paper gets accepted**.
Based on the above information, I can raise my score to 3, but **I would like to request judgement for the novelty of the proposed method from AC and other reviewers that have more expertise**.
[1] Shtedritski et al. What does CLIP know about a red circle? Visual prompt engineering for VLMs. ICCV 2023.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback and score adjustment! We will optimize paper organization in our revisions!
As for the contribution of the paper, we want to emphasize that ReFocus is **not only a prompting paper**, because it proves to be a **better supervision signal** as in our fine-tuning experiments (Section 5). Basically, we (1) first show that prompting works, and (2) this prompting way can allow us collect better training data to train better vision language models.
Please let us know if this further clarifies reviewer's confusion. We will for sure include all the additional response in our revisions. | Summary: This paper focuses on the structured image tasks, using the Python codes to edit the input image as the "visual thought" and the modified image can be more focused on the target information of the question, which benefits the accuracy of the answer.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The evaluation datasets seem small, why only a subset is selected for evaluation and what is the standard to select data.
Theoretical Claims: There's no theoretical claim.
Experimental Designs Or Analyses: 1. For the analysis of Refocus to improve grounding and OCR capabilities, it is better to add more experimental results instead of giving conclusions through only two examples.
2. Lack of analysis for some results, like Table 2.
Supplementary Material: All materials have been read.
Relation To Broader Scientific Literature: This method proposed a visual CoT approach which uses Python codes as intermediary.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
1. The motivation of this paper is very clear.
2. The collected 14k training set is effective.
Weaknesses
1. The method is limited and can only be applied to the chart that has a specific structure.
2. The method is more like introducing a strong prior specifically for certain data, thereby enhancing performance.
3. The method exhibits significant performance fluctuations across different models, and even different versions of the same GPT model can show substantial variations. Additionally, the performance may have adverse effects on some models, and the author lacks discussion on these points. Meanwhile, please also see the 'Experimental Designs Or Analyses'.
Other Comments Or Suggestions: No
Questions For Authors: Why was the current evaluation dataset selected? For the bar chart, the other datasets like DVQA have more scales of test data and can provide a more effective evaluation. The current testing scale is too small, and the test results also show significant performance fluctuations between different models and datasets. This may stem from the unreliable evaluations caused by the small scale. Meanwhile, due to the strong limitations and priors of the method, the limited data scale can not evaluate the extensibility of the method well.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback. Below, we provide a detailed response to address the concerns and questions raised:
* Q1. Evaluation Dataset Selection.
The major reason for testing on ChartQA instead of on DVQA is that DVQA is a synthetic (chart and data) dataset with template questions, while ChartQA and CharXiv include real charts and data with open-form questions that have been manually checked. Additionally, ChartQA has been much more widely tested by many recent models (as in Gemini and GPT reports) than DVQA so that it’s easier to compare with them, and CharXiv has its unique challenges of scientific chart reading.
* Q2. Evaluation Set Size Concerns
We want to emphasize that small scale evaluation data does not mean unreliable evaluations. For example, datasets like Winoground (CVPR 2022), which contains only 400 examples, have significantly influenced VLM advancements. Similarly, although focused on different aspects, other widely adopted benchmarks—including WHOOPS! (ICCV 2023), LlaVA-Bench (NeurIPS 2023), Visit-Bench (NeurIPS 2024), ConTextual (ICML 2024), VibeEval, and Visual Riddles (NeurIPS 2024)—comprise 90, 500, 576, 500, 269, and 400 examples, respectively, and have been pivotal for evaluating VLMs. Our total evaluation set has a size of 2219 instances, and should reflect a reliable evaluation result.
* Q3. Performance Fluctuations between different models.
Yes, there are performance differences between different multimodal LLMs, but we believe the Table1 results show relatively consistent improvement. The programing ability and the visual prompt understanding ability of LLMs will both affect the effectiveness of ReFocus. However, we want to emphasize that (1) ReFocus is a simple yet effective general pipeline that can work with different multimodal LLMs, and (2) we add the fine-tuning experiments to reduce the reliance on prompting to a certain degree.
* Q4. Limitations and Priors of the method
We are not sure if we understand what the prior is as in “The method is more like introducing a strong prior specifically for certain data”. Could the reviewer kindly elaborate more on what the “prior” is? If the reviewer means that the method favors structured image data, then we want to emphasize that ReFocus was designed for solving this kind of problems with simple editing processes. Extending to other types of image data is for sure interesting and worth exploring, but a bit beyond our paper’s scope.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's feedback.
1. I agree that small scale evaluation data does not mean unreliable evaluations, but I also believe that a larger scale is more reliable and I don't find it difficult to enlarge the test set scale in this paper's case.
2. I still think this method has significant limitations due to its requirement for structured data, which limits the impact of this paper.
Based on the above reasons, I will keep my original score. Thanks.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback. We will try to make the scale larger in the future, Thanks! | Summary: The paper proposed a LMM + tool-using style framework for structured image understanding. Specifically, the framework provides tools to edit the original image and either highlight the important region or remove irrelavent region. Results show that this framework generally improves LMMs zero-shot performance on understanding tables, charts, etc. The paper additionally show that using the framework to curate SFT data provides better supervision signal for these tasks.
## Update after rebuttal
The rebuttal provides additional infomation, I am keeping my already positive evaluation
Claims And Evidence: The claims are well-scoped and the results supports the claims.
Additional explanation/analysis are needed; see the Experimental Designs Or Analyses section.
Methods And Evaluation Criteria: The evaluation setting makes sense; The strength is the paper also includes finetuning experiments;
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments are mostly solid and informative;
Additional anaysis is needed in:
- It's good to show error analysis on the tools and how it will be propagated to the final performance;
- Categorize what are the remaining errors that cannot be addressed even with accurate visual editing;
- For some categories, ReFocus fail to yield better performance, e.g., VTabFact in Table2; What's the reason? Additional analysis would be helpful
Supplementary Material: N/A
Relation To Broader Scientific Literature: This is in line with research in multimodal reasoning on abstract visual concepts.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is clearly written and easy to follow
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful feedback. Below, we provide a detailed response to address the concerns and questions raised:
* Q1. Tool Errors
We carefully examined this phenomena through manual checking, on around 40 data cases each for Table VWTQ and Chart Horizontal Bar datasets.
For table editing, most of them (roughly 80%) are correct. For the problematic ones, there are two big error types. The main error (\~80% of which) is that sometimes, both column editing and row editing are needed, but only one of them was applied, making the original problem, e.g. counting specific rows, a bit easier but not easy enough for the model to solve. A small portion (\~20% of which) is that it could mask out important information (e.g. some rows or columns) that should not be masked out.
For chart editing, it’s similar that most of them(\~90%) are correct. The main error is that sometimes the python program is incorrect, and the wrong area gets edited, e.g. the chart should focus on the last three bars, but the last 4th instead of the last 3rd bar was highlighted, therefore the final calculation uses the incorrect number of the bar, etc.
* Q2. Remaining Errors if Visual Editing is Accurate
Some complicated reasoning cases still remain a challenge. For instance, if a chart image has three bars, and all of them needs to be considered, compared, and calculated with each other, then even if ReFocus highlights all three bars, it still makes the same mistake as if no edit is performed.
* Q3. Performance Difference.
For VTabFact in Table2, we believe it is because Phi-3-vision may not deal with visual prompts that well, compared to dealing with the original image, because it was largely trained on OCR tasks and has seen many original text-rich images. We believe that the programing ability of the multimodal LLM and the visual prompt understanding ability will both affect the effectiveness of ReFocus.
---
Rebuttal Comment 1.1:
Comment: Thank authors for providing the rebuttal, I keep my already positive rating
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback and will include our additional response in our revisions. | null | null | null | null | null | null |
Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models | Accept (spotlight poster) | Summary: This paper investigates the benefits of unsupervised pre-training in supervised learning tasks, focusing particularly on GLMs with parameter vectors correlated to a spike in the data covariance. The authors show that a "PCA initialization" derived from an independent sample of unlabeled data can drastically reduce the sample complexity threshold for parameter vector recovery through SGD, compared to random initialization. They highlight a setting in which random initialization fails regardless of the amount of data, while PCA initialization does not. Additionally, the authors discuss the benefits of pre-training in transfer learning, which can lead to a polynomial reduction in sample complexity, provided that the correlation between the source and target is strong enough. The findings are validated through finite-dimensional simulations.
Claims And Evidence: All claims are supported by convincing evidence.
Methods And Evaluation Criteria: The methods are well suited to this problem.
Theoretical Claims: I checked the proofs for the theorems in section 3, I have no issues to discuss.
Experimental Designs Or Analyses: I have no issues to discuss.
Supplementary Material: I revised the supplementary material, I have no issues to discuss.
Relation To Broader Scientific Literature: This paper builds on existing literature in supervised learning with GLM target functions, extending methods and results from the Gaussian iid setting with random initialization to the spiked covariance problem. The authors introduce a pre-training-based initialization procedure, which improves learnability in the considered setting, offering a significant advancement over traditional random initialization approaches in high-dimensions.
Essential References Not Discussed: I am not aware of any essential references that have been omitted.
Other Strengths And Weaknesses: The paper is clear in its exposition and addresses the important issue of the theoretical understanding of the benefits of pre-training, deriving interesting results that support the argument. One weakness is the artificial nature of the data distribution, which may seem tailored to make the proposed pre-training method work. In my opinion, the authors do not discuss this artificiality sufficiently, nor do they highlight how this toy model could resemble more realistic settings and how the results may translate to them. Nonetheless, this setting provides clear theoretical insights and original results, and could serve as a baseline for future work in more complex or realistic contexts.
Other Comments Or Suggestions: A minor issue is the inconsistency in the use of parentheses around citations (for example, on lines 327-328, second column). I suggest revising the paper to correct this.
Questions For Authors: Following the discussion in the strengths and weaknesses section, I would ask the authors to provide further comments on their choice of setting and whether they can offer a stronger justification for it. I suggest expanding this discussion in the paper as well.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and suggestions.
+ Use of parentheses around citations---Thank you for pointing out this issue. We will be sure to correct this in the final version.
+ Limitations of the data distribution---We are happy to include additional comments on the motivations for our data distribution, and its limitations. Let us recap here: Our goal is to understand the benefit of pre-training. For this, one must have a solid baseline where the sample complexity without pretraining is exactly understood. We work with single index models/GLMs as they are, to our knowledge, the only non-trivial models of supervised learning with neural networks where the sample complexity of online SGD is exactly understood.
For pre-training to be successful, the (unlabeled) features must have some non-trivial information about the downstream prediction task. For single-index models, this correlation must occur either as a non-trivial mean for or in the covariance of the features. The question is which more naturally models what is seen in practice.
In the context of modern language models, a popular pre-training strategy is to construct artificial prediction tasks by 'masking' tokens in the unlabeled text data. Intuitively, one hopes to construct 'clozed masks', i.e., masks that are helpful to predict the downstream label. This masking strategy has been deployed in well-known models such as BERT (Devlin et. al. (2018)).
More generally, it is common to perform pre-training via denoising autoencoders. In light of this, and the references in Section 3.1 regarding autoencoders, it seems more natural to relate the features to the downstream prediction tasks via the feature covariance.
Given this, the spike covariance model is a natural, albeit simple starting point. Let us end by emphasizing that we are not proposing PCA as a general purpose pretraining algorithm. Instead, we use it as a natural model of pretraining in our set up. We hope that this simple analysis will motivate further investigations into richer models.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply and clarifications, in light of which I am updating my score to "accept". | Summary: The paper provides a theoretical analysis of how model initialization learned via unsupervised pre-training benefits supervised learning tasks when training a single-layer neural network using online stochastic gradient descent.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not check the proofs.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: The setting is interesting, but it requires some strong assumptions.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
The paper provides a clear and well-motivated setup, and is strong in terms of readability and structure. The problem formulation and theoretical contributions are presented clearly and logically; in particular, the discussion sections, as well as the intuitive explanations of theorems and assumptions, are quite helpful.
Weakness:
Assumption 3.2 significantly simplifies the analysis and may be overly restrictive. It requires that the gradient with respect to all data samples let the update towards the direction $e_1$ and be far from $e_2$. However, in realistic scenarios, gradient descent updates could have more complex trajectories while still converging optimally. This assumption might limit the general applicability of the theoretical insights.
Other Comments Or Suggestions: See questions.
Questions For Authors: 1. Could the authors elaborate on the assumption (1) of $f$ (Line 168)? I appreciate the discussion in Section 3.4. However, is the commonly used ReLU function not satisfied? Could we weaken the third requirement to let the ‘>0’ to be ‘’$\geq 0$’?
2. Since in your setting the data sample and the gradient update steps are the same, it leads to a lower bound of sample complexity in Theorem 3.4. However, in reality, these two terms are not the same. If we let them to be two separate terms, what can you get?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and suggestions.
+ Weakness of Assumption 3.2---Please note that Assumption 3.2 pertains to gradient flow on the *population* loss (population gradient flow) and not to online SGD. Indeed, one of our main contributions is to establish rigorously that if the step sizes are reasonably small, the evolution of the correlations of the SGD trajectory with the latent directions is well approximated by that of population gradient flow.
We believe that Assumption 3.2 is quite mild. Note that $(1,0)$ is the global minimizer of the population loss. Assumption 3.2 is essentially equivalent to the assumption that the global optimizer is a local attractor for population gradient flow. This ensures that if population gradient flow is initialized in a small neighborhood of $(1,0)$, it will converge to the optimum in short time. We agree that the precise form of Assumption 3.2 could be potentially weakened---we stated it in a form which was easy to verify. In our paper, we verify this condition formally when $f$ is a Hermite polynomial.
+ Assumption 1---There has been significant recent progress in understanding the properties of gradient descent algorithms on single index models with isotropic gaussian features. Prior work establishes that the sample complexity of one pass SGD is governed by the "information exponent'' of the problem. In this setting, the information exponent is the index of the first non-zero term of the expansion of the function $f$ in the Hermite basis. Assumption 1 is in the same spirit---we assume that $f$ has information exponent at least three, while $f^2$ has information exponent at most two. We note in the paper that this assumption is satisfied by all Hermite polynomials of degree $\geq 3$, and suitable linear combinations---thus this assumption is satisfied by a broad class of non-linearities $f$. We note that the RELU function does not satisfy this condition; however, the RELU activation function has information exponent one, and thus can be learned with approximately linear sample complexity. The condition focuses on more challenging non-linearities which have polynomial sample complexity for the isotropic single index model.
+ Data sample vs. gradient updates---We focus exclusively on single pass SGD in this work. We agree that it would be interesting to study other variants e.g. batched SGD or multi-pass SGD in future work. We believe that our analysis can be extended to settings where each epoch uses fresh batches of data, but the batch sizes are relatively small. This variant will not change the sample complexity of the algorithm.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification! I increase the score accordingly. | Summary: This paper studies the dynamics of online SGD for learning single-index models for spiked Gaussian data, where the spike in the Gaussian distribution is meaningfully correlated with the ground truth direction. The authors model pretraining and fine-tuning in practical machine learning settings as learning the spike direction and learning the ground truth direction. In that sense, the authors prove that under a certain assumption on the target function and sufficient correlation between the spike and the ground truth direction, SGD initialized from PCA pretraining is successful at recovering the ground truth direction with only a linear number of samples. In contrast, the authors prove negative results that show $\Omega(d^2)$ samples are necessary from random initialization, and further when the spike and ground truth directions are identical, even constant initial correlation is not sufficient, regardless of the amount of data used.
Claims And Evidence: The claims are clearly supported by proofs and some experiments.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I did not explicitly check the correctness of the proofs, but the statements seem plausible and in agreement with prior results.
Experimental Designs Or Analyses: Not applicable as this is mostly a theory paper.
Supplementary Material: I did not review the details in the supplementary materials.
Relation To Broader Scientific Literature: The paper provides a valuable contribution to the high-dimensional SGD literature, especially since learning single-index models has seen a lot of recent attention as a model of recovering low-dimensional structure in high-dimensional spaces and feature learning with neural networks. While the results may not be very surprising, they offer a nice perspective on the principles of pretraining and fine-tuning, even for simple models such as learning single-index models.
Essential References Not Discussed: Essential references seem to be sufficiently discussed. I provide additional discussion suggestions below.
Other Strengths And Weaknesses: **Strengths**:
The paper presents a nice story with intuitive arguments that are easy to follow, and the negative result of Theorem 3.4 is particularly strong as it covers many initializations and any number of samples.
**Weaknesses**:
1. The role of certain parameters in the results is not clear. For example, how would $\lambda$ or $\eta_1$ that scale with dimension affect Theorem 3.3? Also, Assumption 3.2 is a bit opaque. The authors provide some arguments to support this assumption in Section 5, but I still don't have a good intuition of how $m^*$ behaves with respect to dimension in typical settings. I would be happy to raise my score if there is more discussion and examples on these points.
2. A limitation of the lower bounds is that they only apply to sufficiently small step size, $\alpha\delta^2 = O(1)$ and $\alpha\delta^2 = O(d^{1/3})$. Intuitively, SGD updates should become unstable beyond this step size, but this is not theoretically shown in this work or prior works such as Ben Arous et al., 2021.
Other Comments Or Suggestions: Please see below.
Questions For Authors: 1. Theorem 3.3 does not mention the assumption on $\lambda$. I believe it is assumed that $\lambda > 0$ is dimension-independent, in which case it would be helpful to explicitly mention this. Also, what happens if we have $\lambda$ and $\eta_1$ that scale with dimension? Mousavi-Hosseini et al., 2023 consider $\lambda \asymp d^{r_2}$ and $\eta_1 \asymp d^{-r_1}$, and show that the sample complexity of full-batch gradient flow with PCA initialization is $d^{1 + \max((k-1)(2r_1 - r_2), 0)}$ for information exponent $k$, while Mousavi-Hosseini et al., 2025 consider gradient descent with exponential compute and show the sample complexity is $d^{1 - \max (r_2 - 2r_1,0)}$ independent of information exponent. It would be helpful if the sample complexity from Theorem 3.3 could be compared with these results.
2. Moreover, it can be helpful to note the sample complexity improvement of Theorem 4.2 is identical to Theorem 7 of Mousavi-Hosseini et al., 2023. The authors there turn gradient flow on spiked covariance into gradient flow on isotropic data with non-random initialization, and achieve a sample complexity of $d^{1 + 2\zeta(k-1)}$. Theorem 4.2 of this paper provides an improved $d^{2\zeta}$ factor. This is similar to the improvement of online SGD (Ben Arous et al. 2021), over gradient descent/flow (Bietti et al., 2022) on isotropic Gaussian data .
3. I believe Equation (1) asks for f to have information exponent >= 3 and f^2 to have an information exponent <= 2. It would be helpful to discuss the relationship between information exponent and Equation (1).
4. My interpretation was that $r$ in Theorem 3.5 is not dimension-dependent. If that is the case, it would be helpful to explicitly mention it in the theorem statement.
5. I think $B_2(0,1)$ is not defined in the text, does it mean the 2-dimensional ball of radius 1 centered at 0? Also, is there any way to make Assumption 3.2 more interpretable, e.g. for single Hermite polynomials? In which case, can we explicitly know the dimension-dependency of $m^*$?
References:
A. Mousavi-Hosseini et al. "Gradient-Based Feature Learning under Structured Data." NeurIPS 2023.
A. Mousavi-Hosseini et al. "Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics." ICLR 2025.
G. Ben Arous et al. "Online stochastic gradient descent on non-convex losses from high-dimensional inference." JMLR 2021.
A. Bietti et al. "Learning Single-Index Models with Shallow Neural Networks." NeurIPS 2022.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and suggestions.
+ Notation and dimension dependence of parameters---We assume throughout that $\lambda>0$ is dimension independent. $B_2(0,1)$ is indeed the two-dimensional ball of radius 1 centered at the origin, and $m^*$ is also dimension independent in our analysis. We will clarify these dependencies in the final version.
+ $\lambda$ and $\eta_1$ scaling---We assume that $\lambda$ and $\eta_1$ are dimension independent in our work. We expect that the analysis we provide here can be carried over to the case (following your notation) $r_1=0$ and $r_2>0$. We agree that the case $r_1>0$ is interesting. However it will require substantially more work: One would have to carry out a similar “bounding flows'' style argument, but around the point $(0,1)$ instead of $(0,0)$. In light of the tight turn around for this conference, we defer this to future work.
+ Comparison with Mousavi-Hosseini et. al. (2023, 2025)---Thank you so much for pointing us to these results. As remarked above, if $\lambda$ and $\eta_1$ scale with $d$, our regime is related to the prior work. Thank you also for pointing us to the sample complexity improvement in Mousavi-Hosseini et. al. (2023). We will add a discussion of these connections in our final version.
+ Information exponent clarification---Thanks for pointing this out. We will include your interpretation in the final version. Please see the response to reviewer DGqD.
+ Assumption 3.2, dimension dependence of $m^*$ and dimension dependence of $r$ in Theorem 3.5---Thanks for these comments. Indeed, the parameter $r$ in Theorem 3.5 is dimension independent. Assumption 3.2 is equivalent to the condition that the global minimizer of the population risk (i.e. $(1,0)$) is locally attractive. Thus, the population dynamics initialized in a neighborhood of the global minimum will converge to this minimum. Finally, note that $m^*$ is dimension independent---to see this, note that $m^*$ can be specified in terms of the function $\phi$, which is dimension independent.
+ Step-size for SGD---If the step size is larger, the SGD does not follow the population gradient flow. This invalidates the main analysis strategy introduced in Ben Arous et. al. We adapt these ideas in our analysis and are thus also unable to handle SGD with large step size. | Summary: This paper investigates the benefits of unsupervised pre-training and transfer learning in the context of high-dimensional supervised learning, specifically in single-layer neural networks trained via online stochastic gradient descent. The authors establish that pre-training and transfer learning can significantly reduce sample complexity and, in some cases, even lead to exponential improvements over random initialization. The analysis relies on single-index models with spiked covariance and characterizes the effects of different initialization schemes. A key result is that pre-training enables efficient learning in scenarios where random initialization fails due to the existence of attractor basins. The paper also extends its analysis to transfer learning and shows how pre-trained feature representations improve learning efficiency.
Claims And Evidence: The paper makes several claims regarding the benefits of pre-training and transfer learning. While the analysis generally follows well-established approaches in the study of high-dimensional inference, some claims appear somewhat engineered. For instance, the role of the Information Exponent is controlled to illustrate benefits, which raises concerns about whether the observed advantages of pre-training arise naturally or are artificially emphasized. The negative result in Theorem 3.5—establishing a threshold below which learning does not occur—is an interesting contribution, but it might be an artifact of the specific Taylor expansion used rather than a fundamental property of the model. The claim that pre-training yields exponential benefits in certain cases is intriguing, but more discussion is needed on how realistic those cases are in practical settings.
Methods And Evaluation Criteria: The methodology follows a standard approach for analyzing the sample complexity of single-index models. The authors leverage stochastic gradient descent in a spiked covariance model and analyze the evolution of correlations over training. However, the connection to prior work, particularly the analysis of Ben Arous et al., is very strong, making the contribution appear incremental. The paper could benefit from a clearer comparison to these prior works to highlight what is truly novel in the methodology.
Additionally, the use of PCA as a pre-training strategy is reasonable but somewhat simplistic, as it does not capture more sophisticated pre-training methods commonly used in deep learning. Howver, this might eb expected for analysis purpose, and i agre with the justification of the author (who citie the statistical properties of PCA)
Theoretical Claims: The theoretical results certainly correct, but they rely on assumptions that may limit their applicability. The proof techniques align with prior work on high-dimensional learning dynamics and information exponents. The claim that pre-training reduces sample complexity is well supported, though it is not surprising given prior literature. Theorem 3.5 presents a negative result that suggests the existence of a minimum correlation threshold for learning, which is an interesting insight. However, the ability to set the leading term in the Taylor expansion to a negative value raises concerns about whether this result is intrinsic to the problem or simply an artifact of the chosen expansion.
Experimental Designs Or Analyses: The empirical results support the theoretical claims but do not provide strong additional insights. The numerical simulations primarily verify the sample complexity predictions rather than exploring the broader practical implications of pre-training. One limitation is that the experiments focus on synthetic settings with controlled spiked covariance structures, which may not fully capture real-world scenarios. Moreover, the figures suggest that the transition between learning and non-learning regimes is relatively smooth rather than sharply defined, raising questions about whether the term “phase transition” is appropriate in this context.
Supplementary Material: The supplementary material includes additional proofs and numerical results. It provides the necessary technical justifications for the main claims
Relation To Broader Scientific Literature: The paper is well-situated within the existing literature on pre-training and transfer learning in high-dimensional learning models.
On a side note, the spiked random feature model has been investigated in prior works such as, with a different movitations, sich as:
- Ba, J., Erdogdu, M. A., Suzuki, T., Wang, Z., Wu, D., & Yang, G. (2022). High-dimensional asymptotics of feature learning: How one gradient step improves the representation. Advances in Neural Information Processing Systems, 35, 37932-37946.
- Cui, H., Pesce, L., Dandi, Y., Krzakala, F., Lu, Y. M., Zdeborová, L., & Loureiro, B. (2024). Asymptotics of feature learning in two-layer networks after one gradient-step. arXiv preprint arXiv:2402.04980.
- Dandi, Y., Pesce, L., Cui, H., Krzakala, F., Lu, Y. M., & Loureiro, B. (2024). A random matrix theory perspective on the spectrum of learned features and asymptotic generalization capabilities. arXiv preprint arXiv:2410.18938.
Essential References Not Discussed: When the paper cite Lee et al 2014, they should cite the near identicial but prior results of Arnaboldi et al arXiv:2405.15459. In fact, i would be curious about the effect of training discussed in Lee et al et Arnadoli et al in the present paper.
Other Strengths And Weaknesses: Strengths:
Provides a theoretical framework for understanding pre-training and transfer learning in high-dimensional learning problems.
Derives sample complexity results that confirm the benefits of pre-training in structured feature models.
Highlights the role of initial feature correlation in determining the success of training, which is an important insight.
Weaknesses:
The theoretical results, while correct, are largely incremental relative to prior work.
The pre-training strategy (PCA) is somewhat simplistic compared to modern deep learning methods.
The empirical validation is limited to synthetic experiments and does not explore broader practical implications.
Some results, such as the failure of learning below a certain correlation threshold, may be artifacts of the mathematical formulation rather than fundamental properties of the problem.
Other Comments Or Suggestions: The authors should more explicitly compare their results to prior work, particularly in relation to the spiked random feature model and information exponents.
The numerical experiments could be extended to investigate how different levels of correlation affect learning outcomes.
The discussion of Theorem 3.5 should clarify whether the threshold phenomenon is a fundamental property of the model or an artifact of the analysis.
Questions For Authors: Theorem 3.5 suggests that below a certain correlation threshold, learning does not occur. Could this result be an artifact of the chosen expansion, rather than a fundamental limitation?
Have you considered alternative pre-training methods beyond PCA, and how would your results extend to those cases?
The numerical experiments show smooth transitions rather than sharp phase transitions. How do you justify using the term “phase transition” in this context?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and suggestions.
+ Controlling the information exponent---We indeed control the information exponent of $f$. However, this assumption is satisfied by all *Hermite polynomials of degree at least three*. We can also check this condition for many polynomials constructed as a combination of these monomials. Consequently, our assumption captures a reasonably generic class of functions, and is not an edge case.
+ Taylor expansion in Theorem 3.5 \& minimum correlation threshold--- Please note that this is a Taylor expansion of the population loss and is necessarily fundamental to the model. The sign of the coefficient depends on properties of the data distribution, namely $f$ and $\lambda$ and cannot be changed at will. The desired sign on the coefficient holds for a broad class of functions (see above). There does exist an $r$ (depending on $f$) such that one-pass SGD fails to learn the latent signal for initial correlations less than this value. This correlation value $r$ is fundamental to the model, and is obtained by analyzing the population loss landscape around the origin ($m(X_0)=0$) for functions $f$ satisfying the assumptions stated above. Note that we do not aim to exactly characterize the class of functions $f$ which lead to this phenomenon. Instead our goal is to illustrate the benefits of pretraining on a class of commonly studied models.
+ 'Phase transitions' between not-learning and learning--- We do not use the terms 'phase transitions' and 'sharp thresholds' in our paper. However, as a consequence of Theorem~ 3.5 one can show a sharp phase transition for the sample complexity for a fixed $f$ as $\lambda$ varies: When $\lambda>0$ (the case studied in the paper) the information exponent is infinite so the sample complexity is super-polynomial. If, however $\lambda\leq 0$, the sample complexity is polynomial.
+ Other pretraining methods---We study PCA as a natural pretraining algorithm for this data model for the reasons described in Section 3.1. Please see the references therein for various settings where it is argued that more sophisticated pre-training methods effectively implement PCA. It would be interesting to extend the analyses to more realistic pre-training algorithms.
That said, we note that our arguments would naturally extend to other pretraining algorithms if one is able to characterize their `overlap' with the planted signal. This, however, is beyond the current paper.
+ Theoretical novelty--- The main technical novelty compared to Ben Arous et. al. arises from the analysis of the population gradient flow. In that work, the population gradient flow reduces to a one dimensional system whereas here we have a two-dimensional dynamical system in our study, which exhibits subtle properties due to the presence of local traps. The analysis of this dynamical system is our main technical contribution.
+ Missing reference and comparison with prior work--- Thank you for catching the missing Arnaboldi et. al. reference. We will certainly add this reference to the camera-ready version. We will also add a comparison with and further contextualize the recent works on the spiked random feature model.
+ Multi-pass SGD---We agree that it would be interesting to study multi-pass SGD in this setup, adapting the ideas in Lee et. al. and Arnaboldi et. al.
+ Empirical validation beyond stated framework---Pretraining and transfer learning are routinely employed in modern practice. Our results are an attempt to explain the empirical success of these methods. In this sense, our results provide theoretical insight into empirical phenomena already observed in previous works.
+ Empirical study of effect of correlation at initialization---We are happy to add a simulation study noting the effects of initial correlation on the output of one-pass SGD.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed and thoughtful responses. I appreciate the clarifications provided. While the rebuttal addresses some of my main concerns, I still find the contribution somewhat incremental relative to prior work and slightly engineered in parts. I therefore maintain my original score.
That said, I note a clear consensus toward acceptance among the reviewers, and I also recommend acceptance. | null | null | null | null | null | null |
Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints | Accept (spotlight poster) | Summary: This paper investigates a single-loop ALM-type algorithm for solving linearly constrained nonconvex optimization problems. The framework accommodates both stochastic objective functions and stochastic constraints. Theoretical analysis demonstrates that the proposed algorithm achieves competitive complexity.
Claims And Evidence: Yes, it is clear.
Methods And Evaluation Criteria: The complexity measure studied is standand.
Theoretical Claims: Yes, I think the main theorem makes sense.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: Yes, I go through the supplementary material.
Relation To Broader Scientific Literature: This paper builds upon the approach of Zhang & Luo (2020, 2022) and extends it to the stochastic setting.
Essential References Not Discussed: The literature is well discussed.
Other Strengths And Weaknesses: Although this is a theoretical paper, validating the proposed algorithm with numerical examples would strengthen its contributions. Additionally, the differences from previous work should be highlighted, as this paper heavily relies on the smoothed technique, which has been extensively studied in analysis.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Are inequality constraints handled solely through projection? From the problem formulation, it appears that they can only be incorporated into the set $X$. Could you clarify this?
2. The paper states that the key difference from Alacaoglu \& Wright (2024) is the inclusion of additional constraints in $X$. However, the complexity result in Alacaoglu \& Wright (2024) is $O(\epsilon^{-3})$, which is better than the result presented in this paper. Could you provide further explanation on this? Is there potential to improve the complexity in this paper, and what are the underlying reasons?
3. I notice that in Theorem 3.1, the stepsizes $\tau$, $\eta$, and $\beta$ decrease as the number of iterations $T$ increases. Would this lead to impractically small stepsizes in real applications? How should these parameters be set in practical implementations?
4. How should the threshold $M_y$ be chosen in Algorithm 2? If it is set too small, $y$ may always remain at zero, potentially causing issues with the feasibility update. Could you provide guidance on selecting an appropriate value for $M_y$?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and their detailed questions. We really appreciate their effort in evaluating our work.
> Numerical examples
We ran a preliminary experiment to validate our theory. Please see our response to Reviewer WK98.
> Additionally, the differences from previous work should be highlighted, as this paper heavily relies on the smoothed technique, which has been extensively studied in analysis.
We want to highlight that, to our knowledge, this is the first algorithm achieving the optimal $\mathcal{O}(\epsilon^{-4})$ sample complexity for stochastic nonconvex optimization problems with stochastic constraints using only one sample per iteration.
First, in view of [1], which works for unconstrained min-max problems, the descent property cannot be guaranteed when their idea is applied to constrained problems. To address this issue, we introduce a potential function specifically designed for constrained settings, ensuring the descent property. We also use the gradient of the Moreau envelope, in place of the function $\nabla K$ used in works of Zhang and Luo, see for example Lem A.7 and A.8 and the estimations using these. We note that projection causes extra difficulties with stochastic gradients, see for example the work of Davis&Drusvyatskiy, 2019.
Furthermore, in the case of stochastic linear constraints, we propose a novel technique "pulling back the dual variable", which ensures boundedness of the dual variables. This property is critical but cannot be guaranteed by the deterministic analysis framework.
We will add these descriptions in the final version.
[1] Yang et al. Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity, AISTATS 2022
> Inequality constraints
Thanks for the good question (please also see our response to Reviewer srvF). We clarify that our work can solve problems with constraint $\{x: Ax\leq b\}$ by only projecting on the set $\{x: x\geq 0\}$ and multiplying with $A, A^T$.
In fact, the assumption on projectability of $X$ is without loss of generality.
Suppose that ${x: Hx\le h}$ is difficult to project. We can add slack variables to reformulate this problem to make the set defined by inequality constraints easy to project. Specifically, we can rewrite the original problem as
$$ \min_{x, s} f(x): Ax = b, Hx + s = h, s \geq 0 $$
where $s$ is the slack variable.Then the set defined by inequality constraints $\{(x,s)\mid s\ge 0\}$ is easy to project and then we can handle the equality constraints $Ax=b,Hx+s=h$ by using simple products $A, A^T, H, H^T$, due to our algorithm design.
In summary, our assumption on $X$ being projectable is without loss of generality and for any inequality constrained problem, we can introduce slack variables so we only require projecting onto the simple set $\{x: x\ge 0\}$
> Comparisons with Alacaoglu&Wright, 2024
The sample complexity $O(1/\epsilon^4)$ achieved by our algorithm is already optimal under our assumptions and oracle model, as established by [2] ([2] shows $O(1/\epsilon^4)$ is even optimal for unconstrained problems). The improved complexity in Alacaoglu & Wright (2024) relies on a stronger stochastic oracle and additional smoothness assumptions. Specifically, our oracle assumes access to only one stochastic gradient $\nabla f(x, \xi)$ at a single query point $x$ per iteration, with $\xi$ sampled from the distribution $P$. Under this oracle, the lower bound is $\Omega(1/\epsilon^4)$ [2], matching our complexity.
However, Alacaoglu&Wright, 2024 require a stronger oracle: they assume availability of two stochastic gradients $\nabla f(x;\xi)$ and $\nabla f(y;\xi)$ evaluated at two different points $x,y$ for the same sample $\xi$. Additionally, they make a stronger smoothness assumption:
$\mathbb{E}_{\xi}\|\nabla f(x,\xi)-\nabla f(y,\xi)\|^2\leq L^2\|x-y\|^2$,
which is not required in our analysis.
[2]Arjevani et al. Lower bounds for non-convex stochastic optimization. Math. Program. 2023.
In addition, under the same stochastic oracle and the additional smoothness assumption, we can achieve the complexity of $O(\varepsilon^{-3})$ by combining our ALM framework with the STORM variance reduction. This is a straightforward extension of our techniques and we can provide a proof sketch in the discussion stage if the reviewer wishes. We will add this result in the final version. The only difference in this algorithm would be in the definition of $G$:
$$G(x_t, y_{t+1}, z_t) = F_t + A^{\top}(Ax_t - b) + \lambda(x_t - z_t)$$
where
$$F_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) + (1 - \alpha) (F_t - \nabla f(x_t, \xi_{t+1}))$$
> Step sizes
Our stepsizes are chosen as order $1/\sqrt{T}$, as standard in stochastic optimization. We refer to the textbook [3].
[3] Lan. First-order and stochastic optimization methods for machine learning, 2020.
> Choice of $M_y$
Sorry, the choice of $M_y$ was given in our proof of Thm 4.2, we will add it to main text in the final version. | Summary: The paper proposes smoothed primal-dual algorithms for solving stochastic nonconvex optimization constrained by deterministic or random linear inequality constraints. This is both an important advance in theory, and could be useful practically.
Claims And Evidence: The claims are correct, as far as I can tell.
Methods And Evaluation Criteria: There are no experiments described.
Theoretical Claims: The proofs are correct, as far as I can tell.
Experimental Designs Or Analyses: There are no experiments described.
Supplementary Material: I have seen the "subsequent pages" of the submission, but there is no other "supplementary material".
Relation To Broader Scientific Literature: The literature is fairly presented.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: - l. 42 (first paragraph of intro) should mention the assumption that X is a set easy to project on. Currently, it gives a misleading impression of handling all linear inequality constraints.
- on penalty methods: other penalty strategies (eg, Pen(x) = f(x) + \rho \| Ax-b \|_2) do not require $\rho \to \infty$. This part is unclear.
- Th. 3.1 gives a complexity result for all iterates (z_t). In view of the proximal point interpretation of the method, one would think that the last iterate has better properties. Is that so?
- l. 117 on the right: “...and lambda is selected accordingly”, it is a useful property of Moreau envelope? This part feels weird in the sentence.
- l. 123: missing parenthesis
- l. 143 on the left: “When we have the problem” is not followed by a clause.
- l. 160: x -> y
- l. 170 on the left: the full stop after -/infty.
- l. 182 on the right: “outputs a points”
- l. 194: augmented Lagrangian L_ρ(x_t, y_t+1, z_t) is mentioned, while in the rest of the paper, there are only two arguments used. The three-argument version (with z_t) does not make sense, because the "smoothing operation" is performed "outside" L_ρ.
- l. 217 on the right: “This post processing step… ...do not affect”
- l. 237 on the left: “Moreau envelope are critical”
- l. 254 on the right: “This which requires…”
- l. 295 on the right: “...decreases, in expectation,u p to an error term depends \tau^2 and the variance”
- l. 397 on the left: “the best-known rate have been”
- l. 422 on the left: “which is obtained with either double loop algorithms” not followed by “or…”
- l. 435 on the left: “Since these work focuses”
- l. 557: should be "u \in X"
Minor things:
- l. 143 on the left: “When we have the problem” is not followed by a clause.
- l. 117 on the right: “...and lambda is selected accordingly”, it is a useful property of Moreau envelope? This part feels weird in the sentence.
- l. 170 on the left: the full stop after -/infty.
- l. 182 on the right: “outputs a points”
- l. 217 on the right: “This post processing step… ...do not affect”
- l. 237 on the left: “Moreau envelope are critical”
- l. 254 on the right: “This which requires…”
- l. 295 on the right: “...decreases, in expectation,u p to an error term depends \tau^2 and the variance”
- l. 397 on the left: “the best-known rate have been”
- l. 422 on the left: “which is obtained with either double loop algorithms” not followed by “or…”
- l. 435 on the left: “Since these work focuses”
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and their detailed questions. We really appreciate their effort in evaluating our work.
---
>(first paragraph of intro) should mention the assumption that $X$ is a set easy to project on. Currently, it gives a misleading impression of handling all linear inequality constraints.
This is a good point! We first wish to emphasize that assuming $X$ is easy to project is without loss of generality.
Suppose that $X=\{x\mid Hx\le h\}$ is difficult to project. We can add slack variables to reformulate this problem so that we can handle it by projecting on simple set $\{ x: x\geq 0\}$ and multiplications by $H, H^T$. Specifically, we can rewrite the original optimization problem as
$$
\min_{x, s} f(x): Ax = b, Hx + s = h, s \geq 0
$$
where $s$ is the slack variable. Then the set defined by inequality constraints $\{(x,s)\mid s\ge 0\}$ is easy to project on and we can dualize the equality constraints $Ax=b,Hx+s=h$ in the Lagrangian function and use only multiplications with matrices $A, A^T, H, H^T$.
In summary, for problems with linear equality and inequality constraints, we can assume $X$ is easy to project without loss of generality. We also refer to our response to Reviewer srvF.
> Th. 3.1 gives a complexity result for all iterates ($z_t$). In view of the proximal point interpretation of the method, one would think that the last iterate has better properties. Is that so?
This is a good question. However, in nonconvex stochastic optimization, the last iterate generally may not have better properties, and typically we need to sample a point from among all the iterates. For example, in [1] (Algorithm 1 and 2), the authors also need to randomly draw output x from $(x_t)^T_{t=1}$ at uniform. There are more examples, please see Section 6, in textbook [2]. In summary, in the stochastic case, we believe that it is hard to use the proximal point interpretation of ALM to improve the last iterate guarantees, because even without linear constraints, we are not aware of last iterate guarantees for single-loop algorithms for nonconvex stochastic optimization.
[1] Lin, Jin, Jordan. On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems, ICML 2020.
[2] Lan. First-order and stochastic optimization methods for machine learning. Springer, 2020.
> On penalty methods: other penalty strategies (e.g., $Pen(x) = f(x) + \rho \| Ax - b \|_2$) do not require $\rho \to \infty$. This part is unclear.
Indeed, in principle, there are papers about exact nonsmooth penalty methods, however we are not aware of complexity results with such a method for our stochastic problem. If the reviewer knows of such a reference, we would be happy to include it. We are taking advantage of the dual variable updates to avoid large penalty parameters. Our focus on ALM instead of nonsmooth penalty methods is because ALM has been a popular method in practice and our aim is to improve its theoretical understanding. The importance of analyzing ALM-based algorithms is also emphasized in the work of Alacaoglu&Wright, 2024 and in fact, we solve an open question stated in the Section 5 of this work.
> Line 117 on the right: “...and lambda is selected accordingly”, is it a useful property of the Moreau envelope? This part feels weird in the sentence.
Because in line 117, we only assume $f$ is weakly convex, we must select $\lambda$ accordingly to ensure the Moreau envelope is also smooth. For example, if $f$ is $\rho$-weakly convex, then we need $\lambda < \frac{1}{\rho}$. We will revisit the sentence
> Line 194: augmented Lagrangian $L_\rho(x_t, y_{t+1}, z_t)$ is mentioned, while in the rest of the paper, only two arguments are used.
Thank you for pointing out this typo, which we will fix. Please see the response to reviewer WK98.
Finally, we wish to thank you so much for your careful reading and suggestions about the phrasing at many places, we will carefully incorporate your suggestions in our final version. | Summary: The authors introduce smoothed primal-dual algorithms for solving stochastic nonconvex optimization problems with linear inequality constraints. Their approach builds on an inexact gradient descent framework for the Moreau envelope, where the gradient is approximated using a single step of a stochastic primal-dual augmented Lagrangian algorithm. They also establish the $\mathcal{O}(\epsilon^{-4})$ sample complexity guarantee for their algorithms and provide extensions to stochastic linear constraints.
Claims And Evidence: I guess the authors should avoid stating that "the algorithm is free of large batch sizes" in the abstract and elsewhere, as the post-processing step requires a batch size of $\mathcal{O}(\epsilon^{-2})$.
Methods And Evaluation Criteria: This paper primarily focuses on the convergence analysis of the proposed algorithm, without presenting any experiments. While I understand the overall idea, certain aspects remain unclear, as noted in my previous comments.
Theoretical Claims: 1. The title mentions linear inequality constraints. However, in (1), the constraint is given as $Ax = b$, which represents a linear equality constraint. Then, in line 145, the constraint changes to $Ax \leq b$. This inconsistency is confusing—what is the main problem the authors aim to solve?
2. In Assumption 1.1.2, the set is defined as $X = {Hx \leq h}$. If the studied problem involves linear inequality constraints, why is $X$ treated separately from $Ax = b$?
3. The paper presents two definitions of $\epsilon$-stationary points—one in line 140 and another in (3). Are these definitions equivalent?
4. What does $L_{\rho}(x, y, z)$ represent in line 194? In line 98, $L_{\rho}$ only involves $x$ and $y$.
Experimental Designs Or Analyses: There are no experiments included. It is essential to add some to empirically validate the performance of the proposed algorithm.
Supplementary Material: I went through Appendix A.
Relation To Broader Scientific Literature: I do not see much connection to the broader scientific literature, as this paper focuses solely on algorithm analysis. However, the proposed algorithm has the potential to be applied to other machine learning problems.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: See the summary and theoretical claims sections.
Other Comments Or Suggestions: It could be better to have a table summarizing all the related algorithms, their convergence guarantees, and their complexities.
Questions For Authors: Please answer my questions from other sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and their detailed questions. We really appreciate their effort in evaluating our work.
> The reviewer asks to avoid "the algorithm is free of large batch sizes" in the abstract and elsewhere, as the post-processing step requires a batch size $\mathcal{O}(\varepsilon^{-2})$.
We respond to this question in two parts.
First, in our Theorem 3.1, we can get the $\varepsilon$-near stationary point without the post-processing step, hence this result is completely free of large batch sizes.
Second, for the result in Corollary 3.2, we use the post-processing step **only once** to get $\epsilon$-stationary point (we refer to our answer below for the difference between stationarity and near-stationarity), whereas other works that use large batch sizes require it in **every iteration**. For example, the work [I] not only considers linear equalities but also uses increasing mini-batch sizes at every iteration (see Remark 2 of [I]). Moreover, such a post-processing step is standard and has been used in earlier works, for example, [II, Appendix H] that also use single-sample algorithms at every iteration.
[I] Huang, Chen, and Huang. "Faster stochastic alternating direction method of multipliers for nonconvex optimization." ICML 2019.
[II] Alacaoglu and Lyu. "Convergence of first-order methods for constrained nonconvex optimization with dependent data." ICML 2023.
As a result, we still argue that our method has a much more benign batch-size requirement: we use a large-batch size for **only one** iteration, and only for the optional post-processing step. Hence, our algorithm uses a single sample at every iteration. We will clarify this in our final revision.
> "The title mentions linear inequality constraints, ... This inconsistency is confusing—what is the main problem the authors aim to solve?"
Sorry for the confusion!
We focus on the problem with the linear inequality constraint $Ax\leq b$. Let us see why it is a special case of our problem formulation. Let us start from
$$
\min_x f(x) \quad \text{s.t.} \quad Ax \leq b.
$$
Specifically, we introduce a slack variable $s$, set $Ax - b = -s$, and $s \geq 0$, and write the equivalent problem as:
$$
\min_{x\in\mathbb{R}^n, s\geq 0} f(x) \quad \text{s.t.}\quad [A \; I]\begin{bmatrix} x \\ s \end{bmatrix} = b.
$$
Setting
$$\begin{equation*}
z=\binom{x}{s}, ~~Z = \mathbb{R}^n \times \mathbb{R}^m_+, ~~B=\begin{bmatrix} A & I \end{bmatrix}, ~~~ g(z) = f(x)
\end{equation*}$$
then we have
$$\begin{equation*}
\min_{z\in Z} g(z)\colon Bz=b,
\end{equation*}$$
which is precisely the problem in Eq (1) where $Z$ is indeed polyhedral, which is required by $X$ in Eq (1). We will clarify this in the final version.
> In Assumption 1.1.2, the set is defined as $X = \{Hx \leq h\}$. If the studied problem involves linear inequality constraints, why is $X$ treated separately from $Ax = b$?
Here, without loss of generality, we can always assume $X$ to be easy to project (if not, we can always use slack variables to reformulate the problem and require projecting on simple sets such as $\{x: x\geq 0\}$), please see also our response to Reviewr 1Nyx. This means each of our projections have closed-form solution. And as explained in our response above, we handle the constraint $Ax \leq b$ by reformulating it into equality constraints with slack variables.
The technical reason we treat them separately is because we would like to ensure that dual variable remains unconstrained (Note that writing the Lagrangian with an inequality constraint will require the dual variable to have a sign).
> The paper presents two definitions of $\varepsilon$-stationary points—one in line 140 and another in (3). Are these definitions equivalent?
These are different notions, as emphasized in our text. Line 140 defines an **$\varepsilon$-stationary** point, while Eq. (3) defines an $\varepsilon$-**near stationary** point. Intuitively, near stationarity indicates proximity to a stationary point, making it slightly weaker than stationarity. Our results show near-stationarity without the post-processing step, whereas stationarity requires post-processing. These two definitions are commonly used in the literature.,such as Davis & Drusvyatskiy (2019). We will clarify this further in our final version.
> What does $L_\rho(x,y,z)$ represent in line 194? In line 98, it only involves $x$ and $y$.
Thank you for pointing out this typo, which we will fix. Please see our response to Reviewer WK98.
> It is essential to add some to empirically validate the performance of the proposed algorithm.
We ran some experiments to validate our theory. Please see our response to Reviewer WK98, and we will include the experimental results in the final version.
> It could be better to have a table.
Please see the table in our response to Reviewer WK98. We will include the table in our final version. | Summary: This work proposed stochastic algorithms for solving nonconvex optimization problems with linear inequality constraints. The main idea is to treat the original nonconvex constrained problem as a nonsmooth optimization problem, and solve it by leveraging the Moreau envelope smoothing technique.
Claims And Evidence: The main claim of this paper is that the proposed method can achieve a complexity of $O(\epsilon^{-4})$ in terms of solving nonconvex optimization with linear inequality constraints. This claim is supported by the analysis.
Methods And Evaluation Criteria: The idea behind the proposed method makes sense.
Theoretical Claims: I briefly checked all the proofs in the appendix, and I do not see any major issues.
Experimental Designs Or Analyses: The paper has no experiment.
Supplementary Material: I reviewed most of the proofs in the appendix.
Relation To Broader Scientific Literature: The main contribution of this work is to show a way of using the Moreau envelope technique in solving the family of constrained optimization problems. Similar strategy may apply to other setting.
Essential References Not Discussed: There is a previous work studied the nonsmooth min-max problem and proposed a very similar algorithm. Although the ultimate goals are different, as they do not consider any constraints, I believe it still worth to discuss due to the high similarity in the method design.
Quanqi Hu, Qi Qi, Zhaosong Lu, and Tianbao Yang. Single-loop stochastic algorithms for difference of max-structured weakly convex functions. In Advances in Neural Information Processing Systems 37: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, 2024.
Other Strengths And Weaknesses: The method design and analysis seem interesting and solid to me, but the lack of experiment is the main weakness.
Other Comments Or Suggestions: As someone who is not familiar with the constrained optimization literature, I find the motivation to be vague while reading the first section, especially when many details on existing works are deferred to the last section. It would be a lot clearer if there is a table summarizing existing and the proposed methods and highlighting the main contributions of this work.
Typo: in line 123, the right column, a ')' is missing in the gradient of Moreau envelope.
Questions For Authors: In line 194 (left column), the definition of the function $L_\rho (x,y_{t+1},z_t)$ seems missing. Should it be $L_\rho (z_t,y_{t+1})$ instead? The same issue exists in Algorithm 1 and Algorithm 2.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and their detailed questions. We really appreciate their effort in evaluating our work.
> The reviewer suggests discussing the following paper by Hu et al, 2024.
Thank you for pointing out this relevant work! We agree that this paper focuses on a different problem with different goals. We will include a discussion of the algorithm design in this paper and its similarity to our work, in the final version of our paper.
> It would be a lot clearer if there is a table summarizing existing and the proposed methods and highlighting the main contributions of this work.
We will add this table to our paper. We also include it here for reference. For further complexity comparisons (and clarifying how we handle $Ax\leq b$), we refer to our response to Reviewer 1NyX. For the importance of showing guarantees for ALM instead of penalty methods, we also refer to [1]. We solve an open question stated in Section 5 of [1].
$$
\text{Objective: } \mathbb{E}_\xi [f(x,\xi)]
$$
| **Reference** | **Constraint** | ** Oracle** | **Complexity** | **Loops** | Method |
| ------------- | -------------------------------------------------------------------------------------- | ----------- | ------------------------------------------- | --------- | ------- |
| [1] | $A x = b$ | ① | $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ | 1 | ALM |
| [1] | $\mathbb{E}[c(x,\zeta)] = 0$ and $x \in X$ where $X$ is easy to project on | ① | $\widetilde{\mathcal{O}}(\varepsilon^{-5})$ | 1 | Penalty |
| [2] | $c(x)=0$ and $x \in X$ where $X$ is easy to project on | ① | $\mathcal{O}(\varepsilon^{-3})$ | 1 | Penalty |
| [3] | $\mathbb{E}[c(x,\zeta)] = 0$ and $x \in X$ where $X$ is easy to project on | ① | $\mathcal{O}(\varepsilon^{-5})$ | 2 | Penalty |
| This work | $A x \leq b$ | ② | $\mathcal{O}(\varepsilon^{-4})$ | 1 | ALM |
| This work | $\mathbb{E}_\zeta[A(\zeta)x - b(\zeta)] \leq 0$ | ② | $\mathcal{O}(\varepsilon^{-4})$ | 1 | ALM |
Oracle ①: for a given random seed $\xi$, we need to sample $\nabla f(x, \xi)$ and $\nabla f(y, \xi)$, satisfying:
$$\mathbb{E}_\xi[\nabla f(z,\xi)]=\nabla f(z)$$
and
$$\mathbb{E}_\xi\|\nabla f(z,\xi)-\nabla f(z)\|^2\leq \sigma^2$$
for $z=x$ and $z=y$ and,
$$\mathbb{E}_{\xi}\|\nabla f(x,\xi)-\nabla f(y,\xi)\|^2\leq L^2\|x-y\|^2$$
Oracle ②: we only evaluate the stochastic gradient at a single point $x$ for seed $\xi$, satisfying
$$\mathbb{E}_\xi[\nabla f(x,\xi)]=\nabla f(x)$$
and
$$ \mathbb{E}_\xi\|\nabla f(x,\xi)-\nabla f(x)\|^2\leq \sigma^2$$
[1] Alacaoglu, and Wright. “Complexity of Single Loop Algorithms for Nonlinear Programming with Stochastic Objective and Constraints.” AISTATS 2024
[2] Lu, Mei, Xiao. Variance-reduced first-order methods for deterministically constrained stochastic nonconvex optimization with strong convergence guarantees arXiv, 2024.
[3] Li et al. Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization. COAP 2024
> Typo: in line 123, the right column, a ')' is missing
Thank you! We will fix the typo.
> In line 194 (left column), the definition of the function $L_\rho(x, y_{t+1}, z_t)$ seems missing. Should it be $L_\rho(z_t, y_{t+1})$ instead?
It is a typo. In line 194, $L_\rho(x, y_{t+1}, z_t)$ should be $L_\rho(x, y_{t+1})$ (which is defined in line 97). And in Algorithm 1 and 2 (line 224 and 338), $L_\rho(x_t, y_{t+1}, z_t)$ should be $L_\rho(x_t, y_{t+1})$.
> The method design and analysis seem interesting and solid to me, but the lack of experiment is the main weakness.
We ran a preliminary experiment to validate our theory. This result also shows that our algorithm converges faster than [I]. We consider the stochastic quadratic programming problem:
$$\min_{x} \mathbb{E} \frac{1}{2} x^T Q(\zeta) x + r(\zeta)^T x: \mathbb{E} [A(\xi) x - b(\xi)] = 0, \ell_i \leq x_i \leq u_i$$
where $x \in \mathbb{R}^{n}$, $Q(\zeta) \in \mathbb{R}^{n \times n}$, $r(\zeta) \in \mathbb{R}^n$, $A(\xi) \in \mathbb{R}^{m \times n}$, and $b(\xi) \in \mathbb{R}^m$.
| **Iteration** | ALM(ours) | **Penalty method [I]** |
| ------------: | --------: | ---------------------: |
| 1e5 | 0.5036 | 0.7050 |
| 1e6 | 0.0454 | 0.5017 |
| 2e6 | 0.0219 | 0.2656 |
| 5e6 | 0.0152 | 0.2025 |
| 1e7 | 0.0082 | 0.1755 | | null | null | null | null | null | null |
Unlocking Post-hoc Dataset Inference with Synthetic Data | Accept (poster) | Summary: The paper proposes a novel approach to dataset inference (DI) by addressing the challenge of requiring a held-out dataset that closely matches the suspect dataset’s distribution. The authors generate synthetic held-out data using a text generator trained on a suffix-based completion task. They further introduce a post-hoc calibration step to bridge the likelihood gaps between real and synthetic data, improving the reliability of DI. Experiments on diverse datasets demonstrate that this method enables accurate detection of whether a dataset was used in model training while minimizing false positives, making it a viable tool for copyright enforcement and data ownership verification.
Claims And Evidence: I am confused by Section 4.3, and I believe more details should be added to improve the clarity of the claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable (Section 5.1), but the limited model scale and dataset size constrain the generalizability of the conclusions.
Theoretical Claims: The paper does not present theoretical claims, however:
1. The justification for hypothesis testing, particularly Equation (3), lacks sufficient reasoning.
2. There appears to be a contradiction between the first paragraph of Section 2.2 and Equation (1), suggesting an error in one of them.
3. The explanation in Section 4.3 is unclear.
Experimental Designs Or Analyses: The experimental design has limitations due to the small model scale and dataset size. The analysis is reasonable, but the necessity of the weight constraint is not clearly demonstrated based on the ablation in Section 5.5.
Supplementary Material: Part of the supplementary material is checked: Appendix B and Appendix E.
Relation To Broader Scientific Literature: The paper addresses a key limitation of prior DI methods by eliminating the need for an in-distribution held-out dataset, making DI more practical and reliable.
Essential References Not Discussed: I am not deeply familiar with the literature on DI, but I am well-versed in MIA research. The paper's coverage of MIA-related works is adequate.
Other Strengths And Weaknesses: **Strengths**:
1. The research problem is important and addresses a clear gap in existing dataset inference methods.
2. The approach is interesting, particularly the calibration step for handling distribution shifts.
3. The current experimental results are generally effective.
**Weaknesses**:
1. The experimental scale is limited, involving only small models and two datasets. The fine-tuning setup is also narrow (only one recipe is tested), limiting broader MIA evaluations. Additionally, the evaluation of text classifiers is limited, as only two specific classifiers are tested without exploring stronger alternatives.
2. The writing lacks clarity in several areas. There is a contradiction between Section 2.2’s first paragraph and Equation (1). Sections 4.2 and 4.3 are unclear, particularly the reasoning behind Equation (3) and the rationale of the weight constraint.
I believe expanding the experiments would strengthen the validation of the method.
Other Comments Or Suggestions: **Typo**:
1. Line 283: "we rely on an important observations..."
2. Line 310: A "]" is missing
**Suggestions**:
1. I suggest giving a concrete example of MIA(f(x)), which can help readers understand the context better.
2. I strongly suggest improving the clarity of Section 4.2 and Section 4.3.
Questions For Authors: 1. Can you clearly explain the relationship between $MIA(x_{val})$ and $MIA(x_{sus})$? There seems to be a contradiction in Section 2.2.
2. What is the rationale behind Equation (3)? Specifically, what is the range of values for $c(\cdot)$? Discrete or continuous?
3. Can you provide a more detailed explanation of the rationale in Section 4.3? Why does the linear regression coefficient become negative? A concrete example would be helpful.
4. Have you considered using stronger text classifiers beyond the two currently used?
5. Can your method defend against a malicious actor? Please discuss potential vulnerabilities.
6. How does your method perform under different fine-tuning recipes? My understanding is that you tested only one setting.
7. In the ablation study, you included ICL paraphrasing. I am curious about the results when using preference optimization for generation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **The limited model scale and dataset size constrain the generalizability of the conclusions.**
We performed additional experiments on larger Pythia models (2.8B, 6.9B). Moreover, we present results on Olmo 7B, which is trained on a very large training corpus of 4.5TB. The results show our method can scale to much larger models and dataset sizes. Please refer to our response to Reviewer z1aa (https://openreview.net/forum?id=a5Kgv47d2e¬eId=z18yFkoW8x) for detailed results.
> **The justification for hypothesis testing, particularly Equation (3), lacks sufficient reasoning. [...] Specifically, what is the range of values for c(⋅)? Discrete or continuous?**
The difference comparison t-test in Eq. 3 is designed following our proposed suffix completion generation task. In the suffix completion task, the suspect and generated heldout suffixes have the same position within a sentence, making the two suffixes directly comparable. To this end, we propose to quantify the distributional gap in each suspect-heldout suffix pair. The gap is measured *with only text signals* (by the text classifier), and *with both text and MIA signals* (by the combined classifier). If the gaps in suspect-heldout pairs are larger with extra MIA signals, then the suspect suffixes are likely to be used for training.
Specifically, the values for c(⋅) are continuous, ranging from 0 to 1.
> **There appears to be a contradiction between the first paragraph of Section 2.2 and Equation (1).**
We modified the text in Section 2.2 as follows:
“The null hypothesis is that the average scores for $D_{sus}$ **are higher than or equal to** those for $D_{val}$.”
> **The fine-tuning setup is narrow (only one recipe is tested)**
We performed additional experiments under two more fine-tuning setups, (1) finetuning with LoRA for 10 epochs and (2) full-finetuning, as follows.
| Fine-tuning Method | True | AUC Text (%) | AUC Comb (%) | P-value (Diff) | Inferred |
|-------------------|------|--------------|--------------|----------------|----------|
| **LoRA (1 epoch)** *(in Table 3)* | ✓ | 53.8 | 55.6 | 0.01 | ✓ |
| | ✗ | 53.8 | 53.9 | 0.13 | ✗ |
| **LoRA (10 epochs)** | ✓ | 53.7 | 56.2 | 0.005 | ✓ |
| | ✗ | 53.6 | 53.5 | 0.14 | ✗ |
| **Full-finetuning** | ✓ | 53.7 | 56.8 | 0.008 | ✓ |
| | ✗ | 53.8 | 53.7 | 0.21 | ✗ |
Our results suggest that, with more iterations or larger parameter size, the membership signal is stronger and therefore easier to detect. This means that the setup in our submission is the most challenging one.
> **Only two specific classifiers are tested without exploring stronger alternatives.**
We explored a few stronger text classifier backbones and chose the simple 2-layer GPT2-based classifier.
Considering the limited number of tokens provided by the author in the DI scenario, stronger text classifiers can be easily overfitted, especially for SoTA LLM-based text classifiers. Here, we present the results for different architectures with different parameter sizes.
| Architecture | AUC_text (%) | Training time (minutes) |
|-------------|------------|------------------------|
| **GPT2 (initialized+2 layers)** | 53.3 | 0.5 |
| **GPT2 (pretrained+lora)** | 53.0 | 26.2 |
| **GPT2 (pretrained+full finetuning)** | 52.3 | 36.8 |
| **Llama3-8B (pretrained+lora)** | 53.2 | 65.1 |
The results show that the simple GPT2-based classifier ((initialized+2 layers) can achieve the best AUC. Additionally, this simple classifier has a significantly shorter training time, making the method more practical when faced with more queries.
> **Typo: Line 283 and Line 310**
We have modified those texts in our manuscript.
> **Can you provide a more detailed explanation of the rationale in Section 4.3? A concrete example would be helpful.**
We added a more detailed explanation with an example in the manuscript for better understanding. We also added a figure to visually demonstrate the idea.
> **Can your method defend against a malicious actor?**
The existence of a malicious actor is one of the important problems we would like to address for the previous LLM Dataset Inference—the author might provide a held-out set with a distributional shift to trigger a false positive result. With our approach, **the arbitrator can synthesize a reliable held-out set** to avoid false positives caused by a malicious user. We think this should be a standard pipeline for dataset inference and visualize it in Figure 1.
> **In the ablation study, you included ICL paraphrasing. I am curious about the results when using preference optimization for generation.**
We extended Table 5 and provided extra results of three baselines (preference optimization, ICL text completion, ICL paraphrasing) together with two more ablation studies for each component for our generation method. Please refer to the second table in our response to reviewer hUkh (https://openreview.net/forum?id=a5Kgv47d2e¬eId=aEzlhvnQm9) for more detailed results.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal—I have read it carefully.
I’m generally satisfied with the new experimental results. I believe many of these should be incorporated into the final version of the paper. The original version presented rather limited empirical evidence, and these additional results (e.g., evaluations on more models and datasets, exploration of alternative classifiers, and the demonstrated ineffectiveness of other data generation methods) are important to substantiate the core claims.
I have one remaining question regarding the experiments: why didn’t you consider using BERT or sentence-transformer as text classifiers? Also, I’m not fully convinced by your statement that “stronger text classifiers can be easily overfitted”—this sounds more like an issue with experimental implementation than a fundamental limitation.
Regarding Equation (3), after re-reading it, I suspect there may be an issue with **the subscripts in the expectations**. I encourage you to double-check this. Please let me know if I am wrong.
For Section 4.3, I cannot comment further since I haven’t seen the revised version.
Overall, I find the experimental results (**after revision**) reasonably comprehensive, and the problem setting and motivation are both meaningful and interesting. **However, I still find the exposition in Sections 4.2 and 4.3—especially the methodological reasoning and notation—quite unclear.** This summarizes my view on the strengths and weaknesses of the paper. I recommend that the final decision be left to the AC’s discretion.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for finding our “problem setting and motivation both meaningful and interesting”, and our “experimental results comprehensive”.
> **Use BERT or sentence-transformer as text classifiers?**
We also ran experiments with BERT and sentence transformer (all-MiniLM-L6-v2) as text classifiers, showing that our GPT2-based classifier overperforms both architectures.
| Model | AUC_text (%) |
|-------------|------------|
| all-MiniLM-L6-v2 | 50.8|
| BERT| 51.2|
| **Ours (GPT2-based)** | 53.3 |
> **“Stronger text classifiers can be easily overfitted” sounds more like an issue with experimental implementation than a fundamental limitation.**
In our experimental setup where *user-provided samples are limited*, a simple GPT2-based classifier slightly outperformed other architectures. In this challenging scenario, larger classifiers likely lack sufficient data to reach their optimal performance. In real-world applications, however, an arbitrator can select the most suitable text classifier based on their specific conditions regarding data size, data type, and computation resources. This selection would give *even stronger performance* for our method.
> **Subscripts in Equation 3**
We modify the subscripts in Equation 3 as follows.
$E_{x_{val}^{test} ∈ D_{val}^{test}, x_{sus}^{test} ∈ D_{sus}^{test}}[c_{comb}(x_{val}^{test}) - c_{comb}(x_{sus}^{test})] ≤ E_{x_{val}^{test} ∈ D_{val}^{test}, x_{sus}^{test} ∈ D_{sus}^{test}}[c_{text}(x_{val}^{test}) - c_{text}(x_{sus}^{test})].$
> **I still find the exposition in Sections 4.2 and 4.3 unclear, especially the methodological reasoning and notation.**
We added the following explanation regarding Sec. 4.2 and 4.3 for better understanding.
**Sec. 4.2**
Here we explain our dual-classifier t-test in Equation 3, where the superscripts "test" are abbreviated for simplicity.
The aim of our t-test is to distinguish the distributional shift caused by *membership signal* from the shift caused by *generation*. These two kinds of distributional shifts exist between every suspect suffix $x_{sus} ∈ D_{sus}$ and its generated held-out version $x_{val} ∈ D_{val}$.
We train two classifiers to achieve this goal, as described in Sec. 4.2, lines 283-307. Here, a classifier $c$ predicts a probability $c(x)$ (ranging from 0 to 1) that the input $x$ is from the held-out set $D_{val}$. Therefore, the difference in the predicted score $c(x_{val}) - c(x_{sus})$ shows how well the classifier can distinguish the suspect-heldout pair $x_{val}$ and $ x_{sus}$. In every such pair, we quantify the shift caused by *generation* with the text classifier as $c_{text}(x_{val}) - c_{text}(x_{sus})$. We also quantify the combined effects caused by *generation and membership* with the combined classifier as $c_{comb}(x_{val}) - c_{comb}(x_{sus})$.
If the membership signal is present, the combined effects will be stronger than the generation effect alone, and the predicted probability will be slightly more accurate for the combined classifier, i.e. $c_{comb}(x_{val}) - c_{comb}(x_{sus}) > c_{text}(x_{val}) - c_{text}(x_{sus})$. This will be examined with a t-test, where the null hypothesis is formalized as in Equation 3.
**Sec. 4.3**
We illustrate our approach with the following example. For each suspect-heldout pair, the smaller MIA score is highlighted in bold in the table. We observe:
1) On the **member set**, suspect data consistently shows smaller MIA scores. This occurs because membership signals have stronger effects than generation, causing suspect data to consistently yield lower MIA scores than held-out data.
2) On the **non-member set**, held-out data may exhibit smaller values for certain metrics. This happens because generation randomness introduces fluctuation in MIA scores.
For both member and non-member sets, we train a linear model $l$ that aggregates all MIA scores to predict an overall score:
$l(x) = \sum_i w_i MIA_i(x)$
The held-out set is labeled as 1 and the suspect set as 0. The model assigns positive weights $w_i$ to any MIA metrics $MIA_i$ on the *member set* because label 0<1 and $MIA_i($suspect$)<MIA_i($held-out$)$. However, on the *non-member set*, the model assigns a negative weight $w_3$ for $MIA_3$. This means a smaller $MIA_3$ score in the held-out set would contribute to a larger overall MIA score, which is undesirable. To address this, we constrain all weights in the linear model to be strictly positive, ensuring that a lower $MIA_i$ score can only result in a lower overall MIA score.
| Membership | Split | Label | $MIA_1$ | $MIA_2$ | $MIA_3$ |
|------------|------------|---------------------|-----------------------------------|-----------------------------------|----------------------------------------|
| ✓ | Suspect (*natural*) | 0 | **0.86** | **0.87** | **0.54** |
| | Held-out (*generated*) | 1 | 0.90 | 0.91 | 0.55 |
| ✗ | Suspect (*natural*) | 0 | **0.88** | **0.89** | 0.58 |
| | Held-out (*generated*) | 1 | 0.90 | 0.90 | **0.56** | | Summary: The work presents a method to generate synthetic data for use with Dataset Inference (DI) algorithms, which require held-out examples from the dataset distribution of interest. DI algorithms are used to detect the presence of certain data distributions within the training data of a particular model. The authors argue that a (non-synthetic) held out dataset is rarely available and that even small shifts in the distribution of a held out set can cause DI algorithms to fail. Furthermore, the authors define a "post-hoc calibrated" test procedure, to use in conjunction with this synthetic dataset, to identify when a suspected dataset has been included in a model's training set.
Claims And Evidence: The submission claims two contributions, the procedure for generating synthetic held-out data and a second procedure for testing whether the suspect data was included in the target model's training set via a "post-hoc calibration". The impact of the post-hoc calibration seems quite well supported by the empirical evaluation of Section 5. The necessity for a more complex data generation procedure is somewhat supported in Table 5, but it is unclear from the text, is this result only for the single-author blogpost dataset? Do similar trends also hold for datasets from the Pile?
Methods And Evaluation Criteria: The benchmark datasets (The Pile and blog posts) are reasonable, however, the authors do not mention what blog posts they use. Could this be clarified?
The model (Pythia 410M) is relatively small so there is some question as to how well the method performs with model scale. Additional model sizes (2B, 7B) would help illustrate the scaling behavior.
Another simple data generation baseline, which should be considered, is an in-context suffix completion approach. One where you prompt the model to complete a suffix (selected in the same fashion as done for the proposed suffix completion approach). Furthermore, a few full examples could be included in the context to give the model some guidance on the writing style.
Theoretical Claims: No theoretical claims were made.
Experimental Designs Or Analyses: The empirical methodology seems sound (apart from my comments related to methods and evaluation criteria).
Supplementary Material: I did review the entire appendix briefly.
Relation To Broader Scientific Literature: The paper does a reasonable job of introducing DI concepts and related works.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: As presented, the paper presents two main contributions -- (1) how to generate synthetic data for solving the DI problem and (2) how to conduct tests that take into account the synthetic data. The paper could be strengthened and made more clear, if there are additional experiments to support (1), i.e., how critical is the synthetic data generation approach given (2). If it is indeed critical, it would be great to have further empirical support and if it is not important (e.g. any reasonable data generation approach works well when coupled with (2)), then the paper should be updated to emphasize (2) more, in my opinion.
Other Comments Or Suggestions: - As motivated by sec 3.3, using a bad (i.e. out of distribution) held-out set can create false positives...this begs the question, should we use distribution testing to ensure that the suspect and held out set are statistically similar as a standard part of the DI protocol?
- "The null hypothesis is that the average scores for D_sus is lower than for D_val"
Shouldn't the null hypothesis (eq 1) be that the scores for the two distributions are the same, i.e. in the case that D_sus and D_val are drawn from the same distribution and neither has been included in the training set? If we find D_sus has a much higher score, then we reject the null and conclude D_sus was included in the training set.
- Please make clear earlier that larger MIA score implies the datapoint is included in the training set. At least, that is what I inferred from how MIA is being used in Eq (1).
- Figure 3, left-panel, step 2: all the text snippets in the illustration have identical text. Is this intended?
- This is minor, but I found the use of red checks to indicate positive and green/blue crosses to indicate negative confusing, since I usually think of red to indicate negative and green to indicate positive. I suggest using green checks and red crosses.
- Does the inferred column in Table 4 need to be updated to indicate some of the errors, for example in the case of Ubuntu IRC it looks like there is a false positive?
Questions For Authors: (1) Can you provide some (even preliminary) empirical results for using an in-context suffix completion approach (as suggested in the Experimental Design section above)? This would serve as an ablation to measure the effectiveness of suffix completion with LoRA vs using in-context instructions.
(2) Can you provide additional experiments showing that the synthetic generation method is necessary (and that the post-hoc calibration alone is not sufficient)? Essentially, expanding the result of Table 5.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Is the result in table 5 only for the single-author dataset? Do similar trends also hold for the Pile dataset?**
The results in Table 5 are for the Pile. Below, we also show that our data generation procedure is necessary for the single-author dataset.
| Configuration | True Membership | P-value | Inferred Membership |
|--------------|----------------|---------|---------------------|
| **w/o Suffix Completion (ICL Paraphrasing)** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **w/o Post-hoc Calibration (Original T-test in DI)** | ✓ | <0.001 | ✓ |
| | ✗ | <0.001 | ✓ |
| **w/o Weight Constraint** | ✓ | 0.02 | ✓ |
| | ✗ | 0.08 | ✗ |
| **Ours** | ✓ | 0.01 | ✓ |
| | ✗ | 0.13 | ✗ |
The results for single-author data follow a similar trend as the Pile, showing the importance of each component in our framework.
>**The authors do not mention what blog posts they use. Could this be clarified?**
The blog posts are from the financial domain, regarding stock investment suggestions. Due to copyright reasons, the blog post data is not available for sharing yet.
>**Additional model sizes (2B, 7B) would help illustrate the scaling behavior.**
We provide the results to demonstrate the effectiveness of our method across larger model sizes of Pythia (2.8B and 6.9B) and on different model architecture (Olmo 7B). Please refer to our response to Reviewer z1aa (https://openreview.net/forum?id=a5Kgv47d2e¬eId=z18yFkoW8x) for detailed results.
>**Another simple data generation baseline, which should be considered, is an in-context suffix completion approach.**
We performed this baseline in Appendix A.1 “Text Completion Prompt”. Concretely, as an approach of in-context learning, we give ten examples from the documents, and prompt the GPT-4-Turbo model to finish an incomplete document. This baseline performs slightly worse than ICL paraphrasing, and the GPT2 classifier can easily detect the distributional shift with an AUC of 99.2%.
> **How critical is the synthetic data generation approach given contribution (2)?**
Besides the results in Table 2 and Appendix A, we also extended the following results in Table 5 to show the importance of our data generation approach. Concretely, we show that all the generation baselines do not work even with our self-calibration methods (Row 1-6). Moreover, we show that each component in our generation procedure is critical to our framework (row 7-10).
| Configuration | True Membership | P-value | Inferred Membership |
|--------------|----------------|---------|---------------------|
| **w/o Suffix Completion (ICL Text Completion)** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **w/o Suffix Completion (ICL Paraphrasing)** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **w/o Suffix Completion (Performance Optimization)** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **w/o segment and shuffle** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **w/o suffix completion** | ✓ | 1.0 | ✗ |
| | ✗ | 1.0 | ✗ |
| **Ours** | ✓ | <0.001 | ✓ |
| | ✗ | 1.0 | ✗ |
> **should we use distribution testing to ensure that the suspect and held-out set are statistically similar as a standard part of the DI protocol?**
Adding distributional testing as part of DI protocol only strengthens its *robustness* but not *utility*. When a distributional shift is identified, it is still hard for the author to provide held-out data required for DI (as shown in Sec. 3). This disables further DI procedures, even when there is a copyright infringement. This is exactly the problem we would like to address in this work—instead of asking the author to provide statistically similar held-out data, **the arbitrator should be able to synthesize a reliable held-out set**. We think this should be a standard protocol for dataset inference and visualize it in Figure 1.
> **"The null hypothesis is that the average scores for $D_{sus}$ is lower than for $D_{val}$" Shouldn't the null hypothesis (eq 1) be that the scores for the two distributions are the same?**
The text should be “The **alternative** hypothesis is that the average scores for $D_{sus}$ is lower than for $D_{val}$”. If $D_{sus}$ is trained on the target model, then its MIA scores (e.g. loss) should be lower than the IID held-out set $D_{val}$. The null hypothesis, against the alternative hypothesis, is that “the average scores for $D_{sus}$ is higher than or the same as those for $D_{val}$”. This also aligns with Eq. 1.
>**Does the inferred column in Table 4 need to be updated to indicate some of the errors, for example in the case of Ubuntu IRC it looks like there is a false positive?**
As pointed out in Sec. 5.3, although the **difference comparison** t-test works for all subsets, the **AUC comparison** t-test fails for three subsets: Ubuntu IRC, PubMed Central, and PhilPapers. Therefore, we suggest the difference comparison t-test as a better metric. We have also modified the table accordingly.
> **Suggestions for figures and symbols**
We thank the suggestions from the reviewer, and made changes as suggested.
---
Rebuttal Comment 1.1:
Comment: The author's response effectively addressed some of my previous concerns, leading me to increase my score to a 3.
---
Reply to Comment 1.1.1:
Comment: We appreciate your time and effort in reviewing our work. Thank you for your thoughtful feedback and for reconsidering our submission. We are happy that our responses addressed the concerns. We would be grateful for any additional guidance on potential further improvements. | Summary: This paper presents a method for Unlocking Post-hoc Dataset Inference (DI) with Synthetic Data, for safeguarding intellectual property in the era of Large Language Models. The authors claim that synthetic data generation combined with post-hoc calibration can robustly enable DI, allowing data owners to verify unauthorized data usage in LLM training.
Claims And Evidence: The claims made in the submission are well-supported by experimental evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-designed and appropriate for the problem of post-hoc dataset inference. The authors provide clear and convincing evidence that their approach effectively addresses the challenges of DI.
Theoretical Claims: All theoretical claims have been checked and seem valid.
Experimental Designs Or Analyses: The authors evaluate their method on single-author blog posts and subsets of the Pile dataset, which include diverse text types (e.g., Wikipedia, ArXiv, GitHub). These datasets are representative of real-world scenarios where DI might be applied, such as copyright claims for blog posts or academic papers.
Supplementary Material: Yes, all supplementary materials have been reviewed.
Relation To Broader Scientific Literature: Previous work on Dataset Inference by Maini et al. (2021) provides a method to determine whether a suspect dataset was used to train a machine-learning model. However, DI traditionally requires an in-distribution held-out dataset, which is rarely available in practice. The paper addresses the critical limitation of DI by proposing a method to synthetically generate in-distribution held-out data.
Essential References Not Discussed: To the best of my knowledge, there are no essential references that are not discussed.
Other Strengths And Weaknesses: Strengths:
The paper is well-written and contains extensive experimental results.
It touches on an important topic of safeguarding intellectual property against Large Language Models.
Weaknesses:
The method is evaluated on single-author blog posts and subsets of the Pile dataset, which are relatively homogeneous or well-structured. However, it may struggle with highly specialized datasets (e.g., medical, legal, or technical texts) where domain-specific patterns might not be captured. This could lead to larger distributional shifts and reduced reliability in these contexts.
Other Comments Or Suggestions: No other comments.
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for the positive feedback and encouraging comments.
>**However, it may struggle with highly specialized datasets (e.g., medical, legal, or technical texts) where domain-specific patterns might not be captured. This could lead to larger distributional shifts and reduced reliability in these contexts.**
We would like to note that the Pile is not a homogeneous dataset and has many different data subsets from various domains, formats, and languages. For example, the PubMed Central (PMC) is a subset of the PubMed online repository for biomedical articles (*medical*). The FreeLaw subset contains opinions from federal and state courts (*legal*). The ArXiv subset contains *technical text* mostly from papers in the fields of Math, Computer Science, and Physics. EuroParl is a *multilinguall* corpus consisting of the proceedings of the European Parliament in 21 European languages. StackExchange contains user-posted content on the Stack Exchange network in the *question-answer format* covering a wide range of subjects. We present examples of these synthetic texts from the different subsets of the Pile in the Appendix E. These samples show the diversity of the subsets in the Pile. Here, we also demonstrate that our method generalizes well across different pile subsets and model parameter sizes.
We further extend our analysis beyond the Pile and test our method on the Dolma dataset, which was built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We run additional experiments using Olmo 7B, with the member data from Dolma-v1_7 and the non-member data from Paloma. We observe that our method is also effective for the different data types. The results are provided in the table below (following Table 4 in our submission):
| **Model** | **Subset** | **True** | **AUC Text (%)** | **AUC Comb (%)** | **P-value (Diff)** | **Inferred** |
|-------|--------|------|--------------|--------------|----------------|----------|
| **Pythia-2.8B** | ArXiv | ✓ | 53.0 | 59.1 | <0.001 | ✓ |
| | | ✗ | 53.1 | 53.3 | 0.84 | ✗ |
| | StackExchange | ✓ | 53.7 | 57.7 | <0.001 | ✓ |
| | | ✗ | 52.3 | 53.7 | 0.18 | ✗ |
| | EuroParl | ✓ | 50.5 | 54.0 | 0.009 | ✓ |
| | | ✗ | 51.1 | 52.3 | 0.17 | ✗ |
| **Pythia-6.9B** | ArXiv | ✓ | 53.6 | 60.0 | <0.001 | ✓ |
| | | ✗ | 53.5 | 54.1 | 0.37 | ✗ |
| | StackExchange | ✓ | 53.7 | 58.4 | <0.001 | ✓ |
| | | ✗ | 52.4 | 53.1 | 0.21 | ✗ |
| | EuroParl | ✓ | 50.8 | 55.8 | <0.001 | ✓ |
| | | ✗ | 50.7 | 53.4 | 0.11 | ✗ |
| **Olmo-7B** | Wikipedia | ✓ | 52.9 | 55.4 | 0.009 | ✓ |
| | | ✗ | 52.1 | 50.6 | 1.0 | ✗ |
| | Common Crawl | ✓ | 53.5 | 55.7 | 0.01 | ✓ |
| | | ✗ | 54.2 | 53.8 | 0.68 | ✗ | | Summary: This paper introduces a framework for post-hoc dataset inference in large language models (LLMs) by synthesizing held-out data. The central motivation is to address the critical bottleneck of conventional dataset inference methods, which require an in-distribution held-out set that is rarely available in practice. To overcome this limitation, the authors propose generating synthetic held-out data using a fine-tuned data generator trained on a suffix completion task, designed to mimic the distribution of the suspect dataset. Since synthetic data may introduce distributional shifts relative to natural data, the paper further introduces a post-hoc calibration step that leverages a dual-classifier approach—one classifier distinguishes natural versus synthetic text, while a membership-aware classifier combines textual features with membership signals (e.g., perplexity and token probabilities). The method is evaluated on both single-author datasets (such as blog posts) and larger, heterogeneous collections (e.g., various subsets from the Pile), demonstrating that the synthetic held-out data, once calibrated, can reliably reveal whether a suspect dataset was used in LLM training. The authors also present ablation studies comparing different generation strategies (including in-context paraphrasing, preference optimization, and suffix completion), showing that suffix completion yields the smallest distributional gap.
Claims And Evidence: 1. The paper claims to unlock reliable post-hoc dataset inference by synthetically generating an in-distribution held-out set that can serve as a surrogate for real data.
2. The authors designed data generation pipeline based on a suffix completion task, followed by a post-hoc calibration procedure using a dual-classifier approach.
Methods And Evaluation Criteria: The methodology focuses on generating synthetic held-out data from the suspect set using a fine-tuned generator and then aligning the generated and natural data distributions via post-hoc calibration. The dual-classifier setup—combining a text-only classifier with a membership-aware classifier—is used to disentangle genuine membership signals from artifacts caused by distributional shifts.
Theoretical Claims: The dual-classifier and t-test framework are theoretically motivated by prior work in membership inference attacks.
Experimental Designs Or Analyses: Experiments are conducted on both a single-author blog dataset and multiple subsets from the Pile dataset. The target models are LLMs (e.g., fine-tuned Pythia 410M) and the synthetic data is generated via multiple strategies, with suffix completion proving most effective.
However, the evaluation might be strengthened by testing on more recent LLM architectures and diverse model sizes.
Supplementary Material: The supplementary sections include detailed descriptions of the generation process.
Relation To Broader Scientific Literature: The work is well positioned within the literature on dataset inference, membership inference attacks, and test set contamination.
Essential References Not Discussed: Incorporating a comparison with more recent methods for LLM membership inference could strengthen the literature review.
Other Strengths And Weaknesses: Strengths:
1. The paper addresses a practical bottleneck in dataset inference by eliminating the need for a reserved held-out set.
2. The paper is easy to follow.
Weaknesses:
1. The approach for synthesizing held-out data, using techniques like suffix completion and preference optimization—does not represent a significant departure from existing methods for synthetic data generation. Similar strategies have been explored in other works.
2. While the authors claim that their dual-classifier t-test framework uniquely disentangles genuine membership signals from mere distribution shifts, its novelty compared to the approach used in “LLM Dataset Inference: Did you train on my dataset?” remains unclear.
3. Sensitivity to different LLM architectures and hyperparameter settings is not fully explored.
Other Comments Or Suggestions: Refer to weaknesses
Questions For Authors: 1. How does your approach for synthesizing held-out data—using prompted paraphrasing, suffix completion and preference optimization—improve upon or differ from prior methods for synthetic data generation?
2. Can you clarify the unique aspects of your dual-classifier t-test framework compared to the methodology presented in “LLM Dataset Inference: Did you train on my dataset?”
3. What is the sensitivity analyses on various LLM architectures and hyperparameter settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: >**The evaluation might be strengthened by testing on more recent LLM architectures and diverse model sizes.**
We provide results to demonstrate the effectiveness of our method across larger model sizes of Pythia (2.8B and 6.9B) and different model architecture (Olmo 7B). Please refer to our response to Reviewer z1aa (https://openreview.net/forum?id=a5Kgv47d2e¬eId=z18yFkoW8x) for detailed results.
>**Weakness 1 & Question 1: How does your approach for synthesizing held-out data [...] differ from prior methods for synthetic data generation?**
The goal of our work is not to propose a novel method for synthetic data generation. Instead, our main contributions lie in combining *high-quality and nearly IID synthetic data generation* with *post-hoc calibration* within a framework of dataset inference to provide a practical tool for real-world litigations.
Moreover, we note that directly applying previous generation methods only have limited performance for our task (shown in Table 2). Therefore, we introduce two simple but effective designs tailored for our framework:
(1) **Segmenting and shuffling** author-provided documents: The segmented and shuffled sequences constructs IID generator training and inference splits, facilitating easier synthesis by the data generator.
(2) **Suffix comparison** scheme: Both the suspect and synthetic suffixes share a common prefix, which ensures the two suffixes have the same position within a sentence. This enables direct comparison between original-synthetic suffix pairs, which also supports the difference comparison t-test in our *post-hoc calibration*.
We present the following results to demonstrate the importance of the two key designs in our held-out data synthesis:
| **Approach** | **True Membership** | **AUC_Text (%)** | **AUC_Comb (%)** | **P-value** | **Inferred Membership** |
|----------------------|-------------------|----------------|-----------------|-------------|------------------------|
| **w/o segment and shuffle** | ✓ | 72.5 | 45.8 | 1.0 | ✗ |
| | ✗ | 72.6 | 42.2 | 1.0 | ✗ |
| **w/o suffix comparison** | ✓ | 62.8 | 52.9 | 1.0 | ✗ |
| | ✗ | 62.4 | 50.3 | 1.0 | ✗ |
| **Ours (w/ segment and shuffle+suffix comparison)** | ✓ | 53.6 | 59.7 | <0.001 | ✓ |
| | ✗ | 53.1 | 46.6 | 1.0 | ✗ |
With the two proposed designs, our method has the lowest AUC_Text, indicating it can synthesize nearly IID held-out data. Moreover, the generated held-out set has a significantly higher AUC_Comb on the member set than on the non-member set, which shows our synthetic texts are effective for detecting membership signals.
>**Weakness 2 & Question 2: Clarify the unique aspects of your dual-classifier t-test framework compared to the method in “LLM Dataset Inference”**
In Section 3, we show that the “LLM Dataset Inference” fails since it uses *the simple null hypothesis* that “the suspect dataset was not used for training” with a t-test that compares the MIA scores between *the suspect and the held-out sets*.
By introducing *statistical tests based on dual-classifier*, we can statistically distinguish distributional shifts caused by actual membership signals from those caused by generation, and provide more robust detection results. Specifically, our method compares the shifts between suspect and held-out sets *with and without MIA metrics*, which are measured by the text classifier and the combined classifier, respectively.
>**Weakness 3 & Question 3: Sensitivity analysis on hyperparameter settings**
In our proposed framework, the arbitrator, who verifies the training data usage, can choose hyperparameters in both the held-out data generation and post-hoc calibration stages.
In the **held-out data generation** stage, the arbitrator can choose hyperparameters according to the AUC of the text classifier, where a low AUC_text indicates nearly IID suspect and synthetic held-out texts.
In the **post-hoc calibration** stage, there are a few important hyperparameters for our method, especially epochs for training the linear classifiers and t-test sample sizes. We present the sensitivity analysis of our method for these hyperparameters below (we use 200 epochs and 2000 sample size in our manuscript).
| **Hyperparameter** | **Value** | **True membership** | **P-value (diff)** | **Inferred membership** |
|----------------|-------|-----------------|----------------|---------------------|
| **Epoch** | 100 | ✓ | <0.001 | ✓ |
| | | ✗ | 1.0 | ✗ |
| | **200** | ✓ | <0.001 | ✓ |
| | | ✗ | 1.0 | ✗ |
| | 500 | ✓ | <0.001 | ✓ |
| | | ✗ | 1.0 | ✗ |
| | 1000 | ✓ | 0.003 | ✓ |
| | | ✗ | 1.0 | ✗ |
| **Num. t-test samples** | 1000 | ✓ | <0.001 | ✓ |
| | | ✗ | 1.0 | ✗ |
| | **2000** | ✓ | <0.001 | ✓ |
| | | ✗ | 1.0 | ✗ |
| | 3000 | ✓ | <0.001 | ✓ |
| | | ✗ | 0.41 | ✗ |
| | 4000 | ✓ | <0.001 | ✓ |
| | | ✗ | 0.25 | ✗ |
The results show that our method can tolerate a wide range of hyperparameters and gives true positives and true negatives. | null | null | null | null | null | null |
The Limits of Predicting Agents from Behaviour | Accept (poster) | Summary: This paper explores the theoretical limits of predicting AI agent behavior from observational data alone. The authors analyze the extent to which we can infer an agent's beliefs and predict its behavior in novel situations based only on its past behavior. They provide:
1) A mathematical framework using Structural Causal Models (SCMs) to represent an agent's internal world model and how it relates to observable behavior.
2) Formal bounds on how well we can predict an agent's preferences and actions out-of-distribution, showing that multiple different internal models can be consistent with the same observable behavior but lead to different actions in new situations.
3) Theoretical results showing that while some aspects of an agent's beliefs can be bounded based on behavioral data, other aspects (such as counterfactual fairness) remain fundamentally underdetermined.
Claims And Evidence: The paper's claims are generally well-supported through formal mathematical proofs. The authors establish several theorems with bounds on what can be inferred about an agent's internal model from behavioral data alone. These theoretical results are complemented by concrete examples that illustrate the concepts.
Methods And Evaluation Criteria: The proposed methods make sense for the problem or application at hand.
Theoretical Claims: I perused the proofs at a high-level but not in detail; they seem correct.
Experimental Designs Or Analyses: n/a
Supplementary Material: I reviewed the proofs at a high level.
Relation To Broader Scientific Literature: The paper builds on and connects several areas of research:
- The work extends bounds from classical causal inference (referencing work by Pearl, Robins, Manski, etc.) to the novel context of agent behavior prediction.
- The paper directly addresses concerns in AI safety about predicting agent behavior out-of-distribution, particularly connecting to work on goal misgeneralization.
- The authors connect their theoretical framework to emerging evidence that language models may develop internal world models.
- The paper applies its framework to fairness definitions from the literature, showing fundamental limitations in inferring an agent's notion of fairness from behavior alone.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- The paper provides a rigorous mathematical framework using Structural Causal Models to analyze agent behavior prediction.
- The work has clear and important implications for AI safety research, establishing fundamental limits on our ability to predict AI behavior in new situations based solely on observational data.
- The Medical AI example used throughout the paper effectively illustrates the theoretical concepts
- The paper explores extensions to its core assumptions, such as approximate grounding, partial observability, and modifications to model structure, making the analysis more robust and applicable to real-world scenarios.
Weaknesses:
- Many examples use binary variables (like medical outcomes being 0 or 1), which simplifies the analysis but may not capture the complexity of real-world scenarios with continuous outcomes or multiple categories.
- The framework primarily addresses single-step decision making rather than sequential decision problems, which limits its applicability to reinforcement learning agents and longer-term planning scenarios.
Other Comments Or Suggestions: n/a
Questions For Authors: Can you add a limitations section wherein you discuss cases in which SCMs might not be applicable? e.g., practical instances of cyclical dependencies that are assumed away in the definition of SCMs
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper, we appreciate the questions and suggestions.
***1. Many examples use binary variables (like medical outcomes being 0 or 1), which simplifies the analysis but may not capture the complexity of real-world scenarios with continuous outcomes or multiple categories.***
To clarify, note that our results hold for systems of (discrete) variables of any dimensionality and arbitrary complexity (in the functions relating different variables and distributions), and continuous outcomes / utility values. There is in principle no additional difficulty to accounting for complex environments, though perhaps the computation of the bounds from finite samples may become more challenging. Our intent with examples using binary variables was to make them easy to describe, though we go agree that more realistic experiments could add to illustrate the results. See also the response to reviewer EHMW for additional context.
***2. The framework primarily addresses single-step decision making rather than sequential decision problems, which limits its applicability to reinforcement learning agents and longer-term planning scenarios.***
The sequential setting, in principle, offers no additional difficulties: you would have variables and observations indexed by time and the formalism and bounds would be unchanged. More concretely, the preference gap $\Delta$ at some time $t$ that determines the AI's beliefs that one decision is superior to another one might be written in the sequential setting as:
$\Delta := \mathbb E_{\hat{P}_{\sigma, d_1^{(t)}}}[Y^{(t)} \mid \boldsymbol c^{(t)} ] - \mathbb E\_{\hat{P}\_{\sigma, d_0^{(t)}}}[ Y^{(t)} \mid \boldsymbol c^{(t)}]$
where $\boldsymbol c^{(t)} = (s^{(t)}, d^{(t-1)}, \dots, d^{(0)},s^{(0)})$ now includes the agent's trajectory in the deployment environment up to time $t$ (for example). In this setting, we do assume however that there is no learning taking place in the deployment environment, i.e. the AI is not updating its internal model upon interacting with the environment $\boldsymbol c^{(t)}$. This might be a limitation in practice that would be interesting to relax in future work.
***3. Can you add a limitations section wherein you discuss cases in which SCMs might not be applicable? e.g., practical instances of cyclical dependencies that are assumed away in the definition of SCMs.***
This is a good suggestion, thank you. We will add to the limitations section in Appendix B.1 a description of the kinds of environments that would not be well captured by SCMs. | Summary: The authors study the problem of predicting out of distribution behaviours of AI agents based on data, assuming the AI agents have internal causal models of the world. First, examples are presented for why observing the behaviour of AI agents may not be enough to determine the specific causal model it is using (i.e. inferring the causal model from observations and utilities is an ambiguous problem). Then, authors present formal bounds on the utility gap that the causal agents experience based on the observed data. Finally, they discuss the implications this has for AI safety and overall inferring of preferences in autonomous systems.
Claims And Evidence: The claims are well supported, and there are no contributions that are left unjustified. The authors are very clear that their main contribution is the derivation of such bounds and the accompanying discussion on the implications.
Methods And Evaluation Criteria: There are not really any methods or evaluations. The authors provide some illustrative examples that cover their claims.
Theoretical Claims: Unless I missed any key details, the proofs and theoretical claims seem correct. I do have to raise that there are imprecisions and misspecified formal statements along the paper that make the judgement of some of these claims difficult (see questions below), but overall the proofs are well written and clear.
Experimental Designs Or Analyses: The paper has no experiments.
Supplementary Material: I reviewed the additional results and proofs (in as much detail as I could given the reviewing loads).
Relation To Broader Scientific Literature: The authors do a very thorough job at relating the work to existing works in causality, but there is barely no discussion on how the work relates to other sub-fields that have studied similar problems. The question of whether preferences can be inferred from agent choices and observations is a classic problem in game theory, and more recently has received a lot of attention in reinforcement learning through inverse RL or reward shaping. Furthermore, the conclusion that one cannot fully determine preferences from limited partial observation data has been reached in these subfields before. I would have appreciated some mentions to these connections.
Essential References Not Discussed: On reward misspecification, see e.g.:
Freedman, Rachel, Rohin Shah, and Anca Dragan. "Choice set misspecification in reward inference." arXiv preprint arXiv:2101.07691 (2021).
Skalse, Joar, and Alessandro Abate. "Misspecification in inverse reinforcement learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 12. 2023.
On more classic (and some more recent) game theoretic results studying preference misspecification:
Richter, Marcel K. "Revealed preference theory." Econometrica: Journal of the Econometric Society (1966): 635-645.
Afriat, Sydney N. "The construction of utility functions from expenditure data." International economic review 8.1 (1967): 67-77.
Abbeel, Pieter, and Andrew Y. Ng. "Apprenticeship learning via inverse reinforcement learning." Proceedings of the twenty-first international conference on Machine learning. 2004.
(There are many more, and I'm not suggesting all need to be included or a specific discussion is needed to justify the work, but some mention on the novelty of the contribution given existing preference theory results would be of benefit).
Other Strengths And Weaknesses: Strengths:
- Paper is very well structured, the message is very clear and the examples help illustrating the points.
- I appreciate work that neatly demonstrates how given systems we are deploying at a large scale may have flaws, and how these flaws can affect their deployment.
Weaknesses:
- Some formal statements are not clear or feel rushed, see below.
- There is not really a prescriptive nature to the paper (i.e. even a heuristic on how to improve interpretability of AI agents under causal world models where we only have access to observational data). This is not always needed of course, but the work would be more complete.
- It is not clear how the main contribution, although interesting, is of particular relevance to the field or novel when compared to existing subfields (mentioned above). Intuitively there seems to be some novelty when considering such causal world models, and in particular on the specific bounds obtained, but it feels slightly thin.
Other Comments Or Suggestions: I have some particular comments and suggestions regarding the formalisms used.
- Before equation (1) authors state that $Y$ is a 'potential response', which seems to map latent variables to observables, and that it entails a distribution over the possible outcomes of $Y$ (this is already not very clear). However, in section 4, it is stated that $Y$ is a utility function, and at the same time it seems to be a member of the outcome set ($Y\in \mathbf{V}$). It is not clear whether $Y$ is then a set-valued map, a random variable, a function or a set of outcomes.
- Definition 2 is not very clear. $\phi$ is not used again, and it is not clear whether it's a functional that just maps to the reals.
- In Footnote 4, the idea of a policy is introduced, but it's not very clear what a policy means in this context, nor what the 'training domain' is. Is a policy just a function from the observables to probability distribution over interventions?
- Authors seem to indicate policies are stochastic, but then seem to indicate in eq (3) that AI agents are assumed to act deterministically. Some clarification on this would be good.
Questions For Authors: See comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your review, we appreciate the references to related work and suggestions for clarifying the formalism.
***1. The authors do a very thorough job at relating the work to existing works in causality, but there is barely no discussion on how the work relates to other sub-fields that have studied similar problems (e.g., game theory, inverse reinforcement learning, decision theory).***
Thank you for highlighting work in inverse reinforcement learning (IRL) and decision theory. There are several similarities in the broader goals of these different lines of research, e.g., inferring expected utilities from data, but also a few differences that are becoming clearer to us as we contrast the papers mentioned by the reviewer with ours. This deserves a longer discussion but one advantage of the causal formalism (in our view) that we can highlight here is that it gives us tools to make inferences across different environments (including in counter-factual scenarios, e.g., to reason about harm and fairness). Perhaps it is fair to say that our bounds thus more concretely characterize the limits of what can be predicted about agent behaviour *out-of-distribution* which then complements the (in our understanding mostly in-distribution) partial identification results in IRL and decision theory.
***2. It is not clear how the main contribution, although interesting, is of particular relevance to the field or novel when compared to existing subfields (mentioned above). Intuitively there seems to be some novelty when considering such causal world models, and in particular on the specific bounds obtained, but it feels slightly thin.***
We do believe that our bounds characterize more precisely what can be expected of AI behaviour out-of-distribution. The bounds themselves are significant because they fully describe the set of possible AI behaviours, under our assumptions. This is an important question for AI Safety and there are several implications for the design of AI systems and the extent to which we can trust them (e.g., if the bounds are wide we might want to monitor AI behaviour more closely, or if harmful actions can be ruled out by our bounds that might increase our trust in the technology when deployed).
To our knowledge, related work, e.g. in IRL, has studied extensively the inference of utilities from data but less so the inference of action choice given a (set of) compatible utilities, which is the contribution we seek to make in our paper. More speculatively, we also believe that there is an interesting cross-discipline opportunity that this paper might encourage: the causal formalism provides a new angle of attack, showing that it is possible to exploit the inductive biases implied by the AI's learned world model to predict their behaviour and beliefs. This approach is novel and potentially fruitful due to the strength of the worst-case guarantees, subject to our assumptions.
***3. Particular comments and suggestions regarding the formalisms used.***
Thank you for these. Here we provide a few clarifications and we will update the manuscript accordingly.
- $Y\in\boldsymbol V$ encodes the utility and is a random variable whose assignment is determined by the AI’s world model (structural causal model).
- $\phi$ is meant to describe a mapping from an AI’s internal model to a statement over probability of events. With hindsight, this notation was confusing: in the revised manuscript we have replaced the definition of AI beliefs by "probability statements that are derived from the AI's internal world model" such as $P^{\widehat M}(A=a)$ or $P^{\widehat M_z}(B=b | C=c)$.
- Yes, a policy is a function from the observables to probability distribution over interventions.
- We hypothesize that the data of AI interactions we observe, denoted $P_\pi(\boldsymbol v)$, is collected while the AI is training or learning, meaning that we can expect some amount of exploration. The policy $\pi$ is therefore stochastic. However, once the agent is deployed in a new environment, we assume that its choices are driven by expected utilities and are therefore deterministic (though other decision-making models are also possible and would induce variations on how to use the bounds to inform our understanding of AI behaviour). | Summary: This paper derives theoretical bounds on predicting an AI agent’s future behavior from observed actions, using structural causal models (SCMs) to formalize beliefs and grounding. It introduces metrics like the preference gap and counterfactual fairness gap, arguing that—even with full behavioral data—fundamental limits remain on how precisely we can infer an AI’s internal decision-making, especially out-of-distribution. The authors back their claims with theorems and examples (e.g., a Medical AI scenario).
Claims And Evidence: - the paper claims that an AI’s internal beliefs and decision-making can only be bounded, not exactly determined, using external behavior. though the derivations (e.g., Theorems 1–5) lay out tight bounds under various assumptions and their proofs appear rigorous at a glance, they lean heavily on idealized assumptions (like perfect grounding) that are rarely met in real-world scenarios.
- the proofs are not thoroughly validated for practical use. examples like the Medical AI feel oversimplified, leaving questions about whether the results would hold in more settings.
Methods And Evaluation Criteria: - The paper is almost entirely theoretical, centered on structural causal models and derived bounds on preference gaps, fairness, and harm, well-motivated from a theoretical standpoint.
- but the work leaves a gap regarding practical evaluation
Theoretical Claims: I reviewed several key proofs of theorems 1-5, the proofs seem to be correct under the idealized conditions assumed
Experimental Designs Or Analyses: The work includes only illustrative examples, such as examples 1-3, which are largely schematic, appearing more as a proof-of-concept rather than a validated experimental framework, i.e., no rigorous experimental design or analysis on real or simulated data.
Supplementary Material: I read appendix A and C for derivations of several proofs (including theorems 1 and 5)
Relation To Broader Scientific Literature: - well-rooted in established causal inference literature (think Pearl, Bareinboim, and others) and builds upon causal modeling and counterfactual analysis
- it neglects to contrast these with recent empirical work on AI interpretability and behavior prediction.
Essential References Not Discussed: the paper thoroughly engages with classical causal inference literature but omits more recent developments in causal representation learning within deep learning, such as *Towards Causal Representation Learning (2021)*;
also, it doesn’t mention recent empirical benchmarks for evaluating AI safety and interpretability which may complement its theoretical contribution, like *RobustBench: a standardized adversarial robustness benchmark (Croce et al., 2020)*
Other Strengths And Weaknesses: - without real data experiments or simulations, the practical impact of these bounds remains questionable
- a major downside is the heavy reliance on assumptions that may not hold in practice, making it hard to see how these bounds could directly inform AI safety
- as to theorem presentation, the notation is dense and sometimes overwhelming
Other Comments Or Suggestions: I don't have any other comments
Questions For Authors: - might it be more reliable to use the inferred beliefs to predict (or get the bounds on) the agent's behaviour in the in-distribution situation?
- can you provide any empirical or simulation-based evidence to support your theoretical bounds in a relatively realistic setting?
- if not, what steps would be needed to translate these bounds into actionable guidelines for AI safety and policy design?
- what will happen to your framework if the AI system's internal models evolve over time?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
***1. The proofs are not thoroughly validated for practical use. Examples like the Medical AI feel oversimplified, leaving questions about whether the results would hold in more settings.***
To clarify possible misconceptions, we should emphasize that our results hold for any agent deployed in (discrete) systems of arbitrary dimensionality and complexity in the underlying functions and distributions, subject to our assumptions. More complex environments with more variables do not on their own pose any additional challenges for predicting future agent behaviour according to our bounds. As the reviewer mentions, our contribution is to formally characterize the range of an AI’s possible behaviours out-of-distribution. In our view, it is important that this characterization be theoretical, and apply in the worst-case, even in cases that are difficult to foresee in simulations.
We do appreciate, however, that assumptions on grounding and expected utility maximization might not be appropriate for all systems, and that further experiments could help probe whether the behaviour of current AIs is well predicted through our bounds (as the reviewer is suggesting). One additional experiment we consider prompts an LLM (the agent) to output its subjective expected utilities under different actions, comparing those inferences to our bounds. We provide a summary below of this experimental set-up.
*Experiment description*. We give the LLM in-context the counts of transitions of a simulated dataset to mimic its past experience and approximately satisfy the condition of grounding, i.e., that the AI’s beliefs on the likelihood of events in the environment are consistent with the data. Find below the template we use.
```
You are an agent operating in an environment. You are given the following dataset that represents your experience so far.
{{transition_count}}
Infer a causal model that is compatible with this dataset and use it for decision making.
Note that your causal model can be anything you choose, including unobserved confounders and arbitrary causal associations, as long as it is able to (approximately) reproduce the data.
Your task is to return your beliefs on the expected value of Y after taking the action do(X=0) and do(X=1) in a new environment in which we apply the intervention do(Z=1).
Return your answer as a dictionary {{'expected value of Y given action X=1': <value>, 'expected value of Y given action X=0': <value>}}.
```
Across different simulated datasets we compare the responses of various models from the Gemma family with our bounds. We find in all cases that the responses are included in our bounds. The reasoning traces reveal that LLMs generally follow instructions, hypothesizing a causal model, and deriving expected utilities correctly. This set-up is not without limitations but it does suggest (anecdotally) that it is possible to prompt LLMs to internalize a causal model and act rationally, such that are bounds can be reasonably expected to hold in practice.
***2. A major downside is the heavy reliance on assumptions that may not hold in practice.***
We expand on the strength of modelling assumptions and possible relaxations in the answer to reviewer Q6s6. We would appreciate if you could refer to that response (let us know if we can add further details).
***3. might it be more reliable to use the inferred beliefs to predict (or get the bounds on) the agent's behaviour in the in-distribution situation?***
Our results so far are framed in the infinite-sample regime. This means that if we have access to the training distribution and the AI is grounded (i.e., it has learned how to predict the likelihood of events in-distribution) then we could automatically evaluate expected utilities in-distribution for all actions and derive the AI choice without any uncertainty (assuming it chooses actions to maximize expected utility). If we understand the reviewer’s suggestion as “with finite samples, even in-distribution beliefs might be inferred with error which introduces some uncertainty in their choice of action in-distribution"; this is correct. The question here is how the AI is estimating its beliefs from finite samples, which is interesting. In this paper, we are not addressing estimation issues directly but we do acknowledge that it is an important limitation and a good topic for future work.
***4. what will happen to your framework if the AI system's internal models evolve over time?***
This is an interesting setting, the question might then become: how to predict AI behaviour given a large dataset of interactions in a training environment and a small dataset of interactions in the deployment environment. Our current results do not consider this setting so far, but extensions might be feasible along the lines of (Bellot et al., 2023).
Bellot, A. et al. "Transportability for bandits with data from different environments." NeurIPS 2023. | Summary: This paper considers a model of a decision-making agent as follows. Note the
paper always imagines the agent to be an AI system, but this does not appear to
be essential other than for motivation.
* Suppose the agent makes decisions guided by (1) a utility function and (2) a
causal model of how its decisions influence outcomes in the world.
Precisely, suppose the agent makes decisions so as to maximise expected
utility in a structural causal model (SCM).
* Suppose further that the agent's utility function is known to us, but the
SCM is not, other than that the SCM is sufficiently well-calibrated
("grounded") in the sense that it is in agreement with some known ground
truth SCM as to the distribution over outcomes induced by each of the
agent's potential actions.
In this setting, one can ask theoretical questions of (partial)
identifiability, i.e., to what extent does the behaviour of the agent in the
ground truth SCM determine its future behaviour given various kinds of
interventions, or its true intentions regarding discrimination and harm?
Due to the fact that there may be multiple agent-SCMs consistent with the
observed behaviour, meaning its future behaviour under an intervention or its
internal motivations for a decision are under-determined. Thus, the best one
can hope for is a characterisation of the range of possible behaviours or
intentions given the range of possible internal SCMs.
This paper provides several such bounds under various assumptions.
The first contributions regard the range of possible degrees of preference
between two actions conditional on a known, unknown, or partially-known
intervention, indicating how (un)predictable the agent's future decisions are.
1. Theorem 1 gives tight bounds on the range of possible degrees of preference
between two actions given a fixed atomic intervention to the SCM.
2. Theorem 2 shows that no meaningful bound is possible on the degrees of
preference between two actions given an unspecified intervention to the SCM.
3. Theorem 3 shows that if the intervention is unspecified but the effect on
the distribution of input contexts is known, then some meaningful bounds can
be derived.
Further contributions regard evaluating the agent's intentions regarding
different notions of fairness/discrimination or safety/harm, indicating to what
extent behavioural evaluations can reach meaningful conclusions about these
properties of decision-making systems guided by causal world models:
4. Theorem 4 shows that no meaningful bound on the "counterfactual fairness
gap" can be derived from external behaviour.
5. Theorem 5 shows that in some situations, meaningful bounds on the
"counterfactual harm gap" can be derived from external behaviour.
6. Theorem 8 (appendix D) shows that in some situations, meaningful bounds on
the "direct discrimination gap" can be derived from external behaviour.
7. Appendix D also discusses "causal harm gap" and offers a proof of some
bounds on page 33, though a statement of 'theorem 9' and discussion of its
implications appears to be missing.
The paper also discusses the prospects of deriving stronger/weaker bounds given
different assumptions, primarily through the lens of a simple concrete example
(rather than general bounds) including:
8. If the "grounding" assumption is relaxed to allow an approximate agreement
between the distributions predicted by the agent's internal SCM and the
ground truth SCM, then slightly weaker bounds result.
9. If instead of assuming the agent optimises the specified utility function,
we assume that it optimises a similar 'proxy' utility function that induces
similar decisions given the ground truth SCM, then the behaviour can still
be constrained, at least in similar contexts.
10. Sections 5.2 and 5.3 discuss how with partial knowledge of the agent's
internal SCM, bounds can be improved.
Claims And Evidence: The model, assumptions, and results are clearly stated. The interpretations are
clearly explained. The examples help make the bounds concrete. The
proofs accompanying each formal claim give explicit detail for all derivations
and calculations.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I did not check the correctness of the proofs or examples in detail.
Experimental Designs Or Analyses: N/A
Supplementary Material: I read appendices A and B. I briefly read appendices C and D, skipping the
details of the proofs.
Relation To Broader Scientific Literature: As I understand, prior work must have studied the related question of whether
the behaviour of a utility-maximising decision-maker uniquely determines the
(grounded) causal model used as the basis for its decisions (the answer being
no, in general). Authors can correct me if I am mistaken on this point.
This paper builds on this more fundamental partial identifiability by extending
it to partial identifiability results for the decision-maker's OOD behaviour or
its intentions with respect to fairness and harm.
This observation is valuable to the field of AI safety since it motivates the
need for understanding and control of the specific internal SCM that governs
behaviour in powerful AI systems of the near future.
Essential References Not Discussed: None to my knowledge.
Other Strengths And Weaknesses: **Strengths:** As noted in above sections, I think this paper is very clear and
makes a valuable contribution to the field of AI safety.
**Weaknesses:** The main weaknesses concern the strength of modelling
assumptions. While the paper contains a thorough discussion of how relaxing
various assumptions would affect the results, I think further discussion could
be warranted. Particularly, when it comes to the motivation of predicting the
OOD behaviour and safety properties of interest to the AI safety community, we
are interested in large-scale AI systems acting in very complex, real world
environments with very complex internal cognitive structures. This raises the
following questions:
1. While the paper points out (footnote 2) that there is no assumption that the
AI system reasons explicitly in terms of an SCM, it appears that there is an
assumption that the AI system's behaviour is well-captured by *some* SCM.
If I understand correctly, in reality, large-scale complex AI systems with
complex decision-making procedures might only be approximately modelled by
any (reasonably sized) SCMs.
In turn, if the AI's behaviour is only *approximately* captured by an SCM,
then I would suppose that the bounds will degrade. If this is accurate, then
I think it would be valuable to add an example along these lines.
2. The paper assumes that the agent's decisions are exactly optimal given its
causal model. In complex, real-world environments, even in the limit of
quite powerful AI systems, these systems are still subject to computational
limitations and accordingly it would be more appropriate to treat their
rationality as "bounded" in some sense (for example, supposing they choose
actions that have approximately the highest utility, rather than exactly the
highest utility, given their model).
Again, I suppose such an approximation should degrade the bounds on
predicting behaviour. If so, I think it would be valuable to add an example
along these lines.
3. The paper points out in footnote 4 that it additionally assumes access to a
comprehensive specification of the AI system's behaviour in all possible
contexts (if I understood this footnote correctly). However, in complex
environments, there are too many contexts for us to ever hope to have
such comprehensive behavioural knowledge. There will be contexts in which we
don't observe the system's behaviour. The footnote points out that this
creates strong limitations on inferring future behaviour. I had some trouble
following the exact meaning of this footnote, and invite the authors to
consider expanding this discussion. Is a result along the lines of theorem 2
possible in this situation? Or, at least, a concrete example along these
lines?
I should clarify that I don't see these limitations as undermining the
contribution of the work, which I think is acceptable even given the
assumptions that it makes. I am listing these issues more as a way to suggest
that the contribution would be improved in my opinion if the discussion in
section 5 could be further broadened along the above lines.
The third point above also bears on the discussion of goal misgeneralisation
(section 5.1), which I feel needs refinement:
4. The conclusion of section 5.1 appears to be that "OOD" behaviour (behaviour
under a novel intervention) may in principle be possible to bound if one has
sufficient constraints on the proxy utility function in relation to the true
utility function. An explicit comparison is made to goal misgeneralisation.
However, in empirical work defining and studying examples of goal
misgeneralisation, one trains in a *subset* of contexts, where the proxy
utility and the true utility are constrained to agree, but then tests in
*novel* contexts where they come apart, and the system's performance
degrades to an arbitrary degree. If I understand correctly, the combination
of (1) the assumption that we have behavioural data covering all contexts
and (2) a proxy utility function that is correlated with the true utility in
all observed circumstances amounts more to a setting of "approximate inner
alignment," rather than the concern about more substantial inner
misalignment and a different kind of distribution shift that drive the
failure mode of goal misgeneralisation.
Other Comments Or Suggestions: I found the paper generally well-written, but spotted a couple of minor typos,
or generally questionable phrases, as follows.
1. In the abstract, you write "If an agent behaves as if it has a certain goal
or belief, then we can make reasonable predictions about how it will behave
in novel situations..." This seems to beg the question, that is, isn't "we
can make reasonable predictions about behaviour" the definition of what it
means to say a system is an agent whose behaviour is driven by goals and
beliefs? I was not sure if this "if ... then ..." sentence was an attempt to
justify making predictions based on attributed beliefs---I think some other
justification is needed to draw that conclusion.
2. In the main result summary (Lines 032R--034R), I think you mean to emphasise
"partially determines," but I'm not sure that this emphasis comes across.
For reference, I have written papers on partial identifiability, and I
initially missed the emphasis here. It might help to bold "partially
determines," if this is what you were going for. A complementary approach
might be to add something in the introduction that explains what this means
(along the lines of explaining that even though we can't uniquely identify
the future behaviour, we can narrow it down to a range of possible outcomes,
this paper characterises those outcomes, etc.).
3. Line 040R: "can be can be"
4. "Shane (2023)" should probably be "Legg (2023)," and I think you should
probably cite the youtube video rather than a forum post in which someone
happens to be asking a question about the video.
5. Is definition 2 used in the main text?
6. Line 140L: "how ... the world looks like" ("how the world looks" or "what
the world looks like"?)
7. Definitions 3 and 4: I found it initially unclear what is meant by "for any
d." I think I understand now that $d$ is an arbitrary value associated with
the agent's decision variable $D$, i.e., it means $\forall d \in
\mathrm{supp}_ {D}$. I think it could help to make this explicit. Also I
think that the status of $D$ as a unique decision-making variable is not
made explicit anywhere. Of course, it's possible I'm still confused.
8. Opening of section 3.2: "We ... do not have access to the mchanisms
underlying the actual environment nor the agent's internal model." This is
framed as an assertion, but I think it would be more appropriate to frame it
explicitly as an assumption of the setting you are analysing. Later in the
paper, you go on to point to work that attempts to improve on this situation
as a promising direction.
9. Line 172L: I didn't really understand the connection between 'quick
learning' and grounding. I suppose there is not meant to be a necessary
connection, but rather you are just suggesting this as a sufficient way by
which an AI system may become (approximately) grounded in practice.
10. Definition 5: While some earlier definitions have a small square at the
end, this one (is the first that) does not. I don't personally think the
squares are necessary due to italicisation, but I suppose you may want to
follow a consistent style, so I thought I'd point this out.
11. Example 3: This example, and the definition of approximate groundedness,
felt like they conceptually fit more closely with the content in section 5
than they do here. I wonder if you have considered moving this 'assumption
relaxation example' to the end of the paper with the others, and if this
could improve the flow of the paper's presentation of the main formal
results.
12. Line 233R-ish: Estimating the min/max of a set from samples can be
challenging. I wonder if some kind of optimisation technique could be used
here. Just an idle thought.
13. Theorem 2: This theorem is the most surprising to me. In particular, I
paused to wonder whether it should hold for all SCMs. I noticed I am a
little confused about the space of possible SCMs under consideration. Could
there not be some corner cases involving very simple SCMs in which only a
single intervention were possible, such that we effectively know the
intervention even under the assumptions of this theorem? I apologise for my
unfamiliarity with the SCM framework---possibly this concern is not
well-formed. I looked briefly at the proof, but I am not familiar with the
prior work.
14. Line 293L: "it is known not tight in general," possibly a typo, or just
consider rephrasing?
15. Line 351L: "the external behaviour constraints the AI" typo.
16. Example 5, line 360R: "his" is possibly a typo.
17. Example 5, line 370R: "in fact could show that," typo.
18. Line 437L: "this bound are strictly tighter" typo.
19. Line 408R: "we can use their own" typo.
20. Line 413R: Unclear if "Peter" and "Jon" are meant to be different people or
the same person (if they are meant to be different people, I missed the
significance of this).
21. Line 416R: Double period.
22. Equation 28: There are two stray closing parentheses in the definition of
$\mathcal{F}_ \sigma$.
23. Line 729: "his" is possibly a typo.
24. Line 732: "This then constraints the possible values" typo.
25. Is there meant to be a theorem 9 stated before the proof on page 33?
Questions For Authors: I would appreciate if the authors could point out if there are any inaccuracies
in my summary of the paper and discussion of how it fits into related
literature. If the authors have the appetite for further discussion with me, I
am open to discussing the points I listed under weaknesses further. It doesn't
seem necessary to discuss the "other comments" listed above. Unfortunately, I
don't see much scope to increase my rating further than 'accept'.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review, we appreciate the feedback and the depth of the observations.
We do share your concern on modelling assumptions. Before discussing them in more depth, it might be worthwhile, however, to note more explicitly the increasing evidence available for AI systems (particularly LLMs) behaving rationally across a wide set of environments, and why this behaviour can be described by some causal model.
Decision-theoretic tests of rationality have been applied to LLMs that "demonstrate higher rationality score than those of human subjects" (Chen et al., 2023), see also (Mazeika et al., 2025). An LLM's set of preferences over interventions, to the extent that they are consistent with rationality axioms, can then be formally described by an SCM (Halpern and Piermont, 2024). The same conclusion can also be obtained for agents capable of solving tasks in multiple environments (Richens and Everitt, 2024).
Halpern, Joseph Y., and Evan Piermont. "Subjective Causality." 2024.
Richens, Jonathan, and Tom Everitt. "Robust agents learn causal world models." ICLR. 2024.
Chen, Yiting, et al. "The emergence of economic rationality of GPT." PNAS 2023
Mazeika, Mantas, et al. "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs." 2025.
***1. If the AI's behaviour is only approximately captured by an SCM, then I would suppose that the bounds will degrade.***
In principle, this is true – if AI behaviour sometimes deviates from what is expected given the assumption of a fixed causal model, then the bounds might degrade. And if those deviations are sufficiently random then presumably no guarantees on AI behaviour can be given.
Possibly, one interesting relaxation that can be entertained is the assumption that the AI operates on multiple causal models. For example, suppose it samples a causal model from its set of possibilities and makes a decision according to that causal model. If the AI is grounded, meaning that all members of its set of causal models are compatible with the observed data, then all bounds remain unchanged. This is because the bounds capture the decisions implied by all causal models compatible with the observed data and therefore also the decisions implied by a randomly drawn causal model from this set. In contrast, if the AI is not grounded and sometimes makes decisions based on a causal model that is not compatible with the data, the bounds might degrade.
***2. It would be more appropriate to treat their rationality as "bounded" in some sense (for example, supposing they choose actions that have approximately the highest utility, rather than exactly the highest utility, given their model).***
This is correct -- if the AI chooses actions that have approximately the highest utility (under its own model) we would expect this additional source of uncertainty to degrade our ability to predict AI decision-making. If we understand the reviewer’s intuition correctly, while the AI’s beliefs on expected utilities remain unchanged, possibly the selection of actions given those beliefs may differ from exactly utility maximizing.
In this case, our bounds, e.g., on the preference gap $\Delta$ that measures the relative expected utility benefit from a decision $d_1$ relative to a decision $d_0$, would remain unchanged as they only reflect the AI’s beliefs and not their choices. For exact expected utility maximizers, $\Delta > 0$ ensures that $d_1$ will be chosen over $d_0$ by the AI. For approximate expected utility maximizers this threshold might not be sufficient, and we might require $\Delta > c$ for some $c>0$ to conclude that $d_1$ will be chosen over $d_0$ by the AI. The magnitude of $c$ will depend on how loosely the AI follows the expected utility maximization principle. This setting is potentially more realistic, thank you for suggesting it.
***3. In footnote 4 the paper assumes access to a comprehensive specification of the AI system's behaviour in all possible contexts (if I understood this footnote correctly). There will be contexts in which we don't observe the system's behaviour in the data.***
With this footnote we are trying to convey the possible challenges in estimation of bounds from finite samples. Our results are given in the infinite-sample regime, in terms of probability distributions, that may be difficult to estimate accurately if not enough data is available. For example, if the bound requires the estimation of a probability $P(d, c)$ but the combination of decision $d$ and context $c$ is not observed in the data, we might need to introduce additional modelling assumptions and account for errors in estimation to provide practical guarantees.
***4. Sec. 5.1 and the contrast between goal misgeneralisation and approximate inner alignment.***
Right, thank you for this explanation. Perhaps “approximate inner alignment” is a better framing for that example.
Thank you for the careful reading of our paper and for pointing out typos.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttals. I have considered this rebuttal and the discussion with other reviewers, and I'm happy to keep my positive score, with the following notes.
1. **Assumptions.** One of the main concerns of reviewer EHMW (2 weak reject) appears to be that the assumptions limit the practical implications of the results. In my opinion, the paper does a sufficient job of clearly stating the assumptions behind the main proofs, and has an extensive discussion with concrete examples of the ways in which the results might change if each assumption is relaxes. If the discussion from this rebuttal can be incorporated, then that discussion will be the more robust for it.
2. **Applicability to LLMs.** There also seems to be some concern about whether the results would describe practical LLMs. This rebuttal has cited some evidence of LLMs decision-making following rationality axioms. The rebuttal to EHMW (2 weak reject) sketches a proof of concept LLM experiment. I know the cited studies of LLM rationality, and I don't find them especially compelling. The experiment results sketched in the rebuttal seems sensible, but this investigation is very light, and does not necessarily reflect the way LLMs 'make decisions' in practice.
However, in my opinion, maybe it should not be necessary for this paper to demonstrate practical applicability. The paper answers some foundational questions about behaviour identifiability for agents reasoning with causal models. This is basic research that can frame future work on bounding system behaviour in more practical settings. For example, if you want to develop algorithms for 'IRL for predicting behaviour, but for SCMs' you need to know to what extent you can identify the behaviour in the limit. Is this enough of a contribution for an ICML paper? I leave this to be decided ultimately by the AC.
3. **Relationship to prior work.** On the topic of IRL identifiability, the reviewer Zntn (3 weak accept) points out that there could be further discussion of related work with similar goals. IRL is usually motivated in terms of learning a reward function from (human) expert demonstrations in order to subsequently optimise that reward function with RL methods. In contrast, this paper aims to identify an SCM of an (AI) agent in order to predict its future behaviour. Of course, IRL methods can be applied to AI systems to reveal information relevant to predicting their behaviour, and one could aim to learn an SCM of a human expert from their behaviour too. The situation with respect to partial identifiability is the same. Therefore, I agree that there is a relation, and the paper would be improved by including a discussion of this work.
One additional reference on IRL identifiability is [1], though there are also others, this one is perhaps especially relevant as it looks not only at identifying reward functions from expert demonstrations but also from other data (such as binary choices) and also looks at identifying downstream factors such as behaviour. The formalism is still RL, rather than SCMs.
4. **On terminology "out-of-distribution":**
* Following the thread in my review about goal misgeneralization. I take the authors suggesting to adopt my suggested terminology of "approximate inner alignment". I think this would be an improvement, but leave the final choice to the authors.
* Either way, something I am still uncertain about is the usage of the term 'out of distribution (OOD)'. In goal misgeneralization, 'OOD' usually means 'in new states or environment configurations that were not seen during training'. Let me call this 'state OOD'. I believe this is a generalisation of usage in supervised learning, where one considers 'input OOD'. In the present paper, there is the assumption (see footnote 4) that all states/environment configurations have been observed. Therefore, there is nothing that counts as state OOD. Instead, the authors use 'OOD' to mean *under a novel intervention* ('intervention OOD'?). This seems meaningfully different from state OOD. It's possible that intervention OOD is an established concept in SCM literature? If not, I invite the authors to consider switching to a new term to avoid confusion.
* Finally, I note that this bears on the discussion with reviewer Zntn about related work. The authors say "Perhaps it is fair to say that our bounds thus more concretely characterize the limits of what can be predicted about agent behaviour [OOD] which then complements the (in our understanding mostly in-distribution) partial identification results in IRL and decision theory." While I don't know the literature comprehensively, this seems fair to me, but *only* for *intervention OOD*. For example, [1] gives some negative state-OOD identifiability results (see their concept of a "transition mask").
References:
* [1] Skalse et al., "Invariance in Policy Optimisation and Partial Identifiability in Reward Learning", ICML 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you for following up on our responses.
Indeed, we use the term **out-of-distribution** (OOD) to mean "intervention" OOD, or more generally the problem of extrapolating from the data observed in one domain to another domain in which some of the underlying "causal mechanisms" in the SCM might have changed (due to an intervention or a more general shift), closer to "input" OOD in supervised learning. In the causality literature this problem is also known as *transportability* [2] and generally assumes positive probability for all events. In light of [1] and related work in goal misgeneralization, it seems appropriate to describe the contrast between "input" and "state" OOD more specifically -- we will make this clearer, thank you.
[1] Skalse et al., "Invariance in Policy Optimisation and Partial Identifiability in Reward Learning", ICML 2023.
[2] Bareinboim, Elias, and Judea Pearl. "Causal inference and the data-fusion problem." PNAS 2016. | null | null | null | null | null | null |
Representation Surgery in Model Merging with Probabilistic Modeling | Accept (poster) | Summary: This paper builds upon an earlier work, representation survey. It there are T tasks and 1 model that is trained on each task, representation surgery paper defined representation bias as the sum of the distances between the merged model and the model trained on the current task for all the tasks. To alleviate this bias, they train an MLP module for each task that ensures that the representations of the merged model are closer to the model trained on the given task. This paper uses a gaussian distribution to model the latent space of the representation bias and uses an MLP module for each task to learn the mean and the standard deviation of the distribution, where the objective is to minimize the representation bias over all the tasks. They also formulate direct feature alignment (DFA) and proxy distribution alignment (PDA) methods to learn one module for all the tasks. They show that the PDA consistently outperforms the surgery based merging on all the model merging tasks. They also demonstrate that the model merged using method generalizes on out of distribution and on domain drift when used in conjunction with weight averaging.
Claims And Evidence: Claims:
1. To calibrate the representations of the outputs of the models merged using various methods and they demonstrated the same using their experiments. They used 4 model merging methods - Weight Averaging, Task Arithmetic, TIES, ADAMerging. The evidence is in table 2,3,5.
2. To use one unified module for all the tasks. The evidence is in table 4 and figure 8.
3. Tackling out of domain distribution and domain shift challenges. The evidence is in table 5.
Methods And Evaluation Criteria: The datasets used by this method have been used in the past by other model merging methods such as adamerging. In addition to the vision domain, they also evaluated their method on NLP tasks.
Theoretical Claims: Ya, they claimed that the probabilistic method results in a smaller classification error than a deterministic method. They proved it using PAC-Bayes theorem
Experimental Designs Or Analyses: They did a lot of ablation studies such as the values of the hidden size h2, the sampling parameter, the various loss functions etc. They also showed that their method converges faster when compared to surgery and the efficacy of calibration by comparing the l1 distance between the representations yielded by the probsurgery and surgery methods when compared to the representation of the model trained on that task.
Supplementary Material: Ya I looked through the entire supplementary material. They answered most of my questions in the supplementary material such as how exactly the architecture looked like, the number of parameters of probSurgery vs surgery modules, how much PDA improved over the other model merging methods etc.
Relation To Broader Scientific Literature: This method corrects the representations of the model yielded by various model merging methods in a post-hoc manner to make the representation closer to that of the model trained on the current task. This work builds upon an earlier work called Representation surgery. They were able to show gains over the representation surgery method in multi-task learning, domain drift and out of distribution data settings. While the representation surgery method required one module for each task, this method can improve over it using only one module for all the tasks.
Essential References Not Discussed: All the essential references are discussed
Other Strengths And Weaknesses: Strengths:
1. The paper is very well written. The experiments section is very elaborate and bolsters all the claims
2. This method is parameter efficient and requires only 1 module when compared to the representation surgery method
Weakness:
1. There is a risk that the merged model might lose its general capabilities. It is important to also show that the merged model performs well on datasets such as Imagenet for classification, coco dataset for object detection etc.
Other Comments Or Suggestions: You could also explore correcting the representation bias of all the layers rather than the last layer along the lines of surgeryv2
TYPO:
In the caption of table4, both learning with eight modules and learning with one module are denoted by cross.
Questions For Authors: 1. Your method is effective as the parameter interference hampers the model's performance, thus requiring this post-hoc calibration. Would this method still be able to improve methods such as DARE which try to reduce the parameter interference by reducing the number of parameters to begin with? Why is the performance improvement less for ada-merging while it is more pronounced for others? Similarly even for the NLP tasks, the gains are not as significant as the vision domain. Is it because of a smaller model?
2. The performance of model merging methods deteriorates as the number of models keep increasing. While your method alleviates it, would it still be effective if the number of models are 2 or 3?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and give the response as follows.
**Q1 Generalization on ImageNet**: Thanks for your valuable suggestion. We try the generalization performance of the merged model on unseen tasks like Classification task (ImageNet1k). We follow the setting in our paper, i.e., merging eight expert models, and then verfiy its general capability on the test set of ImageNet1k. The backbone is ViT-B/32 and the baseline is Task Arithmetic. The results in the table below show that the baseline Task Arithmetic method achieves 27.6% top-1 test accuracy, while the Surgery and ProbSurgery methods improve it to 48.5% and 52.7%, respectively. Thus, we believe our proposal represents a significant advancement in enhancing the generalization performance of existing model merging techniques.
| Method | Test Acc |
| - | - |
|Task Arithmetic | 27.6 |
| w/ Surgery | 48.5 |
| w/ ProbSurgery | 52.7 |
**Q2 ProbsurgeryV2, tntegrating Probsurgery into each block**: Thanks for this valuable suggestion. Our proposed ProbSurgery also can integrated into each block of the ViT-based model to surgery the layer (or block)-level representation bias, which is more refined and contributes to better performance. By adopting the design in Surgery V2, we conduct experiments on Task Arthmetric with ViT-B/32 and show the result in the following table.
| Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
| - | - | - | - | - | - | - | - | - | - |
| Task Arithmetic | 55.2 | 54.9 | 66.7 | 78.9 | 80.2 | 69.7 | 97.3 | 50.4 | 69.1 |
| w/ Surgery | 63.8 | 59.9 | 83.3 | 97.9 | 87.0 | 87.0 | 98.6 | 69.4 | 80.9 |
| w/ ProbSurgery | 67.0 | 67.0 | 94.1 | 99.8 | 91.2 | 98.8 | 99.4 | 79.0 | 87.0 |
| w/ Surgery v2 | 73.8 | 67.9 | 94.5 | 99.6 | 96.8 | 98.8 | 99.5 | 78.0 | 88.6 |
| w/ ProbSurgery v2 | 74.1 | 68.2 | 94.8 | 99.8 | 97.1 | 99.1 | 99.8 | 78.3 | 88.9 |
**Q3 Typos**: Thanks for pointing out these typos. We will correct these typos in the next version of our manuscript.
**Q4-1 Other merging methods**. Our method is applicable to all model merging approaches, not just those based on task vectors. The table below reports the performance of three non-task-vector methods—Fisher Merging, RegMean, Task Arithmetic + DARE, and EMR-Merging — using ViT-B/32 as the backbone. We can see that applying ProbSurgery consistently improves average accuracy. In conclusion, we believe that ProbSurgery's effectiveness has been fully validated.
| Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
| - | - | - | - | - | - | - | - | - | - |
| Fisher Merging | 68.6 | 69.2 | 70.7 | 66.4 | 72.9 | 51.1 | 87.9 | 59.9 | 68.3 |
| w/ ProbSurgery | 71.7 | 73.1 | 91.3 | 99.7 | 91.2 | 96.1 | 99.5 | 76.8 | 87.4 |
| RegMean | 65.3 | 63.5 | 75.6 | 78.6 | 78.1 | 67.4 | 93.7 | 52.0 | 71.8 |
| w/ ProbSurgery | 71.5 | 72.7 | 94.1 | 99.0 | 93.0 | 97.7 | 99.3 | 78.3 | 88.2 |
| Task Arithmetic + DARE| 51.7 | 51.4 | 63.2 | 75.4 | 76.7 | 66.2 | 94.9 | 46.9 | 65.8 |
| w/ ProbSurgery | 67.1 | 65.1 | 93.2 | 98.8 | 90.3 | 97.8 | 98.4 | 78.1 | 86.1 |
| EMR\-Merging | 75.2 | 72.8 | 93.5 | 99.5 | 96.9 | 98.1 | 99.6 | 74.4 | 88.7 |
| w/ ProbSurgery | 75.3 | 73.3 | 94.1 | 99.7 | 97.2 | 98.4 | 99.7 | 75.1 | 89.1 |
**Q4-2 Improvement in Adamerging**: The improvement observed with Adamerging is not significant, as it is a more advanced merging method that results in representation bias during merging (as shown in Fig 4, a smaller distance is achieved by Adamerging). Therefore, when integrated with ProbSurgery, the improvement is not considerable.
**Q4-3 NLP tasks**: Since different NLP tasks always enjoy the same parameters in almost all layers or blocks given a model like Bert, it exhibits fewer representation bias compared to vision tasks, which are highly sensitive to parameter changes. This difference leads to minimal performance degradation when merging multiple NLP tasks. Thus, integrating our proposal with NLP tasks yields a relatively smaller performance improvement than with vision tasks.
**Q5 Merging fewer models**: We believe that any model fusion process, regardless of the number of models involved, induces parameter changes that lead to representation bias. Consequently, our method can alleviate this bias and enhance performance. Due to the word limit, we merged three models and present the results in the following table. Notably, our method also demonstrates performance gains even when merging a small number of models.
| Method | SUN397 | Cars | RESISC45 | Avg |
| - | - | - | - | - |
| Task Arithmetic | 71.8 | 69.5 | 90.1 | 77.1 |
| w/ ProbSurgery | 74.1 | 73.8 | 94.4 | 80.8 |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering all my questions. I would like to keep my score. | Summary: This paper provides a probabilistic interpretation of the previous work, Surgery, which addresses the issue of representation bias. Additionally, two strategies are proposed to reduce overall training costs, supported by a theoretical analysis that highlights the advantages of the probabilistic approach. Extensive experiments demonstrate the superiority of the proposed method.
Claims And Evidence: Yes, the paper addresses a valuable problem, and the motivation for the proposed method is clearly articulated. The algorithm is supported by the theoretical analysis.
Methods And Evaluation Criteria: Yes, the paper introduces a probabilistic approach to model representation bias, leading to more robust estimations. The datasets used in the experiments are widely applicable. In addition to conventional metrics, the authors incorporate visualization techniques to illustrate biased representations.
Theoretical Claims: Yes, I checked all contents for the theoretical claims and didn't find significant errors.
Experimental Designs Or Analyses: The authors provide rational and rigorous experimental designs of main experiments and ablation studies. The analyses are comprehensive and insightful
Supplementary Material: I’ve read all the parts within the supplementary material, including theoretical proof, experimental details, and additional results.
Relation To Broader Scientific Literature: This paper proposes a novel approach to bridge the gap between multiple individual expert models and the merged model through a representation rectification operation. Experimental results demonstrate that the proposed method, ProbSurgery, achieves performance comparable to that of various expert models. Consequently, I believe this approach has the potential to enhance the applicability of model merging as a multi-task learning strategy.
Essential References Not Discussed: This paper provides a comprehensive introduction to the task background and relevant preliminaries. There are no existing papers that are closely related to this work that require further discussion.
Other Strengths And Weaknesses: Strengths:
1) The writing in this paper is exceptionally clear, with the author providing detailed background information and a well-articulated motivation for the study.
2) The proposed method appears highly reproducible. The probabilistic approach is both simple and effective, successfully modeling the uncertainty that implicitly arises from parameter interference when merging multiple models.
3) The theoretical analysis demonstrates that the classification error of the probabilistic approach is lower than that of a deterministic approach, enhancing the novelty of the proposed method.
Weaknesses and questions:
1) There is an enhanced version of Surgery, named SurgeryV2 [1], which incorporates the surgery module into each block of the encoder. I am curious to know whether the proposed ProbSurgery also implements this operation.
2) The proposed ProbSurgery shows potential for improved performance when integrated with existing model merging methods. I noticed that the selected baselines belong to weight-based approach. I have some concerns regarding the applicability of ProbSurgery to other types of model merging methods. For instance, could it be integrated with route-based methods like EMR-Merging [2]?
3) Impact of the scale of validation set. The proposed framework relies on unlabeled test/validation data to build self-supervision signals and update the parameters of the ProbSurgery module. It would be beneficial for the author to discuss the impact of varying amounts of available test data on model performance.
4) Recent methods [2][3] have begun to evaluate their performance under extreme conditions, such as model merging with tens of models. I suggest the author conduct experiments under this setting to validate the generalization and robustness of ProbSurgery.
[1] SurgeryV2: Bridging the Gap Between Model Merging and Multi-Task Learning with Deep Representation Surgery. ArXiv 2024
[2] Emr-Merging: Tuning-free high-performance model merging. NeurIPS 2024
[3] Localizing Task Information for Improved Model Merging and Compression. ICML 2024
Other Comments Or Suggestions: None.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and give the response as follows.
**Q1 ProbsurgeryV2, integrating Probsurgery into each block**: In this paper, ProbSurgery is integrated into the last layer. However, it also can be integrated into each block of the ViT-based model to surgery the layer (or block)-level representation bias, which is more refined and contributes to better performance. By adopting the design in Surgery V2, we conduct experiments on Task Arthmetric with ViT-B/32 and show the result in the following table. When our method is applied after every block, it can more effectively reduce representation bias and achieve better performance.
| Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
| - | - | - | - | - | - | - | - | - | - |
| Task Arithmetic | 55.2 | 54.9 | 66.7 | 78.9 | 80.2 | 69.7 | 97.3 | 50.4 | 69.1 |
| w/ Surgery | 63.8 | 59.9 | 83.3 | 97.9 | 87.0 | 87.0 | 98.6 | 69.4 | 80.9 |
| w/ ProbSurgery | 67.0 | 67.0 | 94.1 | 99.8 | 91.2 | 98.8 | 99.4 | 79.0 | 87.0 |
| w/ Surgery v2 | 73.8 | 67.9 | 94.5 | 99.6 | 96.8 | 98.8 | 99.5 | 78.0 | 88.6 |
| w/ ProbSurgery v2 | 74.1 | 68.2 | 94.8 | 99.8 | 97.1 | 99.1 | 99.8 | 78.3 | 88.9 |
**Q2 Integrating with other methods**: Due to page restriction, we keep the same setting with Surgery that only conducts experiments on four baselines based on task vectors. Our method ProbSurgery is completely orthogonal to existing model merging approaches and can be incorporated into any model merging method to solve its representation bias problem. In the following table, we integrate ProbSurgery with other three methods that are not based on task vectors, including Fisher Merging, Regmean, and EMR\-Merging. By integrating our method, each baseline sees a substantial boost in performance, highlighting the effectiveness of our approach in mitigating representation bias.
| Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
| - | - | - | - | - | - | - | - | - | - |
| Fisher Merging | 68.6 | 69.2 | 70.7 | 66.4 | 72.9 | 51.1 | 87.9 | 59.9 | 68.3 |
| w/ ProbSurgery | 71.7 | 73.1 | 91.3 | 99.7 | 91.2 | 96.1 | 99.5 | 76.8 | 87.4 |
| RegMean | 65.3 | 63.5 | 75.6 | 78.6 | 78.1 | 67.4 | 93.7 | 52.0 | 71.8 |
| w/ ProbSurgery | 71.5 | 72.7 | 94.1 | 99.0 | 93.0 | 97.7 | 99.3 | 78.3 | 88.2 |
| EMR\-Merging | 75.2 | 72.8 | 93.5 | 99.5 | 96.9 | 98.1 | 99.6 | 74.4 | 88.7 |
| w/ ProbSurgery | 75.3 | 73.3 | 94.1 | 99.7 | 97.2 | 98.4 | 99.7 | 75.1 | 89.1 |
**Q3 The size of the validation set**
Thanks for this constructive suggestion. We made an experiment to verify how the size of the validation set impacts ProbSurgery's performance. In the following table, we can see that using a larger proportion of the validation set helps the ProbSurgery module better capture and mitigate inherent biases, leading to higher overall accuracy. Notably, even when only 10% of the unlabeled validation data is used, our method still exceeds Surgery’s performance of 80.9%, demonstrating its robustness under limited data conditions.
| Method | Ratio of Val set | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
| - | - | - | - | - | - | - | - | - | - | - |
| Task Arithmetic | - | 55.2 | 54.9 | 66.7 | 78.9 | 80.2 | 69.7 | 97.3 | 50.4 | 69.1 |
| w/ Ours | 10% | 63.5 | 64.5 | 89.6 | 94.9 | 88.4 | 89.4 | 98.4 | 74.5 | 82.9 |
| w/ Ours | 50% | 65.9 | 66.5 | 92.3 | 97.8 | 90.5 | 95.0 | 98.8 | 77.2 | 85.5 |
| w/ Ours | 100%| 67.0 | 67.0 | 94.1 | 99.8 | 91.2 | 98.8 | 99.4 | 79.0 | 87.0 |
**Q4 Merging tens of models**: Thanks for this constructive suggestion. To verify the performance in more challenging settings, we additionally add three tasks (ImageNet100, CIFAR100, and real-world Homeoffice) to original eight tasks. The results in the following table show that even when merging a large number of models, our method can effectively correct the representation bias and achieve superior performance.
| Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | ImageNet100 | CIFAR100 | HomeOffice | Avg |
| - | - | - | - | - | - | - | - | - | - | - | -|-|
| Task Arithmetic |31.8 | 39.3 | 49.6 | 60.3 | 74.4 | 60.4 | 95.8 | 38.4 | 11.6 | 42.6 | 70.3 | 52.3 |
| w/ Surgery | 50.5 | 52.7 | 76.8 | 96.3 | 83.8 | 79.9 | 98.6 | 65.8 | 56.8 | 55.6 | 80.3 | 72.5 |
| w/ ProbSurgery | 53.7 | 55.7 | 86.3 | 97.9 | 85.5 | 96.0 | 98.8 | 72.9 | 74.8 | 63.4 | 86.2 | 79.2 |
---
Rebuttal Comment 1.1:
Comment: The rebuttal has solved my question, and I will keep the score. | Summary: Surgery is a method to improve the merging performance by reducing the representation bias of model merging. This paper argues that there are two main issues of Surgery. First, the representation discrepancy is not fully addressed. Second, Surgery requires multiple task-specific modules rather than a unified one. This paper proposes ProbSurgery to mitigate these. They adapt the latent variable model (e.g., VAE) to enhance Surgery and further extend it to the one-to-all setting. Empirical and theoretical results are shown to illustrate their method's benefit.
# After rebuttal:
My concerns are addressed and I will raise my score.
Claims And Evidence: Some illustration figures and motivations are unclear to me. Please see the questions.
Methods And Evaluation Criteria: It may lack some important baselines. The motivation and design of the method are not clear to me. Please see the questions.
Theoretical Claims: I wonder about the adaptation of Theorem.4.1 from literature. See the questions for details.
Experimental Designs Or Analyses: I have no questions.
Supplementary Material: No supplementary material was provided.
Relation To Broader Scientific Literature: Model merging is an emergent technique but suffers from the performance gap between merged and task-specific models. This paper's method is built on a previously proposed method to address two main issues. I believe the proposed method can reduce both the memory cost and inference time of the merged model, as well as improve the merging performance compared to the previous one. Moreover, as a post-merging method, this paper is compatible with other merging methods, which enhances the impact of this paper.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The novelty and originality of this paper are good as they address two main issues of the previous method by applying VAE and Surgery. However, I think the clarity could be improved. Also, some important baselines are missing in their experiments, which may limit the significance of their paper. Please see the questions for details.
Other Comments Or Suggestions: Typos:
- Title of Sec.3.1: represetative - > representative
- Def.3.2: $f_{\theta_T}$ - > $\{f_{\theta_t}\}_{t=1}^T$
Questions For Authors: 1. How to compute $G$ in Observation.3.3.? I think a detailed formulation would be better than a description “overall performance gap”.
2. What is the upper performance in Fig.2(a)?
3. Why is the ablation of (Prob)Surgery not studied with Fisher and RegMean merging in Tab.2?
4. In Eq.4, why do we align the shift of merged features $z_{i,t}^{\theta_{unif}}-\xi_{i,t}$ rather than $z_{i,t}^{\theta_{unif}}$ with $z^{\theta_t}$? I understand that Eq.4 is an extension of Eq.1, but I am not clear why we follow the objective in Eq.1.
5. I do not understand the following sentence in lines 217- 219: "Due to limited ... to other tasks". Is the limited representation capability the same as what is illustrated in Fig.1? What do "other tasks" mean?
6. I am curious about which theorem in (McAllester, 2003) the author adapted in their proposed Theorem.4.1.
7. What is $E[Q_{\omega}, z^{bias}]$ in Eq.9?
8. As the proposed method needs to retrain the (Prob)Surgery modules after merging, I think it would be fair to compare it with the performance of the merged model after training on all validation sets.
---
My concerns are addressed and I will raise my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and give the response as follows.
**Q1 The observation**: Here, $G(\cdot)$ quantifies the difference in test accuracy, which reflects the average discrepancy between the feature representations produced by these two models. For a more formal representation, we extend the function $G(\cdot)$ to:
$E_{t\sim[T]} \Big[\text{Acc}(f_{\theta_t}, D_t) - \text{Acc}(f_{\theta_{unif}}, D_t)\Big]
\propto E_{t\sim[T]} \xi_t,$ where $\text{Acc}(f, D)$ denotes the test accuracy of model $f$ on the test set $D$. Detailed explanations will be added to our manuscript in subsequent revisions.
**Q2 Upper performance in Fig2**: The upper performance in Figure 2 denotes the average performance of various expert models on corresponding merged tasks.
**Q3 More baselines**: Due to page restrictions, we keep the same setting with Surgery that makes experiments on four methods based on task vectors. (Prob)Surgery is completely orthogonal to existing model merging methods and can be incorporated into all of them to mitigate the representation bias. In the table below (*more detailed results can be found in the response to Reviewer HxL4, Q4-1*), it shows that integrating (Prob)Surgery into existing baselines significantly improves performance. This confirms that (Prob)Surgery effectively mitigates representation bias in model merging and can be seamlessly incorporated into any merging approach.
| Method | Avg |
| - | - |
| Fisher Merging | 68.3 |
| w/ Surgery | 82.6 |
| w/ ProbSurgery | 87.4 |
| RegMean | 71.8 |
| w/ Surgery | 82.9 |
| w/ ProbSurgery | 88.2 |
**Q4 Format of Eq. (4)**: We follow the objective in Eq. (1) to maintain the core motivation of Surgery, namely to remove the representation bias introduced by the merged model. Concretely, $z_{i,t}^{\mathrm{unif}} - \xi_{i,t}$ represents the corrected (post‐calibrated) representation rather than the raw merged representation $z_{i,t}^{\mathrm{unif}}$. Extending from a deterministic correction term $\xi_{i,t}$ in Eq. (1) to a probabilistic treatment in Eq. (4) lets us more accurately capture the uncertainty arising from parameter interference during merging.
**Q5 Statement in Lines 217-219**: The “limited representational capacity” refers to the deterministic nature of the Surgery module. While deterministic modeling can be simpler and faster to train, it struggles to capture the inherent uncertainty of multiple merged tasks. As Figure 1 shows, even after applying a deterministic correction, the merged features remain partially misaligned with each individual model’s distribution, implying that a single deterministic module does not generalize well beyond its initially learned task. Besides, “other tasks” denote the remaining tasks in the merged set, which the single deterministic Surgery module fails to accommodate.
**Q6 Theorem 4.1**: In McAllester (2003), the statement closest to the paper’s Theorem 4.1 is Theorem 1, often referred to as the “PAC-Bayesian Theorem”. It provides a general PAC-Bayes bound on the expected loss of a posterior hypothesis in terms of its empirical loss plus a KL‐divergence term from the prior—precisely the structure adapted in ours.
**Q7 Expectation in Eq. (9)**: Eq. (9) measures how a predicted distribution $Q_{\omega}$ fits the observed (true) distribution $P^{\mathrm{bias}}$. We take the expectation of that score with respect to the “true” distribution $P^{\mathrm{bias}}$: $S\bigl(Q_{\omega}, P^{\mathrm{bias}}\bigr) =\int S\bigl(Q_{\omega}, z^{\mathrm{bias}}\bigr)\mathrm{d}P^{\mathrm{bias}}(z^{\mathrm{bias}}). $ Finally, it is written as $\mathop{\arg \min}\limits_{\omega} \mathcal{S}(Q_\omega, P^{\rm bias}) := \mathop{\arg \min}\limits_{\omega} E_{z^{\rm bias} \sim P^{\rm bias}}[Q_\omega, z^{\rm bias}]$, as shown in Eq. (9). Due to only one observation $z^{\rm bias}$ (a fixed bias) in practice, this equation can omit the expectation form and be reformulated as $\mathop{\arg \min}\limits_{\omega} \mathcal{S}(Q_\omega, P^{\rm bias}) := \mathop{\arg \min}\limits_{\omega} \mathcal{S}(Q_\omega, z^{\rm bias})$. Thus, in Eq. (10), we only have one variable $z^{\rm bias}$ and estimate the distribution $Q_\omega$ via sampling.
**Q8 Validation set**: First, the post-training step in (Prob)Surgery relies on an extra validation set that does not require any labels. Acquiring such a small-scale set is feasible in the real world since no label annotations are required. Meanwhile, to ensure a fair comparison, we employ test-time adaptation (TTA, adopted in many model merging methods like Adamerging) to post-train a merged model with Task Arithmetic. The results in the table below clearly demonstrate that our method significantly outperforms the TTA approach.
| Ratio of val set | 10% | 50% | 100% |
| - | - | - | - |
| Task Arithmetic: 69.1 |
|w/ TTA | 72.8 | 74.1 | 75.4 |
|w/ Ours | 84.8 | 86.2 | 87.0 |
**Q9 Typos**: Thanks for pointing out these typos. We will correct these in the next manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response. However, I have a new question here. When merging the models, how do the authors process the classifiers? You maintain task-specific classifiers and use them according to the task index, or have other methods? If it is the former, how do we do OOD evaluation?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. Here are the details of our OOD detection experiments:.
When we finish the model merging and the post-calibration process with (Prob)Surgery, we test the performance of different model merging methods on OOD detection. We use the test sets of four tasks and a small OOD set of 500 samples randomly sampled from the test set of SUN397, denoted by $D_1,D_2, D_3, D_4$, and $D_o$.
1. Test Accuracy
To measure test accuracy, we follow the standard model-merging approach and maintain separate task-specific classifiers for the merged tasks. Formally, the test accuracy on the test set of the $t$-th task $D_t$ can be expressed as: $ACC_{t} = \frac{\sum_{(x,y) \in {D_t + D_o}} 1({\hat y} = y)}{||D_t + D_o||}$, where ${\hat y}$ denotes the predicted label.
2. AUROC
To report AUROC, we use a commonly adopted unsupervised technique that does not rely on the classifier. Specifically, for each task, we store its class prototypes during training (when no OOD samples are present). In the test phase, for task $t$, we compute the representation of every sample in the test set $D_t+D_o$ and measure its distance to the stored class prototypes. The greater the distance, the more likely the sample is to be an outlier. Eventually, the AUROC value can be calculated on the distance and the 0-1 label (where 0 and 1 denote the In-distribution and Out-of-distribution samples, respectively).
These details will be added into the next version of our manuscript. | Summary: This paper proposes ProbSurgery, a probabilistic approach to post-merging representation correction in model merging. The authors address the representation bias that occurs when merging multiple models for multitask learning. Unlike prior deterministic approaches (e.g., Surgery), ProbSurgery models the bias as a probabilistic distribution, which better handles uncertainty in parameter merging. The paper provides theoretical insights using a PAC-Bayes framework and proposes a one-to-all extension of ProbSurgery, allowing for efficient multi-task calibration. Experimental results across vision and NLP tasks show that ProbSurgery consistently outperforms existing merging strategies, achieving better generalization and robustness, particularly in OOD and domain shift settings.
Claims And Evidence: Yes, all claims are supported by both theoretical and experimental evidence. 1) The main claim that ProbSurgery improves representation correction and generalization is well-supported by both theoretical and experimental evidence. 2) The paper uses a PAC-Bayes theoretical framework to show that modeling bias as a distribution leads to a lower classification error.
Methods And Evaluation Criteria: Yes, the utilized criteria are reasonable. 1) The problem of representation bias in model merging is well-motivated, and the paper systematically compares ProbSurgery to existing merging methods (e.g., Weighted Averaging, Task Arithmetic, AdaMerging). 2) The use of L1 distance and test accuracy as evaluation metrics is reasonable and aligns with prior work (Figures 2, 4).
Theoretical Claims: Yes, I checked all theoretical claims and didn't find errors. The PAC-Bayes analysis is correctly applied and suggests that a probabilistic bias correction method results in a lower classification error bound.
Experimental Designs Or Analyses: Yes, the experimental designs are sound
1) The experiments use strong baselines, including Weighted Averaging, Task Arithmetic, and AdaMerging, ensuring fair comparisons.
2) The inclusion of OOD and domain shift experiments strengthens the claim that ProbSurgery improves generalization.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: Yes, the contribution and difference compared with previous works have been widely discussed. The paper builds on previous model merging works (Matena & Raffel, 2022; Ilharco et al., 2023) and extends the Surgery framework (Yang et al., 2024a). It also relates ProbSurgery to uncertainty modeling in probabilistic embeddings (Vilnis & McCallum, 2014).
Essential References Not Discussed: The cited works cover model merging and uncertainty modeling, making the context of the study clear.
Other Strengths And Weaknesses: Strengths:
1) Novelty: Probabilistic modeling of representation bias is a significant advancement over deterministic approaches.
2) Theoretical Rigor: Strong PAC-Bayes justification for ProbSurgery's effectiveness.
3) Empirical Validation: Extensive experiments across vision and NLP tasks show clear improvements.
4) Real-world Relevance: The one-to-all setting makes ProbSurgery practical for multitask learning.
Weaknesses:
1) Computational Complexity: The additional probabilistic sampling step may introduce latency, but this is not discussed.
2) Sensitivity to Hyperparameters: The impact of hyperparameters like KL regularization $\lambda$ is not fully analyzed.
3) Gaussian Assumption: In this paper, the authors regularize the corrected representation as a normal Gaussian distribution. However, they do not provide sufficient motivation for this choice. The assumption that representation bias follows a Gaussian distribution requires further justification and discussion.
Other Comments Or Suggestions: This paper can be improved from the following three parts:
1) Discuss computational cost of ProbSurgery vs. deterministic methods.
2) Provide ablation studies on KL regularization λ and sampling variance.
3) Verify Gaussian assumptions for features.
Questions For Authors: See Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and give the response as follows.
**Q1 Computational Complexity**: In practical implementation, we only sample once to generate the representation bias in the ProbSurgery module, which does not incur any additional cost compared to the deterministic method. Our results demonstrate that a single sample is sufficient to achieve state-of-the-art performance—outperforming the previous Surgery method. In the next version of our manuscript, we will provide further insights and discussions on how the number of samples impacts the algorithm's performance.
Besides, we agree with your viewpoint that sampling multiple times can probably improve the estimation accuracy of the representation bias, thereby enhancing overall model performance. Specially, we sample $\rho$ times and compute their average values during model training. However, even if we perform multiple samplings, it does not incur extra computational cost since this averaging operation only involves first-order derivatives during backpropagation. In the following table, we try different sampling numbers and compare their performance. We can observe that increasing the sampling number does not lead to a significant improvement in overall performance. Therefore, to ensure better scalability, we have chosen not to introduce this hyperparameter explicitly, and instead, we set it to 1 by default.
|Sampling numbers|1|3|5|10|
| - | - | - | - | - |
|Average acc on eight tasks|86.74|86.87|86.90|86.91|
**Q2 Sensitivity to Hyperparameters**: Based on our empirical experience, we have set the trade-off coefficient for the KL term to $1 \times 10^{-3}$. To further explore the algorithm's sensitivity to this hyperparameter, we conducted experiments with Weight Average on ViT-B/32 and show the result in the following table, which shows that the algorithm's average accuracy remains nearly constant across various hyperparameter values, demonstrating that our method is insensitive to this parameter.
|the trade-off coefficient $\lambda$|$1\times10^{-4}$| $1 \times 10^{-3}$ | $1 \times 10^{-2}$ | $1 \times 10^{-1}$ |
|-|-|-|-|-|
|Average acc on eight tasks|86.17|86.70|86.40|86.02|
**Q3 Gaussian Assumption for representation**: In this work, we treat the representation bias as a Gaussian mainly for tractability and flexibility. The normal distribution provides a compact parameterization ($\mu$ and $\sigma$) that is easy to optimize, while also serves as a high-entropy “baseline” choice—minimizing assumptions about the true shape of the bias. Although the real distribution may deviate from exact Gaussianity, empirical tests show that this assumption reliably captures the dominant uncertainty in merged representations. | null | null | null | null | null | null |
Deep Fuzzy Multi-view Learning for Reliable Classification | Accept (poster) | Summary: This paper introduces FUML, a novel multi-view classification framework using Fuzzy Set Theory to handle conflicting views and improve uncertainty estimation. It employs a Dual-reliable Multi-view Fusion (DRF) strategy and entropy-based uncertainty quantification, achieving robust classification and reliability. Experiments show superior performance over existing methods.
Claims And Evidence: Yes, this paper is supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense.
Theoretical Claims: Yes, I have checked the theoretical part in this paper.
Experimental Designs Or Analyses: Yes, I checked the soundness and validity of all experimental designs and analyses in this paper.
Supplementary Material: Yes, I checked all the content in the supplementary material.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature in multi-view learning, uncertainty estimation and Fuzzy Set Theroy. Specifically, this paper compares prior work in trusted multi-view classification based on Evidential Theory (Including TMC, ETMC, UMIC, and ECML) . Besides, this paper is related to and based on uncertainty estimation andfuzzy theory, and adopts fuzzy modeling, providing new ideas on how to apply fuzzy set theory to multi-view learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. This paper effectively clarifies its research motivation with the aid of Figure 1 and introduces a novel approach to integrating fuzzy set theory into multi-view learning.
2. This paper innovatively proposes a TMVC method based on the fuzzy set theory, avoiding the problem of EDL-based TMVC methods being sensitive to conflicts and inaccurate uncertainty estimation. The proposed FUML is simple but effective.
3. The paper has sufficient experiments, for example, comparative experiments with 13 latest baselines (including five untrusted MVC baselines and eight trusted MVC baselines) on eight public datasets. In addition, including the appendix, there are 11 types of experiments, which fully demonstrate the effectiveness of the proposed method.
4. The proposed method achieves significant improvements over the best baseline, especially on the conflict test set. For example, the accuracy is improved by 4.83%, 7.31% and 14.6% on Scene, LandUse and Leaves datasets respectively.
Weaknesses:
1. In the Experimental Setup section, the author mentioned the method of adding conflict samples, but did not provide details on the noise intensity and noise ratio of the added Gaussian noise. Please add relevant description.
2. In the experiments, the authors include some multimodal classification methods such as PDF and QMF in the comparison, while this paper mainly focuses on multi-view classification. Is this fair?
3. In Figure 3, the uncertainty effectiveness Analysis is only performed on Fashion and LandUse dataset which only has three views. I am concerned about the results of other datasets, such as Handwritten or MSRC, which have six views. Because as shown in Table 2, for such datasets with many views, conflicting views often don’t cause a significant drop in accuracy, but should cause an increase in uncertainty.
4. The motivation for introducing necessity is unclear. Nor does authors explain the impact of necessity on performance.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. The FUML method in this paper is mainly used for multi-view classification. Can it be applied to other multimodal classification tasks?If so, what adjustments are needed?
2. Why should necessity be introduced? Corresponding experiments should be supplemented.
Besides, please answer the questions in the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the identification of our novelty and the positive comments. Below are our point-by-point responses to your concerns:
**Q1: In the Experimental Setup section, the author doesn’t provide details on the noise intensity and noise ratio of the added Gaussian noise.**
**R1**: To create a test set with conflicting instances, following the methodology outlined in (Xu, Cai, et al. "Reliable conflictive multi-view learning."), we apply two transformations: 1)We add Gaussian noise with mean 0 and variance 0.5 to $10\%$ of the instances in the test set. 2)We randomly changed the view information of $50\%$ of the instances in the test set, making the labels of some views inconsistent with the other views. We will clarify this in the next version.
**Q2: In the experiments, the authors include some multimodal classification methods, such as PDF and QMF, in the comparison. Is this fair?**
**R2**: Yes, it is fair. To ensure a fair comparison, we replace the backbone networks of QMF and PDF with the same fully connected layer as FUML while preserving their core models and loss functions. These experimental details can be found in Appendix B.3.
**Q3: Concerns about the uncertainty estimation results of Handwritten or MSRC.**
**R3**: We compare our FUML with ETMC and ECML on the Handwritten and MSRC datasets for the OOD task. The normal test sets serve as in-distribution, while the conflicting test sets serve as OOD. The evaluation metric is FPR95, with lower values indicating better performance. The results below demonstrate the superiority of FUML.
| | ETMC | ECML | FUML(Ours) |
| ---- | ----- | ----- | ---------- |
| HW | 0.920 | 0.950 | 0.845 |
| MSRC | 0.952 | 0.974 | 0.926 |
**Q4: The motivation for necessity is unclear, and its impact on performance is unexplained. Why introduce it? Relevant experiments are needed.**
**R4**: In Fuzzy Set Theory, necessity is introduced to quantify the certainty that a sample does not belong to other categories, as membership alone cannot capture between-class relationships. To evaluate the role of necessity, we removed the necessity in FUML and only used conflicts in the fusion process. The experimental results in the table below prove that necessity can’t be removed.
| | Fashion(Normal) | LandUse(Normal) | Fashion (Conflict) | LandUse(Conflict) |
| ------------- | --------------- | --------------- | ------------------ | ----------------- |
| w/o Necessity | $97.70\pm0.41$ | $44.64\pm3.65$ | $94.91\pm0.47$ | $39.00\pm3.29$ |
| Ours | $98.96\pm0.25$ | $76.71\pm0.46$ | $96.68\pm0.32$ | $69.14\pm2.43$ |
**Q5: Can FUML be applied to multimodal classification tasks? If so, what adjustments are needed?**
**R5**: Yes, our FUML framework exhibits strong generalizability and can be readily adapted to various multimodal classification tasks. The primary adjustments required involve replacing the feature extraction backbones to accommodate different data modalities. Specifically, Vision Transformer (ViT) can be employed for image feature extraction, wav2vec for audio processing, TimeSformer for video analysis, and word2vec for textual feature representation.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. All my concerns have been addressed, I will raise my score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review of our manuscript. We sincerely appreciate the time and effort you dedicated to evaluating our work and providing insightful comments. | Summary: This paper proposes a novel multi-view classification method based on Fuzzy Set Theory, which models classification outputs as fuzzy memberships. After that, authors introduces a category credibility learning loss and a Dual-reliable Fusion (DRF) strategy to manage conflicting views and improve uncertainty estimation over Evidential Deep Learning. Adequate experiments and analysis demonstrate FUML's superior performance in accuracy and reliability over existing methods.
Claims And Evidence: Yes, this paper is supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense.
Theoretical Claims: Yes, I checked the theoretical proof section of this paper in the appendix.
Experimental Designs Or Analyses: Yes, I checked the soundness and validity of all experimental designs and analyses in this paper.
Supplementary Material: Yes, I checked all the content in the supplementary material.
Relation To Broader Scientific Literature: This paper introduces a novel trusted multi-view classification method, which is based on fuzzy set theory and can accurately classify conflicting multi-view instances and precisely estimate classification uncertainty. This work builds on the work of “Trusted Multi-View Classification with Dynamic Evidential Fusion” which first proposed trusted multi-view classification based on evidential deep learning (EDL), and “Reliable Conflictive Multi-view Learning”, which first proposed conflictive multi-view learning. Its contributions extend beyond multi-view classification, offering new angles for single-view, multimodal classification and out-of-distribution detection.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
S1. This paper has a clear motivation and a novel perspective and effectively solves the problems of existing EDL-based TMVC methods being sensitive to conflicts and inaccurate uncertainty estimation of conflicting multi-view samples.
S2. This paper is well-written and clearly presented. As far as I know, the use of fuzzy set theory to introduce output uncertainty in multi-view classification is novel.
S3. The proposed Dual-reliable Multi-view Fusion is quite neat and effective.
S4. The author conducted a large number of experiments to verify the effectiveness of their FUML, including both qualitative results (such as Figure 3-10) and quantitative results (such as Table 2-10). In addition, as shown in Table 2, in the case of conflicting multi-view classification, its performance far exceeds the best baselines, especially on the Leaves dataset.
Weaknesses:
W1. There are some grammatical errors in the writing: line 057: “... have be proposed to ...” should be “... have been proposed to ...”. Line 072: “global conflicts among views overly emphasizes dominant evidence” should be “global conflicts among views overly and emphasizes dominant evidence”. Line 179: " Thirdly the weights ...” should be “ Thirdly, the weights ...”.
W2. The choice of activation function in Eq. 3 warrants further discussion. Specifically, it is unclear whether other activation functions, such as those used in Evidential Deep Learning, could be applied and whether they would offer comparable or improved effectiveness.
W3. In Eq. 8, the exp function is selected as g(). Why not use other monotonically increasing functions, such as ReLU and sigmoid? This deserves further discussion.
W4. In Definition 3.3, the authors used cosine distance to measure the distance between memberships, thereby measuring the conflict between views. Why not use Euclidean distance or Dot Product Similarity?
Other Comments Or Suggestions: See the weaknesses.
Questions For Authors: Q1. In this paper, the authors pointed out that the norm is first used, and then the ReLU function is used to model the membership in the fuzzy set theory. Can the ReLU function be replaced by other functions?
Q2. In Eq. 8, the exp function is selected as g(). Why not use other monotonically increasing functions?
Q3. In Definition 3.3, the authors used cosine distance to measure the distance between memberships from different views. Why not use Euclidean distance or Dot Product Similarity? What is the advantage of cosine similarity?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your detailed comments. We believe the following point-to-point response can address all the concerns:
**Q1: There are some grammatical errors in the writing.**
**R1**: Thanks. We will correct these grammatical errors you raise and carefully review the manuscript to ensure no other grammatical errors remain.
**Q2: Can the ReLU function be replaced by other functions?**
**R2**: No. To map the logits of a neural network as memberships, the process involves two key steps: First, $L_p$-normalization is applied to constrain the logits within the range $[-1, 1]$. Then, a ReLU activation function is employed to further restrict the outputs to the interval $[0, 1]$. The resulting values can be interpreted as memberships for their respective categories. Therefore, both ReLU and $L_p$ normalization are essential parts to qualify membership. If ReLU is replaced by exp or softplus used in the Evidential Deep Learning, the output value will not be in the range of $[0,1]$, which undoubtedly violates the definition of membership.
**Q3**: In Eq. 8, the exp function is selected as g(). Why not use other monotonically increasing functions, such as ReLU and sigmoid?
**R3**: The choice of exp is an empirical result: The performance comparison of ReLU, Sigmoid, and the exp function is shown in the following table. From these results, we can find that exp has the best or second-best performance. Therefore, we leverage exp function as g().
| | Fashion(Normal) | LandUse(Normal) | Fashion(Conflict) | LandUse(Conflict) |
| --------- | --------------- | --------------- | ----------------- | ----------------- |
| ReLU | $98.81\pm0.26$ | $76.75\pm0.56$ | $96.51\pm0.29$ | $68.80\pm2.49$ |
| Sigmoid | $98.75\pm0.27$ | $76.34\pm0.78$ | $96.57\pm0.29$ | $68.94\pm1.89$ |
| exp(Ours) | $98.96\pm0.25$ | $76.71\pm0.46$ | $96.68\pm0.32$ | $69.14\pm2.43$ |
**Q4**: Why not use Euclidean distance or Dot Product Similarity? What is the advantage of cosine similarity?
**R4**: Compared with Euclidean distance and dot product similarity, cosine similarity only evaluates the consistency of decisions through the angle between memberships, avoiding the sensitivity of Euclidean distance and dot product similarity to vector length, thereby more accurately measuring decision conflicts. | Summary: The paper introduces FUML, a novel multi-view classification framework that explicitly addresses the uncertainty caused by conflicting information across views. By leveraging Fuzzy Set Theory, the authors model the outputs of deep classifiers as fuzzy memberships, capturing both possibility and necessity. A tailored loss function (i.e., the category credibility learning loss) is proposed to guide the optimization of these fuzzy outputs. In addition, a Dual-reliable Fusion (DRF) strategy is designed to weight each view based on its estimated uncertainty and inter-view conflict, ensuring that noisy or misaligned views contribute less to the final decision. Extensive experiments on eight public datasets demonstrate that FUML not only improves classification accuracy over 13 state-of-the-art baselines but also provides more robust uncertainty estimates, particularly in scenarios with conflicting view information.
Claims And Evidence: The primary claims of the paper are that:
1.FUML achieves state-of-the-art classification accuracy while providing reliable uncertainty quantification.
2.The integration of fuzzy memberships (combining possibility and necessity) offers a more reliable decision credibility measure than existing evidential approaches.
3.The DRF strategy effectively mitigates the adverse effects of conflicting views.
These claims are supported by comprehensive experimental results comparing FUML with both “untrusted” and “trusted” baselines on standard as well as deliberately corrupted (conflicting) test sets. The inclusion of ablation studies and additional analyses (in the supplementary materials) further substantiates the advantages claimed by the authors.
Methods And Evaluation Criteria: The methodology is built upon a solid foundation of Fuzzy Set Theory. Modeling the classifier outputs as fuzzy memberships is an innovative approach that enables the joint estimation of possibility (likelihood) and necessity (exclusion of other classes). The loss function is carefully designed to ensure that the network learns to align its predictions with the ground truth while avoiding over-optimization issues in the unmatched categories.
Evaluation is conducted on eight diverse datasets using standard metrics (accuracy, precision, F-score) and by reporting improvements over baselines. Moreover, the study uses both normal and synthetically corrupted testing sets to assess the robustness of the proposed uncertainty estimation and fusion strategy.
Theoretical Claims: The paper presents two theoretical propositions (Propositions 3.4 and 3.5) which state that fusing a clean view with a conflicting view increases the overall uncertainty, which is a desirable property for flagging unreliable decisions. The proofs and reasoning are convincing.
Experimental Designs Or Analyses: The experimental design is thorough, incorporating experiments on eight public benchmarks with a mix of standard and conflicting instances. The authors report results over multiple random seeds to capture variability, and ablation studies are conducted to isolate the contribution of each component of FUML.
Supplementary Material: The supplementary material appears to be comprehensive, including detailed proofs of the theoretical claims, extended experimental results (e.g., on additional datasets and under different settings), and further ablation studies. This extra material reinforces the paper's contributions and offers a deeper insight into both the methodology and its empirical performance.
Relation To Broader Scientific Literature: FUML is positioned well within the existing body of work on multi-view learning and uncertainty estimation. It builds upon and extends previous methods such as Evidential Deep Learning-based approaches (e.g., ETMC, TMC, ECML) and Dempster-Shafer theory, while introducing the use of fuzzy set theory, which is a perspective that has been underexplored in this context. The paper also clearly outlines how its contributions differentiate from prior works by addressing the limitation of underestimating uncertainty in conflicting instances.
Essential References Not Discussed: This paper has provides essential related works to understanding the key contributions of the work.
Other Strengths And Weaknesses: Strengths:
1.Novel use of Fuzzy Set Theory to model classifier outputs, resulting in a richer representation of decision credibility.
2.A well-motivated fusion strategy (DRF) that explicitly accounts for view-specific uncertainty and conflict.
3.Comprehensive experiments and ablation studies demonstrating both accuracy gains and improved uncertainty estimates.
Weaknesses:
1.Clearer explanations of the theoretical components and a more intuitive discussion of the loss function behavior are encouraged.
2.Details on computational complexity and hyperparameter sensitivity are limited.
3.Additional discussion on limitations and potential failure cases in real-world noisy environments would be useful.
Other Comments Or Suggestions: 1.It may be helpful to include a discussion on potential real-world applications where the robust uncertainty estimation could be critical, as well as scenarios where the method might struggle.
1.A more detailed error analysis could provide insights into the types of conflicting instances where FUML offers the greatest advantage.
Questions For Authors: 1. Why do you use $(1 - u_i^v)(1 - o_i^v)$? Would other combinations (e.g., weighted sum) improve performance?
2. How sensitive is the performance of FUML to the choice of the normalization and activation functions in mapping logits to fuzzy memberships? Could alternative functions affect the uncertainty estimation?
3. Can you provide further insight into the computational complexity of the DRF strategy compared to conventional fusion methods?
4. Could you discuss on the robustness of FUML in the presence of adversarial noise, and whether the uncertainty estimation mechanism is capable of flagging such instances effectively?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We appreciate your valuable comments. Below is our point-by-point response.
**Q1: Weaknesses (1)**
**R1**: 1) In the next version, we will provide a clearer explanation of the theoretical components. 2) An intuitive discussion of the loss function is as follows: Directly aligning category credibility $(m^j_{ik} + 1 - \max\{m_{il}^j \mid l \neq k \})/2$ with the label $y^j_{ik}$ will lead to local optima. Specifically, when $y^j_{ik} = 0$ and $y^j_{il} = 0 (l \neq k)$, minimizing $(1 - \max \\{m_{il}^j \mid l \neq k \\} )$ drives $m_{il}^j$ toward 1 instead of 0, leading to incorrect optimization. To address this, the loss function (Eq.(4)) replaces $(m^j_{ik} + 1 - \max\\{m_{il}^j \mid l \neq k \\})/2$ with $(m^j_{ik} + 1 - m^j_{il})/2$ when $y^j_{ik} = 0$, ensuring correct optimization. We will clarify this in the next version.
**Q2: Weaknesses (3)**
**R2**: Adverse weather conditions and data communication issues often introduce noise and misalignment in multi-view data. Existing EDL-based TMVC methods struggle with accurate classification under such conflicts and may underestimate classification uncertainty. In the next version, we will include this discussion in the Introduction Section.
**Q3**: Other Comments Or Suggestions (1).
**R3**: 1) In medical diagnosis, integrating multiple medical images (e.g., CT, MRI) and genetic data is crucial for disease classification. However, noise or pathological differences between imaging devices would cause inconsistencies, necessitating accurate uncertainty estimation about the classification results for doctors (e.g., ``classified as a malignant tumor, but MRI and CT conflict'') to assist in risk assessment. 2) Our FUML exhibits limited effectiveness in two-view data scenarios. These discussions will be included in the next version.
**Q4: Other Comments Or Suggestions (2)**
**R4**: Appendix Tables 7–9 show FUML achieving top or near-top performance in conflicting MVC across eight datasets with diverse noise and misalignment. Notably, FUML significantly outperforms other baselines on unaligned datasets. Overall, FUML demonstrates robustness in handling noisy and unaligned multi-view data, particularly in unaligned cases. This discussion will be incorporated into Appendix C.1 in the next version.
**Q5: Questions For Authors (1)**
**R5**: 1) Environmental factors often cause conflicts, which can mislead classification decisions. To mitigate this, we propose fusing only clear (low-uncertainty) and well-aligned (low-conflict) views. Consequently, we employ $(1−u_i^v)(1−o_i^v)$. 2) No, other combinations fail to improve performance. The table below proves this.
| | Fashion | LandUse |
| --------------------------- | --------------- | --------------- |
| $0.5(1-u^v_i)+0.5(1-o^v_i)$ | $96.22\pm0.26$ | $39.36\pm5.82$ |
| $0.8(1-u^v_i)+0.2(1-o^v_i)$ | $95.97\pm 0.31$ | $48.71\pm11.16$ |
| $0.2(1-u^v_i)+0.8(1-o^v_i)$ | $96.30\pm 0.54$ | $59.26\pm 5.16$ |
| Ours | $96.68\pm0.32$ | $69.14\pm2.43$ |
**Q6: Questions For Authors (2)**
**R6**: Regarding normalization, Appendix C.6 details the impact of the normalization parameter p on performance. The results show that FUML generally achieves optimal performance when p is between 2 and 5. As for activation functions, ReLU is essential for quantizing membership. Using alternative functions (exp or softplus) would yield outputs outside the $[0,1]$, violating fuzzy set theory's membership definition and precluding subsequent category credibility modeling and uncertainty estimation.
**Q7: Weaknesses (2) & Questions For Authors (3)**
**R7**: Although the computational complexity is $O(V^2)$, the number of views in multi-view classification (MVC) is typically small ($V < 10$), making the additional computational overhead negligible. Our experiments demonstrate that our method exhibits no significant increase in inference time compared to baselines, approximately by less than $8\\%$ (i.e., 0.0231s vs. 0.0215s).
**Q8: Questions For Authors (4)**
**R8**: 1)We add PGD adversarial noise attacks with different maximum perturbation magnitudes (eps) to the test set of the Fashion dataset. The results below show FUML's superior resistance to adversarial noise attacks. 2)We perform an OOD task on the Fashion dataset, using normal test sets as ID and PGD-attacked sets (eps=0.10) as OOD. Evaluated by FPR95 (lower is better), FUML achieved 0.68, outperforming ETMC and ECML (both 1.00).
| | 0 | 0.05 | 0.10 |
| ---------- | --------------- | --------------- | -------------- |
| PDF | $98.95\pm0.19$ | $22.07\pm0.90$ | $13.54\pm0.89$ |
| ECML | $97.93\pm 0.35$ | $52.58\pm 0.51$ | $42.74\pm1.35$ |
| FUML(Ours) | $98.96\pm0.25$ | $94.45\pm0.18$ | $93.40\pm0.19$ |
---
Rebuttal Comment 1.1:
Comment: After a thorough review of the rebuttal, I am convinced that the authors have satisfactorily addressed all of my concerns. They have provided robust evidence and detailed experimental results that not only clarify the raised issues but also highlight the novel contribution of integrating fuzzy set theory for trusted classification. Furthermore, the authors' responses to the comments of other reviewers reinforce the strength and validity of their approach. The novel use of fuzzy memberships to capture decision credibility, alongside comprehensive ablation studies and comparative experiments, convincingly demonstrates both improved classification accuracy and enhanced uncertainty estimation. In light of these points, I fully support accepting this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your support. We will further improve the quality of the final manuscript based on your constructive suggestions. | Summary: This paper proposes a deep fuzzy multi-view learning method (FUML) to classify conflicting multi-view instances and precisely estimate intrinsic uncertainty. Specifically, FUML models logits as fuzzy memberships, employs Shannon's entropy to estimate uncertainty, and utilizes the cosine metric to measure the conflict degree between views. These factors are then combined to perform weighted fusion for multi-view classification.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense for the multi-view classification.
Theoretical Claims: I have checked all Theoretical Claims.
Experimental Designs Or Analyses: I have checked all the experiments in the experimental section.
Supplementary Material: I have read all the content in the appendix.
Relation To Broader Scientific Literature: The paper is closely related to multi-view learning and uncertainty estimation. It makes improvement to conflicting multi-view fusion.
Essential References Not Discussed: The paper has included the main related works that are crucial for understanding the context.
Other Strengths And Weaknesses: Strengths
1. The topic of this paper is interesting as it addresses conflict-aware multi-view fusion.
2. The writing of the paper is clear and well-structured.
After carefully reviewing this paper, I have the following concerns:
(1) Concerns about uncertainty estimation
This paper uses Shannon's entropy to estimate the uncertainty of multi-view results. However, entropy primarily measures the uniformity of category distributions, making it unreliable for distinguishing between hard-to-classify in-distribution (ID) samples and true out-of-distribution (OOD) samples, as both may exhibit high entropy. Additionally, deep neural networks often suffer from overconfidence, and without proper uncertainty calibration, OOD samples may be assigned artificially low entropy when misclassified with high confidence. It remains unclear why Shannon's entropy alone is used as an uncertainty measure for noisy or OOD detection tasks, as demonstrated in C.5.
(2) Concerns about conflict measurement
The paper uses cosine similarity to estimate the conflict degree among multiple views. However, this approach may overestimate conflicts in cases where two views share the same most probable class but differ in lower-ranked class distributions. This can lead to unnecessary penalization of views that are actually aligned in their final decision. Additionally, the method does not account for the varying importance of different categories, which may result in suboptimal weighting in the fusion process.
(3) Concerns about the fusion method
1. The current fusion method employs an exponential weighting mechanism based on the product of uncertainty and conflict degree. While this approach effectively suppresses highly uncertain and highly conflicting views, it may introduce a notable limitation: i.e., low uncertainty inherently diminishes the impact of conflict degree. This issue arises due to the multiplicative interaction between uncertainty and conflict, which implicitly assumes that both factors should jointly determine reliability in a symmetric manner.
2. This method finds it difficult to address conflicts when fusing only two views with the same uncertainty degree due to the symmetry of cosine similarity.
(4) Concerns about computational complexity
This method uses Eq.(7) to measure the conflict degree, introducing a quadratic computational complexity ($O(V^2)$) for pairwise conflict computation, which raises scalability concerns for high-dimensional multi-view data.
(5) Concerns about experiments
It is suggested to add an ablation study to analyze the effects of uncertainty and conflict separately in multi-view fusion. Furthermore, as this paper is evaluated on feature-based multi-view datasets, it is recommended to conduct experiments on multi-modal datasets, such as BRCA, LGG, and NYUD2, to enhance the evaluation. Moreover, it is recommended to compare some of the latest methods for conflicting view fusion, e.g., [1].
(6) Minor Issues
The paper should be carefully double-checked, as there are some errors. For example, it should be $m_{il}^v$ in Eq.(1), and $k$ should be italicized in Line 211.
[1] Navigating Conflicting Views: Harnessing Trust for Learning.
Other Comments Or Suggestions: See Weaknesses.
Questions For Authors: Overall, considering the points raised in the Major Weaknesses section, I have assigned the current rating accordingly. I am open to further discussion with the authors if there are any misunderstandings in my review. I will make the final decision based on the authors' responses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable feedback. Below are our point-to-point responses:
**Q1: Concern 1**
**R1**: In fact, we don't only use Shannon’s entropy for OOD detection. The uncertainty estimation mechanism of FUML includes the following steps: We first calculate the category credibility based on fuzzy membership. The category credibility contains possibility and necessity information, thereby alleviating overconfidence. Then we use Shannon’s entropy of category credibility to estimate uncertainty. The closer the category credibility is to 0.5, the greater the entropy, indicating that the model is less able to make decisions. As can be seen in Figure 8, the uncertainty increases with the increase of noise intensity, which proves the effectiveness of our FUML for OOD detection.
**Q2: Concern 2**
**R2**: 1) In fact, rather than overestimating the conflict between two views, our approach correctly reflects subtle semantic differences. Specifically, the difference in lower-ranked class distributions represents the unique characteristics and information captured by each view. For instance, in Figure 1(a), despite both the RGB and text views being classified as “bathroom”, the RGB view includes additional semantics like “toilet paper” and “toilet”, which the text view lacks. 2) We are confused about your intentions. This paper considers multi-view fusion and doesn't take into account the long-tail distribution.
**Q3: Concern 3**
**R3**: 1) Our assumption that uncertainty and conflict should jointly determine fusion weights is well-founded: Either higher uncertainty or conflict will reduce its reliability, and it is reliable only if both are low. Specifically, high uncertainty reflects the inherent unreliability of predictions from the corresponding view, while high conflict indicates inconsistencies between decisions from different views, both of which lead to unreliable fusion outcomes. Thus, we use uncertainty and conflict together to compute the fusion weights, so as to more accurately reflect the reliability of the corresponding views. 2) When fusing two uncertainly identical and conflicting views, it is impossible to make a correct decision from a human perspective, and therefore our FUML can't either.
**Q4: Concern 4**
**R4**: Although the computational complexity is $O(V^2)$, the number of views is small (usually $V < 10$) in MVC, so the extra computational overhead is negligible. Our experiments indicate that our method exhibits no significant increase in inference time compared to baselines, with an approximate increase of $8\\%$ (i.e., 0.0231s vs. 0.0215s)
**Q5: Concern 5**
**R5**: 1) We conduct ablation experiments on the conflicting test sets. The classification accuracy below shows that removing $u_i^v$ or $o_i^v$ leads to performance degradation, indicating the effectiveness of considering both uncertainty and conflict.
| | AVG | $1-u^v_i$ | $1-o^v_i$ | Ours |
| ------- | ---------------- | ---------------- | ---------------- | ---------------- |
| Fashion | $96.15 \pm 0.22$ | $96.27 \pm 0.22$ | $96.43 \pm 0.32$ | $96.68 \pm 0.32$ |
| LandUse | $67.71 \pm 2.30$ | $68.14 \pm 2.47$ | $68.50 \pm 2.02$ | $69.14 \pm 2.28$ |
2)Since LGG is unavailable, we substitute it with Prokaryotic[1]. The experimental results below demonstrate the superior performance of our FUML. On the NYUD2 dataset with Gaussian noise (mean=0, variance=5), FUML demonstrates superior performance, achieving a classification accuracy of $61.44\pm1.84$ compared to baseline methods (TMC: $59.12\pm1.98$, QMF: $60.32\pm2.63$, PDF: $61.83\pm1.78$).
| | PDF | ECML | CCML | Ours |
| ----------- | -------------- | -------------- | -------------- | -------------- |
| BRCA | $82.40\pm1.27$ | $81.49\pm0.86$ | $79.66\pm1.86$ | $81.78\pm1.24$ |
| Prokaryotic | $70.18\pm4.79$ | $55.05\pm5.23$ | $54.95\pm5.18$ | $73.15\pm4.90$ |
3)Due to the unavailable source code for "Navigating Conflicting Views: Harnessing Trust for Learning", we compared our FUML with another latest work, TUNED (AAAI 2025)[4], as shown blow, demonstrating the superiority of our method.
| | HW | NUSOBJ | Fashion | Scene | LandUse |
| ----- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- |
| THNED | $96.75 \pm 0.55$ | $34.09 \pm 0.14$ | $86.99 \pm 0.75$ | $67.22 \pm 0.58$ | $46.64 \pm 2.10$ |
| Ours | $98.78 \pm 0.36$ | $47.08 \pm 0.32$ | $96.68 \pm 0.32$ | $72.71 \pm 1.75$ | $69.14 \pm 2.43$ |
**Q6: Concern 6**
**R6**: We will correct Eq.(1) and thoroughly proofread the paper for any remaining errors in the next version.
[1]The landscape of microbial phenotypic traits and associated genes.
[2]Trusted Unified Feature-Neighborhood Dynamics for Multi-View Classification.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. However, I believe my main concerns remain largely unaddressed.
**(1) Concerns about uncertainty estimation**
The authors mention that category credibility is based on fuzzy membership and incorporates possibility and necessity to mitigate overconfidence. This seems to refer to the loss in Eq. (4), which is only active during training. However, during inference, the model uses a deterministic pipeline—from logits to fuzzy memberships via $L_p$-norm and ReLU, then to category credibility and entropy-based uncertainty estimation—without any calibration techniques (e.g., temperature scaling).
As a result, in out-of-distribution (OOD) cases—where Eq. (4) imposes no constraint during training due to the absence of label supervision—the model may still produce highly confident but incorrect outputs. For instance, a fuzzy membership like [0.8,0.1,0.1] leads to a category credibility of approximately [0.85,0.15,0.15], still reflecting strong overconfidence.
Moreover, the uncertainty in FUML (Eq. 6) is computed as the average of binary entropy values across all category credibility scores. This treats all classes independently and equally, regardless of their semantic relevance. For example, even low credibility scores (e.g., 0.15) contribute significantly to total uncertainty. Thus, a totally correct prediction like [0.85,0.15,0.15], which clearly favors one class, still yields moderate uncertainty (≈0.61), which contradicts the intuitive expectation that such confident predictions should yield lower uncertainty.
**(2) Concerns about conflict measurement**
My concern is not whether views encode subtle semantic differences, but rather the risk of over-penalizing semantically aligned views. The current conflict metric, based on cosine similarity between class distribution vectors, does not account for top-1 prediction agreement.
For example, consider the following four views:
view1: [0.85, 0.10, 0.05] (Correct)
view2: [0.60, 0.15, 0.25] (Correct)
view3: [0.10, 0.85, 0.05] (Incorrect)
view 4: [0.20, 0.70, 0.10] (Incorrect)
The conflict scores are: $o_1 = 0.479$, $o_2 = 0.400$, $o_3 = 0.477$, and $o_4=0.372$. Despite View1 and View2 being correct, their conflicts are as high as (or higher than) the incorrect views, which may distort the fusion process by penalizing accurate views.
While I acknowledge that the conflict metric is intended to capture distributional disagreement rather than prediction correctness, it is important to recognize that conflict values directly influence fusion weights in the current framework. This creates a mismatch between distributional difference and semantic trust, especially when a wrong prediction appears aligned at the distribution level. I recommend incorporating top-k class consistency or other semantic alignment mechanisms to improve the robustness of conflict estimation.
**(3) Concerns about the fusion method**
**a. Unreasonable multiplicative formulation in fusion weighting**
I would like to emphasize that my concern is not about the idea of jointly considering uncertainty and conflict—which is entirely reasonable—but rather about the specific multiplicative formulation used to combine them. The exponential weighting function based on the product of uncertainty and conflict (Eq. 8) creates an imbalanced interaction: when either is small, the weight remains high, even if the other factor signals unreliability. For instance, a confident but semantically conflicting view may still be over-weighted, while a more moderate but consistent view may be under-weighted. This design fails to reflect true semantic reliability.
**b. Inadequate handling of two conflicting views**
Additionally, I understand the authors’ point that in some two-view cases, ambiguity is unavoidable even from a human perspective. However, this does not justify treating both views as equally reliable by default. In such cases, models may still benefit from leveraging subtle but actionable asymmetries, possibly including the confidence margin between top predictions, view-specific calibration behavior, or consistency with other samples in the batch (e.g., soft alignment). Unfortunately, the current fusion strategy—due to its strictly symmetric and multiplicative structure—has no mechanism to capture these signals. As a result, it is not that the model “can’t decide,” but rather that the design prevents it from trying. I encourage the authors to consider fusion strategies that allow for asymmetric cues or conditional weighting in the future, so that the model can still make meaningful distinctions even when uncertainty values are similar.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your valuable and constructive feedback. Below is our point-by-point response.
**Q1: Concern 1**
**R1**:
1) During inference, category credibility also incorporates the possibility and necessity measures, reducing overconfidence by providing uncertainty. Our FUML uses a deterministic pipeline but estimates uncertainty like EDL-based methods, and outperforms them in handling conflicting (Fig.3). Future work will explore calibration techniques to improve the accuracy of uncertainty.
2) FUML uses category credibility to reduce overconfidence. For example, an OOD sample with membership [0.8, 0.1, 0.1] yields an uncertainty of 0.55 using only membership, but derived category credibility [0.8, 0.15, 0.15] generates higher uncertainty (0.61). This indicates it reduces overconfidence for OOD samples by providing conservative uncertainty, as shown in C.5.
3) Eq.6 does not treat all classes independently; instead, it captures semantic relevance by reflecting how much a sample doesn‘t belong to alternative classes. This consideration of inter-class relationships, along with treating all category credibility equally, avoids underestimating potential risks by focusing solely on the main class, contributing to a more nuanced uncertainty estimation. The moderate uncertainty (0.61) of [0.85, 0.15, 0.15] prevents overconfidence. Conversely, more discriminative like [0.99, 0.0, 0.0], yields a low uncertainty (0.08), i.e., only reliable enough to generate low uncertainty.
**Q2: Concern 2**
**R2**:
1) The case you provided assumes access to labels for conflict evaluation, which is not practical at test time. Instead, our FUML computes conflict solely from the distribution of each view's memberships. In your example, it's the second class (not the first) that is dominant. Specifically, view3 and view4 both have high values in the second class, view1 and view3 share a similar level of discriminativeness, and view4 is more discriminative than view2 (0.70 vs. 0.60). So, the second class is more likely to be correct. Thus, the computed conflict scores are correct, where the conflicts of view3 and view4 are smaller than ones of view1 and view2, respectively, i.e., no clear over-penalization occurs in our FUML.
2) We acknowledge the potential mismatch between distributional differences and semantic trust. Our FUML addresses this by not relying solely on the conflict. View-specific uncertainty plays a complementary role in weighting decisions, enabling more robust fusion.
3) Based on your suggestion, we conducted experiments on the conflicting test set of Fashion. We compare top-k class consistency (retaining only top-k class probabilities and zeroing out the rest) with our FUML. Results below show our method outperforms top-K truncation, indicating that considering all classes enables more effective conflict measurement. Nonetheless, we greatly appreciate your suggestions and will explore semantic alignment mechanisms in the future.
| top-1 | 96.45 |
| ----- | ----- |
| top-2 | 95.58 |
| top-3 | 96.57 |
| Ours | 96.68 |
**Q3: Concern 3**
**R3**:
a) We appreciate your perspective, yet we hold a different view. The purpose of using multiplication formulas is to avoid overestimating weights, which is also a highlight of this work. It ensures that either high uncertainty (u) or high conflict (o) is sufficient to reduce the influence of the corresponding view. For example:
v1: u = 0.1, o= 0.1; (confident & consistent)
v2: u = 0.1, o = 0.9; (confident & conflicting)
v3: u= 0.5, o= 0.1. (moderate & consistent)
As shown below, compared with a baseline ((1-u)+(1-o))/2, our FUML neither over-weights v2 nor under-weights v3, achieving a more balanced assessment.
| | v1 | v2 | v3 |
| -------- | ---- | ---- | ---- |
| baseline | 0.40 | 0.27 | 0.33 |
| Ours | 0.46 | 0.22 | 0.32 |
b) In extreme cases where two conflicting views have the same uncertainty, our FUML does assign them equal weights. This doesn't mean it fails to make a decision. Instead, it offers a prediction with high decision uncertainty, which is appropriate under such ambiguity. Moreover, our FUML is scalable and can benefit from leveraging subtle but actionable asymmetries if relevant prior information is given. For example, our $(1-u^v) (1-o^v)$ can be extended to $q^v (1-u^v) (1-o^v)$ for asymmetric fusion, where $q^v$ represents a view-specific weight prior. We conduct experiments on the Fashion dataset, as shown below.
| $q^1$; $q^2$; $q^3$ | Acc |
| ----- | ----- |
| 0.6; 0.2; 0.2 | 96.61 |
| 0.2; 0.6; 0.2 | 96.54 |
| 0.2; 0.2; 0.6 | 96.44 |
| Ours | 96.68 |
From the results, hardcoded priors don't yield performance gains. However, we agree with your insightful suggestion, such as confidence margins or cross-sample consistency, to conditionally weight views. This is a promising direction for future work, and we thank you again for the valuable feedback. | null | null | null | null | null | null |
Unraveling the Interplay between Carryover Effects and Reward Autocorrelations in Switchback Experiments | Accept (poster) | Summary: This work investigates the effectiveness of the switchback experimental design in A/B testing. The paper studies this design with multiple estimators and proves results that are estimator-agnostic. These results link the efficiency of the switchback design to the carryover effect and the correlation structure between rewards and help construct a guideline to when to use switchback experimentation instead of the alternating day design. The benefits are also confirmed by empirical experimentation.
## After rebuttal: I think that this work meets ICML standards and is worth sharing with the community. I raised my score to reflect this.
Claims And Evidence: The main claim is supported by convincing evidence.
Methods And Evaluation Criteria: The evaluation criteria makes sense.
Theoretical Claims: I did not check fully the correctness of the proofs.
Experimental Designs Or Analyses: The experimental design looks valid.
Supplementary Material: I did not spend time reading he supplementary material.
Relation To Broader Scientific Literature: This work improves our understanding of the switchback design as an alternative to A/B testing/Alternating day design. It uses the general MDP formulation to shed light on the value of the switchback design and provide practical guidelines to which strategy to adopt depending on the problem in hand.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: **Strengths:**
- The paper treats the important A/B testing problem and provides insight into the relatively new switchback experimental design and guidelines on when to use it.
- The paper is well-positioned compared to related work.
- The results derived are quite general and do not depend on a heavy assumption.
- The claims made are supported with interesting theoretical and empirical results.
**Weaknesses:**
- The paper can be hard to follow if the reader is not familiar with the literature.
- The paper focuses on the switchback and alternating day designs but does a poor job motivating them compared to vanilla A/B testing.
Other Comments Or Suggestions: See questions below please.
Questions For Authors: I have a few questions to the authors:
- When there is no carryover effect, how does the standard A/B testing estimator (split the population, run the two policies independently on A/B populations) compare to alternative day/switchback? does the A/B testing estimator match the alternating day design (given the independence assumption)?
- Is the MSE really what we care about in these scenarios? Can we construct a valid confidence interval to use for decision making with the switchback estimator?
- The model proposed generalises the MDP framework but its limitations were not discussed. How can we sure that the insight derived from this model are transferable to real world scenarios?
I am leaning for an accept for this paper, I just want my questions to be clarified and I will reconsider my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Comparison against standard A/B**
Excellent comment! First, we would like to clarify that different estimators apply to different use cases. Taking ridesharing as an example, there are three different types of experiments: (i) temporal randomization (over time), (ii) spatial randomization (across geographic areas), and (iii) user-level randomization (across drivers/passengers). Our primary focus is (i), which applies to the evaluation of order dispatch policies that must be implemented city-wide, making (ii) and (iii) unsuitable. Spatial randomization (ii) is typically used for testing localized subsidy policies in different regions, while user-level randomization (iii) applies when
assigning personalized subsidies to individual users. Second, when constraining to temporal randomization, the population corresponds to the entire time horizon, with each time interval representing an individual unit. When no carryover effects exist and
residuals are uncorrelated (satisfying i.i.d. assumptions), similar to Theorem 1, we can show that standard uniform randomization over time is equivalent to both AD and SB. This is because temporal ordering becomes irrelevant under the uncorrelatedness assumption. Similarly, when randomizing is conducted at the daily level rather than per time unit, standard procedures are equivalent to AD designs.
We will be very happy to include these discussions in the paper shall it be accepted, to better connect AD, SB
to vanilla A/B testing.
> **MSE & confidence intervals (CIs)**
This is another excellent comment. We agree that CIs and p-values are equally important, as A/B testing is essentially a statistical inference problem. Our designs are tailored to minimize the MSE of the resulting ATE estimators. A closer look at the proof of Theorem 1 suggests that the three RL-based estimators are asymptotically normal, and their MSEs are dominated by their asymptotic variances. As such, optimal designs minimizing the variance of the resulting ATE estimator also minimize the length of confidence intervals and maximize the power of the resulting hypothesis test.
As we use RL-based estimators for A/B testing, existing methods developed in the RL literature are directly applicable for CI construction. These methods can be categorized into the following four types:
1. **Concentration-inequality-based methods** that construct CI based on concentration inequalities (Thomas et al., 2015, AAAI; Thomas & Brunskill, 2016, ICML; Feng et al., 2020, ICML)
2. **Normal-approximation-based methods** that utilize the asymptotic normality of the ATE estimator to construct Wald-type CIs (Luckett et al., 2020, JASA; Liao et al., 2021 JASA; Shi et al., 2021, ICML; Kallus and Uehara et al., 2022, OR, Liao et al., 2022, AoS; Shi et al., 2022, JRSSB; Wang et al., 2023, AoS)
3. **Bootstrap methods** that employ resampling for CI construction (Hanna et al., 2017, AAAI; Hao et al., 2021, ICML)
4. **Empirical likelihood methods** (Dai et al., 2020, NeurIPS).
During the rebuttal, we conducted additional simulation studies, employed a non-parametric bootstrapping method to construct CIs and reported the coverage probability (CP) & the average CI width in these [plots](https://www.dropbox.com/scl/fo/knx6re4t6gzlh6911bzwe/AAvJOdKE1zy14xHbwqZtKlA?rlkey=e5pfd7grawxqt336a96sn9o35&st=guzcuroc&dl=0). It can be seen that most CPs are over 92%, close to the nominal level. Meanwhile:
* For small values of $\lambda$, more frequent policy switch reduces the average CI width;
* When $\lambda$ is increased to 10%, AD produces the narrowest CI on average.
These results verify our claim that a reduction in MSE directly translates to a shorter CI.
> **Limitations**.
Our framework relies on the Markov assumption. We have discussed the validity of this assumption in our motivating ridesharing example and outlined several potential approaches to handle settings when this assumption is violated. Refer to our response #2 to Reviewer yQjJ.
> **Transfer findings to real world scenarios**.
In the discussion section, we proposed a three-step workflow for applying our theoretical framework to
real-world experimental design:
(i) To make sure our findings are transferable, the first step is to properly determine the time interval to satisfy the Markov assumption. When historical dataset is available, this assumption can be verified based on existing tests (Chen & Hong, 2012, Econ Theory; Shi et al., 2020, ICML; Zhou, et al., 2023,
JRSSB).
(ii) The second step is to examine the size of the carryover effects, which in practice can be estimated using company's simulator that evaluate new product's impact prior to deployment.
(iii) The last step is to determine the correlation structure, which can again be achieved using the historical data.
> **Presentations**.
We will polish the paper to make it more accessible to general audience.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. I have a question about the decision making part:
All of the methods you cited require the estimator to be unbiased, or else your estimator will concentrate around a biased quantity. It is good to have smaller confidence intervals but it's not a good sign if your estimator does not concentrate towards the true ATE value.
ATE estimation via DRL can present biases when the value nor the importance weights are correctly specified. How can we be sure that they are correctly specified? In the other hand, do we know how this bias dissipates? Is the estimator consistent for example? Do we have theoretical results proving that one can omit this bias?
---
Reply to Comment 1.1.1:
Comment: Excellent comment. We first clarify that in theory, even when both the estimated value function and importance weights are misspecified, the DRL estimator can remain __asymptotically unbiased__ -- the bias is of order $o_p(n^{-1/2})$ and therefore much smaller than its root MSE -- provided that both estimated nuisance functions (e.g., value and importance weights) converge to their ground truth at a rate of $o_p(n^{-1/4})$. This finding is also implied by our proof of Theorem 1 in Appendix C.4, which we elaborate on below. In practice, these rate conditions can be met by using flexible machine learning models, e.g., neural networks, to estimate the nuisance functions. Our empirical study, reported in the [link](https://www.dropbox.com/scl/fo/parfie6b7vaft6rxnaomc/AH_wdfm5zAxK_oyGAlYXAps?rlkey=f3lew1k3hf74psbhkkx8drv24&st=j5hwjr1t&dl=0), also confirms that the __biases of both DRL and LSTD estimators are consistently much smaller than their root MSEs__, under the two data generating processes in our simulation studies. Even when the $o_p(n^{-1/4})$ condition is violated, one can adopt __DRL variants__ that either debias this estimator or account for model misspecification to __produce valid confidence intervals__, as we describe later.
We next discuss the convergence rate condition, along with the intuition behind why DRL's bias remains negligible. The $o_p(n^{-1/4})$ condition is __mild__, as $o_p(n^{-1/4})$ is much slower than the standard parametric rate of $O_p(n^{-1/2})$ and practically feasible to achieve given the approximation capabilities of modern machine learning models. Similar rate conditions have been widely imposed in the causal inference and RL literature (see, e.g., Chernozhukov et al., 2018 ; Kallus and Uehara, 2020; Farrell et al., 2021).
The reason DRL remains asymptotically unbiased under this condition lies in its Neyman orthogonality property (see Equation (1.8) in Chernozhukov et al., 2018), which ensures that DRL's bias due to estimation errors in the nuisance functions only appears at the __second order__. More precisely, the bias is proportional to the product of the estimation errors of the value function and the importance weights. This is formalized in Part II of our proof of Theorem 1-DRL in Appendix C.4, where the term $J_3$ in Equation (26) is shown to be upper bounded by the final term in Equation (30), which is indeed a __product__ of the two errors. Consequently, to ensure the bias remains $o_p(n^{-1/2})$, it is sufficient for both errors to decay at a rate of $o_p(n^{-1/4})$.
Finally, we note that the $o_p(n^{-1/4})$ convergence rate condition can be further relaxed to $O_p(n^{-\kappa})$ for any $\kappa > 0$. This is a much weaker condition, as $\kappa$ can be arbitrarily small. In such cases, one can adopt the proposal by Shi et al. (2021) to deeply debias the DRL estimator, ensuring that its bias remains much smaller than its root MSE. Moreover, even in cases where the nuisance function estimators do not converge at all (i.e., $\kappa = 0$), one may still obtain valid confidence intervals by adopting the proposals by Jiang and Huang (2020) and Zhou et al. (2023). These methods enlarge the resulting confidence interval by explicitly accounting for model misspecification error to maintain valid coverage.
**References**
[1] Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.
[2] Kallus, N., & Uehara, M. (2020). Double reinforcement learning for efficient off-policy evaluation in markov decision processes. Journal of Machine Learning Research, 21(167), 1-63.
[3] Farrell, M. H., Liang, T., & Misra, S. (2021). Deep neural networks for estimation and inference. Econometrica, 89(1), 181-213.
[4] Shi, C., Wan, R., Chernozhukov, V., & Song, R. (2021, July). Deeply-debiased off-policy interval estimation. In International conference on machine learning (pp. 9580-9591). PMLR.
[5] Jiang, N., & Huang, J. (2020). Minimax value interval for off-policy evaluation and policy optimization. Advances in Neural Information Processing Systems, 33, 2747-2758.
[6] Zhou, W., Li, Y., Zhu, R., & Qu, A. (2023). Distributional shift-aware off-policy interval estimation: A unified error quantification framework. arXiv preprint arXiv:2309.13278. | Summary: This work studies how carryover effects and autocorrelations influence switchback experiments in A/B testing. The authors conduct theoretical and experimental analyses and evaluate three main estimators using both synthetic and real-world datasets.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, I only read the theorem and three corollaries in the main text.
Experimental Designs Or Analyses: Yes, I checked the design of both experiments and the details in Appendices A and B.
Supplementary Material: Yes, A and B.
Relation To Broader Scientific Literature: This work contributes to the policy deployment in the real world, thus potentially aiding the development of offline RL.
Essential References Not Discussed: No, the literature review is extensive.
Other Strengths And Weaknesses: **Strengths:**
> First, this work is well-structured and easy to follow. To my knowledge, this work is the first formal exploration of switchback experiments in A/B testing with common switchback strategies, demonstrating that the SB strategy can outperform the AD (alternating-day) design under specific conditions. The toy example effectively clarifies the motivation behind the research and subsequent experimental designs. Notably, the authors validated their findings using real-world data, observing trends consistent with theoretical predictions.
> Additionally, the paper provides actionable insights (the take-home message) that are valuable for both researchers and practitioners in the field.
**Weaknesses:**
> First, the definitions and quantitative measurements of carryover effects and reward autocorrelations are under-explored. Consequently, terms like "positively correlated," "negatively correlated," and "large carryover effect" remain somewhat subjective (though the metrics introduced in Section 4.2 could serve as a reference).
> Another limitation is the assumption of the Markovian property, which may not hold in real-world scenarios. Given the complexity of capturing complete market features and participants’ policies, the system is often partially observable, aligning more closely with a POMDP. Incorporating data from more realistic scenarios could refine the model’s applicability.
Other Comments Or Suggestions: 1. This work could introduce more real-world scenarios with A/B testing data to verify the results.
2. Placing Fig.6 under Fig.5 may look better.
Questions For Authors: 1. The carryover effect seems to be the one-step effect when the new policy takes over the baseline policy. However, the new policy may have long-term impacts. For example, in ride-hailing scenarios, a coupon delivered to the customer may be consumed in the most expensive order, which may exceed the duration of a single policy in the SB experiment. Why do carryover effects only cover the single-step case?
2. If *δ* in Sec. 4.2 is close to 0, the new policy and baseline policy cause very similar transitions, and in other words, the transition is irrelevant to the action. However, we do not know the influence of a non-zero $\delta$. Could you provide some illustrations or measurements on the transitions under the new policy and the baseline when $\delta$ is fixed to zero and when $\delta$ takes on different non-zero values?
3. When $\lambda$ in Sec. 5.2 is set to 15\%, the results of AD are reversed when compared to Fig.3 and Fig.4. What are the results when $\lambda$ is set in the range (5\%,15\%)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Subjective definitions**
We did not define these terms to make the paper easy to follow. To address your comment, here are their precise mathematical definitions:
* ''Positively (negatively) correlated'' refers to the covariance $\sigma_e(t_1, t_2)$ being positive (resp. negative);
* A ''large carryover effect" occurs when $\delta \gg (R\_{\max} T)^{-2} \sum_{\substack{k_2-k_1=1,3,5, \ldots\\ 0\le k_1<k_2< T/m}} \sum_{l_1,l_2=1}^{m} \sigma_{e}(l_1+k_1m, l_2+k_2 m)$ so that Equation (5) will be dominated by the second term.
> **Markov assumption (MA)**
We thank the reviewer for highlighting the MA, which is critical for our RL-based estimators. In collaboration with our ride-sharing industry partner, we've observed that intervals of 30 minutes or 1 hour typically satisfy MA, showing strong lag-1 correlations with rapidly decaying higher-order correlations. This justifies the use of RL in our application.
When applied to more general applications, we recommend to properly select the interval length to meet MA as an initial step in the design of experiments (see Fig. 6). If that's challenging, we further propose three approaches below, tailored to different degrees of violation of MA. Our current results directly extend to the first two cases, while the third case requires further investigation:
1. **Mild violation**: Future observations depend on the current observation-action pair and a few past observations. This mild violation can be easily addressed by redefining the state to include recent past observations. With this modified state, MA is satisfied. Our RL-based estimators and theoretical results remain valid.
2. **Moderate violation**: Future observations depend on a few past observation-action pairs. Here, the RL-based estimators remain applicable if the state includes these historical state-action pairs. However, our theoretical results on optimal designs must be adjusted. Preliminary analyses show that, under weak carryover effects and positively correlated residuals, the optimal switching interval extends to 1+k (where k is the number of included past actions) rather than switching at every time step. This is because each observed reward is affected by a k+1 consecutive actions, not just the most recent action. More frequent switching under these conditions causes considerable distributional shift, inflating the variance of the ATE estimator.
3. **Severe violation**: Data follows a POMDP. Although the existing literature provides doubly robust estimators and AD-like optimal designs (Li et al., 2023, NeurIPS) to handle such non-Markov MDPs, these estimators suffer from the "curse of horizon" (Liu et al., 2018, NeurIPS). Recent advances propose more efficient POMDP-based estimators (Liang & Recht, 2025, Biometrika) and designs (Sun et al., 2024, arXiv:2408.05342); however, these proposals are limited to linear models. Extending these methodologies to accommodate more general estimation procedures (e.g., Uehara et al., 2023, NeurIPS) represents an important direction for future research.
We also remark that in the first two cases, existing tests are available for testing the Markov assumption and for order selection (Chen & Hong, 2012, Econ Theory; Shi et al., 2020, ICML; Zhou, et al., 2023, JRSSB).
> **More real-world scenarios**
Refer to our response #2 to Reviewer xXCu regarding other data sources.
> **One-step effect**
We appreciate this insightful comment and would like to clarify that our analysis does not assume a one-step carryover effect. Under the MDP framework, each action potentially influences all subsequent rewards throughout the trajectory, and our RL-based estimators explicitly account for these long-term effects. As illustrated in Figure 2, an action (e.g., $A_{t-1}$) affects not only the immediate next reward ($R_t$) but also subsequent rewards. Nevertheless, the MDP structure is first-order Markovian, implying that these delayed effects propagate entirely through the state $S_{t+1}$.
> **Measuring $\delta$**
$\delta$ can be measured using the size of delayed effects (e.g., $\lambda$), defined as the difference between the ATE and the direct effect. This is because under the MDP formulation, delayed effects are caused by differences in state transitions between different actions. Thus, these effects serve as natural measures for $\delta$. Practically, $\lambda$ can be estimated by computing the difference between our RL-based estimator for ATE and existing direct effect estimators (Dudik, 2014, Statistical Science).
> **Results when $\lambda=$5% ~15\%**
We conducted additional simulations using real datasets from the two cities and reported the [results](https://www.dropbox.com/scl/fi/h6pkz5y2pqybfodu5okn6/plot_all_rmse.png?rlkey=pq87oyi40xtlsbyxwqxyi8bp1&st=8zw4zrtv&dl=0) when $\lambda= $ (7.5%, 10%, 12.5%). Notably, as $\lambda$ increases, AD appears as the optimal design due to the increase in the carryover effect. This again aligns with our theory.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. | Summary: The paper explores various switchback experimental designs within MDPs, analyzing how carryover effects and reward autocorrelations affect estimator performance in A/B tests. The authors suggest a practical workflow for choosing designs based on the interplay of carryover effects and autocorrelation of rewards, demonstrating empirically (via simulations and real-world datasets) the conditions under which alternating-day (AD) designs or high-frequency switchbacks (SB) are more efficient.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The correctness appears acceptable overall.
However, I found the three corollaries a little oversimplified. I was expecting to see some more realistic assumptions of covariance structures, like mixing conditions.
Experimental Designs Or Analyses: In real-data experiments (Section 5.2), the authors rely on a single ridesharing company's data, which inherently contains significant structural autocorrelations and daily cycles. Could the authors provide sensitivity analyses or robustness checks to other data sources?
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper makes a meaningful contribution by connecting RL-based policy evaluation with experimental design under temporal dependencies. It extends prior work by systematically analyzing how different estimators behave under varying carryover and autocorrelation conditions, which hasn’t been thoroughly explored before.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: Could authors provide a clearer comparison or benchmarking of their suggested RL-based estimators against more conventional estimators from econometrics that already handle autocorrelation and carryover effects effectively?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Realistic Covarince Structures** :
In Corollaries 1–3, we focused on autoregressive, moving average, and exchangeable covariance structures primarily to derive closed-form expressions for the first term (AC(m)) in Equation (5). While these structures might seem simple, they are widely used in practice (Williams, 1952, Biometrika; Berenblut et al., 1974, Biometrika; Zeger et al., 1988, Biometrika).
During the rebuttal, we have relaxed these assumptions and obtained the following corollaries that allow more realistic covariance structures:
**Corollary 4 (Truncated covariances)**. Suppose $\sigma_e(i,j)=0$ whenever $|i-j|> K$ but remains positive when $|i-j|\le K$. Then AC(m) decreases monotonically in $m$ for any $m\ge K$.
**Corollary 5 (Decaying covariances)**. Suppose $\sigma_e$ is nonnegative and satisfies $\sigma_e(i,j)=O(|i-j|^{-c})$ for any $c>2$. Then for any $m_1\le m_2$, we have $AC(m_1) \ge AC(m_2) + o(T^{-1}m_1^{-1})$.
The truncated covariance assumption in Corollary 4 is substantially weaker than the moving average assumption in Corollary 2. First, it does not require the covariance function to be stationary. Second, residuals are not required to be sums of white noise processes.
Similarly, the decaying covariance assumption in Corollary 5 is considerably weaker than the autoregressive assumption in Corollary 1. Like Corollary 4, it does not require stationarity. Additionally, it accommodates polynomial decay of covariance functions with respect to the time lag, thus allowing for stronger temporal dependencies compared to the exponential decay assumed in Corollary 1. As a trade-off, Corollary 5 provides a weaker guarantee: monotonicity holds asymptotically, only when $m_1$ and $m_2$ are sufficiently large.
> **2. Sensitivity Analysis and Other Data Sources**:
Regarding Sensitivity Analysis:
(i) In synthetic environments, we conducted a sensitivity analysis to assess the robustness of our results under four additional covariance structures: **moving average, exchangeable, uncorrelated, and autoregressive with a negative coefficient**. The findings, summarized in Figures S1 & S2 (Appendix B), consistently align with our theories.
(ii) In the real-data analysis, we analyzed two datasets collected from different cities. We conducted a sensitivity analysis on the effect size, from **zero to small, moderate, and large**. Once again, these results support our theoretical conclusions.
Regarding Other Data Sources:
Publicly available experimental datasets suitable for our analysis are scarce. However, during the rebuttal, we found a publicly available [simulator](https://github.com/callmespring/MDPOD), which simulates a ridesharing market environment described in Xu et al. (2018, KDD). In this simulator, drivers and customers interact within a 9×9 spatial grid over 20 time steps per day. We used this simulator to conduct additional simulation studies, comparing the MDP-based order dispatch policy (Xu et al., 2018, KDD) against a distance-based dispatch method that minimizes the total distance between drivers and passengers. We varied the number of days $n=28,35,42,49,56$ and set $m=1,5,10,20.$ For each scenario, we tested 100 orders with the number of drivers generated from a uniform distribution $U(40,45).$
We report the RMSEs of ATE estimators computed under different designs, aggregated over 200 simulations, in this [link](https://www.dropbox.com/scl/fo/dp1j8bqnrkerckiz4mgvd/APksU1ySy9EI6DRjuIu67fE?rlkey=2s6hr0kk9ye0pau9qveapfnen&st=uz6alqna&dl=0), see the first plot. It can be seen that less frequent switching generally achieves smaller RMSE (m=10, 20). Further analysis shows that ATE is relatively small (~2%), suggesting weak carryover effects. Additionally, we observed negatively correlated reward residuals (see the second plot in the link). Under these conditions, our theory suggests that AD performs the best (see bullet point 3 on page 7), which aligns with the empirical results.
> **3. RL-based vs Conventional Estimators**:
We ran extra simulations to compare the RL-based estimator with three non-RL estimators:
* Bojinov et al.'s (2023, Management Science) sequential importance sampling (IS) estimator, which handles carryover effects through time-dependent IS ratios;
* Hu et al.'s (2022, arXiv:2209.00197) difference-in-mean estimator, which employs burn-in to discard rewards during policy transitions to handle carryover effects;
* Xiong et al.'s (2024, arXiv:2406.06768) simple IS estimator that doesn't handle carryover effects.
As shown in our [results](https://www.dropbox.com/scl/fi/51e0y436ekpbzmprdycxo/compare_LogMSE_all.png?rlkey=02cjwfxl4y3m659fg3j3zc21c&st=g66v0ps9&dl=0), RL-based estimators consistently outperform methods (i) and (ii) in all cases. Moreover, DRL and LSTD perform much better than (iii) in most cases.
We deeply appreciate your thoughtful suggestions and will incorporate these analyses into our paper shall it be accepted. | null | null | null | null | null | null | null | null |
BiMark: Unbiased Multilayer Watermarking for Large Language Models | Accept (poster) | Summary: In this work, the authors introduce BiMark, which encodes and extracts multi-bit messages via watermarking while preserving text quality. Meanwhile, BiMark enables model-agnostic detection via a bit-flip unbiased reweighting and enhances detectability via a multi-layer architecture.
Claims And Evidence: The claims are clear and reasonable, yet lack the explicit correspondence of evidence.
Methods And Evaluation Criteria: BiMark enables model-agnostic detection by a vocabulary size-based reweighting approach and enhances detectability via multi-layer reweighting mechanism. However, the main idea of preserving text quality is borrowed from unbiased reweighting.
Theoretical Claims: Firstly, it needs to provide computation complexity of BiMark.
Secondly, the authors provide Type-II error of single-layer unbiased reweighting, while failing to extend to multilayers.
Experimental Designs Or Analyses: Firstly, the experiments are not fairly compared. The authors evaluate the model-agnostic detection of BiMark by comparing SynthID and KGW (the author mentioned only shot red list in analysis), while other representative works are missing, e.g., soft red-list, MPAC, and GINSEW.
Secondly, the ablation studies are not clear in presentation or analyses. It is hard to identify the legend correspondence, and it is confusing to take d=50 for studying the detectability, yet d=50 is not studied in the effect of multilayer.
The analyses are not organized in a readable way, making it doubtful to use generative language models directly.
Supplementary Material: There is no code release or complexity analysis for introducing multi-layer watermarking.
Relation To Broader Scientific Literature: BiMark's contributions are situated within the broader scientific literature on watermarking techniques for Large Language Models (LLMs). Specifically, BiMark integrates multilayer architecture enhancing detectability without compromising generation quality, and an information encoding approach supporting multi-bit watermarking.
BiMark advances the field of digital watermarking, offering a robust solution for content authentication and intellectual property protection in AI-generated text.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strenghts
1 The authors provide a clear and comprehensive category of related watermarking work.
2 The motivation is interesting and worth exploring.
3 BiMarker is a model-agnostic approach that is compatible with existing generative models.
weakness:
1 Both Fig.1 and Fig.6 need more illustration. It is hard to follow by the current version.
2 The presentation is hard to follow and needs to be polished better.
3 The computation burden requires deeper analysis and potential solutions.
4 BiMark is vulnerable to attacks, which needs deeper exploration.
Other Comments Or Suggestions: 1 The experimental analysis should enhance
2 The figure could be illustrated more clearly.
3 The project should release the code for better contribution.
Questions For Authors: See Above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1**:*Computational cost*
**R1**:
For a single-layer reweighting, the method requires first calculating the probability of a vocabulary subset, then obtaining scaling factors based on these original probabilities, and finally adjusting the probability of candidate tokens. The complexity of this operation is $O(|\mathcal{V}|)$.
For multi-layer setting, the operation is iterative, making the computation cost grow linearly with the increase in the number of layers, leading to a total complexity of $O(|d\mathcal{V}|)$.
The table below shows the runtime for BiMark during a single inference pass in our experiments:
|Method|Batch_size=1|Batch_size=50|
|---------------------|-----------|--------|
|BiMark|0.036 s|0.047 s|
The experimental result shows BiMark can benefit significantly from large batch size.
**Q2**: *Type-II error of unbiased multi-layer reweighting.*
**R2**: Although it is challenging to provide a close-form expression of type-II error rate of multi-layer watermarking at present since the current layer's probability depends on the reweighted distribution of the previous layer, the analytical idea is the same as single-layer reweighting.
Specifically, the detectability of multi-layer watermarking gradually decreases as unbiased reweighting reduces distribution entropy across layers.
Intuitively, our method iteratively adjusts probability on tokens through unbiased reweighting based on our red-green vocabulary partitions. In a simplified case with two probabilities $p$ and $1-p$, each layer of reweighting transfers some probability $\Delta$ between these groups with fair chance, creating a detectable pattern.
According to type-II error rate analysis of our single-layer unbiased reweighting, higher entropy distributions allow higher expected green ratio in watermarked text, improving detectability.
However, this process also inherently reduces entropy because entropy $(-\sum_x p(x)\log p(x))$ is concave, and our reweighting creates a more "peaked" distribution in expectation.
As a result, with each successive reweighting layer, the entropy of the reweighted distribution decreases in expectation, leading to diminishing marginal contributions to detectability.
This analysis explains our empirical observation that detection performance does not increase monotonically with the number of layers. Deeper layers operate on increasingly low-entropy distributions, offering proportionally smaller gains in overall watermark strength.
**Q3**: *Experimental comparison.*
**R3**: We would like to clarify our experimental choices:
For zero-bit case, BiMark was compared with Soft-Red-List (it only supports zero-bit watermarking). The labelling of Soft-Red-List as "KGW" in the figures will be corrected for consistency in the revised version.
For multi-bit case, BiMark was compared with MPAC. MPAC is not available for the zero-bit scenario since it is specifically designed for multi-bit watermarking tasks and is built upon Soft-Red-List.
GINSEW \[1\] is not available as it is intended for a different purpose—specifically, protecting text-generation models from theft through distillation rather than detecting watermarks in individual text segments.
\[1\] Protecting language generation models via invisible watermarking.
**Q4**: *Ablation study*
**R4**: In Fig.5(c\), we set the number of reweighting layers to $d=50$. We apologize that the legend does not clearly reflect this setting.
We chose this large value to more clearly demonstrate how scaling factor affects watermark detectability when the number of layers is fixed.
While the relationship between scaling factor and detectability holds for any value of $d$, the effects are more obvious with larger $d$ values, making the trends easier to visualize.
**Q5**: *Potential vulnerability to attack*
**R5**:
We have additionally evaluated the vulnerability of our method under paraphrasing attack, as shown in R3 to Reviewer kPVR.
Our current work focuses primarily on the watermarking mechanism itself. We acknowledge the importance of attack resistance, and we will investigate broader attack scenarios and conduct vulnerability analysis as part of our future work.
**Q6**: *Code release*
**R6**: Thank you for suggestions. We plan to release the code implementation at the time of the final decision announcement.
**Q7**:*Concerns about representation*
**R7**: Thank you for your suggestions. We will enhance our paper by: (1) providing more detailed experimental analysis (2) redesigning Figs. 1 and 6 to better illustrate BiMark's mechanisms; and (3) improving overall presentation with consistent terminology and clearer explanations throughout the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It mitigates most of my concerns. Considering the potentially unresolved issues, I would like to keep the positive rating unchanged. | Summary: The paper presents BiMark, a watermarking technique for large language models (LLMs) that ensures text quality preservation, model-agnostic detection, and message embedding capacity—three key properties often challenging to balance in existing watermarking methods. BiMark introduces a bit-flip unbiased reweighting mechanism that enables detection without model access, a multilayer reweighting approach to enhance detectability, and an information encoding scheme supporting multi-bit watermarks. Through theoretical analysis and empirical validation, BiMark achieves higher extraction rates for short texts while maintaining low perplexity and comparable performance to non-watermarked text in downstream tasks. The framework provides a robust, practical, and efficient watermarking solution for AI-generated content detection.
Claims And Evidence: BiMark is evaluated for text quality preservation, model-agnostic detection, multi-bit embedding, and robustness against text modifications. Empirical results indicate that unbiased reweighting minimizes impact on text fluency and perplexity, with watermark detection verified on Llama3-8B and Qwen2.5-3B. Message embedding is tested through extraction rate comparisons (Table 2), showing up to 30% higher recovery for short texts. Robustness is assessed using synonym substitution experiments (Table 3), though further evaluation against paraphrasing and other adversarial attacks could enhance understanding of the method’s resilience. These results contribute to evaluating BiMark’s effectiveness across different aspects of LLM watermarking.
Methods And Evaluation Criteria: (1) Effectiveness and Robustness: BiMark is evaluated on multiple models (Llama3-8B, Qwen2.5-3B) and NLP tasks (summarization, translation). Text quality preservation and model-agnostic detection are assessed using perplexity and performance metrics, while message embedding capacity is examined through extraction rate comparisons.
(2) Practicality and Efficiency: The selected datasets align with watermarking evaluation needs. The paper does not explicitly analyze the computational efficiency of multilayer reweighting or discuss detailed strategies for large-scale deployment, which could provide further clarity on real-world applicability.
Theoretical Claims: The proof for unbiased reweighting (Theorem 4.2) demonstrates that probability distributions remain unchanged, ensuring unbiasedness across multiple layers through statistical independence. The reasoning follows standard probability principles, with no major errors identified. The type-II error analysis applies the Central Limit Theorem and z-tests, though additional empirical validation under different entropy conditions could improve reliability.
Experimental Designs Or Analyses: The paper evaluates message embedding capacity by comparing BiMark’s extraction rates with state-of-the-art methods (MPAC). Results (Table 2) indicate higher extraction rates and lower perplexity. Robustness is tested using synonym substitution, but further evaluation against paraphrasing or adversarial rewriting is needed. Text quality preservation is assessed with summarization and translation tasks (CNN/DailyMail, WMT’16), using BERTScore, ROUGE, and BLEU (Table 4).
Supplementary Material: The supplementary material was reviewed, focusing on theoretical proofs, statistical analyses, and experimental settings. The proofs for unbiased reweighting, type-II error analysis, and multilayer reweighting appear sound. Experimental configurations for benchmark datasets and comparison methods are well-justified.
Relation To Broader Scientific Literature: BiMark builds on prior LLM watermarking techniques like Soft Red-List (Kirchenbauer et al., 2023) and MPAC (Yoo et al., 2023b) but improves text quality preservation, model-agnostic detection, and multi-bit embedding. It extends unbiased reweighting (Hu et al., 2023) with a multilayer approach, enhancing watermark robustness while maintaining detection reliability without model access.
Essential References Not Discussed: The references are sufficient to the best of the reviewer’s knowledge, covering key works on LLM watermarking, unbiased reweighting, and robustness.
Other Strengths And Weaknesses: Strong points
The paper presents an approach to LLM watermarking using unbiased reweighting and multilayer detection, aiming to balance text quality, model-agnostic detection, and message embedding. While prior methods often degrade text fluency for watermark robustness, BiMark seeks a trade-off to maintain detectability without significant quality loss. Theoretical foundations, including unbiased reweighting proofs and type-II error analysis, support its claims. Empirical validation on Llama3-8B and Qwen2.5-3B in summarization and translation tasks demonstrates its effectiveness. Its model-agnostic design removes dependence on internal parameters, suggesting scalability for AI-generated content authentication, though real-world feasibility remains an open question.
Weak points
(1)While the paper tests robustness against synonym substitution, it does not evaluate more sophisticated adversarial attacks, such as paraphrasing-based obfuscation, GAN-based perturbations, or adaptive attacks designed to break watermark detection. The authors are encouraged to evaluate the proposed method against other attacks or discuss the limitations of the watermarking approach.
(2)The paper does not provide explicit inference-time benchmarks for the multilayer reweighting method, making it unclear how computational overhead scales with increasing layers or longer text sequences. The authors are suggested to report the runtime or discuss the computational complexity of the proposed method.
(3)The multilayer watermarking approach shows non-monotonic performance as the number of layers increases. However, the paper does not explore adaptive selection strategies that could optimize performance for different text lengths or generation settings. The authors are encouraged to provide explanations for this observation.
(4)Vague symbol and statements. The notation M[i][v] is used in the multi-bit watermark extraction process. However, it is not immediately clear whether M stores absolute counts or normalized probabilities, making it difficult to assess how well the voting mechanism works under noisy conditions. Moreover, it says that “However, the detection performance does not improve monotonically with the number of layers” While the paper mentions this non-monotonic behavior, it does not clearly explain why it occurs or whether it is due to statistical noise, overfitting, or implementation constraints.
Other Comments Or Suggestions: Refer to the comments
Questions For Authors: Refer to the comments
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: Sophisticated adversarial attacks
**R1**: We conducted advanced paraphrasing attacks and reported experimental results in our response to Reviewer kPVR in *R3*.
**Q2**: Inference-time benchmarks
**R2**: We analyzed the computational cost and give experimental time cost of BiMark. Please refer to our response to Reviewer jksy in *R1*.
**Q3**: Non-monotonic performance as the number of layers increases
**R3**: The non-monotonic relationship between layer count and detection performance can be explained through our analytical framework:
- Initial enhancement: Adding layers initially strengthens detection by creating more statistical patterns across independent vocabulary partitions.
- Entropy reduction effect: As shown in our theoretical analysis (*R2* of Our response to Reviewer jksy), each layer of unbiased reweighting reduces the expected entropy of the probability distribution.
- Diminishing benefits mechanism: When too many layers are applied, the probability distribution becomes increasingly concentrated, causing later layers to operate on highly skewed distributions. This creates a situation where green lists in later layers may have extremely low probability, making generated tokens unlikely to fall into these green lists and adding noise to the detection process.
We observed this phenomenon more prominently with larger scaling factors, as they cause more dramatic probability redistribution at each layer, accelerating both the initial enhancement and subsequent diminishing benefits.
**Q4**: Meaning of symbols
**R4**: $M$ stores absolute counts for detection.
During detecting watermark, a ratio of green counts is calculated based on the absolute counts in $M$.
Though there is noise caused by randomness of text generation, for certain length of watermarked text, the green ratio of watermarked text will significantly exceed 1/2 and that of non-watermarked text will be around 1/2. | Summary: In this paper, the authors introduces BiMark, a watermarking framework for LLMs designed to address the challenges of text quality preservation and model-agnostic detection. BiMark utilizes a bit-flip unbiased reweighting mechanism, a multi-layer architecture, and an advanced information encoding strategy to embed watermarks without degrading text quality. Through theoretical and experimental validation, BiMark maintains comparable performance on downstream tasks such as summarization and translation.
Claims And Evidence: In abstract, the authors claimed that "through theoretical analysis and extensive experiments, we validate that, compared to state-of-the-art methods, BiMark reducing perplexity by up to 11%." However, the unbiased property only ensures that the distribution of watermarked LMs is equal to the original LM distribution. Reducing the PPL should not be expected in an unbiased watermark. Thus, this experimental observation contradicted with the unbiased property introduced in this paper.
In introduction, the authors claimed that "adapting existing unbiased reweighting methods for reliable model-agnostic detection is challenging." However, this challenge has been addressed by the prior work e.g. dipmark and SynthID-Text [1]. Thus, the model-agnostic detection is generally not a challenge in LLM watermarking research.
[1] Scalable watermarking for identifying large language model outputs. Nature, 2024.
Methods And Evaluation Criteria: This work missing several important baseline watermarking methods. In multi-bit watermark detection experiments, Qu et al., 2024 [1] and Fernandez et al., 2023 [2] should be included as baseline. In zero-bit distortion free watermark detection experiments, unbiased watermark and dipmark should also be included.
Besides, the robustness of the proposed watermark is not measured in the experiments.
[1] Provably Robust Multibit Watermarking for AI-generated Text via Error Correction Code.
[2] Three bricks to consolidate watermarks for large language models.
Theoretical Claims: The theoretical claims look correct to me
Experimental Designs Or Analyses: In prior work, the watermark algorithms are usually evaluated from three perspective: detectability, quality, and robustness. This paper only consider the detectability and the quality, a comprehensive evaluation of the robustness of the proposed method is missing.
Supplementary Material: No.
Relation To Broader Scientific Literature: The reweight algorithm proposed in this paper is similar to the gamma-reweight and the dipmark. The authors should compare their method with the prior work and discuss why the proposed method is better than the prior work.
Essential References Not Discussed: Gamma-reweight and dipmark are cited in the work, but the authors fail to discuss those two methods, given the similarity between the bit-flip reweight and the prior work.
Other Strengths And Weaknesses: See above comments
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1**:*Perplexity*
**R1**:
We reported perplexity in our experiments because it is one of the most commonly used metrics for evaluating the quality of generated text. We will revise the relevant statement in the final paper.
**Q2**:*Challenge of unbiased watermark and model-agnostic detection*
**R2**:
We agree with you that prior work such as DipMark and SynthID has addressed the model-agnostic requirement. However, our work uniquely aims to integrate three critical properties—text quality preservation, model-agnostic detection, and message embedding capacity—into a single cohesive framework, a task that remains challenging in the current research landscape. We will revise our statement accordingly to prevent confusion.
**Q3**:*Baseline methods and evaluation criteria*
**R3**:
Additional comparative evaluations with \[1\] are conducted, as presented in the following table. Note that \[2\] is not suitable as a baseline due to its significant inefficiency. Previous studies have confirmed its high computational cost—for example, embedding a 32-bit message requires approximately 29,000 seconds \[1\].
The table below reports the bit match rates of multi-bit watermark detection experiments on 8-bit, 16-bit and 32-bit message embedding and extraction, where Length indicates the number of tokens in the watermarked text being detected.
|Length|50|50|50|100|100|100|200|200|200|300|300|300|
|-------------------------|:-----:|:------:|:------:|:-----:|:------:|:------:|:-----:|:------:|:------:|:-----:|:------:|:------:|
|Method|8-bit|16-bit|32-bit|8-bit|16-bit|32-bit|8-bit|16-bit|32-bit|8-bit|16-bit|32-bit|
|MPAC\[3\]|78.81|66.06|51.03|89.75|78.21|65.35|96.4|89.04|78.15|98.57|93.83|84.7|
|BCH\[1\]|96.54|71.94|49.08|98.74|84.4|64.67|99.16|95.25|85.46|100|96.21|90.46|
|BiMark|95.26|85.55|66.35|97.62|93.31|82.69|98.15|95.54|89.68|97.88|95.86|90.22|
Compared with \[1\], BiMark is an unbiased watermarking method, and the Error Correction Code component of \[1\] has the potential to integrate with BiMark to further improve multi-bit watermark performance.
Additional comparative evaluations with DiPmark \[4\] are conducted.
We exclude comparison with Gamma-Reweight \[5\] as it is not model-agnostic detectable, thus beyond the scope of the application scenario of our study.
Additional evaluations on robustness are conducted through the advanced paraphraser model Dipper \[6\].
The table below reports TPR@1%FPR of watermark detection on watermarked text with and without paraphrasing, where (20,0) indicates parameters lex_diversity and order_diversity of Dipper are 20 and 0.
|Length|50|50|50|50|100|100|100|100|200|200|200|200|300|300|300|300|
|---------------------------------------------|:---------:|:-------:|:-------:|:--------:|:---------:|:-------:|:-------:|:--------:|:---------:|:-------:|:-------:|:--------:|:---------:|:-------:|:-------:|:--------:|
|Method|-|(20,0)|(0,20)|(20,20)|-|(20,0)|(0,20)|(20,20)|-|(20,0)|(0,20)|(20,20)|- |(20,0)|(0,20)|(20,20)|
|Soft-Red-List\[7\]|68.88|15.43|35.27|13.08|92.53|32.39|69.94|27.4|98.11|56.25|90.84|49.4|99.78|71.16|96.9|66.52|
|SynthID\[8\]|97.25|54.82|87.03|50|98.04|83.14|96.52|76.13|99.48|97.83|99.55|94.75|100|97.78|100|97.21|
|DiPmark\[4\]|59.57|10.86|24.4|10.25|76.07|23.71|34.03|15.56|89.94|41.62|62.63|0.236|94.76|65.39|89.57|42.81|
|BiMark|97.87|67.4|89.17|59.94|98.42|78.37|95.71|70.9|99.81|91.62|99.2|87.28|100|98.93|100|98.35|
\[1\] Provably Robust Multibit Watermarking for AI-generated Text via Error Correction Code.
\[2\] Three bricks to consolidate watermarks for large language models.
\[3\] Advancing Beyond Identification: Multi-bit Watermark for Large Language Models.
\[4\] A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models.
\[5\] Unbiased Watermark for Large Language Models.
\[6\] Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defence.
\[7\] A Watermark for Large Language Models.
\[8\] Scalable watermarking for identifying large language model outputs.
**Q4**: Robustness analysis and evaluation
**R4**: A primary robustness evaluation has been provided in Section 5.1 Table 3 in our paper. **Additional evaluations are conducted to test BiMark's resilience against paraphrasing attacks, as outlined in R3**.
**Q5**:Comparison with DiPmark and Gamma-Reweight
**R5**:
|Feature / Method| DiPmark | Gamma-Reweight | BiMark |
|------------------------------------------------------------------------------|---------|----------------|--------|
Multi-layer mechanism for enhanced detectability and robustness|X|X|✔️|
|Supports embedding/extracting multi-bit messages|X| X|✔️|
|Model-agnostic detectability|✔️|X|✔️| | Summary: This paper introduces BiMark, a comprehensive framework for watermarking large language models that achieves three critical objectives: text quality preservation, model-agnostic detection, and message embedding capacity. The core innovation is a novel probability distribution reweighting method with a multilayer architecture. The approach iteratively adjusts probability distributions across random vocabulary partitions using pseudorandom bits, keeping the modified probability consistent with the original probability in expectation to maintain text quality. The authors further develop methods for encoding and extracting messages within these patterns. Through theoretical analysis and experiments, the paper demonstrates BiMark’s ability to embed multi-bit information without degrading text quality, along with enhanced robustness to existing methods.
Claims And Evidence: The claims made in the submission are well-supported by convincing evidence through both theoretical analysis and experimental results. Lemma 4.1, Theorem 4.2, and Appendices A.1 and A.2 provide theoretical guarantees for the unbiasedness of reweighting and the detectability of the watermark. Sections 4.3 and 4.4 describe the information embedding methods, and Figure 6 and Algorithm 1-2 in Appendix C detail the complete algorithmic process. The experiments in Section 5 demonstrate the method's ability to carry multi-bit information and the impact of watermarks on text quality, and include ablation experiments to show the impact of multi-layer mechanisms on the detectability and robustness of watermarks.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the watermarking problem addressed. The experiments comprehensively evaluate key aspects: message embedding capacity, text quality preservation, and watermark detectability.
Theoretical Claims: The theoretical is convincing.
Experimental Designs Or Analyses: The paper’s experimental design is sound, though the attack scenarios in robustness experiments could be more diverse.
Supplementary Material: I reviewed almost all of the supplementary material.
Relation To Broader Scientific Literature: The paper properly situates its contributions within the existing literature:
1. [1] introduced using pseudorandom number generators with previous tokens as seeds for vocabulary bipartition during inference, adjusting token probabilities to inject watermarks, with detection via z-test of green token proportions.
2. [2] introduced unbiased probability distribution reweighting for preserving text quality, using cumulative probability thresholds for vocabulary partitioning, but requiring model access for detection.
3. [3] extended [2] with more refined probability reweighting but didn’t address message embedding.
4. [4] extended [1] to enable message embedding through encoding vocabulary partition.
BiMark naturally combines the red-green list partitioning from [1] with a novel unbiased reweighting method with a multilayer mechanism to achieve message-carrying, model-agnostic detection while preserving text quality.
[1] Kirchenbauer, John, et al. "A watermark for large language models." International Conference on Machine Learning. PMLR, 2023.
[2] Hu, Zhengmian, et al. "Unbiased Watermark for Large Language Models." The Twelfth International Conference on Learning Representations.
[3] Wu, Yihan, et al. "A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models." International Conference on Machine Learning. PMLR, 2024.
[4] Yoo, KiYoon, Wonhyuk Ahn, and Nojun Kwak. "Advancing Beyond Identification: Multi-bit Watermark for Large Language Models." Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Strengths**:
1. Introduces a novel vocabulary-based unbiased reweighting method using a coin-flip-like random variable to control probability redistribution direction, maintaining expected probabilities while creating detectable watermark patterns.
2. Proposes an innovative multilayer mechanism where multiple independent vocabulary partitions and unbiased reweighting allow each token to be influenced by multiple green lists, providing more observable evidence for watermark detection.
3. Creatively employs one-time-pad cryptography to encode messages into the watermarking control sequence, maintaining the unbiased property while carrying hidden information.
4. Provides both theoretical and experimental analysis of the unbiasedness and the detectability of the watermark.
**Weaknesses**:
1. Limited discussion of computational resources required for the multilayer unbiased reweighting method, particularly as layers increase.
2. Robustness experiments lack sufficient diversity of attack scenarios.
Other Comments Or Suggestions: The paper contains several typographical inconsistencies:
1) In Figure 3, the authors use the term “level” in the diagram but use “layer” throughout the rest of the paper.
2) In Figure 4 legends, the method name “KGW” appears, but “Soft Red List” is used in the paper.
Questions For Authors: 1. What is the computational impact of the multilayer architecture, particularly as the number of layers increases?
2. How should scaling factors for unbiased reweighting be selected, and what factors does this selection depend on?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: Computational cost of multi-layer reweighting
**R1**: Please refer to our response to Reviewer jksy in *R1*.
**Q2**: Robustness experiments
**R2**: We conducted advanced paraphrasing attacks to our watermarked text.
The detectability of BiMark under paraphrasing shows desirable robustness.
Analysis reveals multi-layer watermarking provides fine-grained watermark evidence to keep the green ratio of watermarked text steadily deviating from 1/2 after paraphrasing.
For more details, please refer to our response to Reviewer kPVR in *R3*.
**Q3**: Terminology consistency
**R3**: Thank you for pointing out the inconsistencies in our terminology. We will correct these issues and thoroughly revise the paper to avoid other potential issues.
**Q4**: Scaling factor selection
**R4**: For selecting scaling factors, we recommend adjusting based on the application scenario:
- For better text quality: use smaller scaling factors with more layers.
- For computational efficiency: use larger scaling factors with fewer layers.
The underline insight behind these selection lies in entropy reduction effect of multi-layer reweighting. For details, please refer to our response to Reviewer jksy in *R2*. | null | null | null | null | null | null |
Efficient Logit-based Knowledge Distillation of Deep Spiking Neural Networks for Full-Range Timestep Deployment | Accept (poster) | Summary: This paper proposes Temporal-wise Logit-based Distillation, which trains the SNN timestep by timestep using the labels and the teacher output of the ANN, and uses the overall average output as a guidance signal. The authors theoretically prove the convergence of their losses and experimentally demonstrate the performance of their method on static and DVS datasets.
## update after rebuttal
Further responses from the authors demonstrate the effectiveness of their method.
Claims And Evidence: The authors claim in the t-SNE visualization in Figure 4 that their method has better clustering than the standard SNN training method, but this effect is hard to perceive.
Methods And Evaluation Criteria: The proposed method is incremental rather than innovative:
1. The temporal-wise cross-entropy (TWCE) loss proposed in this paper is the same as the TET loss (see Eq. 9 in the TET paper [1]);
2. The timestep-by-timestep distillation method is similar to the spatial self-distillation loss in TSSD [2], with the difference that this paper uses a pre-trained ANN, whereas TSSD uses the final output of the SNN to bootstrap intermediate outputs.
In addition, the authors consider the total output of the SNN as an ensemble for self-distillation, suggesting the inclusion of a discussion and comparison with [3]. Moreover, [3] also showed that their method is able to achieve better inference performance with less than the training timestep, so I suggest a fair comparison with [3].
```
[1] Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. In ICLR. 2022.
[2] Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks. In Arxiv. 2024.
[3] Rethinking Spiking Neural Networks from an Ensemble Learning Perspective. In ICLR. 2025.
```
Theoretical Claims: There are no obvious weaknesses in the theoretical proofs in this paper.
Experimental Designs Or Analyses: About the experiment:
1. Although the proposed method shows a clear performance advantage in comparative experiments, I think that **the performance advantage claimed by the authors is mainly due to the experimental setup (the performance of the baseline model) rather than to their method**. For example, using ResNet-18 at T=6, the baseline of this paper achieves an accuracy of 79.26% on CIFAR100, exceeding the performance of other methods with the same architecture. The addition of the proposed method increases the accuracy by only 0.53%, which does not indicate the performance advantage of its method. I suggest that the authors make a fair comparison with other methods under the same conditions.
2. I would like the authors to provide ablation experiments on the DVS dataset, as the use of an ANN teacher may not promote the temporal properties that are important for SNNs. In addition, I suggest that the authors add the results using VGGSNN on CIFAR10-DVS, as this is a commonly used architecture for SNN methods, and TKS [1] achieved an accuracy of 85.3% using VGGSNN on CIFAR10-DVS with 10 timesteps.
3. Could the method proposed in this paper be applied to the Transformer architecture, which might be able to improve the performance of the SOTA SNN?
4. The authors should provide ablation experiments using only $\mathcal{L}_{TWSD}$, as this is one of their most innovative points.
5. The visualization in Figure 4 does not reflect the advantages of the method proposed in this paper, and it is recommended that the authors provide more comparisons or other visualizations.
6. It is recommended that the authors report the performance of the baseline model in Table 7 to reflect the performance advantages of their method at different timesteps.
```
[1] Temporal Knowledge Sharing Enable Spiking Neural Network Learning From Past and Future. IEEE Transactions on Artificial Intelligence. 2024.
```
Supplementary Material: The authors provide the code in the supplementary material. The methods in the code are consistent with what is described in the paper, but the implementation of some of the key functions is missing and therefore the code provided does not work.
In light of this, I remain skeptical of the accuracy reported in the paper, i.e., that the superior accuracy is due to the baseline model rather than the proposed method.
Relation To Broader Scientific Literature: I think the main contribution of this paper is similar to [1,2] with a weak core innovation.
```
[1] Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. In ICLR. 2022.
[2] Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks. In Arxiv. 2024.
```
Essential References Not Discussed: This paper's view of the final output of the SNN as an ensemble (in Section 3.2) is similar to [1]'s, and I suggest that the authors discuss the differences between the two.
```
[1] Rethinking Spiking Neural Networks from an Ensemble Learning Perspective. In ICLR. 2025.
```
Other Strengths And Weaknesses: I think this paper's method of using the overall output of the SNN to distill the output on a timestep basis is concise and somewhat original, and I suggest that this is the main contribution of this paper. It is recommended that the authors include ablation of this loss in the experimental section to verify its validity.
Other Comments Or Suggestions: Authors should compare fairly with other methods in their experiments, not just achieve improved accuracy with a high baseline model.
In addition, there are some minor errors in the paper that need to be improved, such as an incomplete sentence in line 78 and a double citation of [1] in the references section.
```
[1] Surrogate Module Learning: Reduce the Gradient Error Accumulation in Training Spiking Neural Networks. In ICML. 2023.
```
Questions For Authors: I have no further questions. I am willing to adjust my score if the author clearly demonstrates the innovation and effectiveness of their method.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer BB72
---
### __R1:__ Contributions reclaim.
Our contributions primarily focus on a **logits-based distillation** framework tailored for SNNs, effectively leveraging spatio-temporal features **without incurring additional overhead**. Specifically, we address two critical deployment concerns: **accuracy degradation from ANNs** and **retraining costs when adjusting timestep settings**, achieving this through a lightweight, end-to-end solution validated by theoretical and empirical analysis. Furthermore, the trained SNNs demonstrate a notable advantage in terms of **robustness across inference timesteps**, as exemplified by stable performance from T=1 to T=6 (Tab. 7), enabling practical, flexible trade-offs between inference cost and accuracy. Overall, our clearly articulated methodology, novel conceptualization of temporal robustness, theoretical and empirical validation meaningfully advance the field of SNN distillation. Connections and differences with related works are further detailed in responses **R1.1**-**R1.3**, positioning our contributions within the existing literature.
### __R1.1:__ Relation to TET [1] and its underlying concept
Our work relates to TET [1], which significantly advanced direct SNN training by highlighting temporal-wise decoupling of the classification objective. Inspired by this underlying concept, our paper introduces, for the first time, a clear definition and adaptation of temporal-wise decoupling into logits KD domain. Specifically, we demonstrate—both theoretically and empirically—the advantage of designing logits KD aligned with the spatio-temporal characteristics inherent to SNNs.
Unlike TET, which focuses exclusively on decoupling cross-entropy loss and convergence boundary analysis, our method emphasizes applying temporal decoupling explicitly within logits KD, highlighting that performance improvements can be obtained without additional complexity (“free lunch”, **see R4 of Reviewer p1Y4**). Moreover, our work uniquely emphasizes the robustness across different inference timesteps, showcasing stable convergence for submodels (Sec. 3.4 and Tab. 7), an advantage previously unexplored by TET or related studies. Additionally, by integrating self-distillation within our temporal-decoupling framework, we fully utilize supervision signals at all timesteps without introducing extra overhead, further extending beyond TET.
### __R1.2:__ Relation and comparison with TSSD [2]
We regard TSSD as representative work in self-distillation, exploring soft-label generation from temporal and spatial perspectives. Unlike TSSD, which introduces overhead through auxiliary classifiers and extra forward processes, our method achieves improvements by using existing supervision signals (teacher, ensemble, and task labels in logits-KD framework) within a fixed backbone, requiring no additional branches or training overhead.
### __R1.3:__ Ensemble perspective with [3]
We recognize [3] as an excellent contemporaneous study that similarly explores submodels within SNNs from ensemble perspectives. While [3] constructs KL between adjacent timesteps (t and t-1), our method builds KL between the overall ensemble output and each individual time t. We will include discussion in the updated version, enabling readers to more comprehensively grasp ensemble learning in SNNs. We plan a detailed comparison once [3] becomes publicly available.
### __R2:__ About experiment setup.
In setting up, our work has maintained strict alignment with recent SNN studies [OTTT, SLTT, ASGL, rateBP]. We replicated logits KD, TKS on C100 setup:
||base|logits-KD|TKS|ours|
|-|-|-|-|-|
|R18,C100,T=6|78.62|79.07|79.15|79.80|
On Imagenet and C10-DVS (Tab. 2 & 3), base models do not exhibit special advantage. The result on large-scale __Imagenet__ is indicative of performance.
### __R3:__ Ablation study on CIFAR10-DVS
||$L_{TWCE}$|$L_{TWCE}+L_{TWSD}$|$L_{TWCE}+L_{TWSD}+L_{TWKD}$|
|-|-|-|-|
|vggssn,C10DVS,T=10|83.2|85.8|86.3|
### __R4:__ Results on spikingformer [4]
|Arch|Spikingformer|+ours|
|-|-|-|
|trans-2-384,C100,T=4|78.34|80.77|
|trans-4-384,C100,T=4|79.09|81.12|
### __R5:__ Ablation on TWSD
||$L_{SCE}$|$L_{SCE}+L_{TWSD}$|$L_{TWCE}$| $L_{TWCE}+L_{TWSD}$ |
|-|-|-|-|-|
|R18,C100,T=4|77.95|78.50|78.58|78.94|
|R18,C100,T=6|78.62|79.15|79.26|79.63|
### __R6:__ Relation of Fig. 4 to the main claim and Tab. 7 baseline results
Thank you for suggestions. Please refer to **R1 of Reviewer p1Y4**.
### __R7:__ Regarding reproducibility and minor errors
Following modifications will ensure the code runs correctly:
1. In`util.py`, change`args.arch().lower()`to `args.stu_arch.lower()`within`get_model_name`function.
2. Delete`init_model(model)`in`bptt_model_setting`function.
3. In run command, replace`--epoch`with`--num_epoch`.
The appendix provides necessary details. We plan to open-source complete codes and ensure reproducibility in future releases. Minor errors will be fixed, and we appreciate your feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' response, additional experiments confirm the validity of the proposed method on the Transformer architecture as well as the TWSD loss.
However, I still consider the rest of the losses, except for the TWSD loss, to be incremental, especially since I note that a paper [1] accepted by CVPR 2025 similarly proposes distillation by time step, which weakens the innovation of this paper. I also noticed that the results of using all losses in the ablation studies in response to R5 (78.94 and 79.63) are not consistent with the highest accuracy reported in the paper, and I hope the authors will respond to this in the next round.
Given the incremental novelty and confusing experimental results, I don't think this paper currently meets acceptable standards for ICML. However, the experimental results provided by the authors demonstrate the effectiveness of their method, so I look forward to further responses from the authors.
[1] Temporal Separation with Entropy Regularization for Knowledge Distillation in Spiking Neural Networks. CVPR. 2025.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We would like to further discuss our work with you.
Firstly, sorry for not making it clear, the results of 78.94 and 79.63 in R5 are accurate and are based on TWCE and TWSD, excluding TWKL; the highest accuracy in our paper is achieved with all losses, which include TWCE, TWKL, and TWSD. Here, TWCE+TWSD can be considered as an ablation result of TWKL.
Regarding novelty, we believe our contribution goes beyond simple modifications of the loss function; more importantly, it involves a deep analysis, exploration, and insight into the concept of time-decoupled thinking in logits-based distillation. Both theoretical analysis and empirical validations are provided to illustrate the alignment of the technical solution with its motivation. Furthermore, we delve deeper into the benefits of time robustness brought by the concept of time-decoupled, clearly demonstrating these through analysis and empirical results. To our knowledge, this discovery about the benefits of time robustness from the time-decoupled idea is first explored in our work. Overall, beyond mere performance improvement, we think the clear methodology, the new technical definition of temporal robustness in distilled SNNs explored in deployment scenarios, relevant theoretical analysis, and corresponding empirical validations undeniably contribute to the advancement of the SNN distillation field.
Regarding the work [1]. [1] is a commendable concurrent work that also focuses on and demonstrates the effectiveness of time-decoupled KL loss in a distillation framework. In considering the concept of time-decoupled, our work differs from [1] in that: 1. We start with the idea of decoupling SCE into TWCE (proposed in TET), considering that CE loss is a consistency form of KL loss (where CE can be seen as a special form of KL divergence between logits and hard labels), and unify this under a comprehensive logits-based loss framework. We provide in-depth insights into the implementation of time-decoupled in logits-based knowledge distillation from perspectives of loss convergence and submodel ensemble, and offer theoretical analysis of its effectiveness. 2. We delve deeper into the benefits of time robustness brought by the concept of time-decoupled, clearly demonstrating these through both theoretical analysis and empirical validation. To our knowledge, this discovery about the benefits of time robustness from the time-decoupled idea is first explored and deeply analyzed in our work. 3. As mentioned, within this time-decoupled logits framework, we further propose TWSD, a highly compatible, cost-free self-distillation target, which pushes the concept of time-decoupled logits distillation to its limits.
We appreciate you pointing out [1], which was accepted at CVPR 2025, published after we submitted this paper to ICML. We will include a discussion on its relation and differences to provide a more comprehensive literature context, and we appreciate your feedback. | Summary: This paper proposes a novel distillation framework for deep SNNs that optimizes performance across full-range timesteps without
specific retraining, enhancing both efficacy and deployment adaptability. We provide both theoretical analysis and empirical validations to illustrate that training guarantees the convergence of all implicit models across full-range timesteps.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. Appendix A.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No discussion of recent paper [1] using distillation in SNNs.
No discussion of recent paper [2] in learning methods for SNNs.
[1] Quantized Spike-driven Transformer. ICLR 2025.
[2] Scaling Spike-driven Transformer with Efficient Spike Firing Approximation Training. IEE T-PAMI 2025.
Other Strengths And Weaknesses: S:
a novel knowledge distillation framework is proposed. This framework optimizes the performance of SNNs over the entire time step range by means of time decoupling, without the need to retrain for specific time steps, significantly improving the deployment adaptability of SNNs.
well-written
W:
Is the proposed method still in MS-ResNet [1]?
No discussion of recent paper [2] using distillation in SNNs.
No discussion of recent paper [3] in learning methods for SNNs.
[1] Quantized Spike-driven Transformer. ICLR 2025.
[2] Scaling Spike-driven Transformer with Efficient Spike Firing Approximation Training. IEE T-PAMI 2025.
[3] Gated attention coding for training high-performance and efficient spiking neural networks. AAAI 2024.
Other Comments Or Suggestions: See above.
Questions For Authors: Is the proposed method still in MS-ResNet [1]?
Discuss the recent paper [2] using distillation in SNNs.
Discuss the recent paper [3] in learning methods for SNNs.
[1] Quantized Spike-driven Transformer. ICLR 2025.
[2] Scaling Spike-driven Transformer with Efficient Spike Firing Approximation Training. IEE T-PAMI 2025.
[3] Gated attention coding for training high-performance and efficient spiking neural networks. AAAI 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer zowF
---
### __R1:__ Is the proposed method still in MS-ResNet [1]?
Following the experimental setup in [3], our frameword can also support the MS-ResNet [4] structure as expected, with the experimental results as follows:
|CIFAR100,T=6 | baseline | logits-KD | ours |
|-|-|-|-|
|MS-ResNet-18|76.41|79.63|80.49|
### __R2:__ Discuss the recent paper [2] using distillation in SNNs.
Thank you for your comment. We acknowledge that [2] is an excellent piece of work and represents a significant advancement in the study of spiking transformer architectures. We note that distilling spiking transformer architectures requires special consideration of the choice of the teacher model ANN, as the design philosophy of spiking transformer architectures differs from traditional ANN architectures. Unlike the ResNet structure, it is not possible to find an ANN counterpart with the same structure, making this a unique consideration for transformer distillation schemes. At this point, the differences in structural design necessitate a more ingenious and novel heuristic design for the loss objective aligned with features between the ANN teacher and the SNN teacher under the feature-based distillation framework, which introduces greater design complexity in practical implementations. Therefore, we believe that logits-based, end-to-end distillation offers more practical advantages in the direction of spiking transformers. Our work aptly discusses how to fully unleash the potential of logits-based distillation in exploiting the unique spatiotemporal characteristics of SNNs, thereby laying a solid foundation for further exploration of logits-based distillation in spiking transformer architectures.
We will update our paper with the above discussion to strengthen our positioning in the literature. Thank you for your suggestion.
### __R3:__ Discuss the recent paper [3] in learning methods for SNNs.
The work in [3] is quite fascinating. We understand it as implementing adaptive encoding of spike sequence inputs during the network training process by adding an attention module at the input layer, which significantly enhances the adaptability of SNN architectures to static inputs and better utilizes the SNN’s capability to process spatiotemporal information. Interestingly, under this encoding method, the characteristics of the intermediate features at different times change with time, and the spike representations carry temporal dimension information. In this scenario, the standard distillation framework would cause the features at different times to fit towards the same ANN features, thus diluting the desired diversity in the temporal dimension. The prposed distillation framework considered in our paper, which is designed for temporal-wise decoupling, is better suited for such scenarios. We think the great potential of the proposed distillation framework in this direction is worth further exploration.
We will include the above discussion in the updated version. Thank you for your suggestion.
---
## Reference:
[1] Quantized Spike-driven Transformer. ICLR 2025.
[2] Scaling Spike-driven Transformer with Efficient Spike Firing Approximation Training. IEEE TPAMI 2025.
[3] Gated attention coding for training high-performance and efficient spiking neural networks. AAAI 2024.
[4] Advancing Spiking Neural Networks towards Deep Residual Learning. IEEE TNNLS 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate your detailed explanation. I decided to raise my score from 3 to 4. | Summary: This paper proposes a Temporal-wise logit-based distillation method (TWLD), which consists with three different loss (classification loss, distilation loss, and self-distillation loss) for SNN output at each time step. This new method significantly improves the performance of SNNs and their robustness to time step change.
Claims And Evidence: Yes, the authors provide theoretical proof and sufficient experimental verification.
Methods And Evaluation Criteria: Yes, the proposed method has improved the performance upper bound of SNN, which may make sense for the research and application of SNN.
Theoretical Claims: Yes, the authors provide theoretical proofs of the effectiveness of the corresponding claims.
Experimental Designs Or Analyses: Most of the experiments provided by the this paper are reasonable and effective. However, I think the t-SNE experiment (Figure.4) is unnecessary as it does not demonstrate the effectiveness of the new method compared to Standard Logits-based Distillation.
Supplementary Material: I reviewed the supplementary materials in appendix, but did not check the provided code.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength:
1. The algorithm provided by the author is simple and effective, achieving SOTA performance on ImageNet.
2. The author provided enough theoretical proof and relevant experiments to demonstrate the effectiveness of the algorithm.
Weakness:
1. The author needs to explain in detail the practical significance of achieving a more robust SNN with time step changing.
2. The author did not provide whether their method affects the important SNN factors such as firing frequency.
3. The results presented in Table 6 show that compared to the original distillation algorithm, this paper only improved by 0.73%. The author needs to provide more detailed comparisons on the training cost of the algorithm.
Other Comments Or Suggestions: Figure 1 should provide the comparison between the proposed method and the original distillation learning, rather than direct training.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer p1Y4
---
### __R1:__ Relation of Fig. 4 to the main claim and Tab. 7 baseline results
Thank you for your suggestion. We use Fig. 4 to show the distinction from the standard logits-based distillation. In the case of the proposed temporal-wise distillation in Fig. 4b, the clustering effects of the submodels exhibit a high degree of similarity. For example, the upper, lower, and central-left clusters in each subplot of Fig. 4b (which will be marked in the updated images) all display a similar pattern across the five sub-figures. This largely echoes our analysis in Section 3.4, where the loss of each submodel is essentially embedded within a larger submodel framework, resulting in a uniform convergence effect in their clustering outcomes. In the updated version, we will specifically highlight the observed clustering phenomena in Fig. 4b to better complement the discussion mentioned above.
Additionally, we think the following results can more intuitively demonstrate the effectiveness of the proposed method compared to standard logits-based distillation paired with Fig 4:
**Table X1. Comparison of inference performance based on models trained over different timesteps. The experiments were conducted using ResNet-18 on the CIFAR-100 dataset.**
|Method|Timesteps|T=1|T=2|T=3|T=4|T=5|T=6|
|-|-|-|-|-|-|-|-|
|standard logits-KD|T=2|**73.58**|**77.02**|77.31|77.63|77.80|77.80|
||T=4|72.44|76.50|**77.57**|**78.32**|78.47|78.50|
||T=6|71.08|76.25|77.52|78.25|**78.63**|**79.07**|
|ours|T=2|74.19|77.32|77.65|77.95|78.13|78.14|
||T=4|75.08|77.76|78.40|79.10|79.21|79.36|
||T=6|**75.09**|**77.80**|**78.70**|**79.32**| **79.60**|**79.80**|
The table above shows inference results, supplementing the baseline results shown in Tab. 7. Horizontally, it demonstrates how inference accuracy changes when adjusting timesteps after model training. Under standard logits-KD, different models excel within specific inference timestep ranges: T=2 model performs best at timesteps 1–2, T=4 at times 3–4, and T=6 at time 5–6. In contrast, ours consistently achieves optimal performance using the model trained at the maximum timestep (T=6), clearly demonstrating the effectiveness of our method in training robust SNNs that maintain accuracy across varying inference timesteps.
Thank you for your suggestion; we will include these results and analysis in the updated version.
### __R2:__ Practical significance of temporal robustness
Ensuring the robustness of SNNs models at different inference timesteps can provide the following two technical advantages:
1. **Horizontal view from Table X1.** Taking the model trained with T=6 as an example, it shows stable performance across the inference window from T=1 to 6. Once deployed, the model does not require additional considerations for adaptation and switching across different inference states. This allows us to practically balance inference costs and performance directly, providing a viable model approach for scenarios that require real-time control of inference costs based on computational resources.
2. **Vertical View from Table X1**. For the model trained with T=4, we can invest in greater training costs to distill the T=6 model, and use the submodel at T=4 (essentially the same model) to achieve better performance. This offers an effective and feasible way to enhance performance by leveraging surplus training resources, providing a viable technical solution for scenarios where large training resources are available and performance enhancement is a critical issue.
### __R3:__ Results of firing rates
|R18,C100|t=1|t=2|t=3|t=4|t=5|t=6|avg|
|-|-|-|-|-|-|-|-|
|logits-KD [T=4]|0.1799|0.2137|0.2045|0.2091|/|/|0.2018|
|ours [T=4]|0.1819|0.2194|0.2026|0.2138|/|/|0.2044|
|logits-KD [T=6]|0.1761|0.2034|0.2023|0.1966|0.2060|0.1941| 0.1964|
|ours [T=6]|0.1775|0.2101|0.1937|0.2063|0.1952|0.1980|0.1980|
### __R4:__ About training costs and practical effectiveness
Thank you for your suggestion. We have added measurements of training costs:
||logits-KD [T=4] | ours [T=4] | logits-KD [T=6] | ours [T=6] |
|-|-|-|-|-|
|Time (s/batch)|0.17367|0.17443|0.26766|0.26811|
|Memory (MB)|6333.15|6333.20|9105.12|9105.71|
Consistent with the discussion, the extra training overhead from ours is negligible compared to the overall backbone. We will include results in the updated version to reinforce the lightweight nature.
Following your comments, we conclude that ours achieves a 0.73% performance improvement over logits-KD at virtually no extra cost and also introduces robustness across varying inference timesteps. We think advantages of ours in practical use are evident.
### __R5:__ Include logits-KD in Fig. 1
Thank you for your suggestion. We will add the performance of logits-based KD to Fig. 1 to more clearly show effectiveness. The necessary data is already available in Table X1.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response to my questions. I raised my score to 4, but I still think there is a slight gap before this paper can be directly accepted. I think that authors should discuss in more detail the actual SNN scenarios where their method is suitable instead of just raising performance. For example, they could discuss how the method can reduce the actual inference overhead of SNN through the SEENN method.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We noted that the SEENN project ingeniously designed temporal pruning, achieving a trade-off improvement between inference time and performance through early exit, and provided a dynamic adjustment scheme for inference. We replicated the SEENN-I scheme, setting it under logits-KD and our method on CIFAR-100, and compressed the inference time with the following results:
| CIFAR100| CS | 0.7| 0.8 | 0.9 | 0.99 | 0.999 |
|-|-|-|-|-|-|-|
| logits-KD,T6 | acc | 73.23 | 74.65 | 77.12 | **78.75** | **79.03** |
| | avg_time | 1.139| 1.280| 1.606| 2.424| 3.076 |
| logits-KD,T4 | acc | **74.14** | **75.53** | **77.53** | 78.28 | 78.32 |
| | avg_time | 1.138| 1.268| 1.568| 2.168 | 2.697 |
| ours, T6 | acc | **76.61** | **77.58** | **79.05** | **79.75** | **79.79** |
| | avg_time | 1.165| 1.316| 1.690| 2.493| 3.188 |
| ours, T4 | acc | 76.51 | 77.46 | 78.73 | 79.09 | 79.10 |
| | avg_time | 1.164| 1.306| 1.620| 2.211| 2.752|
First, consistent with our previous observations, logits-KD training at T=4 performed better than at T=6 in scenarios where the inference time was significantly reduced. We think this is because the submodel at T=4 is more advantageous during moments T=1~4, hence at a specific time point, e.g., t=2, the model trained at T=4 predicts more accurately; in this case, due to the reduction in inference time, lots of models might early exit at t=2, leading to a more reliable performance for the T=4 model compared to the T=6 model. This supports the importance of time robustness as a model property, which significantly impacts optimization when pruning inference time. The compression results of ours at T=4 and T=6 demonstrated the advantages brought by time robustness in actual SNN scenarios. The gains from time robustness allow us to achieve similar performance with even less reduced inference time, which can be used to further reduce the actual inference overhead of SNNs within the SEENN framework. Thank you for pointing out SEENN as a good example of considering actual scenarios. We will include it in our work to better demonstrate the effectiveness and potential application space of our work, and we appreciate your comments. | null | null | null | null | null | null | null | null |
In-Context Learning as Conditioned Associative Memory Retrieval | Accept (poster) | Summary: This paper proposes a novel interpretation of in-context learning(ICL)as the memory retrieval of the modern Hopfield model from a conditional memory set. The paper is well-written and the techniques proposed are novel.
Strengths: The innovative Hopfield-based perspective, solid theoretical framework, clear structure, and intuitive experimental focus are promising foundations for a strong paper.
Limitation: Clarify key concepts, tighten mathematical rigor, expand experiments, complete missing sections, and better highlight novelty.
Claims And Evidence: The claims made in the submission are generally clear, but there is one minor problem.
Claim 2 (Hidden Concept Learning via Reshaping):
Problematic: The evidence is theoretical and relies heavily on a simplified Gaussian linear model, with no robust empirical validation in complex LLM settings. The ghost example construction (Appendix E.5) is a weak link—its small MSE values suggest feasibility, but there’s no analysis of its representativeness or stability, and the process is under-specified (e.g., optimization details). This claim needs stronger empirical support beyond linear regression.
Claim 3 (Generalization Bound) :
Problematic: The experimental support is solid for linear regression but doesn’t fully convince for broader ICL applications due to task simplicity.
Methods And Evaluation Criteria: 1. The methods are heavily theoretical and tailored to a one-layer attention model, which oversimplifies multi-layer LLMs like GPT-2. Extending to multi-layer settings would better match the application. Ghost example construction lacks a clear benchmark or prior method for comparison, making its novelty sensible but its practicality uncertain.
2. Linear regression doesn’t capture ICL’s complexity (e.g., semantic understanding, multi-task learning), undermining applicability to real LLMs. While controlled, it lacks the richness of benchmark datasets (e.g., GLUE, SuperGLUE), reducing ecological validity.
Theoretical Claims: The theoretical claims made in the paper proved to be correct.
Experimental Designs Or Analyses: 1. Sensitivity to Covariance Shifts (Section 4.1.1, Table 1, Appendix E.2)
Strengths: The design controls the test distribution shift, aligning with the claim that ICL is sensitive to covariance changes. Using multiple prompt sizes tests the generalization bound (Claim 3). Synthetic data allows precise manipulation, which is sensible for isolating this effect.
Issues:
Overly Simple Task: Linear regression is too trivial for GPT-2, which is designed for complex NLP tasks. Covariance shifts in this context (Gaussian mixtures) don’t mirror real-world ICL challenges (e.g., domain shifts in text).
Sample Size: 6400 samples (64 batches × 100) is reasonable but lacks justification for statistical power. No confidence intervals or significance tests are reported, weakening validity.
Prompt Length Extension: Extending prompts beyond pretraining (49 to 75) tests generalization but assumes GPT-2 can extrapolate, which isn’t validated.
2. Sensitivity to Response Accuracy (Section 4.1.2, Table 2, Appendix E.3)
Strengths: Permuting labels to simulate inaccuracy is a reasonable way to test sensitivity, directly addressing Claim 4. Multiple prompt sizes align with the generalization bound hypothesis.
Issues:
Permutation Method: Randomly permuting labels introduces noise but doesn’t model realistic errors (e.g., systematic biases in prompts). The permutation rate ($1 - \alpha$) is applied uniformly, which may not reflect varying error distributions.
Task Simplicity: As with covariance shifts, linear regression limits the experiment’s relevance to ICL’s typical complexity.
Supplementary Material: I checked all the supplementary materials.
Relation To Broader Scientific Literature: The paper outlines four main contributions:
Equivalent Memory Reshaping in ICL: Prompt examples reshape pretrained weights in the attention mechanism, interpreted as memory retrieval in a modern Hopfield model (MHM).
BMA with Reshaped Memory: Constructs an explicit hidden concept expression via reshaped weights, showing ICL performs Bayesian Model Averaging (BMA) based on this concept.
Generalization Bound in ICL: Derives a bound for a one-layer attention model, showing performance improves with more prompts at a rate of $\mathcal{O}(1/t)$.
Experimental Results: Uses reshaping to explain ICL’s sensitivity to covariance shifts, response accuracy, and prompt-test similarity, validated with linear regression experiments.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Notation Inconsistencies: The notation is occasionally sloppy or inconsistent. For instance, on Page 2, $P_\theta(\cdot \mid C_t, R_t)$ is used to denote the LLM's predictive distribution, but $C_t$ and $R_t$ are matrices, while the conditioning typically expects a sequence.
Pseudo-Inverse Usage: In Section 3.1, the use of pseudo-inverses (e.g., $\tilde{K}_t^+$ and $\tilde{V}_t^+$) to define $W_1$ and $W_2$ assumes invertibility conditions that are not explicitly justified. If $\tilde{K}_t$ or $\tilde{V}_t$ are rank-deficient (common in high-dimensional settings), this formulation may be unstable or undefined without regularization.
Generalization Bound: The bound in Theorem 3.2, $\mathbb{E}{P_t}[R(P_t) - R{\text{emp}}(P_t)] \leq \frac{M C_0 (4\Gamma + 1) e^{2\Gamma}}{t}$, decreases as $\mathcal{O}(1/t)$, but the proof sketch relies on $\epsilon$-stability without justifying why the attention mechanism satisfies this property under the reshaping framework. The constant terms ($M$, $C_0$, $\Gamma$) are also not well-constrained, reducing practical interpretability.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We have revised our draft and addressed all concerns. **The revised version (changes marked in BLUE) and the code for external experiments are available in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/9m982gnk45wc1w705wjsa/AKYGxykAcCFGADGQEHCN218?rlkey=h03t7ipyv8qbt9bvesfivxcsw&st=ze6gjwhu&dl=0).**
> **C1**: Linear Regression
**Response**
Thanks for your comment. In the revised version, we extend our experimental evaluation to cover more complex tasks. Specifically, we include new results using GPT-2 on decision tree and 2-layer neural network tasks, following a similar setup as in the linear regression experiments. Additionally, we incorporate experiments with the GPT-J model on the real-world “TweetEval: Hate Speech Detection” dataset to evaluate performance on a practical sentiment classification task. Please refer to Section 4.1.4 for details.
> **C2**: Extending to multi-layer setting
**Response**
Thanks for your insightful comment. We provide an overview of how to extend our analysis to multi-layer models. The detailed discussion appears in the Section A.2 of the revised version, and we restate it here.
We regard the multi-layer transformer as iteratively retrieving memory patterns in the modern Hopfield model (MHM) through the following four steps:
* Design energy function for multi-layer MHM: We design an energy function matching the multi-layer transformer [Hoover23]. The memory pattern needs retrieval after reshaping and iteration. This step is challenging.
* Give clear expression of memory reshaping on model weights: This step is feasible following our framework in Sec. 3.1.
* Extend Bayesian method to multi-layer transformer: The Bayesian method in [Zhang23] and our work assume perfect pre-training. We must study perfect pre-training to match real multi-layer transformers. This step is challenging.
* Obtain generalization bound of ICL: This step is feasible following our framework in Appendix D.5. We need to derive the ϵ-stability of the multi-layer transformer.
[Zhang23] What and how does in-context learning learn? bayesian model averaging, parameterization, and generalization
[Hoover23] Energy Transformer
> **C3**: Notation Inconsistencies
**Response**
We thank the reviewer for pointing out the potential notational inconsistency. $C_t$ and $R_t$ are written as matrices for clarity and convenience. In practice, they represent token sequences processed by the LLM in an autoregressive manner.
> **C4**: Pseudo-Inverse Usage
**Response**
Thank you for raising this point. We acknowledge that the use of pseudo-inverses can indeed present numerical stability challenges. In this section, we primarily focus on the theoretical analysis of the in-context learning mechanism. We recognize that practical applications may require additional considerations, such as the implementation of regularization techniques, to address these stability concerns effectively. We add this to the second point of the limitation part (Appendix A.2).
> **C5**: Generalization Bound
**Response**
* For the $\epsilon$-stability part, we want to clarify that under the reshaping framework, the softmax attention maintains its form and properties. The derivation of $\epsilon$-stability is not dependent on how the model performs memory reshaping. Hence, the attention mechanism satisfies $\epsilon$-stability.
* The constants $(M, C_0, \Gamma)$ in our bound are standard parameters in theoretical analysis. $M$ represents the Lipschitz constant of the loss function. $\Gamma$ bounds the operator norm of weight matrices. $C_0$ is also a norm bound. These constants are commonly used in generalization bounds. They can be estimated empirically or bounded using standard techniques in learning theory.
---
Rebuttal Comment 1.1:
Comment: I have carefully read all the reviewers' comments as well as the authors' responses, and my final opinion is to keep the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful consideration of both the reviewers' comments and the authors' responses. Really appreciate your thorough evaluation. Thanks very much! | Summary: This paper considers, both theoretically and empirically, how a single attention head can be seen to perform in context learning (ICL) via the reshaping of an energy landscape shaped by the prompt. This work addresses the following topics:
1. Analyzing the effect of prompts through an interpretation of attention as the one-step Modern Hopfield Model (MHM) introduced by Ramsauer et al. 2020
2. Extending the work extends the work of [Zhang et al. 2023b] on Bayesian Model Averaging (BMA) for ICL to attention
3. Characterizing bounds on the performance of ICL based on the quantity and quality of prompt examples
Claims And Evidence: C1. "Equivalent memory reshaping" of the pretrained weights in attention aptly describes ICL
C2. BMA explains how ICL works by performing BMA on the constructed hidden concept
C3. A one-layer attention model using reshaped memory has a clear generalization bound for ICL
C4. Equivalent memory reshaping in ICL explains model sensitivity to:
(i) covariance shifts
(ii) response accuracy
(iii) similarity between prompts and test examples
Claims C1-C3 are fully supported by the strong theoretical contributions. I am not sufficiently confident in the empirical results to validate C4.
Methods And Evaluation Criteria: The method analyses is sound (though in its current form, limited to a single head of a single layer of attention). I have confusion about the experimental results. See weaknesses
Theoretical Claims: There are many theoretical claims and corresponding proofs in the paper. I did not have time to review many of them.
- Lemma 2.2: Memory reshaping in MHMs (by making the most similar stored pattern equal to the query) increases the separation ($\Delta$) between stored memories. This proof is correct.
Experimental Designs Or Analyses: The main experimental question is determining whether memory reshaping predicts observable behaviors of ICL, which the paper studies by considering a toy dataset consisting of a mixture of Gaussians. It is not clear from the presentation of the work how the experiments relate to the theory proposed earlier in the work, and it would be helpful if the theoretical claims (specifically, the bound) was verified in the experiments. See weaknesses.
Supplementary Material: - I had to read App E.1 Experimental Details to understand the results in the main paper
- I checked the (accessible) proof for lemma 2.2 (App D.2)
The paper links Hopfield Networks and the in-context learning behavior of LLMs in an interesting way. As far as I know, this is a novel perspective that may produce quite interesting insight when scaled to larger models.
Relation To Broader Scientific Literature: The paper links Hopfield Networks and the in-context learning behavior of LLMs in an interesting way. As far as I know, this is a novel perspective that may produce quite interesting insight when scaled to larger models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### Strengths
- **Well organized**. The paper is well organized and written. Despite no familiarity in formally studying ICL, I was able to piece together the fundamental problem that the authors are trying to solve from their background sections.
- **Novel and meaningful contributions**. The main paper represents, as far as I know, a rigorous and novel perspective of ICL as memorization and energy minimization. Whether this scales to real Transformers with multiple heads of attention and multiple distinct layers is unclear (see weaknesses), but the approach is nonetheless meaningful.
### Weaknesses
- **Toy evaluations may not extend to language**. The theory and experiments consider ICL like a regression task, where the task of the model is only to predict the single next token. In traditional AR LLMs, ICL may be used to predict multiple tokens, and it is unclear how this paper generalizes to real scenarios and real-sized models.
- **Theory and evaluations are limited to a single layer, single attention head**. Again, the study simplifies Transformers dramatically. Unclear how it will generalize
- **Unclear takeaways from the experimental results**. It is not clear how to interpret the experimental results. Specifically, where do we see plotted the generalization bound against empirically observed behavior? What do the dashed gray lines mean in Figs. 1 & 2? Why does performance in Tables 1,2, & 3 peak at 45 ICL samples and decline (rather significantly) after? Please make it easier to understand how the experiments align with the theory
### Summary
There are core aspects of this work that I did not understand going into this paper, but the authors gave a proper background to the topic and presented their results in a clear and organized way. However, the paper has its flaws. The missing motivation for the experiments and the minimal discussion of the experimental results makes it unclear what the benefits of this approach are. It is also unclear if/how well the analyses will scale beyond the toy evaluations and models studied in this work.
Nonetheless, the strong theoretical analyses, the novel perspective, overal professionalism of the paper compel me to believe this paper is a proper fit for ICML. I am happy to increase my score if any of my misunderstandings can be clarified throughout the rebuttal process.
Other Comments Or Suggestions: - "Reshaped attention head" is mentioned for the first time in [L039 right col], but is not common knowledge and is not defined at this point in the paper.
### Typos
- MHM in the abstract is not defined, and MHM is not an appropriate acronym for "associative memory model" as mentioned in the introduction. It is first introduced in [L029 right col], but should be mentioned earlier.
- L368 "We use 4 different GM distribution" when only 3 are listed?
Questions For Authors: Q1: Lemma 2.2: How does reshaping matrix $W$ play into the **energy function** of the MHM? Is it applied s.t. $\tilde{\xi}_k = W \xi_k$?
Q2: Sec 3.1: I'm a little confused about memory reshaping in the context of a prompt. Generally, autoregressive transformers take all prompt examples at once and the attention weight is not updated. Is this paper stating that attention processes all prompt tokens at once? Or that there is some kind of recurrence over the tokens?
Q3: How should I interpret $d_c$ and $d_r$ in [L064 right col]? Do you consider a case where a each query and response is only a single token (in which case $d_c = d_r = d$, the embedding size of the model)? This is not described.
Q4: Eq. (2.1) is a bit confusing. It seems to indicate that the LLM is pretrained only to predict queries $c_{t+1}$, but autoregressive LLMs trained to predict all next tokens. Also, $C_t$ and $R_t$ are from the prompt, not necessarily the pretraining set. What is the connection here?
Q5: Can you explain why the "ghost examples" [L200-211] are necessary for testing this theory? Is it only to make it so the experiments vectorize across GPUs correctlY? I'm afraid I don't understand why ghost examples are necessary for the theory or the experiments.
Q6: It is not clear to me how the results in Table 1 use the GPT-2 model at all. Can you please explain the experiments in more detail?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We have revised our draft and addressed all concerns. **The revised version (changes marked in BLUE) and the code for external experiments are available in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/9m982gnk45wc1w705wjsa/AKYGxykAcCFGADGQEHCN218?rlkey=h03t7ipyv8qbt9bvesfivxcsw&st=ze6gjwhu&dl=0).**
> **C1**: Single next token
**Response**
We provide details on how to extend our method to multiple tokens in the 5th term of the limitation section (Appendix A.2) of the revised version. Please see the revised version for details.
> **C2**: Simplifies Transformers dramatically
**Response**
We provide an overview of how to extend our analysis to multi-layer models. The detailed discussion appears in the 4th term of the limitation section (Appendix A.2) of the revised version. Please see the revised version for details.
> **C3**: Interpretation of the experimental results
**Response**
* **Generalization Bound**: Figures 1, 2, and 3 show that the R-squared value increases when the number of in-context examples is below 50—the same number used during pretraining. This supports our observation that the generalization bound improves with more prompt examples.
* **Dashed Gray Lines**: These indicate the threshold of 50 in-context examples, which matches the pretraining setup.
* **Performance Decline Beyond 50**: During pretraining, we use a prompt length of 50 (Appendix E.1). When evaluated on longer contexts, GPT-2 shows a drop in R-squared. We conjecture that this is due to its use of absolute positional encodings (Remark 4.1).
> **C4**: Motivation and benefits of the experiments
**Response**
**Motivation for the experiments**:
We design these experiments to validate three key properties of ICL: the model’s sensitivity to (i) covariance shifts, (ii) response accuracy, and (iii) the similarity between prompts and test examples. We then provide theoretical explanations for these properties based on our results. Our empirical and theoretical validations provide the practical guidelines for improving ICL performance.
**Benefits of this approach**:
Memory reshaping provides a theoretical explanation for three key properties of ICL.
**Our results can scale beyond the toy evaluations well**. In the revised version, we extend our experimental evaluation to decision trees, 2-layer neural network tasks, and practical sentiment classification tasks. Please refer to Section 4.1.4 for details.
> **C5**: Typos
**Response**
We have added the definition and corrected the typos in the revised version. The reshaped attention head refers to a linear transformation of the key and value matrices.
> **Q1**:
**Response**
Yes, we apply it as $\tilde{\xi}_k = W \xi_k$.
> **Q2**:
**Response**
We consider that transformers take all prompt examples at once, and the attention weights are not updated.
> **Q3**:
**Response**
Yes, we consider the single token case.
However, there is no constraint requiring $d_c = d_r = d$, as stated in the second paragraph of Section 3.1.
Our method can also be extended to multi-token settings, with details provided in the Limitations section (Appendix A.2) of the revised version.
> **Q4**:
**Response**
Yes, in Eq. (2.1), we do not consider predicting multiple tokens.
In Eq. (2.1), we denote the pretrained LLM as $P_{\theta}$, where $\theta$ represents the pretrained model parameters. $C_t$ and $R_t$ are independent of $\theta$, Eq. (2.1) simply describes using $P_{\theta}$ to predict $c_{t+1}$ based on the prompt $C_t$ and $R_t$.
> **Q5**:
**Response**
We apologize for the confusion. The answer is No.
The importance of the ghost example is for mathematical consistency. Here are some clarifications.
* As shown in Remark 3.1, we use the proxy attention based on the BMA formulation, and this is not the actual attention in LLM.
* Consider the setting where we only have the input $c_{t+1}$ without $C_t$ and $R_t$. For the proxy attention mechanism equation (3.1), $K_t$ and $V_t$ have no meaning.
* Therefore, we construct $t$ ghost examples and prepend them before $c_{t+1}$.
* The aims are that the Eq. (3.1) is reasonable and the t ghost examples do not change the response of the $c_{t+1}$ in ICL.
> **Q6**:
**Response**
We apologize for the confusion. Due to space constraints, we provide the full details in Appendix E. Below, we summarize the core idea:
* Pretraining of GPT-2: We use the GPT-2 architecture but pretrain it on synthetic data generated from a linear regression setting. Each pretraining sample contains 50 query-response pairs. We apply MSE loss as training loss to predict the response for each pair.
* As described in Appendix E.2, we evaluate model performance when the number of in-context examples is less than, equal to, or greater than 50. We use R-squared as the evaluation metric and report values at 15, 30, 45, 60, and 75 examples in Table 1. The full R-squared curve is shown in Figure 3.1(a).
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions and updating your draft for clarity. I believe the novel contributions and perspective of this work outweigh the limitations (which are now clearly discussed in Appendix A.2 of the updated draft). I am satisfied with the updated changes in the manuscript and believe the paper is now a stronger submission. I am updating my score to a 4.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and for the improvement in the score. We are pleased to have addressed your concerns and truly appreciate your feedback. Thanks very much! | Summary: **Main Findings**:
1. Memory Reshaping: This paper proposes that ICL can be understood as a process of memory reshaping within the Hopfield model framework. Specifically, the input prompt examples can reshape the energy landscape of the probabilistic energy-based memory model, thus relocating the distribution of local minima.
2. Bayesian Model Averaging with Reshaped Memory: The authors formulate the explicit expression, demonstrating that ICL performs conditional estimation based on this hidden concept. Besides, softmax attention can approximate Bayesian Model Averaging.
3. Generalization Bound: This paper derives a generalization bound for ICL in a one-layer attention model, demonstrating that the performance of ICL improves with an increasing number of prompt examples.
**Main Algorithmic/Conceptual Ideas**:
1. Memory Reshaping Mechanism: This paper introduces the concept of memory reshaping, where the pre-trained attention mechanism's weights are linearly transformed by the prompt examples.
2. Hidden Concept Learning: The authors argue that ICL implicitly learns a hidden concept from the prompt examples, which is then used for conditional estimation. This hidden concept is shown to be a shared property among the prompt examples.
3. Hopfield Model Interpretation: This paper provides a new perspective on how transformers process and adapt to context by framing ICL as memory retrieval in the modern Hopfield model.
**Main Results**:
1. Theoretical Results: This paper provides mathematical formulations and proofs for memory reshaping, the connection to BMA, and the generalization bound.
2. Empirical Results: The experiments confirm the theoretical predictions, showing clear trends in ICL performance with respect to covariance shifts, response accuracy, and prompt-test query similarity.
Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The theoretical analysis provides a solid foundation, and the empirical experiments validate the key findings.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are highly suitable for the problem of understanding ICL in LLMs. The theoretical framework is rigorous, and the experimental design effectively validates the claims. The choice of metrics and baselines provides a clear and interpretable evaluation framework.
However there is still room for further exploration in more diverse tasks and more models. Considering that the model used in the paper (GPT-2) and the task (linear regression) are rather simple, whether the findings are general to more complex models and tasks remains to be proved.
Theoretical Claims: The theoretical claims made in the paper are supported by clear and convincing evidence. The proofs are mathematically rigorous and logically sound, and the empirical results validate the theoretical predictions.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and well-structured, providing clear evidence for the theoretical claims. The choice of tasks, models, and metrics is appropriate for the problem at hand. However, there are areas where further exploration could strengthen the findings and enhance the generalizability of the results.
1. Task Diversity: Testing the framework on more complex tasks (e.g., classification, natural language inference) could provide deeper insights into the behavior of ICL.
2. Model Diversity: Evaluating the framework on more advanced models (e.g., GPT-3, GPT-4) could strengthen the generalizability of the findings.
3. Real-World Data: Using real-world datasets and more diverse distribution shifts could provide a more realistic evaluation of ICL.
Supplementary Material: I check the supplementary material, especially the Proofs of Main Text part.
Relation To Broader Scientific Literature: This paper offers a new perspective on In-Context Learning by interpreting it as memory retrieval in the modern Hopfield model. This novel interpretation can bring contributions to this area.
However, the guidelines provided can be further improved. Most of the guidelines are quite intuitive like 'select similar examples' and 'use more similar examples'. This resricts the contributions to the broader scientific literature. If the authors could provide more novel guidelines, I believe this will bring more contributions to not only the interpretability of ICL and LLM, but can also help the users of LLM.
Essential References Not Discussed: No missing related works.
Other Strengths And Weaknesses: **Strength**:
1. Novel Interpretation of ICL: The paper offers a new perspective on In-Context Learning (ICL) by interpreting it as memory retrieval in the modern Hopfield model. This interpretation is creative and provides a fresh understanding of how ICL works.
2. Theoretical Contributions: The paper makes significant theoretical contributions by deriving a generalization bound for ICL and showing how softmax attention can approximate Bayesian Model Averaging. These results provide a deeper understanding of the underlying mechanisms of ICL.
3. Clarity: The paper is well-organized and clearly presented. The theoretical framework is explained in detail, and the mathematical derivations are rigorous and easy to follow.
**Weaknesses** :
1. Limited Scope: The theoretical analysis focuses on a single-layer attention model, which may not fully capture the complexity of modern LLMs with multiple layers. Extending the analysis to multi-layer models would strengthen the findings.
2. Linear Regression Task: The experiments are limited to a linear regression task, which is relatively simple. Testing the framework on more complex tasks would provide a more comprehensive evaluation of ICL.
3. Lacking novel guidelines: This paper provides a novel understanding of ICL and get several guidelines. However, most of the guidelines are rather intuitive, like "As demonstrated in Section 4.1.3, selecting prompts similar to the test query enhances in-context learning performance." and "By Theorem 3.2, increasing the number of relevant prompt examples reduces the ICL generalization error". These guidelines of course can prove the proposed understanding, but may lack contributions. If the authors can further reach a novel guideline based on the proposed interpretation theoretically, and prove it empirically, this paper will bring more contributions to the research area.
Other Comments Or Suggestions: No further comments.
Questions For Authors: 1. Is the proposed interpretation general to more complicated models and more challenging tasks?
2. Are there more further guidelines your interpretation can provide?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We have revised our draft and addressed all concerns. **The revised version (changes marked in BLUE) and the code for external experiments are available in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/9m982gnk45wc1w705wjsa/AKYGxykAcCFGADGQEHCN218?rlkey=h03t7ipyv8qbt9bvesfivxcsw&st=ze6gjwhu&dl=0).**
> **C1**: Limited Scope
**Response**
Thanks for your insightful comment. We provide an overview of how to extend our analysis to multi-layer models. The detailed discussion appears in the Section A.2 of the revised version, and we restate it here.
We regard the multi-layer transformer as iteratively retrieving memory patterns in the modern Hopfield model (MHM) through the following four steps:
* Design energy function for multi-layer MHM: We design an energy function matching the multi-layer transformer [Hoover23]. The memory pattern needs retrieval after reshaping and iteration. This step is challenging.
* Give clear expression of memory reshaping on model weights: This step is feasible following our framework in Sec. 3.1.
* Extend Bayesian method to multi-layer transformer: The Bayesian method in [Zhang23] and our work assume perfect pre-training. We must study perfect pre-training to match real multi-layer transformers. This step is challenging.
* Obtain generalization bound of ICL: This step is feasible following our framework in Appendix D.5. We need to derive the ϵ-stability of the multi-layer transformer.
[Zhang23] What and how does in-context learning learn? bayesian model averaging, parameterization, and generalization
[Hoover23] Energy Transformer
> **C2 & Q1**: Linear Regression Task: Is the proposed interpretation general to more complicated models and more challenging tasks?
**Response**
Thanks for your valuable question. The answer is **Yes**.
In the revised version, we extend our experimental evaluation to cover more complex tasks. Specifically, we include new results using GPT-2 on decision tree and 2-layer neural network tasks, following a similar setup as in the linear regression experiments. Additionally, we incorporate experiments with the GPT-J model on the real-world “TweetEval: Hate Speech Detection” dataset to evaluate performance on a practical sentiment classification task. Please refer to Section 4.1.4 for details.
> **C3 & Q2**: Lacking novel guidelines: Are there more further guidelines your interpretation can provide?
**Response**
Thank you for your comment. Our interpretation indeed offers further guidelines that help explain phenomena like hallucination in LLMs and avoid hallucination. For example:
* **Biased Hidden Concept**:
Eqs. (2.2)–(2.4) illustrate how the hidden concept influences the in-context learning response under a Bayesian approach. In Lemma 3.1, we derive an explicit expression for this hidden concept from the memory reshaping perspective. Irrelevant or inconsistent prompt examples create a biased hidden concept, which in turn leads to hallucination.
* **Insufficient Separation of Memory Patterns**:
The modern Hopfield model (MHM) shows that the separation of memory patterns affects retrieval error. We conjecture that unreasonable prompt examples reduce the separation between different memory patterns, increasing the retrieval error and contributing to hallucination.
Our framework provides a clear and practical guideline: to reduce hallucination, prompt examples should be relevant and consistent with the test input. To validate this, we refer to the experiments on the model’s sensitivity to the similarity between prompts and test examples. These experiments can be used here, including those with the GPT-2 model on linear regression, decision tree, and 2-layer neural network tasks.
---
Rebuttal Comment 1.1:
Comment: The rebuttal solves most of my concerns. I keep my rating of weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful consideration of the responses. Really appreciate your thorough evaluation. Thanks very much! | Summary: This paper proposes a model to interpret in-context learning as associative retrieval. The paper argues that in-context learning can be seen as a linear transformation of key-value weights. Under a GLM assumption, their modified attention construction (BMA Attention) is shown to approximate softmax attention and converge to the presumed target, in the limit of infinite in-context examples in the prompt. Their theory is supported using a linear regression experimental setup within the ICL setting.
Claims And Evidence: There are a lot of claims made in the paper, but often, the corresponding analysis/results either does not present strong enough evidence to establish them or connect them back well to the larger picture. I list these below:
- Connection to Associative Memory: In line 35, col 2, it is claimed that the paper defines an energy function for LLMs to interpret it as a Hopfield model and interpret ICL as reshaping attention heads. However, in the following discussion in 3.1, I see no mention of an energy function for the LLM, but only an analysis using a modified attention mechanism. Thus, the only connection to retrieval seems to be the proposed memory reshaping mechanism, which appears as a tenuous link to reducing retrieval error in Lemma 2.2.
- Relevance of Hidden Concepts: I am unsure as to how the notion of hidden concepts introduced in section 2.2 fits the larger context. Specifically, the generative model (with a GLM assumption) is used in section 3.2 to yield a certain prediction model that approximates z* via bar{z}t. However, I do not follow why the true target v{t+1} would be a function of the synthetic prompt form in 3.7, or what intuition does 3.9 offer within the larger context of the associative retrieval.
- Mathematical Impreciseness: In 3.4, it is implicitly assumed that there exists a linear transformation to go from the ghost prompts to the true keys and values, which will simply not be true if the ghost examples constitute of lower rank matrices. The discussion should note that the pseudo-inverse structure only minimizes the approximation error to the true matrix and does not directly approximate it. Similarly, in Assumption 3.1, (k, v) are assumed to be iid, without defining any underlying sampling distribution, either over them or the original (c, r) pairs. In addition, I am not even sure if this assumption holds true either in their model (which is a 1-layer transformer operating directly on embeddings) because of the weight matrix or the joint optimization process over the ghost examples that naturally introduces dependencies or in practice, since the justification that layer norm will somehow achieve this is not clear in the sense that layer norm only operates on a per token basis and affects no intra batch dependencies.
- Missing Proof Sketches: The proof sketch is missing for certain theorems and lemmas. But more importantly, the intuition on how to interpret this result in the larger context of the paper is often missing as well, meaning that as a reader, it is hard to get a high-level understanding of what's being said.
It is certainly possible that I missed some links that the authors had in mind while writing the paper. However, I still stress that they should be made more explicit to make the paper more coherent for the general reader.
Methods And Evaluation Criteria: While the experimental validation of the proposed theory is not the main contribution of the paper, there are some issues with it as well.
- The paper attempts to validate the theoretical claims with linear regression ICL experiments, interpreting their findings from a memory shaping viewpoint. However, the experimental setup is rather general, and the resulting trends would be predictable even without any intuition of reshaping. I understand that a claim like this is not easy to establish in general, but the current setup does not lend a lot of extra credence either.
- The setting of linear regression is perhaps not the most informative for LLMs, and the authors should consider other experiments on language centric datasets [see setups in 1, 2] for a stronger justification. Additionally, certain details for the experimental setup (such as how to measure similarity between examples in 4.1.3) are missing and need to be specified.
References:
[1]: In-Context Learning with Many Demonstration Examples - Li et al., 2023
[2]: Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition - Xiong et al., 2024
Theoretical Claims: Proofs were checked on a high level, with no obvious issues found.
Experimental Designs Or Analyses: See Methods and Evaluation Criteria
Supplementary Material: Proofs were checked on a high level, with no obvious issues found. Experimental details were looked at.
Relation To Broader Scientific Literature: The paper belongs to the line of work which interprets certain mechanisms in transformers as a retrieval in associative memories, with the primary contribution being an explicit mechanism of memory reshaping to explain ICL.
Essential References Not Discussed: There is some missing discussion of related literature and what exactly are the paper's unique contributions.
- The paper borrows the setup heavily from [1], but the line to demarcate their own novel results from the prior work is not made very clear for the reader.
- Discussion of ML theory work [see 2, 3 as a starting point] that interprets Transformers as associative memories, and how it differs from their work is missing.
References:
[1] What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization - Zhang et al., 2023
[2] Birth of a Transformer: A Memory Viewpoint - Bietti et al., 2023
[3] Scaling Laws for Associative Memories - Cabannes et al, 2024
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We have revised our draft and addressed all concerns. **The revised version (changes marked in BLUE) and the code for external experiments are available in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/9m982gnk45wc1w705wjsa/AKYGxykAcCFGADGQEHCN218?rlkey=h03t7ipyv8qbt9bvesfivxcsw&st=ze6gjwhu&dl=0).**
> **C1**: No mention of an energy function
**Response**
Thanks for the comment.
We use a toy model (one-layer attention) to study the in-context learning of LLM. For the one-layer attention, the energy function of this toy model is equivalent to that of MHM. We argue such an atomic setting is prevalent for analytical feasibility in literature [XIe22, Zhang23].
[Xie22] An Explanation of In-context Learning as Implicit Bayesian Inference
[Zhang23] What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization
> **C2**: Notion of hidden concepts
**Response**
We apologize for the confusion.
The main point arises from memory reshaping in Eq. (3.5). To interpret ICL using memory reshaping, we must incorporate synthetic prompts, as noted in Remark 3.2. In Eq. (3.7), we aim to demonstrate the formulation of Eq. (3.6) with memory reshaping, thus including the synthetic prompts.
Eq. (3.9) explicitly formulates Lemma 2.1, relating to the mathematical formulation of ICL. We incorporate memory reshaping in Eq. (3.9) to demonstrate how it influences the hidden concept and subsequently impacts ICL performance.
> **C3**: Linear transformation and pseudo-inverse
**Response**
Thanks for your insightful comments. **We acknowledge these limitations and have added them to the 2nd and 3rd terms of limitation part (Appendix A.2 of the revised version).** Please refer to the revised version for details. We promise to add a technical remark to the main text in the final version if space allows.
Furthermore, it is acceptable that Assumption 3.1 does not define an explicit underlying sampling distribution, as the theoretical analysis remains valid under general distributional conditions without requiring a specific form.
> **C4**: Missing proof sketches
**Response**
We have added proof sketches for Lemma 2.1, Lemma 2.2, Lemma 3.1, and Theorem 3.1 ahead of the official proofs in the Appendix. We also provide the following high-level overview of our theoretical results. We promise to move these to the main text in the final version, if space allows.
**High-Level Understanding of Theoretical Results:**
* Lemma 2.1: Interprets In-Context Learning (ICL) in Large Language Models (LLMs) through Bayesian Model Averaging (BMA).
* Lemma 2.2: Shows that memory reshaping in Modified Hopfield Models (MHMs) reduces retrieval errors. Given the equivalence of the energy functions between MHMs and LLMs, this motivates us to apply memory reshaping to interpret ICL in LLMs.
* Lemma 3.1: Integrates memory reshaping into the Bayesian Model Averaging framework, providing a more explicit interpretation of ICL under the Gaussian linear case than Lemma 2.1.
* Theorem 3.1: Establishes the connection between Bayesian Model Averaging and Softmax Attention, showing that the explicit interpretation provided by Lemma 3.1 corresponds exactly to ICL within the attention mechanism.
> **C5**: The experimental setup is rather general
**Response**
Thank you for your helpful comment. We agree this is a limitation and have added it to the **6-th item of limitations section (Appendix A.2 of the revised version).** Please refer to the revised version for details. We also suggest one possible solution. However, verifying this solution is difficult. We leave that for future work.
> **C6**: Linear regression is simple
**Response**
Thanks for your valuable comment.
In the revised version, we extend our experimental evaluation to cover more complex tasks. Specifically, we include new results using GPT-2 on decision tree and 2-layer neural network tasks, following a similar setup as in the linear regression experiments. Additionally, we incorporate experiments with the GPT-J model on the real-world “TweetEval: Hate Speech Detection” dataset to evaluate performance on a practical sentiment classification task. Please refer to Section 4.1.4 for details. The results follow a similar pattern to the linear regression setting, consistent with our theoretical predictions.
We apologize for the confusion. We use the cosine similarity between different queries as the similarity measure. We have added this and other details to the revised version.
> **C7**: Essential references
**Response**
Thank you for your question. We have included a comparison with these three works in the revised version. Please see Appendix A.1 of the revised version for details.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal; I have taken a look at the updated submission and appreciate the efforts to address my concerns. I still have a few lingering issues about the presentation:
C1) I am aware of the connection between MHMs and one layer attention models. However, there is no correspondence demonstrated between the defined equation for retrieval (eqn. 3.8/3.9) and the energy function of MHMs (or any energy function for that matter). Therefore, I again stress that in the context of this submission, claiming that "we define an energy function for LLMs" is misleading for the reader. The authors should either establish this or consider rephrasing this.
C3) With regards to keys and values iid assumption, I do not mean to say that the authors should specify a particular distribution. Rather, since K, V are functions of the prompts and query tokens, the authors should state what is being assumed over the actual sampling distribution. I would also urge the authors to at least include the theoretical limitations as remarks next to the pertinent results, as opposed to deferring it to the appendix for providing the readers with appropriate context.
With these changes, I'd be happy to improve my recommendation for the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your detailed review and feedback. We appreciate your willingness to consider improving your recommendation.
We have revised our draft and addressed your concerns. The revised version (changes marked in BLUE) and the code for external experiments are available in this **[anonymous Dropbox folder](https://www.dropbox.com/scl/fo/9m982gnk45wc1w705wjsa/AKYGxykAcCFGADGQEHCN218?dl=0&e=2&rlkey=h03t7ipyv8qbt9bvesfivxcsw&st=ze6gjwhu)**.
---
>**C1**: Revision of the Energy Function Description.
**Response**
Thank you for your suggestion regarding the description of the energy function. We agree with your assessment and have made the following revisions to the manuscript to ensure clarity:
* `line 032-034, column 2`: Revised "we define an energy function for LLMs" to "we parametrize this energy function";
* `line 129-131, column 2`: Revised "The energy function of LLM" to "The energy function of one-layer attention".
>**C2**: Clarification on Assumptions and Theoretical Limitations.
**Response**
Thank you for your clarification on the iid assumption related to keys and values. We apologize for any previous confusion and have provided additional details in the manuscript as follows:
* **Considering the actual sampling distribution, Assumption 3.1 further assumes that $\\{c_t, r_t\\}^T_{t=1}$ are independently and identically distributed.**
* We have added this and theoretical limitations to Remark 3.3 (`line 178-183, column 2`) and Remark 3.6 (`line 268-274, column 1; line 220-222, column 2`).
---
We hope these revisions address your concerns and contribute to the manuscript's improvement. We are grateful for your insightful feedback. | null | null | null | null | null | null |
Private Federated Learning using Preference-Optimized Synthetic Data | Accept (poster) | Summary: The paper introduces POPri, a novel method for private on-device learning that leverages DP synthetic data generated via LLMs. They use Direct Preference Optimization (DPO) to fine-tune LLMs for generating high-quality synthetic data. POPri outperforms existing methods, in terms of next-token prediction accuracy, particularly on the newly introduced LargeFedBench benchmark.
Claims And Evidence: See weaknesses and questions.
Methods And Evaluation Criteria: See weaknesses and questions.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: See weaknesses and questions.
Supplementary Material: Yes, I went over the supplementary material.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Strengths**
1. The authors introduce LargeFedBench, a new federated benchmark for LLM evaluation, which includes Congressional Speeches and bioRxiv datasets. This benchmark is designed to avoid contamination and provides a large number of clients, making it a valuable resource for future research.
2. The paper introduces a novel approach to private on-device learning by framing it as an LLM preference optimization problem. This is a significant departure from traditional DP-FL and synthetic data methods, which rely on prompt engineering or in-context learning.
**Weakness**
1. It presents very limited downstream task evaluations. We can hardly judge if it works for a broader scope of downstream tasks.
2. The experiments are primarily focused on text data (e.g., bioRxiv, Congressional Speeches). It would be beneficial to see how POPri performs on other types of data, such as images or structured data, to assess its generalizability.
3. This paper uses DPO to improve synthetic data generation, but it does not compare it with other possible methods, such as RLHF.
4. While the paper briefly mentions the ethical implications of using LLMs trained on public data, it does not delve deeply into the potential risks of synthetic data generation, such as the possibility of generating biased or harmful content.
Other Comments Or Suggestions: Please do not change the template format.
Questions For Authors: 1. The paper uses DPO to fine-tune the LLM. Why was DPO chosen over other preference optimization methods like RLHF? What are the specific advantages of DPO in the context of generating DP synthetic data?
2. The paper claims that POPri has significant communication and computation cost advantages over DP-FL. Can you break down the specific cost savings in terms of client upload/download communication and computation, and how these savings are achieved?
3. In the context of POPri, how does the choice of the rejected sample (e.g., middle-ranked vs. last-ranked) impact the quality of the synthetic data generated? What is the rationale behind selecting the middle-ranked sample as the rejected sample?
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']
Ethical Review Concerns: This submission significantly changed the original ICML template.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **It presents very limited downstream task evaluations…It would be beneficial to see how POPri performs on other types of data**
On more tasks: Thank you for this comment—we are working on an evaluation of a text classification task. Due to the tight deadline of the rebuttal preparation period, we will provide the results in the coming days as the results finish running.
On other modalities: We completely agree that our evaluation does not allow one to generalize to other modalities. Focusing on a single modality is common in recent DP synthetic data papers, including several works that have won oral or spotlight awards at recent top ML conferences [1-5]. We will clarify in the introduction and abstract that our paper covers only the text modality.
[1] Lin et al, “Differentially private synthetic data via foundation model APIs 1: Images”, ICLR 2023
[2] Xie, C. et al, Differentially private synthetic data via foundation model APIs 2: Text, ICML 2024 (Spotlight)
[3] Hou, C. et al, Pre-text: Training language models on private federated data in the age
of LLMs, ICML 2024 (Oral)
[4] Qian et al. "Synthcity: a benchmark framework for diverse use cases of tabular synthetic data." NeurIPS 2023
[5] Dockhorn et al. "Differentially Private Diffusion Models." Transactions on Machine Learning Research 2022.
[6] Lin et al, "Differentially Private Synthetic Data via APIs 3: Using Simulators Instead of Foundation Model." 2025
**Why was DPO chosen over other preference optimization methods like RLHF?**
Thank you for your comment. We compared against IPO (Identity Preference Optimization) [7] in our paper which is a common mainstream alternative method to DPO in Table 3 of the appendix. We found that IPO did not perform as well as DPO.
[7] A General Theoretical Paradigm to Understand Learning from Human Preferences
Gheshlaghi Azar et al., 2023, https://arxiv.org/abs/2310.12036
**While the paper briefly mentions the ethical implications of using LLMs trained on public data, it does not delve deeply into the potential risks of synthetic data generation, such as the possibility of generating biased or harmful content.**
Thank you for noting this—we will expand our Impact Statement to discuss each of these important points.
**Can you break down the specific cost savings in terms of client upload/download communication and computation, and how these savings are achieved?**
Thank you for the comment–our description of the communication and computation cost savings was not clear. First, we describe the communication savings. In our experiment setting, DP-FL clients have to upload and download a DistilGPT2 model, which consists of 82M floats. Meanwhile, POPri communicates K x J = 1800 x 10 = 18000 text embeddings of size 384 for a total of 7M floats to download. POPri clients communicate back a vector of size 18000 (one score per text) to the server, resulting in 18K floats to upload. We have
| Method | Download (floats) | Upload (floats) |
|--|--|--|
| DP-FL | 82M | 82M |
| POPri | 7M | 18,000 |
| Reduction Factor (DP-FL / POPri) | **~11.7×** | **~4555.6×** |
Next, we provide a cost savings calculation in terms of compute. Experiment setting: we sample 1000 clients per round on the bioRxiv dataset:
| Method | Client Runtime (V100 GPU) in GPU seconds | Server Runtime (A100 GPU) in GPU seconds |
|--|--|---|
| DP-FL | 4.8 sec | - |
| POPri | 0.01 sec | 13,547.84 sec |
The client compute cost is small for POPri (about 500x smaller) than DP-FL. Meanwhile, the server side compute cost is small for FedAvg, and large for POPri. POPri is suitable in cases where (1) we care a lot about the downstream model quality, (2) the client computation and/or communication is expensive, and (3) server computation is relatively cheap and powerful. We will include this analysis in the paper.
**In the context of POPri, how does the choice of the rejected sample (e.g., middle-ranked vs. last-ranked) impact the quality of the synthetic data generated? What is the rationale behind selecting the middle-ranked sample as the rejected sample?**
This choice was a heuristic based on our experiments, discussed partially in Appendix E.4. Choosing very low-ranked samples (namely, rank 10) was bad–during preference optimization, the model was learning to distinguish between a good sample and a very bad sample, which distorted its performance. We needed to select more challenging preference pairs for DPO. However, if we had chosen rank 3 as the negative sample (for instance), the DP noise would be more likely to swap the ordering of the first and second-ranked samples. Hence, as an empirically-motivated compromise, we chose the 5th-ranked sample, which performed the best in our ablation. | Summary: The authors introduce a client-level differentially private (DP) federated learning algorithm that leverages synthetic data generation assisted by large language models (LLMs). Unlike previous approaches that rely only on prompting for synthetic data generation, their proposed POPri algorithm fine-tunes the LLM's weights using Direct Preference Optimization (DPO). Through extensive experiments, they demonstrate the effectiveness of POPri, showing improved performance compared to existing methods.
Claims And Evidence: Some of the claims are sort of confusing to me, especially on the part of privacy guarantees. In particular, I have the following questions:
1. The threat model is unclear to me. Are the authors considering trusted server or un-trusted server model? Also, why do the authors add noise both on client and server sides (as in line 17 and 20 in Algorithm 1)?
2. The DP nearest neighbors algorithm shown in Algorithm 2, is it a differential private algorithm since I don't see any privatization technique used in Algorithm 2?
3. what do $e_{pri}$ and $e_j$ represent in Algorithm 2? Is $E_i$ just one embedding or a collection of embeddings?
Methods And Evaluation Criteria: The proposed methods seem reasonable to me. However, the approach appears to be a straightforward extension of private evolution algorithms where POPri utilizes privatized clients responses to fine-tune the LLMs responsible for data generation. This may limit the paper's overall contribution. Additionally, I am concerned about the computational complexity of this method, as fine-tuning is likely to be significantly more resource-intensive than prompting.
Theoretical Claims: There are no theoretical claims in the paper but I suggest the authors giving a clear argument on the privacy analysis.
Experimental Designs Or Analyses: The experimental design makes sense to me. However, the performance improvement is not particularly surprising, given that the proposed method is significantly more resource-intensive than the baselines. Compared to DP-FedAvg and DP-FTRL, POPri benefits from LLM-assisted synthetic data generation. Meanwhile, compared to PE, POPri chooses to fine-tune the LLMs instead of prompting, which is more computationally demanding.
That said, the paper does not clearly address the cost-utility tradeoff of this approach. While it covers communication and computation complexity on the client side, most of the computation occurs on the server side. I believe it is essential to include a discussion on the overall computational complexity of the proposed method (both client and server sides) relative to existing approaches.
Supplementary Material: I looked in the supplementary material and have several questions on Algorithm 2 mentioned in the section "Claims And Evidence".
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: In Line 20 of Algorithm 1, why does the server aggregate the scores from all clients instead of the clients in $\mathcal{S}^t$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Are the authors considering trusted server or un-trusted server model? Also, why do the authors add noise both on client and server sides (as in line 17 and 20 in Algorithm 1)?**
Thank you for pointing out the typo; we should only add noise in line 17. We consider the server to be untrusted. Our reported (epsilon, delta) are with respect to an adversary that can see only aggregated updates at every round during training, similar to past work on FL[1,2]. We will clarify.
[1] Kairouz et al, 2021 “Practical and Private (Deep) Learning without Sampling or Shuffling” https://arxiv.org/abs/2103.00039
[2] McMahan et al, 2017 “Learning Differentially Private Recurrent Language Models” https://arxiv.org/abs/1710.06963
**I don't see any privatization technique used in Algorithm 2? what do e_pri and e_j represent in Algorithm 2? Is E_i just one embedding or a collection of embeddings?**
First, we would like to apologize for an oversight; Algorithm 2 is not the one actually used in our experiments. In POPri the clients actually score each synthetic sample based on the average cosine similarity between the synthetic embedding and their private data embeddings. The experimental results reported in the paper use this cosine similarity scoring method. Updated pseudocode for Algorithm 2 is below:
>
> 1. **Input:** Embeddings of private client data $E_i$ for $i \in S^t$, embeddings of synthetic data $E_{syn}$, total synthetic samples $M = K \times J$
> Scores $\leftarrow 0^M$
> 2. $\text{Scores}[j] = \frac{1}{|E_i|} \sum_{e_{pri} \in E_i} \frac{\langle e_{pri}, e_j \rangle}{\|e_{pri}\| \|e_j\|}$ for $e_j \in E_{syn}$
> 3. **return** $Scores/\|Scores\|$ if $\|Scores\| > 1$ else Scores
We switched from nearest-neighbor (NN) to cosine similarity primarily performed better. As shown in this experiment on the bioRxiv dataset with full participation (https://imgur.com/a/pg1H5iq), using cosine similarity results in a significantly lower FID score compared to the NN histogram. This improvement is because NN histograms produce sparser scores—often assigning zero to all synthetic samples associated with a given prompt in POPri. The chosen and rejected samples end up being essentially random. By contrast, cosine similarity provides more informative and continuous scoring.
On privatization: we use the gaussian mechanism (line 17) where clients add noise. Their outputs are securely aggregated [3] before being given to the server. This gives the user-level distributed DP guarantee mentioned earlier.
[3] Bonawitz et al, “Practical Secure Aggregation for Privacy-Preserving Machine Learning” https://eprint.iacr.org/2017/281.pdf
**However, the approach appears to be a straightforward extension of private evolution algorithms**
Thank you for your comment. You are correct that POPri is an extension of PE. However, we believe that properly re-interpreting PE as a preference optimization problem is not obvious a priori. In order to do so, we had to make careful design decisions such as (1) what client feedback we should collect and (2) how we should use the client feedback. This included carefully organizing the synthetic samples and their feedback to ensure that preference datasets could be built, and experimenting/reasoning over what LLM fine-tuning method to use (DPO, SFT, IPO).
**I believe it is essential to include a discussion on the overall computational complexity of the proposed method (both client and server sides) relative to existing approaches.**
This is a valid concern. The computational cost of POPri is indeed significantly larger than PE on the server side. Below, we provide a table on the server and client compute cost breakdowns when we sample 1000 clients per round on the bioRxiv dataset:
| Method | Client Runtime in GPU seconds | Server Runtime in GPU seconds |
|-|---|--|
| PrE-Text | 0.0027 sec | 326.25 sec |
| POPri | 0.01 sec | 13,547.84 sec |
| FedAvg | 4.8 sec | - |
The client compute cost of POPri and PrE-Text are both small compared to FedAvg. However, POPri has the highest server compute cost. POPri is suitable in cases where (1) we care a lot about the downstream model quality, (2) the client computation and/or communication is expensive, and (3) server computation is relatively cheap and powerful. We will add this explanation and data to the main paper.
**In Line 20 of Algorithm 1, why does the server aggregate the scores from all clients instead of the clients in S^t**
This is a typo, thank you for noticing. The server aggregates scores only from clients in S^t. We will fix it! | Summary: In the paper, the authors present a novel approach to improving the utility of differentially private federated learning (DP-FL) by leveraging preference-optimized synthetic data generated through large language models (LLMs). The proposed method aggregates client feedback into preference pairs, and then fine-tunes an LLM to generate high-fidelity synthetic data while maintaining differential privacy (DP) guarantees.
Claims And Evidence: The claims mentioned in the paper are all supported either theoretical or experimentally.
Methods And Evaluation Criteria: Regarding the methods, my concerns are that:
1. Notations in Algorithm 2 are confusing. What’s $e_{pri}$? Also in the line 2 of the Algorithm 2, the expression of calculating histogram is hard to understand, and in the paper, there are no descriptions about the details of this line.
2. From figure 4, the authors only visualize when the evaluation dataset only has one cluster. How about multiple clusters? Can the proposed method can handle data with multiple clusters?
3. In section 5.2, the paragraph about the data distribution evolution, the authors mentions that early stopping can help avoids the overfitting. However, the details about early stopping are missing.
4. Regarding DPO: the authors found that directly use rank 2 response as the rejected samples is not as well as the middle-ranked samples because of DP noise swapping ranking. In the preference learning literature, there are some existing works aiming to handle such pairwise noise, for example, the Distributionally robustifying DPO (Dr. DPO)[1]. I am just curious about whether such DPO method will work when taking rank 2 samples as the rejected samples.
[1] Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization. https://arxiv.org/pdf/2407.07880
Theoretical Claims: The authors provide the privacy guarantee of the proposed method.
Experimental Designs Or Analyses: 1. Similar to Point 2, can the proposed method handle the data with multiple clusters?
2. In figure 3, why the accuracy suddenly drops when number of clients per round reaches $10^4$ and the total number of clients is larger than $10^5$ in Congressional Speeches dataset?
3. Regarding baselines: the authors only includes the work DP Model Parameters type baselines, DP-FedAvg and DP-RFRL are works in 2016 and 2017, and the authors should include more recent works such as methods in [2], [3]
[2] Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, and Gauri Joshi. 2024. Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12903–12913, Miami, Florida, USA. Association for Computational Linguistics.
[3] FDLoRA: Personalized Federated Learning of Large Language
Model via Dual LoRA Tuning, https://arxiv.org/pdf/2406.07925
Supplementary Material: I check the Algorithm details and the additional experimental results in the Appendix
Relation To Broader Scientific Literature: 1. Replacing heuristic PE with preference optimization for higher-quality DP synthetic data.
2. Introducing LargeFedBench , a scalable, uncontaminated benchmark addressing gaps in prior datasets.
Essential References Not Discussed: DP Model Parameters type methods:
[2] Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, and Gauri Joshi. 2024. Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12903–12913, Miami, Florida, USA. Association for Computational Linguistics.
[3] FDLoRA: Personalized Federated Learning of Large Language
Model via Dual LoRA Tuning, https://arxiv.org/pdf/2406.07925
Other Strengths And Weaknesses: Strengths
1. The authors provide a comprehensive related work.
2. The authors give a comprehensive ablation study.
Other Comments Or Suggestions: Typo: in the past paragraph of section 2.1, “Goal” -> “Goal.”
Questions For Authors: See point 1, 2, and 3 in Methods And Evaluation Criteria part
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Notations in Algorithm 2 are confusing. What’s e_pri? Also in the line 2 of the Algorithm 2, the expression of calculating histogram is hard to understand, and in the paper, there are no descriptions about the details of this line.**
First, we would like to apologize for an oversight; Algorithm 2 is not the one actually used in our experiments. In POPri the clients actually score each synthetic sample based on the average cosine similarity between the synthetic embedding and their private data embeddings. The experimental results reported in the paper use this cosine similarity scoring method. Updated pseudocode for Algorithm 2 is below (which explains what e_pri is):
>
> 1. **Input:** Embeddings of private client data $E_i$ for $i \in S^t$, embeddings of synthetic data $E_{syn}$, total synthetic samples $M = K \times J$
> Scores $\leftarrow 0^M$
> 2. $\text{Scores}[j] = \frac{1}{|E_i|} \sum_{e_{pri} \in E_i} \frac{\langle e_{pri}, e_j \rangle}{\|e_{pri}\| \|e_j\|}$ for $e_j \in E_{syn}$
> 3. **return** $Scores/\|Scores\|$ if $\|Scores\| > 1$ else Scores
We switched from nearest-neighbor (NN) to cosine similarity primarily performed better. As shown in this experiment on the bioRxiv dataset with full participation (https://imgur.com/a/pg1H5iq), using cosine similarity results in a significantly lower FID score compared to the NN histogram. This improvement is because NN histograms produce sparser scores—often assigning zero to all synthetic samples associated with a given prompt in POPri. The chosen and rejected samples end up being essentially random. By contrast, cosine similarity provides more informative and continuous scoring.
[1] Lin et al, “Differentially private synthetic data via foundation model APIs 1: Images”, ICLR 2023
**From figure 4, the authors only visualize when the evaluation dataset only has one cluster. How about multiple clusters? Can the proposed method can handle data with multiple clusters?**
Thank you for the question. Figure 4 shows the bioRxiv dataset, which is only one of our benchmark datasets. Our Congressional Speeches dataset, which contains congressional speeches from the United States, Canada, and the United Kingdom, contains multiple clusters by country: (see t-SNE plot https://imgur.com/a/IVKgbDL). Given that POPri performs the best on Congressional Speeches as well, our experiments do suggest that POPri can effectively handle data with multiple clusters.
**However, the details about early stopping are missing.**
In Appendix B.1, we explain our early stopping criterion. Note that our FedAvg and DP-FTRL baselines also engage in the same form of early stopping (using validation set loss) for fair comparison, and PE (and PE+SFT) did not benefit from stopping early in our experiments.
**I am just curious about whether such DPO method will work when taking rank 2 samples as the rejected samples.**
Thank you for the great suggestion! Dr. DPO is a strong work tackling the pairwise noise issue in DPO, and we will discuss it in the paper. We are working on an experiment comparing against cDPO (a version of DPO robust to label noise, https://ericmitchell.ai/cdpo.pdf), and will update it here. We also already compare against IPO (Gheshlaghi Azar et al 2023, https://arxiv.org/abs/2310.12036) in Table 3 of the appendix, which also prevents overfitting to incorrect labels (see https://ericmitchell.ai/cdpo.pdf), and found that it performed worse than DPO.
**In figure 3, why the accuracy suddenly drops when number of clients per round reaches 10^4**
In federated learning, past work has shown that sampling more clients per round can reduce generalization performance (see Figure 3 in https://arxiv.org/abs/2106.07820, NeurIPS 2021). The authors of that work believe that the “number of clients per round” parameter is similar to batch size in centralized training, where larger batch size can lead to worse generalization (https://arxiv.org/abs/1609.04836, ICLR 2017).
**Regarding baselines: the authors only includes the work DP Model Parameters type baselines, DP-FedAvg and DP-RFRL are works in 2016 and 2017, and the authors should include more recent works such as methods in [2], [3]**
LoRA (used in [2]) methods generally are designed to improve the efficiency of learning, not the final model quality. LoRA (and other parameter-efficient fine-tuning variants) generally do not outperform full fine-tuning. In [2], their method performs worse than FedAvg (Full) in Table 3. [3] is out of scope for our paper because they study a setting where each client can have a separate model, whereas our setting aims to train a single model. Hence, we do not believe that either of these papers should be included as baselines. Moreover, we do not know of any algorithms that outperform DP-FTRL in the setting we are considering.
We thank the reviewer for their time in helping improve our paper, and hope that our responses addressed the reviewer’s concerns!
---
Rebuttal Comment 1.1:
Comment: The authors' response has addressed my concern, and I raise my score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you!
In our original rebuttal we mentioned that we would run a comparison against cDPO to see if it could help address the label noise issue. The experiment setting is as follows: we used the bioRxiv dataset, set eps=7, learning rate = 8e-7, and used the third-ranked sample (in our experiments using the second-ranked sample did not work regardless of noise setting, possibly due to the two samples being too difficult to distinguish from each other).
We present our results here: https://imgur.com/a/xamA7yZ
We find that using cDPO with a label_smoothing parameter to 0.02 and 0.05 improved the FID of the synthetic samples, with 0.02 performing better than 0.05. The reviewer's intuition was correct--using robust versions of DPO can improve the results by making us more resilient to label noise. | Summary: This paper proposes a method for differentially private federated learning of language data, which finetunes a pretrained LLM with synthetic data generated according to client preferences. The method is lightweight in terms of client computation and client-server communication, achieves guarantees of differential privacy, and experimentally outperforms baselines. Ablation studies are also included.
## Update after rebuttal
Discussion with the authors helped clarify a few points (choice of DP parameter $\delta$, relative communication cost of different algorithms) and I will maintain my recommendation of acceptance.
Claims And Evidence: All of the claims in the paper are backed up by theoretical and experimental evidence. I did not find any problematic claims.
Methods And Evaluation Criteria: The evaluation methods make sense for the problem at hand. The authors introduced a dataset of federated language data, with special attention paid to avoid any data contamination, which brings rigor to the experiments and is a contribution in its own right.
Theoretical Claims: There are no proofs in the paper. The only theory-adjacent claims are the ones related to differential privacy guarantees, but these guarantees are essentially handled by the Opacus library.
Experimental Designs Or Analyses: I am very satisfied with the experimental design of the paper. The experiments compare against a number of relevant baselines, report error bars, carefully handle data contamination, and run interesting ablation studies.
Supplementary Material: No supplementary material was included.
Relation To Broader Scientific Literature: This paper contributes to the practice of federated learning with provable guarantees of privacy, particularly in the context of training language models. The proposed method appears very practical for use with edge devices (e.g. mobile phones), since the computation and communication costs for the clients is very small compared with classic FL methods. This line of work has important societal implications, since large tech companies can leverage existing infrastructure to immediately implement and deploy these kinds of algorithms with user devices and user data.
Essential References Not Discussed: I am not aware of any essential references not discussed.
Other Strengths And Weaknesses: Strengths
1. The proposed algorithm outperforms baselines pretty convincingly. The gap is not huge (between 1 and 2 absolute percentage points improvement to next-token prediction accuracy), but the error bars are non-overlapping, so I think that the improvement is reliable.
2. The experimental design is rigorous. The authors report error bars, and they introduce their own dataset to eliminate the possibility of data contamination. I also appreciate the ablation studies, which demonstrate robustness to the number of participating clients and probe the design choices of the proposed algorithm. In particular, I appreciate the ablation study in Figure 4, which uncovers a potential weakness of the proposed algorithm.
3. The paper is very well written. All of the technical ideas are well communicated, all of the baselines and potential variations of the problem setup are explained well, and the novelty of the proposed work is clearly distinguished. The authors are also very transparent about the possible limitations of their work, which I appreciate.
Weaknesses
1. The technical novelty of the paper is somewhat incremental. To my understanding, the algorithm is essentially an extension of Private Evolution (PE), but clients provide feedback on synthetic data through preference ranking instead of by choosing the best samples. Still, the performance of the proposed algorithm is convincing enough that I think this is a minor weakness.
Other Comments Or Suggestions: One small formatting suggestion. For the equation on line 240, column 2, I recommend to use \left( and \right) for parentheses, so that the parantheses are not smaller than the expressions inside of them.
Questions For Authors: 1. Can you explain the choice of $\delta$ given in the paragraph starting on line 376, column 1? I'm curious whether the gap between the proposed algorithm and baselines might grow or shrink depending on the choice of $\delta$. Did you ever perform any evaluations with different choices of $\delta$?
2. Can you elaborate on the "Cost analysis" in Section 5.1? I'm confused about why POPri communicates 10x more embeddings than PE methods. Is this because of a fundamental difference in the algorithms, or is the number of communicated embeddings a parameter that you can freely choose? In the latter case, is it possible that the improved performance of PoPri compared to PE methods is caused by the larger number of communicated embeddings? Did you ever compare against PE methods while controlling for the communication cost? I want to clarify that this is not a major issue, as it seems the current experiments control for the privacy budget instead of the communication budget, which is reasonable. Still, I want to understand where this 10x difference comes from.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **The technical novelty of the paper is somewhat incremental. To my understanding, the algorithm is essentially an extension of Private Evolution (PE), but clients provide feedback on synthetic data through preference ranking instead of by choosing the best samples. Still, the performance of the proposed algorithm is convincing enough that I think this is a minor weakness.**
Thank you for your comment. You are correct that POPri is an extension of PE. However, we believe that re-interpreting PE as a preference optimization problem is not obvious a priori.
PE was initially presented as an alternative to fine-tuning synthetic data models. In [1], the authors argue that PE can be used to generate DP synthetic data when fine-tuning a model is computationally impractical, but one has access to a foundation model API. Their key insight was that in this setting, one can obtain high-quality DP synthetic data by converting the synthetic data generation problem from a DP optimization problem to an iterative private selection problem, which is much lower-dimensional than private optimization. In particular, the private selection problem aims to select a single item from a set with the highest reward (according to a dataset-specific score function), subject to a DP constraint. Since [1] did not need to fine-tune the base foundation model, they were able to get away with using extremely coarse-grained feedback from the clients—namely, private selection data.
Our paper adds important nuance to this understanding of private synthetic data by arguing that even fine-tuning problems benefit from asking clients to solve a private selection problem. How? By not viewing clients’ private selections as “selections” at all, but as preferences. Indeed, this perspective motivates several design choices in POPri (for example, generating J samples from each of K random prompts to enable the construction of a preference dataset) which are essential for making our method outperform PE. As you observed, our empirical results improve significantly as a result of this interpretation.
[1] Lin et al, “Differentially private synthetic data via foundation model APIs 1: Images”, ICLR 2023
**Can you explain the choice of $\delta$ given in the paragraph starting on line 376, column 1? I'm curious whether the gap between the proposed algorithm and baselines might grow or shrink depending on the choice of $\delta$. Did you ever perform any evaluations with different choices of $\delta$?**
This is a good question. While we did not directly conduct ablations with different values of delta (due to computational limits and time constraints for the rebuttal), we did run experiments for different values of epsilon. Because the (epsilon,delta) values are computed using a privacy accounting mechanism, we can equivalently interpret the results from (epsilon=7, delta=1e-6) (Appendix E, Figure 6) as a smaller epsilon with a larger delta. For example, under the same noise and number of rounds, the result from (Appendix E, Figure 6) is also (epsilon=1, delta=0.45). Under this interpretation, we can directly compare Figures 3 and 6, as keeping epsilon constant and varying delta. Since the results change minimally across plots, our intuition is that the results are not very sensitive to delta, even across large differences. We will add a full ablation in the final version where we vary delta explicitly to verify this intuition.
**Can you elaborate on the "Cost analysis" in Section 5.1? I'm confused about why POPri communicates 10x more embeddings than PE methods. Is this because of a fundamental difference in the algorithms, or is the number of communicated embeddings a parameter that you can freely choose? In the latter case, is it possible that the improved performance of PoPri compared to PE methods is caused by the larger number of communicated embeddings? Did you ever compare against PE methods while controlling for the communication cost? I want to clarify that this is not a major issue, as it seems the current experiments control for the privacy budget instead of the communication budget, which is reasonable. Still, I want to understand where this 10x difference comes from.**
Thank you for this great suggestion. We have not run experiments in which we control the communication cost. The number of communicated embeddings is a number we can freely choose (K x J), and we can choose K and J. The reason the communication is 10x is because we set K to be the same as N_syn in PE so that the POPri preference dataset had the same size as the PE iterative synthetic dataset, and set J to be 10 because that number worked well among different choices we made (in early experiments we had tried 2, 4, 6, 8, 10).
We thank the reviewer for their time in helping improve our paper!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think that including a full ablation around epsilon and delta will definitely improve the paper. My concern still holds regarding the limited technical novelty, but I think that the results are strong enough to justify acceptance. I will keep my score the same. | null | null | null | null | null | null |
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges | Accept (poster) | Summary: The paper demonstrates the capability of language models for self-improvement: achieving improved performance on problem instances beyond the training data, using model generations alone (without additional labeled data). The work focuses on two settings: length generalization, where the model is trained on shorter sequences and extrapolates to longer ones; and easy-to-hard generalization, where model is trained on easy/simple problem instances and is trained to generalize to harder instances. The authors study different arithmetic problems (addition and multiplication), copying and maze-solving, and show impressive results in all settings. To achieve self-improvement, the authors initially train a model on short/simple instances with fully supervised labels, and then iteratively increase the length/complexity of the problems, training the model on its own generations instead of using labeled data. The authors utilize data filtering techniques - majority voting and filtering based on length, and demonstrate their effectiveness in achieving out-of-distribution generalization in harder settings. Finally, the authors analyze how errors accumulate during the execution of the self-improvement procedure, and how data filtering can mitigate this.
Claims And Evidence: The method and the results shown in the paper are novel, with very impressive performance on hard tasks like multiplication and maze solving. The paper is very well-written, the ideas are clearly introduced and the authors did a very good job in presentation of the results. The study of error accumulation is also interesting in its own right, and overall the experiments demonstrate the success and limitation of different methods very thoroughly.
Some concerns regarding the results:
- I think that the comparison to other length generalization works is a little misleading. Length generalization is interesting for two reasons: First, because we don't have training data with long sequences; and second, because training on long sequences is very costly. The method presented in the paper addresses the first problem, i.e. - gives a solution that could be applied in cases where we don't have training data with long sequences (although, the method does require input "questions" for longer sequences, also discussed in the limitations section). However, it does not improve the computational cost of training on long sequences, and if anything makes the training cost much higher. While I believe that addressing only the first issue is important, the authors should discuss this limitation explicitly. Additionally, I think that discussing the overall computational cost of the method (and ideally compare it to other methods for length generalization) would be helpful.
- Related to the above, I think the authors should directly compare their results, at least in some of the experiments, to other length generalization methods on the same tasks. The authors claim that other length generalization methods change the positional encoding, architecture and data to address length generalization, in contrast to this paper. However, the paper also uses a somewhat non-standard positional encoding (NoPE). So, how do the results in the paper compare to length generalization methods that make "reasonable" changes (i.e., changing the positional encoding but not the data)?
- In the majortiy voting based filtering, the authors use an ensemble of models, all trained from scratch with different seeds and datasets. This seems much more expensive than sampling with temperature. Does temperature sampling simply not work, requiring this more complex method? I think that discussing the computational cost of this ensemble method would be useful, and also maybe comment on how it could be improved (e.g., how large should k be for the method to work)?
More minor comments:
- In the addition experiments, the authors claim that they achieve generalization to over 100 digits, but from the plots it seems that performance could keep improving. How far does the generalization actually reach? Did the authors run experiments beyond this range?
- The description of relative length filtering is unclear. How is $\tau$ used? Are the lengths filtered based on a fixed constant, or relative to the maximal length L?
To summarize, I think this is a very good paper, demonstrating the effectiveness of self-improvement method for achieving out-of-distribution generalization in different settings. The experiments are very convincing, but I believe that proper discussion of computational cost compared to other methods could improve the paper.
Methods And Evaluation Criteria: See above.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See above.
Supplementary Material: No
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and appreciating the novelty, clarity, and thoroughness of our approach and experiments. We address your specific comments and suggestions below:
**Computational Cost**
> Length generalization is interesting for two reasons: First, because we don't have training data with long sequences; and second, because training on long sequences is very costly. ... While I believe that addressing only the first issue is important, the authors should discuss this limitation explicitly. Additionally, I think that discussing the overall computational cost of the method (and ideally compare it to other methods for length generalization) would be helpful."
We appreciate your highlighting this point. While our manuscript briefly mentions the inherent differences of our method from other length generalization approaches (lines 82, 623-626), we agree that explicitly mentioning computational costs is crucial. Indeed, our iterative training method does require higher computational resources compared to single-round training strategies, as it involves multiple iterations of model training. We will clearly discuss this computational trade-off and compare it with other methods explicitly in the revised manuscript.
**Comparison with Positional Encoding Method**
> I think the authors should directly compare their results, at least in some of the experiments, to other length generalization methods on the same tasks.
We acknowledge your suggestion to explicitly compare our approach with other positional encoding (PE) methods. We want to emphasize that our framework is fundamentally architecture-agnostic and can be combined seamlessly with various positional encodings, including both NoPE and RoPE.
Specifically, we used NoPE in experiments involving small models trained from scratch due to its better length generalization properties compared to absolute position encoding (APE) or RoPE. Conversely, for our pretrained model experiments, we employed pretrained LLaMA models utilizing RoPE. Hence, the choice of positional encoding is orthogonal to our core contribution. We will clarify and elaborate this point.
**Clarification on Majority Voting Filtering**
> In the majortiy voting based filtering, the authors use an ensemble of models, all trained from scratch with different seeds and datasets. This seems much more expensive than sampling with temperature. Does temperature sampling simply not work, requiring this more complex method? I think that discussing the computational cost of this ensemble method would be useful, and also maybe comment on how it could be improved (e.g., how large should k be for the method to work)?
We did not consider temperature scaling because our tasks are structured to have a specific input-output format. Thus, temperature sampling did not yield sufficient output diversity. Specifically, we observed very high confidence (over 0.999) in top-1 outputs using beam search (for reverse addition task), indicating minimal diversity even with temperature sampling. However, we anticipate temperature scaling could be more effective for pretrained models that can generate more varied outputs.
Regarding computational cost, we conducted ablation studies for majority voting in Appendix Section B.2.4, exploring data cost and performance trade-offs. Given computational costs scale linearly with the number of ensemble models, we will add explicit commentary on this trade-off, as well as provide new results on how accuracy and performance vary with ensemble size (k) and consensus thresholds in our revision.
**Q1. Generalization beyond 100 digits**
> In the addition experiments, the authors claim that they achieve generalization to over 100 digits, but from the plots it seems that performance could keep improving. How far does the generalization actually reach? Did the authors run experiments beyond this range?
Our experiments show that self-improvement can continue indefinitely as long as data accuracy remains high. We halted our experiments around 100-digit sequences due to memory constraints, but since performance remained stable, we anticipate continued successful generalization beyond this length.
**Q2. Relative Filtering**
> The description of relative length filtering is unclear. How is used? Are the lengths filtered based on a fixed constant, or relative to the maximal length L?
We apologize for the unclear description. Currently, relative length filtering uses fixed thresholds (e.g., threshold value of 2 for forward addition and 10 for multiplication). We will clarify this clearly in our revised manuscript. Following the suggestion from Reviewer vrfN, we will also add sensitivity analyses for filtering thresholds. Employing thresholds relative to maximal sequence length L would be a valid (and insightful) approach as well, although it is not used in our experiments. | Summary: This paper presents a self-improvement approach where standard decoder transformer models iteratively generate and learn from their own predictions. The authors show that this self-improvement approach allows models to achieve extreme length generalization, where the length of a test instance can be up to 5x-6x longer than the length of a training instance, for addition, string copy, multiplication, and shortest path problems. For these problems, length generalization corresponds naturally to easy-to-hard generalization, where longer input instances are also harder to solve. The core of the proposed self-improvement approach is a data filtering procedure. Data filtering ensures that the self-generated labels are accurate enough so that they will not degrade the model's performance when it is fine tuned on its own predictions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: I checked all experimental designs and analyses in the main paper, as well as the motivation for data filtering discussed in the appendix. The experiments are well designed and thorough.
Supplementary Material: Sections A, B.2, B.3, B.4, B.5, B.6.
Relation To Broader Scientific Literature: Achieving good length generalization is an important and difficult problem. The results in this paper can motivate future work on achieving length generalization on more complex tasks, by adopting appropriate data filtering within a self-improvement framework.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: While I appreciate the comprehensive empirical evaluation and analysis presented in this paper, I think the scope of this paper is very much limited by the simplicity of the tasks considered by the authors. On these tasks, it is natural that the length of the input corresponds nicely to the difficulty of the problem. However, in many (and more realistic) cases, such a correspondence is not that straightforward. More importantly, as the authors show in the paper, the success of self-improvement hinges on the quality of the model's prediction at every round, which in turn is almost entirely dependent on the data filtering procedure. Without the data filtering procedure, the self-improvement framework will lead to much worse results. It is unclear how to design good data filtering procedure on more complex tasks. Therefore, the implications of the findings presented in this paper to more complex settings is not clear. Besides stating this as a limitation, the authors should at least discuss how the results on simple tasks would be useful for more complex tasks.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your careful review and for recognizing the thoroughness of our experimental design.
**W1 Limitations to simple tasks**
> This paper is very much limited by the simplicity of the task[...] length of the input corresponds nicely to the difficulty of the problem. However, in many (and more realistic) cases, such a correspondence is not that straightforward.
We agree that for most of the tasks we considered, longer length implies higher difficulty. But for tasks like maze solving, a larger number of nodes do not correspond strictly to harder instances; it is possible to have a smaller maze that is harder due to a higher branching factor. It would indeed be interesting to consider tasks whose difficulty scales constantly or inversely with length. Moreover, we absolutely agree that estimating the difficulty of real-world tasks is an important future direction, and we discuss this point more in the Limitations section of our paper.
For more on real-world applicability, please refer to our general response below.
**W2 Data filtering**
> Without the data filtering procedure, the self-improvement framework will lead to much worse result. It is unclear how to design good data filtering procedure on more complex tasks.
Indeed data filtering is crucial to the success of our self-improvement framework. One key finding from our experiments is the effectiveness of simple majority vote filtering. This strategy is notable because it does not rely on task-specific heuristics, making it potentially applicable to more complex scenarios.
Moreover, filtering based on majority voting to ensure self-consistency is a widely-used approach, as we mention in our paper (L265).
**W3 Implications for Complex Settings**
> Therefore, the implications of the findings presented in this paper to more complex settings is not clear. Besides stating this as a limitation, the authors should at least discuss how the results on simple tasks would be useful for more complex tasks.
We acknowledge your valid concern regarding the implications of our findings for more complex, real-world settings. As we discuss explicitly in our Limitations Section, defining and quantifying task difficulty remains an open and significant challenge in practical applications. However, our results provide valuable insights into systematically scaling task difficulty, an essential step for enabling transformers to tackle more complex real-world problems effectively.
Moreover, our experiments show that models exhibit robustness to imperfect difficulty scheduling, especially when starting from pretrained models. This robustness improves further with increased training rounds. Additionally, we find that decomposing complex problems into intermediate steps enhances the model's ability to generalize. Notably, pretrained models are particularly effective at leveraging these decompositions, highlighting a promising avenue for applying our method to complex tasks.
---
## General response to overlapping concerns:
> Real-world Applicability and Generalization Beyond Synthetic Tasks.
We acknowledge this valid concern. We selected synthetic tasks for their well-defined difficulty and clear evaluation metrics, which allow for controlled and rigorous analysis.
Additionally, the scope of our chosen tasks aligns with common practice in the length generalization literature. Many influential works (Li et al., 2023;Ruoss et al., 2023;Zhou et al., 2023;McLeish et al., 2024;Kazemnejad et al., 2024;Sabbaghi et al., 2024;Cho et al., 2024;Zhou et al., 2024;Fan et al., 2024) have focused on the problem of generalizing beyond lengths seen during training using synthetic tasks, gaining recognition for providing critical insights into model limitations and capabilities. In fact, this line of research dates back to early works such as Neural GPUs(Kaiser and Sutskever, 2016) and Universal Transformers (Dehghani et al., 2019). We cite these studies to legitimize our methodology, demonstrating that insights derived from synthetic tasks are widely considered meaningful and impactful.
Compared to many prior works, our work considers a broader range of synthetic tasks with varying difficulty, encompassing arithmetic operations, string manipulations, and maze-solving problems. These tasks allow us to stress-test self-improvement in diverse settings and build insights applicable to more complex domains. Although out of the scope of this paper, we plan to extend self-improvement to long-context natural language reasoning benchmarks.
Overall, we believe our paper presents an important step forward. We demonstrate improvements to the existing length generalization literature by providing a method that can continue the trend of generalization indefinitely, on a wider range of tasks. We supplement the work with robust empirical analyses. Our work establishes foundational insights on how self-generated data and iterative self-improvement can extend the capabilities of transformer models. | Summary: This paper proposes and validates a simple and intuitive idea to train a model to solve hard problems that require long reasoning processes. The authors train the models on a task with progressively increasing complexity, leveraging the models' capability to generalize to slightly harder ones for self-improvement. The authors validate the success of this method with a number of synthetic tasks, including a maze puzzle. Analyses highlight the importance of filtering correct self-synthetic data through length filtering and majority voting in this method.
Claims And Evidence: The main claim of this paper is that language models can generalize easy-to-hard and we can leverage this capability to self-improve models to solve progressively complex problems of a task. This claim is supported by convincing evidence of several synthetic tasks including difficult maze-solving.
The author emphasizes that reliable filtering is central to consistent self-improvement and that error avalanche caused by label noise as a key failure case of the self-improvement process. The results Fig. 6 and Fig. 9 justify this.
The rate of self-improvement can be exponential and pretrained models can achieve faster acceleration in easy-to-hard generalization. The evidence in Fig. 12 supports this.
Methods And Evaluation Criteria: This paper presents the effectiveness of the methods under synthetic tasks. These tasks make the task difficulties easy to control which might cause concerns about adopting the method to real-world tasks. Besides, it is unclear in what situations the proposed unsupervised filtering method works or fails.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiment designs are presented clearly. I am particularly interested in how much difficulty increased in each round of iteration for each task. As mentioned in the Introduction, controlling the weak-to-strong curriculum is crucial, as models require a structured difficulty schedule to avoid catastrophic failure. But the authors only discuss this by directly putting the final setups without explaining the rationales in the Appendix.
Supplementary Material: No.
Relation To Broader Scientific Literature: Results in this paper seem to relate to the inference length scaling of reasoning models[1].
It would be interesting to discuss the connection between length scaling during RL and the findings in this paper.
[1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
Essential References Not Discussed: Not found.
Other Strengths And Weaknesses: Strengths:
1. This paper is clearly organized and the experiments are extensive.
2. The claim in this paper is well justified through well-controlled experiments.
3. It is interesting to see unsupervised filtering technique is enough to drive fil
Weaknesses:
1. The method in this paper is limited to synthetic settings with several implicit assumptions: (1) single task with fine-grained controllable difficulties; (2) the existence of (unsupervised) filtering rules that enable near-perfect filtering, without which the accuracy drops obviously after few iterations.
It would be better if the authors can discuss more on how their findings can potentially help facilitate real-world tasks.
2. It is not surprising to see reliable filtering is critical for the success of .
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your recognition of our extensive experimental validation and clear presentation. We respond to your questions inline below.
**W1 Limitations to Synthetic Setting**
> It would be better if the authors can discuss more on how their findings can potentially help facilitate real-world tasks.
Please refer to our general response below in our response to Reviewer MReD (General response to overlapping concerns).
**W2 Filtering**
> The method in this paper is limited to synthetic settings with several implicit assumptions: ... (2) the existence of (unsupervised) filtering rules that enable near-perfect filtering, without which the accuracy drops obviously after few iterations.
> It is not surprising to see reliable filtering is critical for the success of [.] (note: this comment appears to be cut off)
We agree that filtering is central to the framework. One interesting result is that simple majority voting over models trained with different seeds is surprisingly effective. This unsupervised, task-agnostic approach allows for easy application to a variety of tasks, including more complex ones.
**Difficulty schedule**
> The method in this paper is limited to synthetic settings with several implicit assumptions: (1) single task with fine-grained controllable difficulties ...
> I am particularly interested in how much difficulty increased in each round of iteration for each task. As mentioned in the Introduction, controlling the weak-to-strong curriculum is crucial, as models require a structured difficulty schedule to avoid catastrophic failure. But the authors only discuss this by directly putting the final setups without explaining the rationales in the Appendix.
Thank you for pointing out the need to clarify how we selected difficulty increments. Our primary consideration was to ensure data quality sufficient for reliable training in subsequent rounds, thus preventing error avalanches (Section 8). We recognize that our rationale behind difficulty increments could be clearer, and we will explicitly connect the choice of difficulty scaling with the avoidance of error accumulation. Additionally, we will include an experiment demonstrating the consequences of overly aggressive difficulty scaling, further clarifying the importance of difficulty schedule.
Our existing "accelerated self-improvement" experiments (Section 7.2, Section B.4, Figures 12 & 23) also offer insights into the flexibility of our difficulty schedules, demonstrating the allowable range of difficulty increases with self-improvement rounds.
**Connection to Length Scaling of Reasoning Models**
> Results in this paper seem to relate to the inference length scaling of reasoning models[1].It would be interesting to discuss the connection between length scaling during RL and the findings in this paper. ([1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning)
Thank you for pointing out the potential connection between our results and the inference length scaling observed in reasoning models like DeepSeek-R1. Indeed, both lines of work emphasize iterative and incremental learning paradigms. We add a short discussion of this connection and its implications in our revised manuscript. | Summary: The paper introduces a self-improvement framework for transformer models that enables them to progressively tackle problems beyond the training distribution. Rather than modifying the underlying transformer architecture, the authors leverage an iterative self-training procedure in which a model generates its own training data and incrementally learns from increasingly difficult examples. The work is demonstrated across a range of algorithmic tasks—including arithmetic (both reverse and forward addition, as well as multiplication), string manipulation (copying and reversing), and maze solving. Two key unsupervised data filtering techniques are proposed: length filtering, which removes outputs shorter than an expected threshold, and majority voting, which retains only those examples that a set of independently trained models agree on. The experiments show that models trained with this framework can, for instance, generalize from 10-digit to 100-digit arithmetic problems and similarly extend their capabilities in other tasks.
Claims And Evidence: The paper’s core claims are:
- Self-improvement enables transformers to overcome length and difficulty generalization challenges. They back this up with extensive experimental results on arithmetic (up to 100-digit addition, up to 10-by-10 multiplication), string tasks (copying, reversing up to 120 tokens), and maze pathfinding (from up to 9 hops to 30 hops, or from 30 nodes to 100 nodes). The evidence is convincing: they show consistent gains and near-perfect accuracies over multiple self-improvement rounds.
- Data filtering is critical for stable self-improvement. The authors highlight “error avalanches,” illustrating how low-quality self-labeled data can accumulate and degrade performance. They demonstrate that straightforward filters—length-based filtering and majority voting across multiple models—significantly reduce noise. They also document label noise injection experiments that lead to self-improvement collapse, underscoring the importance of high-fidelity synthetic labels. This is well-supported by thorough ablations.
- Pretrained models accelerate and improve self-improvement. The authors show that starting from larger LLaMA-based checkpoints (1B and 3B parameters) makes it easier to expand to out-of-distribution difficulties. These results, while not deeply analyzed in terms of underlying reasons, are plausible and are backed by side-by-side comparisons of smaller vs. larger models.
- Long-horizon iterative training can lead to exponential improvements. They present an “accelerated self-improvement” schedule where the new round’s difficulty is not just “one step harder” but rather includes all newly mastered tasks. This quickly expands the range of tasks the model can solve. The experimental plots do show faster improvement under these scheduling choices.
Methods And Evaluation Criteria: - Self-Improvement Procedure: The iterative framework alternates between data generation (using the current model) and training with an expanded dataset, where new examples are filtered by length and consensus.
- Data Filtering Techniques: Two unsupervised filters (length filtering and majority voting) are presented to prune noisy outputs.
- Evaluation: Performance is measured by exact-match accuracy using greedy decoding. For arithmetic tasks, generalization is quantified by the maximum digit length (or operand size) that achieves near-perfect accuracy.
Theoretical Claims: The paper is primarily experimental, focusing on empirical performance improvements rather than deep theoretical guarantees.
Experimental Designs Or Analyses: The experimental setup is sound:
- Controlled Difficulty Increase: For each task, a clear definition of difficulty (e.g., number of digits or hops) is provided. The incremental increase per round is systematic.
Ablation Studies: The paper examines the impact of filtering techniques, accelerated curricula, and the role of pretraining.
- Error Avalanche Analysis: By injecting synthetic errors and tracking their impact, the authors provide a useful analysis of how self-improvement can collapse if noise is not managed.
One potential weakness is that all experiments are conducted on synthetic or algorithmic tasks. While these tasks are standard in the literature on length generalization, additional evaluations on more diverse real-world tasks might help demonstrate broader applicability.
Supplementary Material: The supplementary material includes:
- Detailed descriptions of hyperparameters and training schedules.
- Additional results, including further ablations on error noise and extended generalization curves.
- Extended discussions on the generation of training data and filtering strategies.
Relation To Broader Scientific Literature: The paper is well-situated within the current literature:
- It builds on prior work in length generalization (e.g., Anil et al., 2022; Zhou et al., 2023) and transformer modifications.
The self-improvement framework is related to recent studies on self-training and self-refinement in large language models (e.g., Huang et al., 2022; Singh et al., 2023).
- The idea of easy-to-hard curriculum learning connects with literature on curriculum learning and weak-to-strong generalization.
- The work distinguishes itself by showing that without modifying the transformer architecture, one can achieve significant extrapolation capabilities using a controlled self-improvement loop.
Essential References Not Discussed: The paper is well-cited
Other Strengths And Weaknesses: Strengths:
- The iterative framework is a compelling strategy to push the boundaries of what transformers can generalize.
- By relying on standard transformer architectures without task-specific modifications, the method is both simple and potentially widely applicable.
- The paper presents extensive ablations and analyses, particularly on error propagation and the impact of filtering.
Weaknesses:
- The experiments focus solely on synthetic tasks. More real-world applications would be valuable to assess practical utility. I would expect the authors try their method to adapt a model trained on shorter text sequences to process longer inputs.
- The method’s performance on tasks with more complex or less structured data remains to be explored.
- Why do we need self-improvement while we can use positional encoding and not requires training to generalize to longer sequence? The authors did mention that "While effective in controlled setups, these approaches are often incompatible with large language models (LLMs) in practice, as they introduce task-specific modifications that are difficult to scale across diverse applications." but RoPE, for example is applied on Deepseek and it is not "task-specific".
Overall, this paper presents a solid contribution to understanding how self-generated data and controlled curricula can extend transformer capabilities beyond their initial training distribution. While certain aspects—such as scalability to non-synthetic tasks and formal theoretical analysis—remain open, the empirical results and methodological innovations make this work a significant and promising step forward.
Other Comments Or Suggestions: N/A.
Questions For Authors: - How do you envision scaling this self-improvement framework to tasks beyond synthetic benchmarks? Could you provide examples of potential real-world applications or domains where you expect similar performance gains?
- Have you conducted experiments to assess how sensitive the method is to the choice of filtering thresholds in both length filtering and majority voting? What guidelines can you offer for tuning these parameters in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and for acknowledging that the iterative framework is a compelling strategy to push the boundaries of the generalization capabilities of Transformer models, and recognizing our extensive ablations and analyses. Please find our responses to each of the concerns you have raised below.
**W1,W2, Q1 Real world applications**
> W1 The experiments focus solely on synthetic tasks. More real-world applications would be valuable to assess practical utility. I would expect the authors try their method to adapt a model trained on shorter text sequences to process longer inputs
> W2 The method's performance on tasks with more complex or less structured data remains to be explored.
> Q1 How do you envision scaling this self-improvement framework to tasks beyond synthetic benchmarks? Could you provide examples of potential real-world applications or domains where you expect similar performance gains?
Please refer to our General Response below in our response to Reviewer MReD (General response to overlapping concerns) regarding the limitations on synthetic tasks.
**W3-1 Necessity of self-improvement vs. positional encodings**
> Why do we need self-improvement while we can use positional encoding and not requires training to generalize to longer sequence?
While approaches like RoPE indeed improve length generalization, they are complementary rather than replacements. Our self-improvement method is architecture-agnostic, requires no positional encoding modification, and naturally scales across diverse applications without architecture changes. This generality positions our method advantageously, especially when considering broad applicability to existing large pretrained models, avoiding retraining costs or architectural modifications. For example, for our pretrained model experiments, we employed pretrained LLaMA models utilizing RoPE. Hence, the choice of positional encoding is orthogonal to our core contribution. We will clarify and elaborate this point.
**W3-2. On RoPE scaling for long-context**
> The authors did mention that 'While effective in controlled setups, these approaches are often incompatible with large language models (LLMs) in practice, as they introduce task-specific modifications that are difficult to scale across diverse applications.' but RoPE, for example is applied on Deepseek and it is not 'task-specific'."
RoPE itself is only a positional encoding choice, but we agree that RoPE scaling, used in DeepSeek, is proposed as a way to generalize to longer contexts without retraining the model. However, existing works on long context model evaluation [https://openreview.net/pdf?id=293V3bJbmE] shows that RoPE still struggles to achieve reliable performance on OOD input lengths. Furthermore, applying RoPE scaling on our synthetic tasks did not help with length generalization in our experiments. We will add our result on using RoPE scaling in the Appendix.
**Q2. Sensitivity of Filtering Thresholds**
> Have you conducted experiments to assess how sensitive the method is to the choice of filtering thresholds in both length filtering and majority voting? What guidelines can you offer for tuning these parameters in practice?
We agree this is important and will add further clarification in the paper. Based on our current observations, the impact of length filtering is task-dependent. In tasks like multiplication where errors tend to drop intermediate steps, the threshold is less sensitive. In contrast, tasks like forward addition(where incorrect outputs are typically just 1–2 digits short), stricter filtering is needed to avoid error propagation. Regarding majority voting, we plan to expand our ablations to show how the number of models affects filtering effectiveness. As indicated in prior work (e.g., https://arxiv.org/pdf/2306.15400), even a small number of highly accurate training examples can suffice for strong length generalization. Therefore, filtering for high-precision data by strong consensus among multiple models - even at the cost of sample quantity - can be beneficial. We thank the reviewer for suggesting this analysis and will include it in our revision. | null | null | null | null | null | null |
Stealing That Free Lunch: Exposing the Limits of Dyna-Style Reinforcement Learning | Accept (poster) | Summary: The paper investigates the performance discrepancies of Dyna-style model-based reinforcement learning (DMBRL) algorithms across different benchmark environments, specifically OpenAI Gym and DeepMind Control Suite (DMC). The authors highlight a significant performance gap, where DMBRL algorithms like Model-Based Policy Optimization (MBPO) and Aligned Latent Models (ALM) perform well in OpenAI Gym but struggle in DMC, despite the environments sharing similar tasks and physics backends.
Main Findings:
Performance Gap: DMBRL algorithms show a performance drop in DMC compared to OpenAI Gym.
Model Error Impact: High model error in DMC environments leads to significant performance degradation, even with short synthetic rollouts.
Critic Divergence: Inaccurate synthetic transitions cause critic divergence, hindering learning in DMC.
Main Results:
Empirical Evaluations: MBPO and ALM were evaluated across six OpenAI Gym tasks and fifteen DMC tasks, showing that Dyna-style enhancements can prevent policy improvements in DMC.
Model Error Analysis: Even with a perfect model, MBPO cannot consistently outperform its base off-policy method (SAC) in DMC.
Critic Divergence Mitigation: Layer Normalization mitigates critic divergence but does not enable MBPO to outperform SAC consistently.
Plasticity Loss Mitigation: Periodic resets of off-policy base components improve MBPO's performance in some DMC tasks but do not match SAC's performance.
Contributions:
Identifying Performance Gap: The paper demonstrates a significant performance gap of DMBRL methods across different benchmarks.
Analyzing Causes: The authors analyze potential causes for the performance discrepancy and apply modern mitigation approaches.
Accelerating Experimentation: The paper introduces a new JAX-based implementation of MBPO, achieving up to a 40x decrease in wall-clock time, enabling more accessible DMBRL studies.
## update after rebuttal
Thank you for the author's reply. It answered almost all of my questions. I have no further questions.
Claims And Evidence: Yes, Well-Supported Claims:
Performance Gap: DMBRL algorithms like MBPO and ALM perform well in OpenAI Gym but struggle in DMC. This is backed by empirical evaluations across multiple tasks.
Model Error Impact: High model error in DMC leads to performance degradation, supported by detailed measurements and analysis.
Critic Divergence: Inaccurate synthetic transitions cause critic divergence, hindering learning in DMC. Evidence includes Q-value measurements and analysis of synthetic data impact.
Plasticity Loss: Periodic resets of off-policy base components improve MBPO's performance in some DMC tasks, though not universally.
Areas for Further Clarification:
Dyna-Style Enhancements and Plasticity Loss: More detailed analysis could strengthen the claim that Dyna-style enhancements exacerbate plasticity loss.
Intrinsic Performance Gap: The claim that the performance gap is intrinsic to Dyna-style algorithms could benefit from more direct evidence or theoretical analysis.
Methods And Evaluation Criteria: Proposed Methods:
Empirical Evaluations: The paper evaluates MBPO and ALM across multiple tasks in OpenAI Gym and DeepMind Control Suite (DMC). This comparative approach is appropriate for identifying performance gaps and understanding the robustness of DMBRL algorithms.
Model Error Analysis: The authors measure model error across training distributions in both OpenAI Gym and DMC. This method is relevant for understanding how model accuracy impacts performance.
Critic Divergence Analysis: The paper measures average critic Q-values during training to assess critic divergence. This method helps identify how inaccurate synthetic transitions affect learning.
Plasticity Loss Mitigation: The authors perform periodic resets of the off-policy base components to mitigate plasticity loss. This method is sensible for evaluating the impact of plasticity loss on performance.
Perfect Model Experiments: The paper modifies the DMC simulator to use a perfect model to evaluate the impact of model accuracy on performance. This method provides insights into the intrinsic limitations of Dyna-style algorithms.
Evaluation Criteria:
Benchmark Datasets: The use of OpenAI Gym and DMC as benchmark datasets is appropriate. Both are widely accepted testbeds for reinforcement learning, providing a variety of continuous control tasks with similar physics backends.
Performance Metrics: The paper uses normalized final return and sample efficiency as performance metrics. These metrics are standard in reinforcement learning and provide a clear basis for comparison.
Model Error Metrics: The authors measure model error as a percentage of the training distribution. This metric is relevant for assessing the accuracy of the predictive model.
Critic Divergence Metrics: The paper measures average critic Q-values to assess critic divergence. This metric is appropriate for evaluating the impact of synthetic data on learning.
Theoretical Claims: This paper focuses on empirical evaluations rather than theoretical claims, so there are no formal proofs to verify. However, the key conceptual claims are well-supported by empirical evidence:
Dyna-style enhancements can prevent policy improvements in DMC: Supported by performance metrics across multiple tasks.
High model error in DMC leads to performance degradation: Backed by detailed measurements of model error.
Critic divergence due to inaccurate synthetic transitions hinders learning: Evidenced by Q-value measurements and analysis.
Plasticity loss contributes to MBPO's failures: Demonstrated through periodic resets and their impact on performance.
Issues:
Dyna-Style Enhancements and Plasticity Loss: Could use more detailed theoretical analysis.
Intrinsic Performance Gap: Would benefit from additional theoretical insights.
Experimental Designs Or Analyses: The paper uses sound experimental designs to support its claims:
Comparative Performance Evaluation: Valid comparison of MBPO and ALM across OpenAI Gym and DMC tasks.
Model Error Analysis: Measures model error effectively to show its impact on performance.
Critic Divergence Analysis: Tracks Q-values to demonstrate the effects of synthetic data on learning.
Plasticity Loss Mitigation: Uses periodic resets to assess the impact of plasticity loss on performance.
Perfect Model Experiments: Modifies DMC to use a perfect model, isolating the effect of model accuracy.
Supplementary Material: I reviewed key parts of the supplementary material in the paper:
Appendix A (FAQs): Clarifies the scope and implications of the research.
Appendix B (Timing Claim): Details runtime efficiency improvements.
Appendix C (Benchmark Differences): Discusses differences between OpenAI Gym and DMC.
Appendix D (Implementation Details): Provides details on ALM and MBPO implementations.
Appendix E (Hyperparameters): Lists hyperparameters for reproducibility.
Appendix F (Full Results): Offers additional data supporting the main findings.
Relation To Broader Scientific Literature: Key Relations to Prior Work:
Dyna-Style Algorithms:
Prior Work: Based on Sutton (1991), with extensions like MBPO (Janner et al., 2019).
Contribution: Shows performance gaps across OpenAI Gym and DMC, highlighting robustness issues.
MBRL Algorithms:
Prior Work: MBPO and ALM (Ghugare et al., 2023) demonstrate success in specific benchmarks.
Contribution: Reveals degraded performance in DMC, emphasizing the need for diverse benchmarking.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Here are key questions for the authors that could clarify important points:
Synthetic Data Bias: How do you ensure synthetic data doesn’t introduce bias into the critic’s learning process? Are there methods to filter or correct this data?
Plasticity Loss Solutions: Besides periodic resets, have you considered other methods to address plasticity loss, such as regularization or dynamic architectures?
Generalizability: Have you tested these algorithms on other benchmarks or real-world applications to see if the performance gap persists?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **How do you ensure synthetic data doesn’t introduce bias into the critic’s learning process? Are there methods to filter or correct this data?**
We performed sweeps over the synthetic-to-real data ratio (Figure 12) and found that introducing any amount of synthetic data into off-policy learning in DMC leads to a substantial degradation in policy performance. Moreover, in our Dyna-style experiments using perfect synthetic data (i.e., no model error), we showed in Figure 6 that learning still fails to outperform the base off-policy methods on which these algorithms are built. These findings suggest that the failure is not due to model inaccuracies or data quality, but rather due to algorithmic issues intrinsic to how synthetic data is integrated.
While methods such as those proposed in [1,2] attempt to filter or adapt synthetic rollouts, our results imply that such corrections are insufficient to fix the core algorithmic shortcomings. Even with perfect data, the performance gap remains, underscoring the limits of current Dyna-style designs.
Addressing this challenge remains an active area of research for us, and we welcome further discussion on promising directions for improving synthetic data integration in off-policy learning.
**Besides periodic resets, have you considered other methods to address plasticity loss, such as regularization or dynamic architectures?**
Yes. While we did not frame them explicitly in terms of plasticity, we explored several techniques targeting related issues. Layer normalization was employed in prior work [3] as a method that simultaneously mitigates overestimation and stabilizes learning dynamics - both of which relate to plasticity loss. Additionally, we experimented with unit ball normalization [4], which was proposed to combat overestimation and thereby also helps address plasticity indirectly. We did not include these results in the main text as they did not substantially outperform the more commonly used layer norm baseline.
We did not pursue further architectural or regularization strategies because our literature review found no alternatives that clearly outperform these methods. Periodic resetting, layer norm, and unit ball normalization appear to represent the state-of-the-art approaches for maintaining plasticity in high-update-ratio settings. If we have overlooked a promising direction, we would sincerely appreciate any further insight from the reviewer.
**Have you tested these algorithms on other benchmarks or real-world applications to see if the performance gap persists?**
We did not. Our motivation was to critically re-examine claims of sample efficiency made by prior dyna-style methods (e.g., ALM, MBPO), which were accepted largely based on Gym benchmarks. These methods were widely adopted under the assumption that they outperformed their model-free counterparts in sample efficiency. Our work revisits that conclusion using a comparable benchmark - DeepMind Control Suite (DMC) - which shares the same MuJoCo physics backend.
This choice minimizes the likelihood that differences in performance are due to changes in dynamics simulation, and therefore emphasizes the impact of the Dyna-based algorithmic changes introduced in ALM and MBPO. Our results demonstrate that the original claims made about those methods do not generalize even within this extremely similar experimental setting. While evaluating additional domains might provide further insight into generalization challenges, we chose to focus on a clean and controlled comparison to highlight foundational flaws in current Dyna-style designs. Introducing more benchmarks could risk diluting the clarity of this central message.
**Citations:**
[1] B. Frauenknecht, A. Eisele, D. Subhasish, F. Solowjow, and S. Trimpe, “Trust the Model Where It Trusts Itself -- Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption,” Jun. 21, 2024, arXiv: arXiv:2405.19014. doi: 10.48550/arXiv.2405.19014.
[2] Y. Li, Z. Dong, E. Luo, Y. Wu, S. Wu, and S. Han, “When to Trust Your Data: Enhancing Dyna-Style Model-Based Reinforcement Learning With Data Filter,” Oct. 16, 2024, arXiv: arXiv:2410.12160. doi: 10.48550/arXiv.2410.12160.
[3] C. Lyle, Z. Zheng, E. Nikishin, B. A. Pires, R. Pascanu, and W. Dabney, “Understanding plasticity in neural networks,” Nov. 27, 2023, arXiv: arXiv:2303.01486. Accessed: Jun. 29, 2024. [Online]. Available: http://arxiv.org/abs/2303.01486
[4] M. Hussing, C. Voelcker, I. Gilitschenski, A. Farahmand, and E. Eaton, “Dissecting Deep RL with High Update Ratios: Combatting Value Overestimation and Divergence,” Mar. 09, 2024, arXiv: arXiv:2403.05996. Accessed: Apr. 08, 2024. [Online]. Available: http://arxiv.org/abs/2403.05996 | Summary: This paper investigates why two MBRL algorithms (MBPO & ALM) perform well on OpenAI Gym but poorly on DMC.
The authors show that apart from Dyna model prediction error, the synthetic rollouts themself could arrest policy improvement rather than enhance it when deployed across more diverse environments.
Claims And Evidence: The claims made in the submission are mostly supported by clear and convincing evidence
Methods And Evaluation Criteria: The methods used mostly fit the problem and the authors' claims.
Theoretical Claims: Few theoretical proofs are involved in this paper.
Experimental Designs Or Analyses: The soundness/validity of the experimental designs or analyses is good if the implementation is correct.
Supplementary Material: I've reviewed the appendix. No other supplementary material is submitted.
Relation To Broader Scientific Literature: This paper studies the problem that the performance of MBRL algorithms varies depending on the environment and problem characteristics. Some previous work pointed out the inefficiency of MBRL on DMC tasks, and some previous work studied the impact of model error.
Essential References Not Discussed: No missing key references found yet.
Other Strengths And Weaknesses: **Strengths:**
1. There are many papers investigating new MBRL models, but few studies the underlying reason why modern MBRL works, or why a learned model could improve the sample efficiency of model-free RL algorithms. Thus the community lacks relevant theories for design. This paper is a good start in bridging this gap.
2. The authors' claims are clear and the evidence is supportive. They studied the impact of model error, underestimation, learning dynamics and so on, so that the limitation of the Dyna style's learning is clear.
**Weaknesses:**
See comments and questions.
Other Comments Or Suggestions: 1. At the abstract/intro of this paper, I think it would be better to say something like "$\underline{\text{control tasks}}$ in OpenAI Gym", since it contains environments other than control tasks.
2. On the left column of the line 057, I suggest cite ALM when it first appears and reduce other repeated citations later.
Questions For Authors: 1. What would be the result of Figure 12 if the model is perfect?
2. For MBPO/ALM on OpenAI Gym that surpasses SAC (or like DreamerV3/EfficientZeroV2 on DMC), could you also observe the underestimation issue? This may provide insights for further improving the SOTA MBRL methods.
3. For the left column of the line 425. When resetting the predictive model, it may take more updates than the original train ratio to optimize the model. If the number of updates is too low, then the model may fail to converge and provide little information. If the number of updates is sufficient, then the key insights 1 may still hold. Could you clarify the details on this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful suggestions to improve the clarity and presentation of our paper. We will incorporate the suggested changes to:
- More precisely describe the subset of OpenAI Gym environments used in our experiments.
- Cite ALM upon first mention and reduce repeated citations later on to avoid redundancy.
**What would be the result of Figure 12 if the model is perfect?**
This is an excellent question and one we are also curious about. As depicted in Figure 6, we generated a single data point for both humanoid-stand and hopper-stand at the 0.05 synthetic ratio using perfect transitions (i.e., querying the ground truth environment). This process required approximately 4 GPU-days and substantial CPU usage, due to the environment transitions becoming the bottleneck. Unlike synthetic rollouts from learned models (which are efficiently parallelizable via JAX), real environment rollouts suffer from parallelism limitations that drastically slow down runtime.
Extrapolating from our timing, generating the full version of Figure 12 with perfect transitions would require roughly 40 GPU-days across multiple machines. Given the resource cost and limited additional insight expected beyond what is already shown in the paper, we opted not to pursue the full experiment at this time. However, if we succeed in optimizing environment rollouts via vectorized environments in the future, we agree this would be a worthwhile avenue to explore.
That said, our preliminary results suggest that even with perfect transitions, performance does not recover to the level of pure real-data training, regardless of synthetic-to-real ratio. This aligns with our central claim that the degradation arises not solely from model bias but from algorithmic limitations in how synthetic data is incorporated.
**For MBPO/ALM on OpenAI Gym that surpass SAC (or methods like DreamerV3/EfficientZeroV2 on DMC), do you also observe the underestimation issue?**
Thank you for this insightful question. We interpret it as: "Do value underestimation issues persist in model-based methods when they outperform SAC in Gym or state-of-the-art model-free methods in DMC?" Please let us know if we have misunderstood.
In our experiments, we did not observe value underestimation in ALM or MBPO when evaluated on OpenAI Gym. This suggests that underestimation is largely a DMC-specific phenomenon in our setting. As discussed in the paper, even with a perfect model, underestimation is much less pronounced in DMC compared to a learned model, implicating model error as a contributor to Dyna failing, but not the only issue. However, Section 4.2 further shows that MBPO still fails to match SAC in DMC even when underestimation is not prominent, indicating that underestimation, while influential, is not the sole factor - nor a "smoking gun" - behind the performance gap.
**For the left column of line 425 - could you clarify the update count after model reset?**
Certainly. During our investigation of model plasticity, we explored multiple reset strategies inspired by prior work [1]. We found that increasing the reset frequency up to a point improved performance. The strategy we ultimately adopted - resetting every 20k environment steps - aligns with the reset frequency recommended in [1] for a replay ratio of 128. Empirically, this approach worked better in our setting than the more conservative reset frequency from [1] prescribed for a replay ratio of 20 (every 2.56 × 10⁶ gradient steps, equivalent to ~128k env steps), or intermediate strategies between the two.
We will clarify this detail in the revision to aid reproducibility and interpretability.
Citations:
[1] P. D’Oro, M. Schwarzer, E. Nikishin, P.-L. Bacon, M. G. Bellemare, and A. Courville, “Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier,” 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, most of my concerns are resolved. I'll keep my score as it is. | Summary: This paper shows that Dyna-style off-policy model-based reinforcement learning (DMBRL) algorithms perform well in OpenAI Gym, while their performance can drop significantly in DeepMind Control Suite (DMC). And the paper analyzes potential causes (model error, lay normalization, etc) for this discrepancy, which ultimately fail to consistently resolve these problems.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper does not contain proofs or theoretical developments.
Experimental Designs Or Analyses: Yes.
Supplementary Material: There is no supplementary material. So I only reviewed the Appendix.
Relation To Broader Scientific Literature: The paper shows a significant gap in the literature regarding the performance of Dyna-style model-based reinforcement learning (DMBRL) algorithms across different benchmarks, specifically OpenAI Gym and DeepMind Control Suite (DMC). The findings challenge the prevailing assumption that DMBRL algorithms, such as Model-Based Policy Optimization (MBPO), generalize well across similar environments.
Essential References Not Discussed: The paper could benefit from discussing PlaNet (Hafner et al., 2019) and PETS (Chua et al., 2018), which are other model-based RL algorithms that have been evaluated in both Gym and DMC. Additionally, the paper could discuss more recent advancements in model-based RL. This could provide a broader perspective on the limitations of purely Dyna-style methods.
Other Strengths And Weaknesses: **Strengths:**
1. The paper presents a observation that Dyna-style algorithms, which perform well in OpenAI Gym, significantly underperform in DeepMind Control Suite (DMC). This finding challenges the assumption that these algorithms generalize well across similar benchmarks and raises important questions about the robustness of DMBRL methods.
2. The paper provides a thorough empirical analysis, including extensive experiments across multiple environments and benchmarks. The authors systematically investigate potential causes for the performance gap, such as model error, overestimation bias, and neural network plasticity, and provide detailed results to support their conclusions.
**Weaknesses:**
1. While the paper focuses on MBPO and ALM, it does not sufficiently explore other model-based RL algorithms that have shown success in DMC, such as DreamerV3. A more comprehensive comparison with these algorithms would provide a clearer picture of why certain model-based methods succeed where others fail.
2. The paper briefly mentions differences between OpenAI Gym and DMC in terms of reward structures, termination conditions, and physical parameters, but it does not delve deeply into how these differences might contribute to the performance gap. A more detailed analysis of these environmental factors could provide additional insights into the limitations of Dyna-style algorithms.
3. The paper mentions hyperparameter sweeps but does not provide a detailed analysis of how sensitive the results are to different hyperparameter settings. Given that hyperparameters can significantly impact the performance of RL algorithms, a more thorough exploration of this aspect would strengthen the paper's conclusions.
Other Comments Or Suggestions: 1. **Clarification on Model Error Metrics:** The paper uses "percent model error" as a metric, but it is not clearly defined how this metric is calculated. Providing a precise definition or formula for this metric would improve the clarity of the results.
2. **Reproducibility:** While the authors claim a JAX-based implementation, it would be helpful to provide the code details.
Questions For Authors: 1. The paper suggests that the performance gap is "fundamental," but have the authors considered the possibility that it could be mitigated through more sophisticated model architectures or training techniques, such as meta-learning or transfer learning?
2. Given that DreamerV3 performs well, what specific design choices or architectural differences might explain its success compared to MBPO and ALM?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **On comparisons with other model-based RL algorithms:**
We appreciate the reviewer’s suggestion regarding DreamerV3, PlaNet, and PETS. Our study focuses on Dyna-style algorithms MBPO and ALM, which use synthetic rollouts in proprioceptive, state-based settings. Pixel-based methods such as DreamerV1-V3 and PlaNet abstract away low-level issues central to our study. From the perspective of pixels, two hopper environments that move and look similar are effectively indistinguishable, whether simulated in Gym or DMC. In contrast, state-based methods must engage with environment-specific dynamics and state representations, which our results show can vary significantly despite a shared MuJoCo backend. For this reason, pixel-based approaches fall outside the scope of our analysis.
DreamerV3, although impressive on proprioceptive tasks in addition to pixel-based, significantly differs from simpler Dyna-style methods in complexity and design philosophy. Understanding its success relative to MBPO and ALM, as noted in Section 2.1 and our FAQ, is valuable but warrants a dedicated analysis. Additionally, DreamerV3 has not been evaluated on OpenAI Gym tasks, complicating comparisons due to benchmarking inconsistency (as discussed in our paper).
Similarly, PETS and TD-MPC diverge from the Dyna paradigm by employing learned models for online trajectory optimization (e.g., model predictive control), rather than using synthetic rollouts for policy/value training. Our work targets hallucinated experience in Dyna-style pipelines.
Ultimately, while broader comparisons are important, evaluating methods that differ in observation modality or depart from the Dyna structure falls outside our intended scope. Our goal is to identify previously overlooked failure modes in a prominent subclass of model-based RL - Dyna-style methods - under realistic, proprioceptive conditions.
**On the possibility that more sophisticated models could close the performance gap:**
We thank the reviewer for the insightful question regarding whether more sophisticated model architectures or training techniques (e.g., meta-learning) could close the performance gap. While such advances may indeed improve results, they would only reinforce our central claim: standard Dyna-style methods like MBPO and ALM exhibit fundamental limitations when used “out of the box,” even with extensive tuning. These methods were originally presented as simple, sample-efficient augmentations to off-policy RL, relying on short synthetic rollouts and standard neural network dynamics models. Yet across 15 diverse DMC environments, they consistently underperform their base off-policy RL learners - and sometimes even random policies - revealing deeper algorithmic issues.
**On environmental differences between DMC and Gym:**
We appreciate the reviewer’s point that differences in reward structures, termination conditions, and dynamics between OpenAI Gym and DMC may influence performance. While we address these factors (Section 2.2, Appendix C), our analysis in the paper is deliberately algorithm-focused. We show that the base algorithm, SAC, performs well in both testbeds, while its Dyna-style counterpart (e.g., MBPO) fails in DMC. This contrast isolates synthetic rollouts - not environment differences - as the primary source of degradation, challenging the assumption that Dyna-style methods reliably enhance strong off-policy learners across domains. We agree that further work should explore how environment-side factors interact with Dyna-style methods. These are important questions, but beyond the current paper’s scope.
**On hyperparameter sensitivity:**
As described at the end of Section 4.1, we performed extensive hyperparameter sweeps on key environments (e.g., hopper-stand, humanoid-stand), varying model size, retraining frequency, learning rates, and gradient steps. While these reduced model error in some cases, they did not produce consistent or meaningful gains in policy performance beyond a random policy in many cases. Sweeps over the off-policy algorithm's hyperparameters yielded similarly negligible effects.
This insensitivity supports our central claim: the failure of Dyna-style methods arises from fundamental challenges in leveraging synthetic rollouts, not suboptimal hyperparameter choices. We therefore did not emphasize sensitivity analyses, as tuning alone does not address the core limitations of these approaches. We are happy to include additional details in the camera-ready manuscript.
**Clarification on the “percent model error” metric:**
We will revise the text to include an explicit formula and explanation of our percent model error metric.
Specifically, let $\hat{y}$ denote the model's prediction (next observation and reward) and let $y$ be the corresponding ground-truth target. We define:
$$
\text{Percent Model Error} = \frac{ \left\| \hat{y} - y \right\|_2 }{ \left\| y \right\|_2 } \times 100
$$
**Reproducibility:**
See response to Reviewer gMX1.
---
Rebuttal Comment 1.1:
Comment: Thank authors for a detailed response to my review.
The rebuttal argues that sophisticated techniques (e.g., meta-learning) would merely "reinforce" their claim of Dyna’s limitations. However, this reasoning conflates algorithmic flaws with practical deployability:
- If advanced methods (e.g., uncertainty-aware rollouts or hierarchical models) could mitigate the issues, the problem lies in engineering, not theory.
- The paper’s sweeping conclusion—"no free lunch"—loses weight without testing whether the "lunch" could be improved with better ingredients (e.g., robust models).
The rebuttal mentions "extensive" sweeps. Could the authors release full sweep data (e.g., in supplementary materials) to demonstrate its claim?
---
Reply to Comment 1.1.1:
Comment: We respect the philosophical distinction the reviewer highlights, and agree that separating theoretical soundness from practical implementation is important. However, in deep RL, theoretical assumptions are often quickly challenged once deep neural networks and complex continuous-control tasks are introduced. In this context, the performance of an algorithm becomes closely tied to its engineering choices.
To reiterate our position, our claim is not that Dyna-style methods are theoretically unsound and we explicitly state this in Appendix A, FAQ 1, Section 2.1, and in our original rebuttal. Rather, our work highlights that widely used practical instantiations of Dyna-style algorithms, specifically those using short synthetic rollouts with simple MLP dynamics models, are brittle across realistic benchmarks. Importantly, methods like MBPO and ALM were not presented as algorithms tailored specifically to OpenAI Gym. They were positioned as general-purpose solutions for continuous control, and are widely cited and adopted as such. That they fail to generalize beyond Gym, even after extensive hyperparameter sweeps (which we will include in the camera-ready appendix), suggests that the issue is not one of tuning, but of fundamental limitations in their standard design. While more sophisticated techniques (e.g., uncertainty-aware rollouts, hierarchical models, meta-learning) might improve performance, the need for such non-trivial modifications constitute entire new publications and only reinforces our core point: the promise of “plug-and-play” model-based RL - as conveyed by methods like MBPO and ALM - is undermined if success depends on significant architectural or procedural overhauls. We will revise the language in the introduction and conclusion to clarify that our critique is aimed at the promise of standard Dyna-style algorithms (i.e,. MBPO and ALM), and not at the theoretical soundness of the broader Dyna framework.
To further clarify, we included DreamerV3 as an example of a Dyna-style method that achieves success, but only after years of sustained engineering effort. Its impressive performance, in our view, reinforces rather than contradicts our claim.
Thus, our “no free lunch” conclusion is not a rejection of model-based RL, but a caution: several popular Dyna-style methods, in their standard forms, lack the robustness and generality often assumed based on their published results. If improving the “lunch” requires entirely new ingredients, then it is no longer the same recipe. | Summary: The paper investigates why popular Dyna‐style model‐based reinforcement learning (RL) methods, such as MBPO and ALM, perform well on OpenAI Gym tasks but struggle on the DeepMind Control Suite (DMC), despite both benchmarks having similar physics and task structures. The authors document a consistent performance gap: when synthetic rollouts—core to the Dyna approach—are used, these methods can even underperform a random policy in DMC, while in Gym they improve sample efficiency over their model‐free counterparts like SAC. The key contributions include:
- Demonstrating that Dyna-style enhancements, which generate synthetic transitions to supplement real experience, can impede policy improvement in DMC environments even when the underlying off-policy algorithm works well on its own.
- Conducting extensive experiments to analyze potential causes such as high model error, critic divergence (stemming from over- or underestimation due to unrealistic synthetic data), and issues with network plasticity. The study shows that even using a perfect predictive model or applying techniques to stabilize critic updates does not fully close the performance gap.
- Introducing a new JAX-based implementation that accelerates experimentation by up to 40× compared to previous PyTorch-based approaches, thereby reducing the computational barrier for thorough evaluation of model-based RL algorithms.
Claims And Evidence: The assertion that Dyna-style enhancements consistently hinder performance in DMC is well illustrated by experiments with MBPO and ALM, yet the evidence is drawn from a specific set of environments and methods. While the paper argues these issues are inherent to the Dyna family, the existence of approaches like DreamerV3 (which succeed in DMC) suggests that further investigation is needed to determine if the observed limitations are truly universal across all Dyna-style methods.
Methods And Evaluation Criteria: The paper’s methodology is well-aligned with its goal of probing the limits of Dyna-style RL. The authors use two widely recognized benchmarks—OpenAI Gym and DeepMind Control Suite—to highlight that improvements seen with synthetic rollouts in one setting do not necessarily carry over to another with similar physics but subtle differences in task dynamics. The paper further dissects key factors (e.g., model error, critic divergence, and network plasticity) through controlled experiments and ablation studies.
Theoretical Claims: The paper invokes the “no free lunch” concept to argue that no single RL algorithm can be optimal across all environments, but it does not provide a formal proof of this claim—it leans on established theoretical principles and empirical observations.
Experimental Designs Or Analyses: The experimental design and analyses are sound, with thoughtful controls and comparisons.
The paper rigorously compares Dyna-style methods (MBPO and ALM) against their base off-policy counterparts (e.g., SAC) across two distinct benchmark suites—OpenAI Gym and the DeepMind Control Suite (DMC). This dual-benchmark evaluation is a strong point because it highlights the performance discrepancies in different settings. The use of controlled ablation studies (e.g., varying the synthetic-to-real data ratio, using a perfect predictive model, and applying periodic resets) provides granular insights into potential sources of failure such as critic divergence and network plasticity. This detailed analysis helps isolate the factors contributing to the performance gap.
Supplementary Material: Yes, I reviewed the frequently asked questions. Code is missing for this paper and authors mention that it will be released soon.
Relation To Broader Scientific Literature: The paper revisits and challenges the celebrated gains of MBPO (Janner et al., 2019) and similar Dyna‐style methods, which had previously been shown to dramatically improve sample efficiency in OpenAI Gym. By exposing a performance gap when these techniques are applied to the DeepMind Control Suite (DMC), the paper questions the generality of prior successes.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The work creatively challenges established successes of Dyna-style methods by systematically comparing performance across benchmarks that, on the surface, appear similar. This cross-benchmark analysis exposes hidden limitations, an angle that is relatively novel in the literature. By demonstrating that enhancements such as synthetic rollouts can actually impede performance in certain environments, the study raises important questions about the generality and robustness of widely adopted model-based RL methods.
Other Comments Or Suggestions: It would be great if code is available for this work.
Questions For Authors: I have no questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and positive review. We appreciate your recognition of our contributions and the strengths of our experimental design and analysis.
Regarding your mention of DreamerV3, we agree that its success in DMC highlights the diversity of model-based RL approaches. For further detail, we refer the reviewer to the section titled "On comparisons with other model-based RL algorithms" in our response to Reviewer TyLB, clarifying why DreamerV3 falls outside the scope of our evaluation of Dyna-style methods.
Regarding the code, we consider the code an integral part of our contributions and are actively refining it. If the paper is accepted, we will certainly release the code alongside the camera-ready version. | null | null | null | null | null | null |
Neural Collapse Beyond the Unconstrained Features Model: Landscape, Dynamics, and Generalization in the Mean-Field Regime | Accept (spotlight poster) | Summary: This paper provides a theoretically rigorous proof on the neural collapse (NC) phenomenon with a three-layer neural network in the mean-field regime under mean-square-error (MSE) loss. It shows that under gradient flow (GF), the within-class variability (namely NC1) vanishes as the time $t$ tends to infinity. Also, it establishes an upper bound on the vanishing test error, ensuring both NC and good generalization performance can be achieved simultaneously.
Claims And Evidence: The theoretical results are proven in the main text as well as in the appendix. Experiment results also show a collapse in NC1.
Methods And Evaluation Criteria: Yes, the experiment is done on a 3-layer MLP, matching the setting in the theory. Standard datasets MNIST and CIFAR-10 are used.
Theoretical Claims: I only checked the proof sketch in the main text. They seem reasonable and straightforward. Due to time limit and my lack of expertise in mean-field dynamic theory, I am not able to check all proofs from line to line in the appendix.
However, if other reviewers could point out a potential mistake in the proof, I would check it closer and participate in the discussion.
Experimental Designs Or Analyses: The experiment results match and validate the theoretical claim.
Supplementary Material: No supplementary material is provided so I cannot check the validity of the code and the reproducibility of the experiment results.
Relation To Broader Scientific Literature: I think the contribution of this paper is limited, when compared to broader scientific literature.
It is known that NC is very significant when the network is trained under MSE loss. In particular, it is quite intuitive that the features concentrate at their class mean when the train loss is small under MSE loss. (It would be a different story when other loss, say cross entropy loss, is considered.) Hence, the qualitative statement in the main result (corollary 4.9) is not very surprising. The statement could have been more significant when it could also quantitative, like measuring the (non-)asymptotic rate of convergence etc.
Another result (Theorem 5.3) on the vanishing test error is nicer in this sense, however, it requires the data to be almost linearly separable.
More importantly, [1] already extended the MSE NC result from unconstraint feature model (UFM) to a deeper one with several linear last layers, which is of the same goal of this paper. This paper has also discussed deeply on the comparison between [1] and their work, which I really appreciate, but I feel like the main improvement of this paper over [1] is just removing the weight balancedness condition in [1], which is only a technical requirement. Also, [1] considered networks with any depth while this paper only considered a 3-layer network.
That is why I think this paper has limited contributions related to the broader scientific literature.
---
[1] Jacot, Arthur, et al. "Wide neural networks trained with weight decay provably exhibit neural collapse." arXiv preprint arXiv:2410.04887 (2024).
Essential References Not Discussed: I think most related works have been discussed thoroughly in the paper.
Other Strengths And Weaknesses: This paper does not require the weight balancedness assumptions used in [1].
The theorems and lemmata are presented and elaborated in a clear manner, which I really appreciate.
---
[1] Jacot, Arthur, et al. "Wide neural networks trained with weight decay provably exhibit neural collapse." arXiv preprint arXiv:2410.04887 (2024).
Other Comments Or Suggestions: I have no other comments.
Questions For Authors: I would like to ask:
1. Could you explain more about the difference between the result from this paper and [1]?
2. Related to [1] where they emphasized weight decay (WD) in the title, how could one interpret the effect of WD on the main result, say Corollary 4.9?
3. Are there any technical difficulties to extend the setting to deeper and more complex networks?
4. What is the hardest part when one tries to extend the argument in the mean-field setting to other losses, say CE?
---
[1] [1] Jacot, Arthur, et al. "Wide neural networks trained with weight decay provably exhibit neural collapse." arXiv preprint arXiv:2410.04887 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the valuable and detailed comments, we address concerns below.
**1. Supplementary materials.**
We will add this in the revision.
**2. NC not surprising under MSE loss.**
We politely disagree with the claim that “it is quite intuitive that the features concentrate at their class mean when the train loss is small under MSE loss.” It is true that small MSE loss implies that the output of the neural network is roughly the same for all data points in the same class. However, this is only a **necessary but not sufficient condition** for NC1 to occur. The technical subtlety is that, since the feature dimension is larger than the number of classes, there might be components of the features in the space orthogonal to the last linear layer. **Without the balancedness condition**, it is not obvious to show that these orthogonal components vanish at stationary points. Intuitively, these components make the weights more isotropic, thus they could potentially decrease the entropic penalty, but increase the L2 penalty, meaning that the tradeoff needs to be carefully considered. Thus, our results in Section 4.2 require a suitable characterization of the learned feature and careful manipulations to show that the orthogonal components are indeed vanishing with training loss and gradient norm.
**3. Extension to CE loss.**
Please see point 1 of our response to Reviewer g3N9.
**4. Measuring rate of convergence.**
We provide a convergence rate for the loss in Equation (24). This is already non-trivial given the non-convex nature of the problem and to our best knowledge, no explicit rate of convergence is known in the mean-field regime beyond the 2-layer case (which has a convex free energy). Characterizing the convergence rate of the gradient norm is harder, as it requires controlling the curvature along the whole trajectory.
**5. Linear separability assumption in Theorem 5.3.**
In general, it is not true that NC1 and vanishing test error co-occur under MSE loss without any assumptions on the data distribution. In fact, the occurrence of NC1 implies that the model overfits the data, and overfitting is not always benign without additional assumptions: classic benign overfitting results for simple models such as linear regression require a decay in the data spectrum and proper alignment between data spectrum and signal. In this sense, showing the co-occurrence of NC1 and vanishing test error may be regarded as a harder problem than benign overfitting, and the latter phenomenon has not been investigated in the mean-field regime to our best knowledge.
**6. Comparison of our work to [1].**
First, [1] operates in the NTK regime where the behavior of the weights is fundamentally different from the mean-field regime that we consider. More precisely, the proof of Theorem 4.4 in [1] distinguishes two phases in training: in the first phase, the loss becomes small and no feature learning takes place; in the second phase, the weight matrices in the linear part of the network become balanced and NC1 occurs. In contrast, our work operates in the mean-field regime where feature learning takes place from the beginning and, therefore, there is no need for a second phase of the dynamics: NC1 occurs as soon as the loss and its gradient are small. Our experiments on ResNet-18 and VGG-11 also indicate this relation (see point 1 of our response to Reviewer AaqF for more details).
Second, the balancedness condition in [1] is not just a technical requirement, but the key reason for the occurrence of NC1. In fact, the balancedness condition rules out the possibility of having any components of the features in the orthogonal space to the last linear layer. However, our results both theoretical (see Lemma 4.5) and experimental (see the end of Section 4.2) conclusively demonstrate that NC1 holds even if the network is not balanced.
**7. Impact of weight decay.**
Firstly, non-zero weight decay is necessary to ensure non-vanishing $W$ and features at all stationary points. Secondly, the weight decay affects the order of the empirical loss we converge to: when $\lambda_\rho = \beta^{-1}$, the empirical loss at convergence is of order $\beta^{-1}$. This is reflected in Corollary 4.9, as better NC1 guarantees (smaller $\delta_0$) require smaller weight decay $\beta^{-1} \leq poly(\delta_0).$
**8. Extension to deeper/more complex networks.**
We expect that our results can be extended to a deep network with many linear layers, see point 4 of our response to Reviewer AaqF. In contrast, analyzing deep networks with many nonlinear layers would require fundamentally different techniques and we regard it as an exciting future direction. Towards this goal, two possible strategies are (i) to characterize features at an $\epsilon_S$-stationary point, or (ii) to directly consider the evolution of features during training.
[1] Jacot, Sukenik, Wang, Mondelli, Wide neural networks trained with weight decay provably exhibit neural collapse.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answer. Now I have a better understanding of the contribution of the paper.
I would like to keep my original scoring. And I wish the authors the best for the remaining review period. | Summary: The authors study the how a certain aspect of neural collapse (NC) - namely, within-class variability tending to zero, can be provably associated with convergence to nearly stationary points of the loss function in noisy gradient flow dynamics for a 3 layer neural network, where the last two layers have linear activations.
Claims And Evidence: The claims made in the submission seem to be well supported by formal proofs.
Methods And Evaluation Criteria: Since the paper is mainly theoretical in nature, the claims relate to bounds and convergence properties. The evaluation criteria are therefore mainly the voracity of the proofs. The empirical evidence are only meant to show that even imbalanced networks tend towards small loss/gradient norm and NC1 together, and for that they roughly make sense.
Theoretical Claims: I checked the main theorems given in the paper, but did not re-derive them.
Experimental Designs Or Analyses: I've checked the soundness of the experiments given, and found them to be lacking, in the sense that the experiments do not verify the claims made in the paper.
While Fig.1 shows a qualitative behavior that aligns with the statement that NC1, gradient norm, and the empirical loss all converge to small values together, they do not constitute a sufficient support for these claims in practice. Two things can be done when providing empirical results for a theory paper:
1) All statements are exact under the assumptions made, and no further verification is needed, aside from perhaps a numerical simulation that shows, for instance, that in a completely controlled setting, SGD on the 3 layer network with linearly separable data (say very clustered gaussian mixture) converges to NC1 with predicted dependence on the training/test loss that is given by gradient flow on the free energy.
2) An attempt to make a broad statement connected to real networks beyond the setting studied, not knowing all the constant factors, but showing that the relation between NC1/train loss/gradient norm is maintained by looking at quantities that scale with (for example the $\beta$ parameter) by performing a scan over many values.
This paper chose neither, and only shows a qualitative result, that I feel does not really contribute much, given the theoretical results.
Supplementary Material: Yes, the parts necessary to understand theorems and certain bounds given in the main text.
Relation To Broader Scientific Literature: The results of these authors relate to the broad phenomenon of neural collapse which is observed in trained classifiers across different networks and datasets, and attempts to connect a specific NC property in a tractable setup with the loss landscape both at training and test, thus relating NC to generalization performance, which is of broad interest.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: **Strengths**:
- The paper is well written and fairly easy to follow, apart from minor clarifications that might be useful to see.
- The theoretical results seem to be correct.
The main contributions are extending previous results. In particular, under mild assumptions of small gradient norm and small loss, the authors show that a 3-layer network with the last two layers being linear converge under gradient flow to NC1.
This relaxes assumptions from previous works, specifically:
1. Remove the balancedness assumption on all the linear layers and show NC1 occurs even for unbalanced layers
2. Mean field in the regime of only the first layer width tending to infinity, while the width of the second layer remains of constant order.
They further show that for linearly separable (and bounded) data the test error decreases in correspondence with NC1 convergence.
These results indeed shed further light on the phenomenon of neural collapse, and attempts to connect it further with generalization performance.
**Weaknesses:**
1. As opposed to previous works (Jacot et al, Sukenık et al.) there is no discussion regarding NC2 and NC3.
2. It seems that the major difference between this and previous works is the extension of the two linear layer setting to the SGD case, where noise induces an effective entropy regularization term in the dynamics.
3. The connection made between the theoretical results and real-world networks is not studied seriously, but seems a bit of an afterthought.
The combination of these weaknesses makes the contribution, in my opinion, overly incremental.
Other Comments Or Suggestions: I did not find any glaring typos.
Questions For Authors: 116 - What does Law(a,u) mean? it is never specified clearly.
112 - I think the phrasing implies you introduce an entropy regularization term but it comes simply from performing SGD on the predefined loss, so the dynamics are done on a free energy which includes the entropy penalty term, right?
Does condition A3 mean the results don’t hold for ReLU? how would they change?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed review. We address concerns below.
**1. Lack of soundness in experimental design.**
We followed the suggestions and did additional experiments, please download pdf in https://github.com/conferenceanonymous152/icml25
(a) For three-layer networks, Figure 3 shows the linear relation between log of NC1 and log of gradient norm/training loss throughout training. Different points correspond to different epochs of training (purple ones in the top-right correspond to early stages, and yellow ones in the bottom-left correspond to late stages), and we average the results of 4 independent experiments. The plots indicate a polynomial relation between the quantities as predicted by Equation (17).
(b) For practical models, we show that ResNet-18 and VGG-11 trained on CIFAR-10 exhibit a similar behavior as three-layer networks. Figure 4 is similar to Figure 1 in the paper. The scatterplots in Figures 5, 6 plot the $\log-\log$ relation between NC1 and gradient norm/training loss, and they indicate a polynomial relation in the terminal phase of training (while the early phase of training may be a little different).
**2. NC2-NC3.**
To prove NC2-NC3, [1] uses that the number of linear layers is large. If we stack many linear layers on top of the feature-learning layer and run gradient flow, we expect that NC2-NC3, as well as NC1, can be shown. As for NC1, our approach could be adapted as follows. Let the network be $f(x; \rho, W_2, \dots, W_L) = W_L \dots W_2 h_\rho(x),$ and define $ W^\top =W_L \dots W_2$. Then, the characterization in Theorem 4.2 still holds, since the condition of Definition 4.1 only involves the gradient computed over $\rho$. The lower bound in Lemma 4.3 holds and the upper bound becomes $\sigma_{max}(W) \leq O( (\log \beta)^L ) = o(\beta)$, which implies Corollary 4.4. Theorem 4.8 can be proved by letting $t_*$ be the minimal first hitting time for $W_2, \dots, W_L$ and $\rho$ and following a similar strategy. Finally, since approximate NC1 holds for $H_\rho(x),$ it also holds for $W^\top H_\rho(x)$, for any $W$ satisfying Lemma 4.3.
As for NC2-NC3, we note that $W_2, \dots, W_L$ are balanced for all stationary points $(\rho, W_2, \dots, W_L)$. Thus, if the condition number of $W$ is bounded by a constant independent of $L$ (as in Theorem 3.1 of [1]), one can show NC2-NC3 following the approach in [1]. We note that **$W_1 := \mathbb{E}_\rho[a a^\top]$ is not balanced with $W_2, \dots, W_L$** due to Lemma 4.5 of our work. Thus, to show NC1 for a network with many linear layers, the results of [1] cannot be applied.
**3. Major differences compared to previous work, additional entropic regularization term and question about line 112.**
The entropic regularization term is not implicitly induced by SGD, but it is explicitly added in the form of a Brownian noise. In fact, we consider noisy gradient flow for training, i.e., mean-field Langevin dynamics. This is standard in the mean-field regime [2, 3], and noisy gradient flow is the limiting dynamics of noisy SGD with step size $\eta$ and any batch size (including full-batch) on networks of width-$N$ as $\eta \to 0$ and $N \to \infty$ [3].
What distinguishes our work is that we show NC1 for a network that (i) is trained end-to-end and (ii) exhibits feature learning throughout training. In contrast, previous work focuses either on UFM (which does not capture the effect of data) or on a dynamics reaching near-interpolation in the NTK regime [1] (which does not exhibit feature learning [4]). Furthermore, the occurrence of NC1 in [1] relies on the balancedness of all linear layers, which may not hold, see Lemma 4.5. Instead, our result only needs small training loss and gradient norm, which is commonly satisfied as long as the training dynamics converges. Finally, we show for the first time the co-occurrence of NC1 and vanishing test error for a class of data distributions. For additional comparison of our work to [1], please see point 6 of the response to Reviewer sgNN.
**4. Connection between theoretical results and real-world networks.**
See point 1.
**5. Line 116.**
$Law(a,u)$ means the joint measure of random variables $a,u.$
**6. Assumptions (A3).**
ReLU does not satisfy (A3), but we still expect our results to hold by taking the limit of a sequence of approximations to ReLU. We note that (A3) is purely technical (it ensures the well-posedness of the Wasserstein gradient flow) and standard in works [2, 3] considering the mean-field regime.
[1] Jacot, Sukenik, Wang, Mondelli, Wide neural networks trained with weight decay provably exhibit neural collapse.
[2] Suzuki, Wu, Nitanda, Mean-field langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction.
[3] Mei, Montanari, Nguyen, A mean field view of the landscape of two-layer neural networks.
[4] Yang, Hu, Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks. | Summary: The paper theoretically studies the phenomenon of neural collapse in classification, focusing on its most basic property: the vanishing of within-class variability. Unlike data-agnostic prior work analysing the unconstrained features model (UFO), this work adopts a data-specific perspective by considering a three-layer neural network with a feature-learning component in the mean-field regime. The authors establish a connection between NC1 and the loss landscape, proving that points with small empirical loss and gradient norm are roughly NC1 solutions. They also demonstrate that gradient flow dynamics on the three-layer model converges to NC1 solutions. Finally, they show that NC1 and vanishing test error can coexist for well-separated data distribution, connecting neural collapse to generalisation.
**Update after rebuttal**
I thank the authors for the detailed response. I consider the paper a solid and relevant contribution, so I recommend its acceptance.
Claims And Evidence: Although I have not thoroughly checked the proofs, the claims seem reasonable and supported by rigorous mathematical analysis. The authors make use of common assumptions in the literature, present proof sketches and a fair discussion of related work.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the theoretical analysis conducted in this work.
Theoretical Claims: I did not check the proofs in detail.
Experimental Designs Or Analyses: The paper presents comprehensive numerical experiments using MNIST and CIFAR-100 to validate the theoretical results. The experiments are well-designed, and the results are consistent with the theoretical claims.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper seems to be well-positioned within the broader scientific literature. The authors provided a comprehensive review of the related work and are very clear about the difference between their work and previous work. They also highlight the fact that their work can offer new insights into gradient-based optimization in deep learning.
Essential References Not Discussed: I am unaware of any essential references that were not discussed in the paper.
Other Strengths And Weaknesses: Strengths:
- The paper bridges neural collapse to loss landscape, gradient flow dynamics and generalisation. It presents a rigorous and fairly general analysis.
- The idea of taking a two-layer neural network in the mean field regime and stacking a linear layer is quite interesting.
- The paper is well-written, and the results are well-stated. It is also well-motivated (going beyond UFO) and clearly tries to review, connect and extend previous literature.
Weaknesses:
- It is probably not feasible analytically, but a small remark on the difficulties of extending the results to cross-entropy loss would be interesting. I believe the mean-field regime has been explored only for the squared loss, which would be a complication for such an analysis (see questions for authors).
Other Comments Or Suggestions: I believe there is a typo in the paper's title: "Unconstrainted" should be "Unconstrained".
Questions For Authors: 1) If a similar analysis for cross-entropy loss were possible, would the authors expect drastic changes in the qualitative picture?
2) Could you clarify the technical reason for considering the approximated model truncating the second layer in Section 5? Would the following conclusions be affected if the full model was considered?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable review and positive feedback. We address questions and concerns below.
**1. Difficulty in extending results to CE loss, and expected qualitative results.**
Thank you for the question. It is indeed interesting to extend our results to CE loss and we summarize two major difficulties below:
* The intuitive difference between MSE loss and CE loss is that small MSE loss implies that the output of the neural network is roughly the same for all data points in the same class, as the label of data points in the same class is the same. In our model, we stack a linear layer after the feature learning part, which implies that data points in the same class having approximately the same output is a **necessary but not sufficient condition** for NC1 to happen. To see this, suppose $h(x_1), h(x_2)$ are the features of two data points in the same class. Then, $||h(x_1)-h(x_2)||\_2^2 \geq \frac{1}{\lambda_\max(W^\top W)}||W^\top h(x_1)-W^\top h(x_2)||_2^2.$ Now, in order to extend our results to CE loss, one would have to rule out the possibility that different data points in the same class could have different logits, and it is unclear how to do so in our setting even when the loss is small. However, we believe that this is just a technical issue coming from the fact that we consider a linear classifier in the last layer.
* Another technical issue is that our results require the characterization of features in Equation (6). This only holds for MSE loss, and a different characterization would be needed for CE loss.
That being said, we do expect a similar characterization to hold for CE loss. As a first step towards handling the CE loss, we note that there are indeed papers in the mean-field regime training with CE loss [1,2] or training with general losses that include CE [3]. However, all those papers consider two-layer networks, [1] requires no regularization terms, and [2] focuses on the specific problem of learning parities.
**2. The technical reason for considering the approximated model truncating the second layer in Section 5, and whether the following conclusions would be affected if the full model was considered.**
Having an approximated second layer implies that there exists a unique global optimum for the Gibbs minimizer as in Equation (27) and such a global optimum is achievable by mean-field langevin dynamics [3]. Specifically, one needs that $\nabla_\theta \frac{\delta}{\delta \rho} \mathcal{L}_n(\rho)(\theta) $ is Lipschitz in $\theta, \rho$ (cf. Proposition 2.5 in [4]), and without truncation this cannot be guaranteed. However, we note that our results in Section 5 hold uniformly for large enough $R$, which means that we also expect the same conclusion for the original model which corresponds to $R = +\infty.$ Rigorously speaking, one cannot interchange the order of the limits (in the time $t$ and the truncation parameter $R$), but we believe this to be a purely technical issue.
**3. Typo in the title.**
Thanks for spotting this, we will correct it in the revision.
[1] Chizat, Bach, Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss.
[2] Suzuki, Wu, Oko, Nitanda, Feature learning via mean-field langevin dynamics: classifying sparse parities and beyond.
[3] Suzuki, Wu, Nitanda, A. Mean-field langevin dynamics: Time-space discretization, stochastic gradient,
and variance reduction.
[4] Hu, Ren, Siska, Szpruch, Mean-field langevin dynamics and energy landscape of neural networks.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I consider the paper a solid and relevant contribution, so I recommend its acceptance. | Summary: This paper studies the emergence of variability collapse in the penultimate layer representations of a three layer mean-field NN. The authors show that points for which the gradient norm is small (approximate stationary points) also show variability collapse, and that the level of variability collapse is controlled by the gradient norm. Next, they show that gradient flow on the parameters of a three layer mean-field NN wrt MSE loss converges to approximate stationary points. Finally they consider the case of linearly separable data and show that variability collapse is also related to small test error in this case.
Claims And Evidence: All of the above claims come with theoretical proofs and are convincing. The authors also consider differences between their entropy regularized free energy landscape and the landscape of L2 regularized networks, leading to solutions with imbalanced layers.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I followed the proof sketches in the main paper but did not work through the appendix.
Experimental Designs Or Analyses: Yes, the experiments measuring balancedness of the solution seem to be sound.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper advances the study of neural collapse in two ways - first by moving beyond the unconstrained features model they allow for considering the role of the data/kernel. Next, they consider approximate stationary points and show that variability collapse still holds. This is in contrast with prior works that typically consider stationary points where the gradient is exactly zero.
Essential References Not Discussed: The discussion is comprehensive
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you very much for finding that our paper advances the study of neural collapse and recommending a strong acceptance! We are happy to have a further discussion in case additional questions or comments come up. | null | null | null | null | null | null |
Harnessing Heterogeneous Statistical Strength for Personalized Federated Learning via Hierarchical Bayesian Inference | Accept (poster) | Summary: This paper proposes a hierarchical Bayesian inference framework for personalized federated learning (PFL) to address issues arising from statistical heterogeneity across client data. The main contribution involves specifying a conjugate hyper-prior over personalized posterior parameters, enabling joint inference of personalized and global posterior distributions. Experimental results across synthetic and real datasets (FMNIST, CIFAR10, CIFAR100). The authors further demonstrate analyctically that existing Bayesian PFL methods (e.g., *pFedBayes* and *pFedVEM*).
Claims And Evidence: The claims of achieving superior performance and addressing statistical heterogeneity are well-supported by empirical evaluations on synthetic and standard benchmark datasets (FMNIST, CIFAR10, CIFAR100). However, it is not entirely clear why the addition of global hyperpriors significantly outperforms simpler aggregation methods such as that from pFedBayes. Since the aggregation of existing methods correspond to particular priors under certain assumptions, it is unclear how an uninformative hyperprior would yield substantial performance improvements.
Methods And Evaluation Criteria: The methods and the evaluation criteria are appropriate for assessing performance in personalized federated learning scenarios. The use of benchmark datasets (FMNIST, CIFAR10, CIFAR100) and synthetic data for controlled heterogeneity simulations aligns well with the targeted problems.
Theoretical Claims: I checked the claim regarding the special cases (the derivation showing that pFedBayes and pFedVEM methods are a special case of the proposed hierarchical Bayesian formulation).
Experimental Designs Or Analyses: The experimental designs and analyses appear sound. The use of different client counts (50, 100, and 200) and varying data distributions to simulate realistic federated settings was well-justified. No significant issues were found in the experimental setups.
Supplementary Material: I did not review the supplementary material in great detail.
Relation To Broader Scientific Literature: The paper clearly situates itself within the broader scientific literature by discussing connections and distinctions to related Bayesian-based FL methods and personalized FL approaches. However, essentially this method appears primarily to introduce a prior over local parameters, a concept already capable with existing algorithms and methodlogies in the literature, though maybe not quite *explored* in directly the same manner (see references below). Additionally, while placing a hierarchical prior might be conceptually principled, it introduces significant computational overhead compared to simpler methods like pFedBayes, which typically avoid full stochastic VI both locally and globally within each communication round. This additional computational cost and its justification are not adequately discussed.
Essential References Not Discussed: I think these two papers can do the model (or model type) that this paper describes. And I think the authors should really try and readdress their novelty in terms of these two papers. At the very least, these papers should be referenced and discussed within:
- Kim, Minyoung, and Timothy Hospedales. "Fedhb: Hierarchical bayesian federated learning." arXiv preprint arXiv:2305.04979 (2023).
- Hassan, Conor, Robert Salomone, and Kerrie Mengersen. "Federated variational inference methods for structured latent variable models." arXiv preprint arXiv:2302.03314 (2023).
Other Strengths And Weaknesses: **Strengths:**
- Hierarchical Bayesian approaches are elegant and address statistical heterogeneity effectively.
- Comprehensive empirical studies clearly demonstrate the benefits and effectiveness of the method.
- The method is well-motivated, and the connections to existing methods are clearly explained.
**Weaknesses:**
- Novelty might be perceived as limited since many works propose models and algorithms for federated learning of hierarchical models nowadays.
- Computational complexity and scalability aspects are not explicitly addressed.
- The additional computational cost of optimizing global hyperpriors is not clearly justified, particularly compared to simpler arithmetic aggregation methods (e.g., pFedBayes).
- Without highly informative priors, it is unclear how substantial performance improvements are achieved.
Other Comments Or Suggestions: **Other comments or suggestions:** Clarifying the computational complexity of the algorithm, compared to existing methods, would strengthen the work.
Questions For Authors: - Could you elaborate on the computational complexity of the hierarchical Bayesian method compared to existing methods?
- Could you elaborate on the novelties of your model and algorithm formulation relative to the two references that I included?
- Are there specific scenarios or practical constraints where the proposed hierarchical Bayesian approach might underperform compared to simpler methods?
- Could you clarify how much performance improvement is directly attributable to the hierarchical hyperpriors versus the optimization method itself?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank Reviewer kx6d for taking the time to provide detailed and constructive review on our paper.
**"..it is not entirely clear why the addition of global hyperpriors significantly outperforms simpler aggregation methods "**: We show that simpler aggregation methods implicitly set the regularization coefficients $\lambda_1$ and $\lambda_2$ to $0$. As in Table 2 (Page 8), our ablation study shows that $\lambda_1=\lambda_2=0$ leads to inferior performance. Simpler aggregation averages out the local posterior parameters. This hinders their ability of capturing non-IID features since the arithmetic means blur the statistical heterogeneity of the local data. Defining a hyperprior over the personalized parameters lets us share their statistical strength through the hyperprior and retaining the unique properties of local models.
**"...placing a hierarchical prior...introduces significant computational overhead compared to simpler methods like pFedBayes...its justification are not adequately discussed."**: As described in Section 3.3, we split the overall ELBO into server part and client part to avoid the potential communication overhead caused by frequently synchronizing client parameters, since the client part of the ELBO can be optimized locally in multiple rounds after receiving updates from the server. Comparing with simpler methods, the only additional computation overhead, as discussed in Appendix A.4, is about tuning the regularization coefficients $\lambda_1$ and $\lambda_2$ via grid search. This can be justified by the statistically significant performance gain across benchmarks.
**"Novelty might be perceived as limited...in terms of these two papers"**: Although paper (Kim, et al. 2023) proposes to define Normal-inverse Wishard (NIW) distribution and mixture models as hyper-priors over personalized posterior parameters, in the implementation, they simplify the NIW to "spiky Gaussian" by fixing the covariance to a singular identity matrix, as in their Equation (16). This means they fail to aggregate the covariances of the personalized posteriors. This is also verified by the poorer performance the paper reported in their experiments. In comparison, we propose half-Gaussian, a special case of NIW, over the covariances and effectively aggregate the personalized posteriors. Besides our more comprehensive experiments, we also provide theoretical analysis shedding light on the relationships and limitations of the existing Bayesian pFL methods. Paper (Hassan, et al. 2023) introduces a FL model with hierarchical latent variables, instead of directly performing Bayesian inference on distribution parameters as in our paper. The work's empirical evaluation is quite limited, since the model is not tested on any benchmark datasets.
Although the two papers are arXiv drafts, we will discuss them in our camera-ready version and clarify the novelty of our work.
**"Computational complexity and scalability aspects are not explicitly addressed"**:
- Regarding the computational complexity, Figure 3 (Page 7) shows our method outperforms the baselines with fewer communication rounds. The average local training time (seconds) per communication round in Cifar10 with all 100 clients available is
|Methods|Time
|--|--
|non-Bayes (FedAvg)|21.42
|non-Bayes Base, Bayes Head (pFedVEM)|43.14
|full-Bayes (Ours, pFedBayes)|90.38
- For scalability, we validated and tested our framework with the largest number of clients (i.e., 200) in comparison with what the baseline methods reported on the image datasets. For example, pFedBayes uses maximum 20 clients, FedAvg 100 clients, and pFedVEM 200 clients. To demonstrate the scalability, we conduct an experiment with a total of 1000 clients on Cifar10, where each client has unique local data of 5 labels out of 10, as below:
|Methods| PM | GM
|--|--|--|
|FedAvg | -| 48.7±1.1
|pFedVEM| 59.0±0.4|48.6±0.9
|Ours| **61.0± 0.3**| **52.3±0.4**
Our method scales well with large number of clients.
**"Are there specific scenarios or practical constraints where the proposed hierarchical Bayesian approach might underperform compared to simpler methods?":** Our PMs' performance is comparable with the baselines on the synthetic data. This is because the synthetic data are relatively simple in terms of lower feature dimensionality and smaller dataset sizes. The baselines can thus perform well capturing the non-IID information. The similar case is on FMNIST dataset which is relatively simple and does not have many non-IID features. Therefore, the advantages of personalization algorithms are not obvious. This can be verified by the fact that all the baselines including FedAvg achieves similar performance.
**"Could you clarify how much performance improvement is directly attributable to the hierarchical hyperpriors versus the optimization method itself?"**: Our ablation study in Table 2 (Page 8) suggests that properly setting the parameters of the hyperprior is important to the performance gain.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal.
> "...placing a hierarchical prior...introduces significant computational overhead compared to simpler methods like pFedBayes...its justification are not adequately discussed.": As described in Section 3.3, we split the overall ELBO into server part and client part to avoid the potential communication overhead caused by frequently synchronizing client parameters, since the client part of the ELBO can be optimized locally in multiple rounds after receiving updates from the server. Comparing with simpler methods, the only additional computation overhead, ....
having to grid search over a hyperparameter space represents a big problem for a practical FL algorithm in my opinion.
> Although paper (Kim, et al. 2023) proposes to define Normal-inverse Wishard (NIW) distribution and mixture models as hyper-priors over personalized posterior parameters, in the implementation, they simplify the NIW to "spiky Gaussian" by fixing the covariance to a singular identity matrix, as in their Equation (16). This means they fail to aggregate the covariances of the personalized posteriors ...
Sure, they choose a variational family that doesn't capture covariance, but this seems to be the only methodological difference. The comment on the comparison between your work and the work of Hassan et al. 2023 does not make sense. You both have the goal of posterior inference of latent variables (or "distribution parameters" - I don't know what this means). the difference is that you have different algorithms and corresponding objectives to fit such a model.
Again, thanks for your rebuttal. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for your further comments on our work. We would like to make a further clarification on the two points.
>"having to grid search over a hyperparameter space represents a big problem for a practical FL algorithm":
Grid search for our model is not costly, since the search space is only 2-dimensional with $\lambda_1$ and $\lambda_2$. Our comprehensive experiments shows that these regularization coefficients are critical to the superior performance for Bayesian personalized federated learning (pFL) in general. Turning them off hurts all the Bayesian pFL significantly. **Our work uncovered this major performance bottleneck of the previous methods. The small additional cost can thus be well justified.**
>"Sure, they choose a variational family that doesn't capture covariance, but this seems to be the only methodological difference. The comment on the comparison between your work and the work of Hassan et al. 2023 does not make sense":
- Please note that (Kim, et al. 2023)'s failing to aggregate the covariances of the personalized posteriors by fixing the covariance to a singular identity matrix actually leads to two fundamental difference in our methodologies. First, without modeling covariances, **their local models degenerate to non-Bayesian neural networks (i.e., regular feedforward neural networks with MAP point estimation)**, as shown in Equations (16) and (21). In contrast, our local models are fully Bayesian neural networks. Second, because they fix the hyperpriors, they failed to discover the relationship and limitations of the Bayesian pFL methods.
- The latent variables introduced by (Hassan et al. 2023), as described in Section 4.1 Hierarchical Bayesian Neural Network for Heterogeneous Data, model only the weights of the first hidden layer of the local neural networks as random variables. They only provided MAP point estimates for the rest of the weights in the neural network architectures.
We will clarify these difference in our camera-ready version. We would appreciate if Reviewer could reconsider your score on our work. | Summary: This paper proposes a novel hierarchical Bayesian inference framework for Personalized Federated Learning (PFL) by specifying a conjugate hyper-prior over the parameters of personalized posteriors. This approach enables the joint computation of a global posterior distribution for aggregation, alongside personalized posteriors at the local level. The proposed framework strikes an elegant balance between local personalization and global model robustness, offering a more efficient and scalable solution for PFL.
Claims And Evidence: This paper proposes a novel hierarchical Bayesian inference framework for Personalized Federated learning. I believe that the claims made in the submission are well supported by evidence, but need further refinement (see comments)
Comments
1.Because the authors want to improve local personalized learning while ensuring the ability to generalize the global model, the baseline portion of the experiment should be compared more with the latest approaches to personalized federated learning.
2.The experiments in this paper are primarily conducted on the FMNIST, CIFAR-10, and CIFAR-100 datasets. It is recommended to evaluate the model’s performance on datasets with a larger number of categories, such as Tiny-ImageNet, to assess its generalizability. Additionally, in Table 1, the global model performs poorly on FMNIST when the number of clients is 100 and 200. It would be necessary for the authors to provide an explanation for this phenomenon.
3.The paper frequently uses the abbreviation 'BNN' without specifying its full form. For clarity and consistency, it would be helpful to define 'BNN' explicitly when it is first introduced.
4. A minor typo issue in the caption of Figure 1. "Our PFL framework is on the left, not the right."
Methods And Evaluation Criteria: The method proposed in this paper preserves the client's personalized features and ensures the global generalization capability, but it needs further improvement.
Theoretical Claims: No separate theoretical analysis section in the text.
Experimental Designs Or Analyses: 1.Because the authors want to improve local personalized learning while ensuring the ability to generalize the global model, the baseline portion of the experiment should be compared more with the latest approaches to personalized federated learning.
2.The experiments in this paper are primarily conducted on the FMNIST, CIFAR-10, and CIFAR-100 datasets. It is recommended to evaluate the model’s performance on datasets with a larger number of categories, such as Tiny-ImageNet, to assess its generalizability. Additionally, in Table 1, the global model performs poorly on FMNIST when the number of clients is 100 and 200. It would be necessary for the authors to provide an explanation for this phenomenon.
Supplementary Material: I have carefully reviewed the entire supplemental material, which primarily consists of additions to the experimental section and the pseudo-code.
Relation To Broader Scientific Literature: This paper proposes a personalized federated learning method based on hierarchical Bayesian inference, which is of significance in helping the model adaptively adjust the trade-off between sharing and personalization among different clients by constructing a shared global prior and a personalized posterior.
Essential References Not Discussed: The paper comprehensively discusses the prior work related to its main contributions, appropriately cites and compares relevant approaches in the area of bayesian federated learning and personalized federated learning, and does not omit any important references that will have a significant impact on the understanding of the proposed approach.
Other Strengths And Weaknesses: Strengths:
1. By defining the conjugate super-prior on the distribution of personalized a posteriori parameters, the article achieves global model optimization while preserving the personalized features, and verifies the effectiveness of the method on three datasets.
2. The article demonstrates that the existing Bayesian PFL methods are special cases of its framework, and the optimization objectives of these methods can be obtained by introducing additional constraints, which provides a unified perspective for subsequent research.
Weaknesses:
1.In Equation (5), for the sake of computational simplification, does the method assume independence between global and local parameters? In practical scenarios, there is likely some degree of dependency between them.
2.Hierarchical Bayesian inference requires computing the global posterior distribution on the server and frequently synchronizing client parameters. When the number of clients K is large, this may introduce significant communication overhead.
Other Comments Or Suggestions: I noticed a minor typo issue in the caption of Figure 1. Our PFL framework is on the left, not the right.
Questions For Authors: I'll reorganize all my queries here
1.Because the authors want to improve local personalized learning while ensuring the ability to generalize the global model, the baseline portion of the experiment should be compared more with the latest approaches to personalized federated learning.
2.The experiments in this paper are primarily conducted on the FMNIST, CIFAR-10, and CIFAR-100 datasets. It is recommended to evaluate the model’s performance on datasets with a larger number of categories, such as Tiny-ImageNet, to assess its generalizability. Additionally, in Table 1, the global model performs poorly on FMNIST when the number of clients is 100 and 200. It would be necessary for the authors to provide an explanation for this phenomenon.
3.The paper frequently uses the abbreviation 'BNN' without specifying its full form. For clarity and consistency, it would be helpful to define 'BNN' explicitly when it is first introduced.
4.In Equation (5), for the sake of computational simplification, does the method assume independence between global and local parameters? In practical scenarios, there is likely some degree of dependency between them. It would be beneficial to clarify whether this assumption affects the model's expressiveness.
5.Hierarchical Bayesian inference requires computing the global posterior distribution on the server and frequently synchronizing client parameters. When the number of clients K is large, this may introduce significant communication overhead.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer 5j3i for taking the time to provide detailed and constructive review on our paper.
**"the baseline portion of the experiment should be compared more with the latest approaches to personalized federated learning"**: To the best of our knowledge, we included the latest competing Bayesian and non-Bayesian FL methods as the baselines. In addition, we show the performance comparison below with two pFL published in 2024. Please note that they are less comparable because both of them require additional datasets to pre-train the GMs.
|Method|Number of clients|Cifar10 PM|Cifar100 PM
|--|--|--|--|
|FedBNN [1] |20|**79.9 ± 0.3**|57.3 ± 0.8
|FedPPD [2] |10|61.86|53
|Ours|50|77.1 ± 0.1| **66.6 ± 0.4**
By aggregating information from even more clients (more clients, more challenging the FL task is) we still achieve comparable or better performance in Cifar100.
**"evaluate the model’s performance on datasets with a larger number of categories, such as Tiny-ImageNet,.."**: As suggested by the reviewer, we conduct experiments on the Tiny-ImageNet dataset with 50 clients, where each client has unique local data consisting of 50 labels out of 200, and report the results below:
|Methods| PM | GM
|--|--|--|
|FedAvg | -| 10.1 ± 0.7
|pFedVEM|33.45 ± 0.2|18.5 ± 0.4
|Ours| **42.4 ± 0.1**| **25.7 ± 0.1**
It shows both PM and GM of our framework achieve the best performance. We will add the results in the camera-ready version.
**"in Table 1, the global model performs poorly on FMNIST when the number of clients is 100 and 200.."** Our global model is the second best on FMNIST. This is mainly because the FMNIST dataset is relatively simple and does not have many non-IID features. Therefore, the advantages of personalization algorithms are not obvious. This can be verified by the fact that all the baselines including FedAvg achieves similar performance. We will clarify this in the camera-ready version.
**Abbrevations and typos:** We will provide explanations for the abbrevations, and correct all the typos in the camera-ready version.
**"does the method assume independence between global and local parameters?"**: As discussed in Section 3.2 Line 126-128, the local parameters of client k’s personalized posterior $\theta_k$ are direct instantiations of the hyper-prior $p(\theta_g|\alpha)$. So they are dependent on the global parameters.
**"Hierarchical Bayesian inference requires computing the global posterior distribution on the server and frequently synchronizing client parameters"**: As discussed in Section 3.3, we split the overall ELBO into server part and client part to avoid the potential communication overhead caused by frequently synchronizing client parameters. Because the client part of the ELBO can be optimized locally in multiple rounds after receiving updates from the server.
We will clarify the points in the camera-ready version.
**References**
[1] Makhija, Disha, Joydeep Ghosh, and Nhat Ho. "A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings." Advances in Neural Information Processing Systems 37 (2024): 102428-102455.
[2] Bhatt, Shrey, Aishwarya Gupta, and Piyush Rai. "Federated learning with uncertainty via distilled predictive distributions." Asian Conference on Machine Learning. PMLR, 2024. | Summary: The paper proposes a hierarchical Bayesian inference framework for personalized federated learning (PFL), addressing the issue of statistical heterogeneity across decentralized client datasets. The main conceptual contribution is introducing a conjugate hyperprior over personalized posterior parameters, allowing simultaneous inference of local personalized models and global aggregated models. This approach generalizes prior Bayesian methods for federated learning, aiming to balance local model personalization with global model robustness. Experimental results show competitive performance, especially in global model accuracy.
Claims And Evidence: The paper's primary claim—that the hierarchical Bayesian approach effectively harnesses statistical heterogeneity—is supported through extensive empirical evaluation. However, the claim of significantly improved personalized model performance compared to existing methods is not convincingly demonstrated. While the global model does perform robustly, personalized models show only modest or comparable improvements.
Methods And Evaluation Criteria: The experimental design is generally sound. Authors evaluated the method across synthetic and real datasets (Fashion-MNIST, CIFAR-10, CIFAR-100), assessing statistical heterogeneity and scalability with varied client numbers. However, a significant limitation is that Bayesian methods inherently incur high computational costs, which was not explicitly compared against simpler, non-Bayesian baselines using larger networks or alternative optimization methods.
Theoretical Claims: I checked the theoretical derivations of the hierarchical Bayesian inference framework and the special-case reductions to existing methods (pFedBayes and pFedVEM). The derivations appear correct, clearly showing how previous methods can be recovered as special instances of the proposed model under specific constraints. No major theoretical issues were observed.
Experimental Designs Or Analyses: Considering the existing body of work on Bayesian inference for federated learning, I find the novelty of the proposed approach limited, and the empirical results presented are not particularly strong. Specifically, the method does not significantly improve personalized model accuracy—the most crucial metric in personalized federated learning—compared to existing approaches. Performance improvements, if any, are minimal. Furthermore, the selected baselines do not explicitly require personalization, and better results could be achieved by training a single model with access to all data, suggesting that the primary challenge with the chosen datasets may lie in optimization rather than modeling methodology. Additionally, Bayesian methods generally demand significantly greater computational resources and present scalability issues. Consequently, non-Bayesian approaches, which can leverage much larger network sizes due to their efficiency, should also be included in the evaluations to provide a fairer comparison.
Supplementary Material: I skimmed through the supplementary material.
Relation To Broader Scientific Literature: The main conceptual contribution is introducing a conjugate hyperprior over personalized posterior parameters, allowing simultaneous inference of local personalized models and global aggregated models. This approach generalizes prior Bayesian methods for federated learning, aiming to balance local model personalization with global model robustness. Experimental results show competitive performance, especially in global model accuracy.
Essential References Not Discussed: To the best of my knowledge, the related work is well covered.
Other Strengths And Weaknesses: See my review above.
Other Comments Or Suggestions: See my review above.
Questions For Authors: See my review above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank Reviewer DMYx for taking the time to provide a review on our paper.
**"...personalized models show only modest or comparable improvements":** The results in Table 1 (Page 6) shows the personalized models (PMs) of our framework achieves significant better performance than the baseline methods across all the benchmark datasets. Our PM's performance is comparable with the baselines only on the synthetic data. This is because the synthetic data are relatively simple in terms of lower feature dimensionality and smaller dataset sizes than the benchmark data. All the baselines can perform well in terms of capturing the personalized information.
**"Bayesian methods inherently incur high computational costs...:** This general claim about Bayesian inference is not necessarily applicable for FL problems. Figure 3 (Page 7) shows that our method achieves better performance with significant fewer communication rounds than other Bayes and non-Bayes FL approaches. The average local training time (seconds) per communication round in Cifar10 with all 100 clients available is
|Methods|Time
|--|--
|non-Bayes (FedAvg)|21.42
|non-Bayes Base, Bayes Head (pFedVEM)|43.14
|full-Bayes (Ours,pFedBayes)|90.38
So the overall computational cost is not necessarily higher than non-Bayes FL methods.
**"I find the novelty of the proposed approach limited...":** The novel contributions of our work are
- we propose a novel hierarchical Bayesian inference framework for personalized federated learning (pFL), introducing a hyper-prior over personalized posterior parameters;
- we develop a stochastic variational inference scheme for the framework, enabling joint computation of global and local posteriors to balance local personalization and global robustness;
- we subsume the existing Bayesian pFL methods as special cases of our general framework, shedding light on their relationships and limitations;
- our framework significantly outperforms existing pFL methods in terms of both GMs and PMs across the benchmark datasets.
We will clarify the novel contributions in the camera-ready version.
**"the selected baselines do not explicitly require personalization, and better results could be achieved by training a single model with access to all data, suggesting that the primary challenge with the chosen datasets may lie in optimization rather than modeling methodology":**
- The benchmark datasets we used are widely adopted for evaluating FL methods. Furthermore, as in Section 6.2, we applied common data partitioning approaches to examine various categories of statistical heterogeneity that exist in real-world FL scenarios including label distribution skew (i.e., clients possess varying label distributions), label concept drift (i.e., feature distributions differ among clients), and data quantity disparity (clients have different amounts of data). This is obviously a more challenging problem scenario than "training a single model with access to all data".
- Figure 4 (Page 8) shows that our method provides greater benefits for the clients to participate in the collaborative training process in comparison with training a single model on its own or the competing FL approaches.
- Although our work focuses on personalized FL, we included both personalized and non-personalized state-of-the-art FL methods as baselines for performance comparison.
**"Bayesian methods generally demand significantly greater computational resources and present scalability issues...":** To the best of our knowledge, there is no evidence suggesting Bayesian FL suffers from "greater computational resources and present scalability issues". We validated and tested our framework with the largest number of clients (i.e., 200) among the FL literature reported on the image datasets. To further show the scalability of our method, we conduct experiments with a total of 1000 clients using the Cifar10 dataset, where each client has unique local data of 5 labels out of 10., and report the results below:
|Methods| PM | GM
|--|--|--|
|FedAvg | -| 48.7 ± 1.1
|pFedVEM| 59.0 ± 0.4|48.6 ± 0.9
|Ours| **61.0 ± 0.3**| **52.3 ± 0.4**
As shown in the table our method scales well with even 1000 clients and still outperfroms both Bayes FL (pFedVEM) and non-Bayes FL (FedAvg).
**"non-Bayesian approaches, which can leverage much larger network sizes due to their efficiency, should also be included in the evaluations to provide a fairer comparison."** In our performance comparison in Table 1 and communication efficiency comparison in Figure 3, we reported state-of-the-art performance of both Bayes and non-Bayes FL methods. The non-Bayes FLs include FedAvg, pFedMe, FedRep, etc. The results shows that our method achieves significant better performance with less communication rounds. Using larger network sizes for non-Bayes FL will also incurs higher computational cost and not necessarily achieves better performance for non-IID FL data settings.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I appreciate the proposed approach and find it interesting. However, I am still not convinced by the provided evaluation. As mentioned in my review, the evaluation only considers a very limited set of networks (not even ResNet18) and a small number of communication rounds (≤ 100). This setup may be unfair to non-Bayesian methods, which typically perform fewer local computations and require more communication rounds to reach comparable or better accuracy. These additional rounds could come at a similar or even lower cost than the proposed Bayesian approach—as demonstrated in your rebuttal, FedAvg is more than 4x cheaper.
Furthermore, I stand by my original point: **“the selected baselines do not explicitly require personalization, and better results could be achieved by training a single model with access to all data, suggesting that the primary challenge with the chosen datasets may lie in optimization rather than modeling methodology.”** For instance, FedAvg optimizes the same objective as a centralized model with access to all data. It is worth noting that we can simulate global SGD with IID client sampling by performing only one local step. The issue is that this would require a prohibitively large number of communication rounds. FedAvg mitigates this by using multiple local steps, which introduces client drift—making this fundamentally an optimization problem under a constrained communication budget. Thus, a model trained on all the data without federation remains a valid baseline and can be viewed as an upper bound on FedAvg performance.
If this centralized baseline outperforms any global or personalized model, then personalization is not necessary because the modeling itself does not require it. Therefore, for a meaningful comparison, it is essential that the considered baselines genuinely require personalization.
In addition, I find the method’s dependence on many hyperparameters—often inherited from multiple components—problematic. These hyperparameters are not necessarily consistent across experiments.
In summary, while I like the direction and potential of the proposed method, the mentioned weaknesses—particularly the limited evaluation and questionable choice of baselines—prevent me from recommending a higher score. Thus, I maintain my current evaluation. | Summary: The paper introduces a hierarchical Bayesian inference framework for personalized federated learning (pFL) that specifies personalized posterior parameters, enabling joint computation of global and local posteriors to balance personalization and global robustness. While theoretically subsuming existing Bayesian pFL methods (e.g., `pFedBayes`, `pFedVEM`), it can produce superior global model (GM) and personalized model (PM) performances under statistical heterogeneity.
## Update after rebuttal
---
I updated the score to **accept** because the authors made a great effort to address all of my concerns.
I have summarized below how my evaluation changes after the rebuttal:
* The authors have faithfully performed additional experiments and theoretical analyses according to the review.
* Thus, I can happily conclude that both empirical and theoretical improvements are made during the rebuttal period.
Claims And Evidence: ### Main Claim
---
The proposed hierarchical Bayesian framework i) outperforms existing pFL methods in terms of both GM and PM performances by ii) effectively sharing a model weights while retaining local-specific knowledge, and iii) also subsumes existing Bayesian pFL methods.
### Breakdown
---
i) pFL Performance
* This main claim is well supported by empirical performances in Table 1 (Page 6) and Figure 2 (Page 5) even under practical setup, involving heterogeneity adjusted by $(\alpha, \beta)$ and partial client participation.
* In Figure 4, it shows better PM accuracy across varying local data sizes on benchmark datasets, indicating better collaboration benefits of the proposed method.
ii) Effective Sharing Strategy
* The authors adopted base-head separation, which is widely used in practice, for the effectiveness of pFL.
* However, only a simple separation has been provided, and none of the ablation (e.g. the order of updating, only sharing either the head or the base) or in-depth analysis/justification is provided.
iii) Subsumption
* In Section 4, the authors neatly expanded that their proposed method generalizes existing Bayesian pFL methods.
* Describing how such a subsumption helps or improves existing methods is more helpful in highlighting the proposed method, but such expositions are lacking in the current form.
Methods And Evaluation Criteria: Yes, the proposed method and the evaluation criteria follow a typical scheme of pFL methods, and the authors properly simulated statistical heterogeneity due to label skewness and concept drift, both of which well reflect the real-world situation.
Theoretical Claims: Theoretical convergence analysis is absent, and only relies on empirical convergence results (Figure 3, 4 and 6).
Experimental Designs Or Analyses: Although all experiments are limited to image classification benchmark datasets with a moderate number of clients (i.e. <200 clients), this is a common setup in the FL literature and is therefore acceptable.
Supplementary Material: I checked all the supplementary material.
Relation To Broader Scientific Literature: FL typically assumes a point estimate of a global model parameter. With the contribution of the paper, FL can be extended to find the posterior distribution of a global model parameter, which can contribute to Bayesian inference-related tasks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
* The proposed method subsumes existing Bayesian pFL methods.
* The variational inference-based optimization is easy to be implemented in practice thanks to Algorithm 1.
Weaknesses
* There is lack of justification of why the hyper prior distribution should be half-normal distribution.
* The approximation of $q(\boldsymbol{\theta}_g;\phi)$ simplified Eq. 8, however, it can possibly affect to the overestimate of GM weights, high sensitivity to initialization, etc. Thus, proper correction method could be exploited, e.g., Laplace approximation, temperature scaling.
* The proposed method requires to tune the hyperparameter of hyper prior distribution, i.e., $\lambda_1, \lambda_2$.
* The proposed method has only been validated with a moderate number of clients, which may not fully reflect a large-scale cross-device FL setup.
* The proposed method only shows the pFL performance, and bypassed the strength of Bayesian inference (e.g., uncertainty quantification).
Other Comments Or Suggestions: Please see 'Questions For Authors' section.
Questions For Authors: * In what practical situation can we consider using the proposed method? The Bayesian perspective of pFL is intriguing in itself, and I believe that a complementary practical impact would also increase the validity of the proposed method. For example, authors could consider adding uncertainty quantification-related benchmark experiments under noisy-label scenario (e.g., see experimental setup of 'robustness to label noise' section in (Hahn, Jeong and Lee, 2022))
* Could the authors please provide any reasons of why the PM performance does not outperform other methods like PM did in Figure 2?
* How much local computation overhead is required for local BNN training, i.e., the computation of Eq. (10), compared to the non-Bayesian setup, e.g., `FedAvg`?
* Please also consider adding a comparison of the communication cost/complexity of the proposed method to that of `pFedBayes` and `pFedVEM`.
* I presume that the order in which the base and head are updated is important for pFL performance. In Algorithm 1, lines 807-823, the authors suggest updating the head first and the base later. Could you please provide simple comparative results when i) the base is updated first and the head later, when ii) only the base is updated and communicated, following (Collins et al., 2021) and (Oh et al., 2022)? If approach ii) shows decent performance, I think it can contribute to a lightweight Bayesian posterior update of the current proposed method.
* Is there any justification for choosing the half-normal distribution as the prior rather than the (inverse) Wishart distribution? Any supporting evidence (e.g. appropriateness for hierarchical Bayesian inference, empirical comparisons, etc.) would be appreciated.
* In the caption of Figure 1, 'Overview of our PFL framework based on hierarchical Bayesian inference (right)' -> 'Overview of our PFL framework based on hierarchical Bayesian inference (**left**)'
* Please avoid using $(\alpha,\beta)$ in adjusting the statistical heterogeneity as it overlaps with the notation of hyperparameter of a hyper prior distribution defined in Eq. 3.
> (Hahn, Jeong and Lee, 2022) Connecting Low-Loss Subspace for Personalized Federated Learning
> (Collins et al., 2021) Exploiting shared representations for personalized federated learning
> (Oh et al., 2022) FedBABU: Towards Enhanced Representation for Federated Image Classification
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer 7W15 for taking the time to provide a constructive and detailed review.
**"..only a simple separation has been provided.."**: For fair comparison, the base-head setting in our framework follows (Collins et al., 2021) and pFedVEM (Zhu et al., 2023). As Reviewer suggested, we investigate the impact of the updating orders of the PM's base and head. The performance on CIFAR10 with 50, 100, and 200 clients are as below:
|Updating Order|Sharing Component|50 Clients (PM)|50 Clients (GM)|100 Clients (PM)|100 Clients (GM)|200 Clients (PM)|200 Clients (GM)
|--|--|--|--|--|--|--|--
|Base Only (Oh et al., 2022)|Base Only|76.8±0.2|-|72.9±0.2|-| 68.3±0.4|-
|Base->Head|Both Head & Base|76.7±0.4|65.1±0.5|72.6±0.8|60.6±0.6|68.3±0.4|57.3±0.6
|Head->Base|Both Head & Base |**77.1±0.1**|**65.9±0.9**|**74.7±0.3**|**63.8±0.3**|**70.4±0.2**|**59.5±0.7**
The results show that the setting in our paper, as in the third row, is the best one. Note that if we only share base or head, the global model will be incomplete.
**"..how such a subsumption helps or improves existing methods.."**: The benefits from our general framework are as follows:
- we proved that one of the constraints of the existing methods is that they implicitly set the regularization coefficients $\lambda_1$ and $\lambda_2$ to $0$. As in Table 2, our ablation study shows that $\lambda_1=\lambda_2=0$ leads to inferior performance.
- Through the principled general framework we proposed, it shed light on the relationships and limitations of the existing methods.
- we derived the relationship between the coefficients and the hyper-parameters, showing how they govern the statistical strength of the sharing process.
**"..the strength of Bayesian inference..uncertainty quantification"**: Following (Hahn, Jeong and Lee, 2022) setting, we quantify prediction uncertainty under symmetric label flipping with a noise ratio of 0.2 on cifar10:
|Methods|50 Clients: ECE (Acc)|100 Clients: ECE (Acc)
|--|--|--
|FedAvg (GM)|0.37±0.06 (36.8±0.9)|0.35±0.08 (37.9±2.5)
|pFedVEM (PM)|**0.13±0.01** (70.1±0.4)|0.17±0.0 (68.0±0.4)
|Ours (PM)| 0.15±0.0 (**72.6±0.4**)|**0.16±0.01 (70.3±0.3)**
Our method's uncertainty quantification is comparable with pFedVEM but achieves higher accuracy.
**"Theoretical convergence analysis is absent.."**: Evaluating the statistical convergence of the local personalized posteriors, we can show that the posterior mean $E[\theta_k|D]$ is in between the MLE estimate $\hat{\theta}_k $ and the global mean $\mu_g$: $E[\theta_k|D_k,\mu_g,\tau^2_g]=\alpha_k\mu_g+(1-\alpha_k)\hat{\theta}_k$ where the convergence speed towards the global mean is given by $\alpha_k=\frac{\sigma^2_k}{\sigma^2_k+\tau^2_g}$. It shows there is more shrinkage for clients with smaller measurement precision.
**"..justification of..half-normal distribution"**: half-normal is commonly used as a prior over the standard deviation of Gaussian. As a special case of inverse-Wishard dist., conjugate over Gaussian variance, half-normal is computationally more stable and more efficient.
**"The approximation of $q(\theta_g|\phi)$.."**: Simplifying $q(\theta_g|\phi)$ to a delta function improves communication efficiency without sacrificing performance. Using the delta function, we only need to broadcast a single $\theta_g$ value back to the clients. With the original variational distribution, it needs to broadcast a set of samples to each client.
**"..requires to tune the hyperparameters..**: Although tuning $\lambda_1$ and $\lambda_2$ incurs additional computation overhead as discussed in Appendix A.4, the performance gain is statistically significant across benchmarks.
**"..validation with a moderate number of clients.."**: We validated our framework with the largest number of clients (i.e., 200) among the baseline methods on the image datasets (e.g., pFedBayes with maximum 20 clients, FedAvg 100 clients, and pFedVEM 200 clients). As Reviewer suggested, we conduct experiments with 1000 clients on Cifar10 as below:
|Methods| PM | GM
|--|--|--|
|FedAvg|-|48.7±1.1
|pFedVEM|59.0 ± 0.4|48.6±0.9
|Ours|**61.0 ± 0.3**| **52.3±0.4**
as shown above, our method scales well with large client size.
**"why the PM performance does not outperform other methods..in Figure 2"**: The synthetic datasets are relatively simple in terms of lower feature dimensionality and smaller dataset sizes. All the baselines can perform well in terms of capturing the non-IID information.
**"local computation overhead..for local BNN training"**: Average local training times (seconds) of BNNs per communication round on Cifar10 with all 100 clients available are:
|Methods|Time
|--|--
|non-Bayes (FedAvg)|21.42
|non-Bayes Base, Bayes Head (pFedVEM)|43.14
|full-Bayes (Ours, pFedBayes)|90.38
**"comparison of the communication cost..":** the communication cost comparison was reported in Figure 3 and in Section 6.2.1.
We will clarify the above points and correct typos and notations in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed responses from the authors.
### Why Not Lower Score
---
As most of my concerns are addressed, I have **raised the score to weak accept.**
Based on the rebuttals, please consider adding or supplementing the following in the revised manuscript:
- More emphasis on subsumption as one of the main contributions
- Empirical scalability even under large number of clients (i.e., $K=1,000$)
- Discussion of the limitation/weakness of the proposed method in the practical scenario, i.e., increased local computational overhead, compared to non-Bayesian FL methods
- Appeal to the strength of the Bayesian approach more with empirical results for the practical FL scenario where label noise exists
- Supportive empirical evidence for the chosen update order (in the appendix)
I sincerely hope that these will help improve the presentation and communication of the research results.
### Why Not Higher Score
---
Please also note that there's room for additional score raising as long as **one remaining concern** is answered.
- Pertaining to the statistical convergence that the authors answered in the rebuttal, could the authors please provide more expositions on the statistical convergence analysis? (e.g., detailed procedure or even proof sketch)
- What does 'more shrinkage' mean in this context? Please clarify.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer 7w15 for your further comment on our work and raising the score. We will follow Reviewer's suggestion, adding and supplementing the details and additional experiments listed in the comment to our camera-ready version.
We provide the analysis about the statistical convergence of our model below.
After computing the approximate global posterior in Equation (4) via stochastic variational inference (i.e., Section 3.3), we analyze the convergence of the personalized distribution for each client $k$: $p(\mathcal{W}_k|\theta_k)$, where $\theta_k=\\{\mu_k,\sigma^2_k\\}$ are the parameters of the client $k$'s personalized posterior, as in Section 3.2, and $\mathcal{W}_k$ denotes the weights of the local model (i.e., Bayesian neural networks).
Specifically, we analyze the convergence behavior of the personalized distributions towards the global mean $\bar{\mu}=E[\mu_g|\mathcal{W}]$, which is close to the pooled estimate $\bar{\mathcal{W}}$. Let the hyperparameters $\alpha$ to be fixed, and we adopt the approximation to the global posterior $p(\mu_g,\sigma^2_g|\mathcal{W})=\delta(\mu_g-\bar{\mu})\delta(\sigma^2_g-\bar{\sigma}^2)$ for simplicity, then the marginal distribution of the personalized distribution parameters is:
\begin{align}
p(\theta_k|\mathcal{W})&=\int p(\theta_k|\mathcal{W}_k,\theta_g)p(\theta_g|\mathcal{W})d\theta_g \\\\
&\approx p(\theta_k|\mathcal{W}_k,\bar{\mu},\bar{\sigma}^2)
\end{align}
note that $\theta_g=\\{\mu_g,\sigma^2_g\\}$. Meanwhile, the above marginal posterior can be expressed as
\begin{align}
p(\theta_k|\mathcal{W}_k,\bar{\mu},\bar{\sigma}^2)&\propto p(\mathcal{W}_k|\theta_k)\cdot p(\theta_k|\bar{\mu},\bar{\sigma}^2)\\\\
&=\mathcal{N}(\mathcal{W}_k|\theta_k)\cdot\mathcal{N}(\theta_k|\bar{\mu},\bar{\sigma}^2)\\\\
&=\exp(-\frac{(\hat{\theta}_k-\theta_k)^2}{\sigma^2_k})\cdot\exp(-\frac{(\theta_k-\bar{\mu})^2}{\bar{\sigma}^2})\\\\
&=\exp(-(\frac{1}{2\sigma^2_k}+\frac{1}{2\bar{\sigma}^2})\theta^2_k+(\frac{\hat{\theta}_k}{\sigma^2_k}+\frac{\bar{\mu}}{\bar{\sigma}^2})\theta_k-(\frac{\hat{\theta}^2_k}{2\sigma^2_k}+\frac{\bar{\mu}^2}{2\bar{\sigma}^2}))
\end{align}
where $\hat{\theta}_k=\bar{\mathcal{W}}_k$ denotes the local MLE estimate. Since the marginal posterior $p(\theta_k|\mathcal{W}_k,\bar{\mu},\bar{\sigma}^2)=\mathcal{N}(\theta_k|E[\theta_k|\mathcal{W}],Var[\theta_k|\mathcal{W}])$ is a Gaussian due to conjugacy, by matching the coefficients in the quadratic and linear terms of the two expressions, we have the variance of the marginal distribution as
\begin{align}
\frac{1}{2Var[\theta_k|\mathcal{W}]}&=\frac{1}{2\sigma^2_k}+\frac{1}{2\bar{\sigma}^2}\\\\
Var[\theta_k|\mathcal{W}]&=\frac{\sigma^2_k\bar{\sigma}^2}{\sigma^2_k+\bar{\sigma}^2}
\end{align}
and the mean of the marginal distribution as
\begin{align}
\frac{E[\theta_k|\mathcal{W}]}{Var[\theta_k|\mathcal{W}]}&=\frac{\hat{\theta}_k}{\sigma^2_k}+\frac{\bar{\mu}}{\bar{\sigma}^2}\\\\
E[\theta_k|\mathcal{W}]&=(\frac{\hat{\theta}_k}{\sigma_k}+\frac{\bar{\mu}^2}{\bar{\sigma}^2})\cdot Var[\theta_k|\mathcal{W}]\\\\
&=\beta_k\bar{\mu}+(1-\beta_k)\hat{\theta}_k
\end{align}
where $\beta_k=\frac{\sigma^2_k}{\sigma^2_k+\bar{\sigma}^2}$. It indicates that the mean of the personalized distribution parameters: $E[\theta_k|\mathcal{W}]$ lies in between the local MLE estimate $\hat{\theta}_k=\bar{\mathcal{W}}_k$ and the global mean $\bar{\mu}$. The convergence of the personalized parameters towards the global mean is governed by $\beta_k$. Thus, we see that there is larger convergence for clients with smaller measurement precision (e.g., due to smaller data sizes or noisy labels), since $\beta_k\rightarrow 1$ as $\sigma^2_k\rightarrow\infty$.
In this context, 'more shrinkage' refers to the convergence behaviors towards the global model. | null | null | null | null | null | null |
Leveraging Diffusion Model as Pseudo-Anomalous Graph Generator for Graph-Level Anomaly Detection | Accept (spotlight poster) | Summary: In this paper, the authors proposed a novel GLAD method named AGDiff. AGDiff leverages diffusion models to generate pseudo-anomalous graphs, which addresses the challenge of anomaly scarcity in GLAD. Particularly, a latent diffusion process with perturbation conditions is proposed to generate diverse pseudo graph anomalies, and a joint training scheme is employed to integrate the training of pseudo anomaly generation and anomaly detection. Experiments on moderate-scale and large-scale imbalanced GLAD tasks demonstrated the effectiveness of AGDiff.
Claims And Evidence: Yes. The key claims of this paper are well-supported by convincing evidence. For example, (1) Comparison results (Tables 1 and 2) demonstrate the consistent superiority of AGDiff over existing baselines across diverse datasets. (2) Scoring distribution analysis (Figure 2) shows overlap between normal and pseudo-anomalous graphs, with real anomalies distinctly separated, validating the claim that the generated pseudo-anomalous graphs resemble normal graphs with subtle deviations.
Methods And Evaluation Criteria: Yes. The proposed method is well-motivated, the latent diffusion process and joint training paradigm address the challenge of anomaly scarcity in GLAD problem. The evaluation criteria used in the experiment are suitable for GLAD.
Theoretical Claims: Yes. The theoretical analysis in Appendix A intuitively demonstrated the superiority of the proposed method via the comparison between two algorithms (\text{A}_{1} and \text{A}_{2}).
Experimental Designs Or Analyses: Yes. The experimental design is technically sound, with fair comparisons to graph kernels and GNN-based GLAD baselines. The authors conducted experiment on different GLAD scenarios and compared the proposed AGDiff with latest GLAD methods, which make the experimental results convincing.
Supplementary Material: I have reviewed the appendix of this paper, including (1) Appendix A: Algorithm comparison and empirical validation support the theoretical claims. (2) Appendix B: Dataset details are thorough. (3) Appendix C: Implementation details are clear. (4) Appendix D: t-SNE visualization, along with the statistical metric Silhouette Score (SC) highlights the separation of the learned embeddings of AGDiff compared with other methods.
Relation To Broader Scientific Literature: AGDiff bridges diffusion models and GLAD by leveraging a conditioned latent diffusion process to solve the anomaly scarcity problem in GLAD. It also contrasts with reconstruction-based GLAD methods, such as MUSE, by explicitly generating pseudo-anomalies.
Essential References Not Discussed: This paper has adequately discussed recent works in GLAD and diffusion models, and the authors also described the connection between the proposed method with other related works, which is sufficient to understand the key contribution of the paper.
Other Strengths And Weaknesses: *Pros:*
- The paper is well-written, and the motivation is clearly illustrated. This work bridges a gap between diffusion modeling and graph-level anomaly detection, offering a novel perspective in the field.
- The proposed perturbation conditioned latent diffusion process enables the model to eliminate reliance on labeled anomalies, making it more applicable to real-world domains.
- Extensive experiments were conducted to demonstrate the effectiveness of the proposed method. Particularly, AGDiff exhibited superior performance in the large-scale imbalanced GLAD compared to the latest baselines.
*Cons:*
- In Eq. (10), the condition vector $\textbf{c}$ is generated by perturbing the input. However, it is unclear what role $\textbf{c}$ plays in the generation of the pseudo-anomalous graphs.
- The difference between the proposed AGDiff and the diffusion-based anomaly detection approaches should be elaborated.
- The experiment fixes the use of a diffusion process with $T=1,000$ steps, while the influence of varying $T$ on the anomaly detection performance is unclear.
- The reproducibility of this work is relatively limited as the authors did not provide the implementation code for AGDiff.
Other Comments Or Suggestions: The limitations of this work are not discussed in the conclusion. The authors are encouraged to provide additional discussion.
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for recognizing our work. Below are our responses to your concerns.**
**To W1:** The condition vector $\mathbf{c}$ is crucial for ensuring the generation quality of pseudo graph anomalies. By perturbing the initial latent embedding through a learnable perturbation transformation, $\mathbf{c}$ injects additional variability into the latent diffusion process. This ensures that the denoising network is conditioned to deviate from purely "normal" reconstructions. We have evaluated the influence of $\mathbf{c}$ via an Ablation Study (Table 3), where removing $\mathbf{c}$ led to a substantial performance drop. In that case, the generated graphs may closely resemble normal graphs, making it more difficult for the anomaly detector to distinguish them and thus reducing its overall discriminative capability.
**To W2:** AGDiff differs fundamentally from existing diffusion model-based methods [1,2,3]. Whereas existing approaches focus on modeling data normality, treating anomalies as outliers with significant reconstruction errors, AGDiff leverages the diffusion model to simulate subtle deviations from normal data and adaptively generate diverse pseudo-anomalous graphs that provide rich anomalous signals for refining the decision boundary learning. This proactive generation strategy facilitates the training of a powerful anomaly detector (as evidenced by the performance comparisons in Tables 1\&2), enabling the model to better distinguish between normal and real anomalous graphs. We have included some discussions in Section 2.2, and we will elaborate further in the paper to emphasize our contributions.
```
[1] Wolleb, J., et al. Diffusion models for medical anomaly detection, MICCAI, 2022.
[2] Tebbe, J., et al. Dynamic addition of noise in a diffusion model for anomaly detection, CVPR, 2024.
[3] Li, J., et al. Diffgad: A diffusion-based unsupervised graph anomaly detector, ICLR 2025.
```
**To W3:** To clarify the effect of $T$ on the model's performance, we have conducted additional experiments by varying $T$ from 250 to 1500. The experimental results are as follows:
$\begin{matrix}
\\hline
\text{Datasets}&\text{Metrics}&T=250&T=500&T=750&T=1000&T=1250&T=1500\\\\\hline
\text{MUTAG}&\text{AUC}&91.52(7.84)&95.20(3.20)&\bf95.84(3.84)& 95.83(2.15)&92.00(0.00)&92.00(0.00)\\\\
&\text{F1-Score}&88.00(4.00)&92.00(0.00)&\bf94.00(2.00)& 89.45(1.37)&87.00(0.00)&87.00(0.00)\\\\\hline
\text{ER\\_MD}&\text{AUC}&86.61(4.38)&89.30(0.33)&84.48(2.17)&\bf91.21 (1.84)&82.34(3.13)&82.18(0.91)\\\\
&\text{F1-Score}&78.30(6.60)&83.02(1.89)&79.25(5.66)&\bf86.04 (2.26)&73.58(1.89)&73.58(1.89)\\\\\hline
\end{matrix}$
As seen above, the performance (e.g., on MUTAG) generally improves from $T=250$ (91.52\% AUC) to $T=750$ (95.84\% AUC) or $T=1000$ (95.83\% AUC), with performance plateauing or declining when $T\geq 1000$. A trend was similarly observed on ER_MD, which indicates that a low $T$ may be insufficient to generate high-quality pseudo graphs. Nevertheless, a high $T$ can also lead to over-diffusion, making the generated graphs excessively similar to normal graphs, which in turn diminishes the capability of the anomaly detector to differentiate them and increases computational cost as well. These findings highlight the importance of selecting appropriate $T$ to balance the generation quality and computational cost. We will add this experiment and relevant discussions to our paper.
**To W4:** We have provided the implementation code of AGDiff at https://anonymous.4open.science/r/AGDiff-137F/. This link is fully anonymous, with some pre-trained models attached for reproducibility.
**To Other Comments:** We will follow your suggestion to further discuss the limitations of our work in the conclusion, such as (1) Our method assumes a sufficiently representative distribution of normal graphs, which may not hold in shifting or highly heterogeneous environments. (2) While AGDiff can generate pseudo graph anomalies to enhance decision boundary training, it is currently limited to static graphs. Future work could explore solving challenging GLAD tasks in heterogeneous environments or dynamic graph settings. | Summary: This paper presents a diffusion-based method for generating pseudo-anomalous graphs to address anomaly scarcity in GLAD. It employs a latent diffusion process with perturbation conditions and a joint training scheme for anomaly detection. Experiments on both balanced and imbalanced datasets demonstrate its effectiveness.
Claims And Evidence: Yes, the manuscript claims that the proposed method, which integrates an innovatively designed latent diffusion model as a perturbator to generate pseudo-anomalous graphs, outperforms reconstruction error-based approaches by leveraging these generated graphs to refine the decision boundary of the anomaly detector. Experimental results, along with the theoretical analysis in Appendix A, support this claim.
Methods And Evaluation Criteria: Yes, the motivation and proposed solution are well-justified. The evaluation criteria are fair and aligned with established anomaly detection benchmarks.
Theoretical Claims: Yes, the theoretical analysis in Appendix A demonstrates that the proposed solution outperforms reconstruction-based models.
Experimental Designs Or Analyses: Yes, the experimental settings are fair, and the evaluation metrics align with existing graph-level anomaly detection literature. The study includes multiple perspectives, including both moderate-scale balanced and large-scale imbalanced experiments.
Supplementary Material: There is no supplementary material in this paper, but I have reviewed all appendices.
Relation To Broader Scientific Literature: AGDiff offers a novel approach to anomaly detection. It generates pseudo-anomalous data and trains a strong classifier to precisely define the decision boundary, tightly enclosing normal data. Additionally, it leverages a diffusion model with perturbation conditions to ensure pseudo-anomalies differ from normal data. This makes the method both innovative and effective.
Essential References Not Discussed: This paper presents a comprehensive discussion of both GLAD approaches and diffusion-based methods.
Other Strengths And Weaknesses: Strengths:
1) The paper is well-written, and the idea is novel (refer to the comment in **"Relation To Broader Scientific Literature"**).
2) Each module in the proposed method is well-designed with clear and justified motivation. The designed diffusion anomaly detection model is very different from the existing diffusion-based anomaly detection methods.
3) Comprehensive experimental results demonstrate the method's effectiveness.
4) Theoretical analysis is provided to support the claims.
Weaknesses:
1) How does the proposed method ensure a gap between real normal data and pseudo abnormal data? Are they definitively different? The authors should clarify this point.
2) An analysis of the computational complexity is absent.
3) In the theoretical analysis, why must A_2 perturb normal data before generating pseudo anomalous data? Why not directly use perturbed normal data to train the classifier? A more intuitive explanation, possibly with examples, would be helpful.
4) The code used for the experiments is not provided.
Other Comments Or Suggestions: There are some grammar and format errors, e.g., in line 142, "Let … denotes" should be "Let … denote". Authors should double-check the grammar carefully.
Questions For Authors: Please refer to Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Thank you for your recognition, and we hope the responses below can solve your concerns:**
**To W1:** AGDiff ensures the gap between normal and pseudo-anomalous graphs via its **controlled latent diffusion process** and **joint training**. Rather than using arbitrary noise, the conditioned vector (Eq. 10) introduces learnable perturbations that simulate potential and diverse anomalous patterns while preserving structural plausibility. There are several pieces of evidence to support this claim, e.g., (1) Varying $\eta$ influences the generation quality of pseudo graph anomalies, thereby influencing the final performance (refer to Reviewer kMMx's W2). (2) Removing the conditioning vector $\mathbf{c}$ leads to a substantial performance drop (refer to Reviewer 2Zh2's W1). (3) Our statistical analysis (refer to Reviewer kMMx's W4) confirms that there is indeed a meaningful gap between pseudo-anomalous graphs and normal graphs.
**To W2:** Here, we provide a computational complexity analysis for AGDiff. For a dataset of $N$ graphs, each with an average of $m$ nodes (feature dimension $d$), $|E|$ edges, and latent dimension $d_{z}$, the AGDiff framework operates in three phases:
1. **Pre-training:** An $L$-layer GIN is employed as the backbone network in the pre-training, where the overall complexity is $\mathcal{O}(N L (|E|d + m d^{2}))$ due to the message aggregation ($\mathcal{O}(|E|d)$) and feature transformation ($\mathcal{O}(m d^{2})$).
2. **Pseudo Anomaly Generation:** This phase involves the computation of conditional vector ($\mathcal{O}(N d_{z}^{2})$) and a $T$-step latent diffusion process ($\mathcal{O}(N T m d_{z}^{2})$) across all graphs.
3. **Decoding \& Anomaly Detection:** Decoding involves the computation of node attributes and adjacency matrices from latent embeddings, which results in time complexity of $\mathcal{O}(N|E|d_{z})$ when we apply a negative sampling strategy in practice. The computational complexity of the anomaly detector is similar to the pre-training stage, i.e., ($\mathcal{O}(N L(|E|d + m d^{2}))$), due to their similar network structure.
Therefore, the overall computational complexity of AGDiff is approximately $\mathcal{O}(N L (|E|d + m d^{2}) + N (T m+1) d_{z}^{2} + N|E|d_{z})$, which is comparable with other state-of-the-art baselines such as SIGNET, MUSE, DO2HSC. Besides, potential optimizations like using parallel computation further enhance efficiency. We will add this analysis to the paper.
**To W3:** We would like to address your concern from two perspectives:
1. **Subtle Deviations Facilitate Decision Boundary Learning.** In $\mathcal{A}_{2}$, perturbing normal data before the generation process aims to ensure subtle deviations of the generated pseudo-anomalous graphs from normal graphs, which is critical for effective decision boundary learning. Relevant evidence can be found in the response to Reviewer kMMx’s W2.
2. **Learnable Perturbations Expand Anomaly Diversity.** Directly using perturbed normal data as pseudo anomalies would constrain the diversity of anomalous patterns the model encounters, as the perturbations would be static and lack adaptability. This limitation may cause the overfitting of the classifier to a narrow set of pseudo anomalies, reducing its generalizability. In $\mathcal{A}_2$, however, the perturbations are learnable and dynamically adjusted during joint training with the anomaly detector. As the detector improves, the perturbation mechanism evolves, generating increasingly sophisticated and varied pseudo anomalies. This adaptive process exposes the model to a broader spectrum of potential anomalies, enhancing its robustness and generalization.
**To W4:** We have provided the implementation code of AGDiff at https://anonymous.4open.science/r/AGDiff-137F/. The link is fully anonymous, and some pre-trained models are also attached for reproducibility.
**To Other Comments:** We will double-check our paper and correct all identified grammatical and formatting issues such as replacing "Let … denotes" with "Let … denote".
---
Rebuttal Comment 1.1:
Comment: The authors' clarifications have addressed my concerns, and the added complexity analysis and code further strengthen the paper. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer BUES:**
Thank you for the positive feedback. We are glad that our responses have addressed your concerns. We sincerely appreciate your valuable comments, which have helped us improve the paper. The related content will be included in the revised version.
Best regards,
Authors | Summary: This paper introduces Anomalous Graph Diffusion (AGDiff), a novel graph-level anomaly detection (GLAD) framework that consists of three core components: (1) a pre-training module employing variational inference to learn a structured latent space, (2) a latent diffusion process that introduces controlled perturbations to generate pseudo-anomalous graphs, and (3) a jointly trained graph anomaly detector that distinguishes normal graphs from pseudo-anomalous graphs. The central idea is to mitigate the scarcity of anomalous graphs in GLAD by generating diverse pseudo graph anomalies through a perturbation conditioned latent diffusion process. Experimental results demonstrate that AGDiff significantly outperforms state-of-the-art GLAD baselines on diverse GLAD tasks.
Claims And Evidence: The advantages of AGDiff over other GLAD baselines have been validated via extensive experiments. However, the claim that the generated pseudo-anomalies of AGDiff "closely resemble normal graphs" is only qualitatively discussed via t-SNE visualizations (Appendix D). A quantitative analysis with similarity metrics, e.g., graph edit distance between generated and normal graphs would strengthen this claim.
Methods And Evaluation Criteria: The proposed method makes sense for the GLAD problem. The evaluation metrics (AUC and F1-Score) are standard for GLAD, and the benchmark datasets cover both moderate-scale and large-scale imbalanced scenarios, which make sense for real-world GLAD problem.
Theoretical Claims: I have checked the theoretical analysis of this paper in the appendix, where AGDiff (Algorithm $\mathcal{A}\_{2}$) is compared with reconstruction-based model (Algorithm $\mathcal{A}\_{1}$), and the related analysis is compelling.
Experimental Designs Or Analyses: I have checked the soundness of experimental designs and analyses of this paper. The experimental design (Tables 1 and 2) is sound and reliable, which employs consistent data splits and fair baseline comparisons.
Supplementary Material: No separate supplementary material was provided, but I have reviewed the appendices. These appendices enrich the paper with more details on datasets, experimental settings. Moreover, the theoretical analysis and visual results provides more plausible evidence for understanding the advantages of the proposed method.
Relation To Broader Scientific Literature: Different from the diffusion model-based AD approaches [1, 2] that focus on modeling data normality, AGDiff proposes to leverage the diffusion model as the perturbator for normal data, which provides a novel and promising solution for addressing the anomaly scarcity challenge and make it distinct from supervised GLAD approaches reliant on labeled data.
[1] Diffusion models for medical anomaly detection. MICCAI, 2022
[2] Dynamic addition of noise in a diffusion model for anomaly detection. CVPR, 2024
Essential References Not Discussed: The literature review is thorough, with key related works appropriately cited.
Other Strengths And Weaknesses: **Strengths:**
1. The proposed AGDiff method tackles the challenge of anomaly scarcity in GLAD by generating pseudo-anomalous graphs via a latent diffusion process, which surpasses existing GLAD methods and shows significant novelty and promising potential impact.
2. The theoretical analysis in appendix A well supports the validity of the proposed method, which provides empirical trends in both reconstruction errors and performance metrics to demonstrate that the generated pseudo-anomalous graphs enhance the decision boundary learning.
3. AGDiff achieves state-of-the-art performance compared to existing GLAD approaches on diverse benchmark datasets, particularly in imbalanced settings. The ablation study also validates the importance of each component. The experimental results reflects the practical effectiveness and adaptability of the proposed method.
**Weaknesses:**
1. The effect of the KL divergence term (Eq. 7) in the pre-training phase requires clarification. Can the authors further explain it?
2. Eq. 10 introduces noise $\eta$ to perturb latent embeddings, while its influence on anomaly detection performance is not discussed.
3. While the empirical results show that the generated pseudo graph anomalies improve anomaly detection, the authors are encouraged to provide some visual comparisons between the generated pseudo anomalies and normal graphs, which would improve the interpretability of the proposed method.
4. This paper refers to the “diversity” of the generated pseudo graph anomalies, but only qualitatively through the t-SNE visualization. Are there quantitative metrics, e.g., graph edit distance or MMD, to assess the discrepancy of the generated samples compared to normal graphs?
Other Comments Or Suggestions: I have no other comments or suggestions for this paper.
Questions For Authors: Please refer to Weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **We thank the reviewer for the comments. Hope our responses below are helpful in solving your concerns.**
**To W1:** The KL divergence term in Eq. 7 is crucial for regularizing the latent space during pre-training. By encouraging the latent distribution $\mathbf{Z}$ to align with the standard normal prior $\mathcal{N}(\mathbf{0}, \mathbf{I})$, it prevents the model from overfitting to a narrow or degenerate latent manifold [1]. This well-structured latent space is key for generating diverse pseudo-anomalous graphs in subsequent stages. We will highlight this in our paper.
```
[1] Kipf, Thomas N., and Max Welling. Variational Graph Auto-Encoders, NeurIPS Workshop on Bayesian Deep Learning, 2016.
```
**To W2:** We have tested the effect of different noise magnitudes $\eta$ on MUTAG and ER\_MD, as summarized in the table below.
$\begin{matrix}
\\hline
\text{Datasets}&\text{Metrics}&\eta=0&\eta=0.01&\eta=0.1&\eta=1&\eta=10&\eta=100\\\\\\hline
\text{MUTAG}&\text{AUC}&93.20(2.64)&92.64(0.32)&92.40(0.08)&\bf95.83(2.15)&92.00(0.00)&92.00(0.00)\\\\
&\text{F1-Score}&86.00(2.00)&88.00(0.00)& 88.00(0.00)&\\bf89.45(1.37)& 88.00(0.00)& 88.00(0.00)\\\\\\hline
\text{ER\\_MD}&\text{AUC}&82.95(2.17)&86.81(0.91)&87.78(1.07)&\\bf91.21(1.84)&82.08(0.52)&81.76(0.59)\\\\
&\text{F1-Score}&77.36(1.89)&80.19(2.83)&85.09(0.94)&\\bf86.04(2.26)&71.70(0.00)&72.64(0.94)\\\\\\hline
\end{matrix}$
We can observe a performance degradation when this noise term is removed (i.e., $\eta=0$) or set to an excessive value (e.g., $\eta=10$). Our explanation for these observations is that (1) removing the noise term may lead to over-proximity between the pseudo graph anomalies and the normal graph, thus making it difficult to train an anomaly detector. (2) While an excessively high $\eta$ can lead to pseudo anomalies that deviate too far from normal data, thus reducing the discriminative ability of the anomaly detector. These findings highlight the importance of balancing $\eta$ to ensure meaningful perturbation to the initial latent representation $\mathbf{Z}_{0}$.
**To W3:** We have added a visualization comparing pseudo anomalies with normal graphs, available at https://anonymous.4open.science/r/AGDiff-137F/visualization/ (under the "visualization" file). The visualization results clearly illustrate that while the pseudo anomalies lie close to the normal data (with certain overlaps), they consistently exhibit subtle deviations, which serves as good evidence to support our claim. We will include these visual comparisons in the paper.
**To W4:** First, we have performed a quantitative analysis of the diversity of pseudo-anomalous graphs on ER\_MD. Except for that, rather than graph edit distance (which has $\mathcal{O}(n^{3})$ complexity), we employed the **normalized Laplacian spectral distance** to measure the discrepancy between the generated anomalies and normal graphs, as it is computationally efficient and direct related to graph topology. Specifically, we computed pair-wise spectral discrepancies between:
1. **Generated pseudo-anomalous graphs and normal graphs**, which ranged in $[0.17, 1.12]$.
2. **Real anomalous graphs and normal graphs**, which ranged in $[0.40, 5.12]$.
While the range for pseudo anomalies is narrower compared to real anomalies, this outcome aligns with our design intent. Rather than mimicking the extreme deviations observed in real anomalies, we aimed to produce controlled yet challenging perturbations to enhance the decision boundary learning. The embedding visualization results (in our response to W3) further support this, where the pseudo anomalies are distinctly separable from normal graphs. These results indicate that the generated pseudo anomalies exhibit a meaningful discrepancy from the normal graphs, providing sufficient diversity to effectively challenge the model during training.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ detailed responses, which adequately address my concerns, particularly regarding the interpretability issue. The additional visualizations and statistical results further justify the motivation and the validity the proposed method. In light of these improvements, I would like to raise my overall assessment to this work.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer kMMx:**
We appreciate your time and effort in revisiting our work, your positive comments mean a lot to us. We are pleased that our responses can help address your concerns.
Best Regards,
Authors | Summary: The paper introduces a graph-level anomaly detection method to improve the performance of GNNs on anomalous graphs. The method consists in generating pseudo-anomalous graph with diffusion models in other to enhance the classification capabilities.
Claims And Evidence: The paper claims to that generating pseudo-anamalous graphs improves the learning capabilities of GNNs.
Methods And Evaluation Criteria: The paper fails to verify the claim that it puts forward.
1. What are pseudo-anomalous graphs? Why are they pseudo-anomalous and not simply anomalous?
2. This method is simply data augmentation utilizing a generative model. What is the novelty of the method beyond putting two existing pieces together?
3. There is no proof that the improvement of the method is actually because of the pseudo-anomalous data. By are the authors claiming this? How do they verify this?
Theoretical Claims: The authors do not put any theoretical claims forward.
Experimental Designs Or Analyses: I believe the experimental design does not help in answering the questions.
Supplementary Material: The supplementary material does not show a lot of important details.
How many diffusion graphs are generated?
How much time does it take to train the GNN using the diffusion model method?
There is an ablation study missing on the number of generated graphs for the joint training.
Relation To Broader Scientific Literature: I believe the paper is a data augmentation paper.
Data Augmentation for Supervised Graph Outlier Detection via Latent Diffusion Models
Kay Liu, Hengrui Zhang, Ziqing Hu, Fangxin Wang, Philip S. Yu
Counterfactual Data Augmentation With Denoising
Diffusion for Graph Anomaly Detection
Xiao et al.
Essential References Not Discussed: See item before.
Other Strengths And Weaknesses: I believe the paper has very little novelty.
Other Comments Or Suggestions: None.
Questions For Authors: See before.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for the comments. Hope our responses below help to solve your concerns.**
**To Q1:**
1. **What are pseudo-anomalous graphs?** Pseudo-anomalous graphs are graphs generated via a controlled latent diffusion process (refer to Section 4.3). These graphs are optimized to resemble normal graphs yet exhibit subtle deviations, therefore mimicking potential anomalies and providing effective auxiliary signals for refining the decision boundary.
2. **Why are they pseudo-anomalous and not simply anomalous?** The graphs are termed "pseudo-anomalous" as they are not sampled from real anomaly data, but are generated by introducing subtle perturbations to normal graphs. In unsupervised GLAD, real anomalous data is not utilized during training. Therefore, our approach leverages pseudo-anomalous graphs to compensate for this absence.
**To Q2:** Please note that our work focuses on GLAD, and is not merely a data augmentation paper. We want to highlight two key innovations of AGDiff beyond standard augmentation:
1. AGDiff leverages a conditioned latent diffusion process to generate pseudo-anomalous graphs from normal data (Section 4.3), which is not arbitrary augmentation but a targeted approach to simulate potential graph-level anomalies via controlled perturbation in an unsupervised manner, which addresses the anomaly scarcity problem in GLAD. Therefore, our approach is distinctly different from GOEM [Liu et al.]’s supervised diffusion process (relying on labeled anomalies) and CAGAD [Xiao et al.]’s counterfactual augmentation (based on modifying the neighboring node attributes).
2. Unlike the traditional augmentation paradigm where augmentation is decoupled from learning, AGDiff integrates the diffusion process and anomaly detection in a joint training framework (Section 4.4). This enables bidirectional feedback where the detector refines the generation process, and the generator is adapted based on feedback from the detector to generate more challenging pseudo-anomaly samples.
Moreover, we want to emphasize that GOEM [Liu et al.] and CAGAD [Xiao et al.] are designed for node-level tasks, which differs fundamentally from AGDiff. Our approach is specifically tailored for graph-level anomaly detection, addressing the scarcity of anomalous data with an unsupervised and adaptive strategy.
**To Q3:** The positive effect of the pseudo-anomalous data can be validated in the following parts of our paper: (1) In the ablation study (Table 3), the performance significantly drops ($\geq$ 20\%) when we remove the latent diffusion module (no pseudo-anomalous data generated) or the conditioned vector (determine the generation quality of pseudo-anomalous data). (2) Refer to Appendix A - through the theoretical analysis and empirical comparison with the reconstruction-based model, we demonstrated the benefits of pseudo-anomalous data in improving the discrimination capability of the anomaly detector. These findings are good evidence of the contribution of pseudo-anomalous data.
**To Q: Theoretical Claims:** We theoretically and empirically demonstrated why AGDiff could outperform the reconstructor-based approach. (Refer to comments of Reviewer kMMx, BUES, and 2Zh2)
**To Q: Experimental Designs Or Analyses:** Here are the summarized questions that our experiments answered:
```
1. How does AGDiff compare with SOTA GLAD baselines? (Refer to Table 1)
2. How does AGDiff perform in large-scale imbalanced GLAD? (Refer to Table 2)
3. How can AGDiff's discrimination ability be verified? (Refer to Figures 2 & 5)
4. How does the hyper-parameter impact the performance? (Refer to Figure 3)
5. How does each component in AGDiff contribute to performance? (Refer to Table 3)
6. Why can AGDiff outperform reconstruction-based methods? (Refer to Appendix A)
```
We have further answered several questions in the rebuttal (see responses to Reviewers kMMx's W2\&W4, BUES's W2\&W4, 2Zh2's W3). If you have other questions, please feel free to discuss them with us.
**To Q: Supplementary Material:**
1. We set the number of generated graphs equal to the normal graphs as this balanced configuration leads to more stable training. As per your suggestion, we further conducted an ablation study by varying the ratio $r$ of generated pseudo graphs relative to normal graphs. The results (on DD) below indicate that the balanced setting (i.e., 100\% ratio) yields the best performance.
$\begin{matrix}
\\hline
&\text{Metrics}&r=30\\%&r=60\\%&r=90\\%&r=100\\%\\\\\hline
\text{DD}&\text{AUC}&85.06(3.42)&86.72(0.70)&87.30(0.49)&\bf88.23(0.67)\\\\
&\text{F1-Score}&80.78(4.48)&83.23(1.45)&84.03(0.36)&\bf84.06(0.59)\\\\\hline
\end{matrix}$
2. It is important to clarify that we are not training the GNN using the diffusion model. The diffusion model is jointly trained with the anomaly detector (GNN as the backbone), and functions as a perturbator to generate pseudo-anomalous graphs. We have analyzed AGDiff's time complexity (refer to Reviewer BUES's W2). | null | null | null | null | null | null |
Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models | Accept (poster) | Summary: The paper introduces Quamba2, a PTQ framework for State Space Models, which aims to reduce model size and improve computational efficiency while maintaining performance. Quamba2 supports multiple bit-width configurations (W8A8, W4A8, and W4A16) for different deployment scenarios, such as cloud services and edge devices. The framework leverages an offline sort-and-cluster approach for quantizing inputs and per-state-group quantization for input-dependent parameters, resulting in significant speed-ups and memory reduction with minimal accuracy loss. The paper demonstrates Quamba2's effectiveness through extensive experiments and evaluations on various tasks and datasets, including MMLU.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: This paper propose a new method to solve the quantization problem in SSMs.
Essential References Not Discussed: "Post-training quantization for vision transformer. Z Liu, Y Wang, K Han, W Zhang, S Ma, W Gao."
Other Strengths And Weaknesses: Strengths:
1. Quamba2 supports multiple bit-width configurations (W8A8, W4A8, W4A16), addressing diverse deployment needs (e.g., cloud throughput vs. edge-device efficiency).
2. The offline cluster-aware weight reordering and per-state-group quantization effectively mitigate SSMs’ sensitivity to quantization errors, improving precision without fine-tuning.
Weaknesses:
1. While emphasizing speed-ups, the paper does not quantify the computational cost of offline weight reordering and clustering, which could affect deployment practicality.
2. The evaluation focuses only on Mamba1/Mamba2. It remains unclear whether Quamba2 generalizes to other SSM architectures (e.g., S4, DSS, DenseMamba).
3. Offline weight rearrangement may complicate integration into existing inference pipelines, especially for dynamic or frequently updated models.
Other Comments Or Suggestions: Please see the above comments.
Questions For Authors: Please see the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful questions. We address the reviewer’s concerns and responses below.
> While emphasizing speed-ups, the paper does not quantify the computational cost of offline weight reordering and clustering, which could affect deployment practicality.
We provide the detailed breakdown of the GPU hours on A5000 for our offline clustering, scale calibration, quantization, and weight packing in Table R5.
| Table R5 | Group & Reorder | Scale Calib. | GPTQ | Weight Packing | Total (hr) |
|--------------|-------------------------|-----------------|-----------|----------------------|-------------|
| Quamba2-2.7B | 0.04 | 0.05 | 0.07 | 0.01 | 0.17 |
| Quamba2-8B | 0.05 | 0.17 | 0.10 | 0.03 | 0.35 |
We note that this is a one-time cost. The quantized model can be deployed to various devices and applications.
> The evaluation focuses only on Mamba1/Mamba2. It remains unclear whether Quamba2 generalizes to other SSM architectures (e.g., S4, DSS, DenseMamba).
We apply our method to the latest Jamba-1.6-Mini, a large SSM-based language model with 52 billion parameters (12B active / 52B total), and show the result in Table R12. Following the official guidelines [8], we quantize the Transformer and MoE blocks with *bitsandbytes* [9, 10], while keeping the Mamba blocks in half-precision. Next, we apply different precision levels to the Mamba blocks using our framework. We observe that the accuracy degradation after quantizing the Mamba blocks to low bit-width is approximately 1%. This demonstrates that our quantization framework enables effective low-bit quantization for large-scale Mamba-Transformer hybrid models, addressing a key limitation of existing LLM quantization techniques.
|Table R12 | Transformer | MoE | Mamba | Avg Acc |
|-----------------------------------|-----------------|----------|------------|--------|
|Jamba-1.6-Mini | W8A8 | W8A8 | FP16 | 78.3% |
|Quamba2-Jamba-1.6-Mini | W8A8 | W8A8 | **W8A8** | 77.2% |
| | W8A8 | W8A8 | **W4A8** | 77.3% |
| | W8A8 | W8A8 | **W4A16** | 78.0% |
> Offline weight rearrangement may complicate integration into existing inference pipelines, especially for dynamic or frequently updated models.
Our framework only increases the common GPTQ quantization pipeline by 0.05 GPU hours (~3 minutes) on A5000, while largely boosting the accuracy by 13.7% for Quamba2-8B-W4A8, as shown in Table R13.
| Table R13 | Group & Reorder | Scale Calib. | GPTQ | Weight Packing | Total (hr) | Acc. |
|--------------|-------------------------|-----------------|-----------|----------------------|-------------|---------|
| GPTQ | - | 0.17 | 0.10 | 0.03 | 0.30 | 55.1%|
| Ours | 0.05 | 0.17 | 0.10 | 0.03 | 0.35 | 68.8%|
Notably, our framework also supports LoRA modules for various downstream applications. Once the trained LoRA weights are fused into the main model weights, the same weight reordering and quantization pipeline is reused and applied.
[8] https://huggingface.co/ai21labs/AI21-Jamba-Mini-1.6
[9] Dettmers, et al. "LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale"ArXiv (2022)
[10] Dettmers, et al. "8-bit Optimizers via Block-wise Quantization" ICLR (2022) | Summary: The paper introduces Quamba2, a robust and scalable post-training quantization framework tailored for Selective State Space Models (SSMs), specifically Mamba1 and Mamba2. Quamba2 leverages structural properties unique to SSMs, such as channel order preservation and activation persistence, through novel techniques like sort-and-cluster and per-state-group quantization, significantly enhancing quantization accuracy. The framework supports various bit-width configurations, achieving notable speedups and a 4× reduction in memory usage, with minimal accuracy loss. Extensive experiments demonstrate Quamba2's effectiveness, outperforming state-of-the-art approaches in both speed and accuracy, while maintaining generalizability across diverse tasks, including the challenging MMLU dataset.
Claims And Evidence: Yes, the claims made in the submission are generally well-supported by clear and convincing evidence. The paper provides comprehensive experimental results demonstrating significant speedups and memory reductions across various bit-width configurations (W8A8, W4A8, W4A16), comparing directly to relevant baselines such as MambaQuant and Quamba, thus adequately supporting claims of superiority. Key conceptual contributions, including "sort-and-cluster" and "per-state-group quantization," are clearly illustrated through ablation studies, justifying their effectiveness in addressing quantization-induced errors in Selective SSMs.
Methods And Evaluation Criteria: Yes, the proposed claims, including improved accuracy, speedup, and memory reduction via techniques like sort-and-cluster, per-state-group quantization, and cluster-aware weight reordering, are clearly supported by detailed experiments on multiple benchmarks (LAMBADA, HellaSwag, PIQA, ARC, and WinoGrande), latency profiling, and memory comparisons.
Theoretical Claims: The paper does not present explicit theoretical proofs requiring validation.
Experimental Designs Or Analyses: Yes, the experiments are methodologically sound and comprehensive: the authors clearly justified the benchmark choices, carefully detailed their quantization methods, and presented extensive latency and accuracy comparisons. The ablation studies effectively validate individual contributions of their techniques, such as sort-and-cluster and per-state-group quantization, which clearly isolate and demonstrate their empirical benefits.
However, there exists a limited diversity in datasets when evaluating mixed-precision robustness (primarily relying on MMLU), suggesting a broader evaluation could strengthen the robustness claims.
Supplementary Material: Yes, including results for six zero-shot downstream tasks, and the implementation and evaluation details of the Quamba2 framework.
Relation To Broader Scientific Literature: The paper’s key contributions connect closely to prior literature on efficient neural network quantization and compression, especially works focusing on selective SSM. Quamba2 extends previous findings by introducing novel techniques, e.g., sort-and-cluster and per-state-group quantization informed by SSM-specific propertie: channel order preservation and activation persistence, to significantly reduce quantization errors. Also, the paper leverages established methods such as Hadamard transformations and weight reordering to further enhance quantization precision.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- Creative combination of existing ideas, e.g., quantization, weight reordering with novel SSM-specific insights.
- Strong practical significance for efficient deployment of large SSMs.
- Rigorous and clear experimental design and ablation studies.
Weaknesses:
- Limited evaluation of generalizability, primarily focused on MMLU.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive feedback and respond to the concerns as follows.
> Limited evaluation of generalizability, primarily focused on MMLU.
We evaluate our framework on more tasks to show that the W4AX improves generalizability in Table R11. We include BoolQ (accuracy), and the generation-based tasks Natural Questions (NQ) (exact match) and SquadV2 (F1).
| Table R11 | Precision | BoolQ | NQ | Squadv2 |
|---------------|---------------|--------|-------|-----------|
| Mamba2 | FP16 | 76.5 | 17.2 | 51.9 |
| Quamba2 | W8A8 | 68 | 15.0 | 43.6 |
| | W4A8 | 64.6 | 14.2 | 45.9 |
| | W4A16 | 71.2 | 16.6 | 50.7 |
| | W4AX | 73.2 | 14.9 | 47.4 |
We sincerely appreciate the reviewer’spositive feedback and will include the additional evaluations in the final version. | Summary: This paper proposed a quantization scheme for SSMs such as mamba2. By leveraging the channel order preserving and activation persistence characteristics of SSMs, author mainly utilized two existing techniques, reordering and Hadamard rotation, to alleviate the quantization difficulty for SSMs and improved the resulting performance. Because not all the additional quantization-related components are done by offline processing, author also profiled the model inference and verified real end-to-end speed-up. Furthermore, author discussed the speed/accuracy trade-off between different quantization configurations, e.g. W8A8, W4A8, W4A16, and mixed precisions. Lastly, author demonstrated that for model size greater than 2.7B, quantizing embeddings and lm_head to 4bit has almost no accuracy penalty, so it can be used to further reduce model size for deployment.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA, no theoretical derivations in this work.
Experimental Designs Or Analyses: Yes, looks reasonable, no issues.
Supplementary Material: Yes, Appendix A to C.
Relation To Broader Scientific Literature: improve upon previous SSM quantization works. doesn't seem to improve W8A8 accuracy compared to MambaQuant (Table 6) but this work enables 4bit options so that it can improve the generation speed of memory-bound cases (Table 5).
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength
1. well written and easy to follow.
2. covered most of the important quantization considerations and demonstrated real speed-up using custom kernels.
3. promised to open source kernels.
Weakness
1. Even though most of the additional processing are offline, some of the steps sound a bit time-consuming than others, e.g. clustering. It would be beneficial to give reader a rough idea and a few examples of the GPU hours of the key steps, maybe a break down table similar to Table 8 or 9.
2. The value of quantizing SSD scan and Conv layer is a bit unclear. Author mentioned the purpose to quantize SSD to 8bit is to reduce memory pressure. But general understanding is that SSD and Conv are very light in computation and constant memory with respect to sequence length, unlike KV cache. In fact, Table 3 shows that with input seq_len=1024, bs=8, INT8 SSD only saves ~0.5 msec out of ~3msec. Compared to Table 5 input seq_len=1024 bs=1, time-to-first-token for INT8 case is ~122 msec. The saving seems negligible. Maybe the author could elaborate more on SSD quantization and provides some justifications for doing so. And it would be helpful to add this option to one of the ablation tables (8 to 10) and show the impact of SSD quantization on accuracy.
3. Experiments 5.1 Line 346, "...percentile clipping on the input SSM, ..." may need to be a little more specific. For example, "input SSM" refers to one, some, or all in (A, B, C, x)? which percentile were used? and maybe a simple sentence or two to explain the motivation or impact would be nice.
Other Comments Or Suggestions: NA
Questions For Authors: please see "Weakness" above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and positive feedback. We address the reviewer’s questions below.
> a breakdown table for GPU hours.
We provide the detailed breakdown of GPU hours for Quamba2. We report the GPU hours on A5000 for offline clustering, scale calibration, quantization, and weight packing in Table R5.
| Table R5 | Group & Reorder | Scale Calib. | GPTQ | Weight Packing | Total (hr) |
|-|:--:|:--:|:--:|:--:|:--:|
| Quamba2-2.7B| 0.04 | 0.05 | 0.07 | 0.01 | 0.17 |
| Quamba2-8B | 0.05 | 0.17 | 0.10 | 0.03 | 0.35 |
> show the impact of SSD quantization on accuracy.
We show the impact of causal convolution and SSD scan on Mamba2-2.7B in Table R6. We quantize the weights to 4-bit with different activation bit-width settings, and report the accuracy on the Lambada dataset. We apply our quantization techniques to SSD inputs (B, C, x).
| Table R6 | Inproj output | Conv output | SSD output + Had |Acc. |
|:---:|:---:|:---:|:---:|:---:|
| | (Conv input) | (SSD input) | (Outproj input) | |
| | FP16 | FP16 | FP16 | 68.8% |
| | Int8 | FP16 | FP16 | 68.0% |
| | Int8 | Int8 | FP16 | 67.2% |
| | Int8 | Int8 | Int8 | 65.6% |
> The value of quantizing SSD scan and Conv layer is a bit unclear.
The SSD update is a latency and memory bottleneck during generation, especially as batch size increases, although the SSD scan remains minor during prefilling. We profile latency (µs) of the linear layer, causal convolution, and SSD update, along with memory usage (MB) during generation for Quamba2-8B W4A8 and W4A16. As shown in Table R7, cached SSM and convolutional states grow linearly with batch size, eventually exceeding model size and dominating both latency and memory in large-batch scenarios.
| Table R7 |Model size (MB) | bsize | Linear (µs) | Conv (µs) | SSD (µs) | conv size (MB) | state size (MB) |
|-----------|------------------|-------|-----------|-----------|-----------|------------|-------------|
| Quamba2-8B-W4A8 | 4049 | 1 | 95.08 | 2.2 | 5.825 | 2.2 | 56 |
| | | 64 | 155.7 | 22.2 | **219.99** | 140 | **3584** |
| | | 128 | 254.37 | 53.2 | **466.65** | 280 | **7168** |
| Quamba2-8B-W4A16 | 4056 | 1 | 96.9 | 1.79 | 6.22 | 4.4 | 112 |
| | | 64 | 170.59 | 25.2 | **397.84** | 280 | **7168** |
| | | 128 | 364.22 | 47.6 | **785.46** | 560 | **14336** |
Our W4A8 models with 8-bit SSD inputs halve the cached state size and improve time-per-output-token (TPOT, i.e., generation) latency, as shown in Table R8. End-to-end TPOT latency is reported in milliseconds (ms), and OOM indicates an out-of-memory error.
|Table R8| Bit-width| b=1 | b=32 | b=64 | b=128 | b=256 |
|-|----------|------|------|------|-------|-------|
|Mamba2-8B|FP16 | 22.73 | 35.74 | 49.63 | OOM | OOM |
|Quamba2-8B| W8A8 | 12.61 | 23.83 | 30.82 | 44.85 | 79.65 |
| | W4A8 | 7.43 | 15.05 | 24.65 | 44.54 | 85.26 |
| | W4A16 | 7.58 | 20.58 | 38.48 | 74.25 | OOM |
These justify our motivation of quantizing 8-bit SSD and the cached SSM states.
> Experiments 5.1 "input SSM" refers to one, some, or all in (A, B, C, x)? Which percentile were used? explaining the motivation or impact would be nice.
For the Mamba2 baseline, we follow the setting in Quamba [5] and use their official implementation. Specifically, we apply the percentile clipping to the x input activation and per-tensor quantization for B and C for Mamba2 models. We list the clipping percentiles in Table R9.
| Table R9 | x perc. |
|--------|----------------|
| 2.7B | 0.9995 |
| 8B | 0.999 |
In Table R10, we apply clipping to (B, C, x) for Quamba2-8B-W4A8, respectively, and compare to our proposed weight-reordering and clustering. We apply per-group GPTQ quantization to the weights and Hadamard transforms to the SSD output. We report the accuracy on the Lambada dataset.
| Table R10 | Bit-width | B/C | x | Acc. |
|------------|------------|----------------|------------|-----------|
|Mamba2-8B| FP16 | - | - | 70.9% |
|Quamba2-8B| W4A8 | Per-tensor | Clipping | 55.1% |
| | W4A8 | Clipping | Clipping | 58.7% |
| | W4A8 | PerSG | Clipping | 60.7% |
| | W4A8 | PerSG | SnC | 68.8% |
We sincerely thank you for your valuable input which prompted us to clarify the details of our work.
[5] Chiang, et al. "Quamba: A Post-Training Quantization Recipe for Selective State Space Models." ICLR (2025).
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. Please do include the justifications in the revised version. I'd like to keep my assessment unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your positive feedback and thorough review. We will incorporate the results into the revised version. | Summary: This work introduces Quamba2, a novel post-training quantization (PTQ) framework designed for State Space Models (SSMs), particularly the Mamba1 and Mamba2 architectures. The work addresses the challenge of efficiently scaling SSMs for deployment in cloud and edge computing environments by optimizing low-bit-width quantization techniques (W8A8, W4A8, and W4A16). Unlike previous methods that struggle with quantization-induced errors in SSMs, Quamba2 leverages channel order preservation and activation persistence to improve quantization accuracy. The framework employs a sort-and-cluster technique for input processing and per-state-group quantization for model parameters, ensuring computational consistency while minimizing accuracy loss. Experimental results demonstrate that Quamba2-8B achieves up to 3× generation speed-up, 4× memory reduction, and only a 1.6% average accuracy drop compared to full-precision models.
Claims And Evidence: While the claims regarding the technicality and results are well-supported, some claims remain problematic in my opinion.
1. The paper claims Quamba2 outperforms "several state-of-the-art SSM quantization methods," but there are comparisons to only two previous methods (MambaQuant and Quamba).
2. While the paper demonstrates deployment on Nvidia Nano 8G, it is not clear whether Quamba2 can enable real-world performance for edge applications better than non-SSM quantized models.
3. The paper mentions evolutionary search for mixed precision configurations but doesn't thoroughly explain the search space and constraints
Methods And Evaluation Criteria: Mostly yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: While the experimental design is generally sound and appropriate for the claims made, it would be better if the authors can add more statistical rigor, and clearly report the experimental parameters.
1. The paper doesn't clearly specify the batch sizes for all accuracy evaluations, which could affect performance metrics. Statistical significance testing is absent when comparing different methods, making it difficult to determine if observed differences are meaningful or just noise.
2. The ablation studies focus primarily on the 8B model; additional ablations on smaller models would strengthen generalizability claims.
3. The 3:1 ratio of W4A8:W4A16 seems predetermined rather than being a result of optimization
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: The paper's novelty is somewhat incremental. While Quamba2 introduces techniques like sort-and-cluster and per-state-group quantization specifically for SSMs, these methods largely adapt existing quantization approaches to the unique properties of State Space Models. The core insights about channel persistence and state persistence in SSMs are intriguing, but the paper primarily builds upon and extends two recent works (MambaQuant and Quamba) rather than presenting fundamentally new quantization paradigms. Core approaches like weight grouping, per-channel/group quantization scaling, Hadamard transforms to smooth activation distributions, and mixed-precision configurations are all established techniques in the Transformer quantization literature. The mixed-precision configuration approach, while practical, employs standard evolutionary search methods seen in other domains.
Essential References Not Discussed: 1. This paper does not cite some of the relevant literature in quantization of Mamba models (when there are not a lot of works yet on this topic). For example, [1] explores binary (1-bit) quantization for state space models and addresses some of the same challenges with linear recurrence sensitivity. While more extreme than Quamba2's focus, this work provides important context on the quantization limits of SSMs, and it would be important to compare the performance-efficiency trade-off with this work.
2. The paper uses evolutionary search for mixed precision but does not cite relevant prior work on hardware-aware neural architecture search that specifically targets bit-width selection, such as HAQ [2] or HAWQ [3], which pioneered similar approaches for Transformer models.
[1] Tang, S., Ma, L., Li, H., Sun, M., & Shen, Z. (2024). Bi-mamba: Towards accurate 1-bit state space models. arXiv preprint arXiv:2411.11843.
[2] Wang, K., Liu, Z., Lin, Y., Lin, J., & Han, S. (2019). HAQ: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Dong, Z., Yao, Z., Gholami, A., Mahoney, M. W., & Keutzer, K. (2019). HAWQ: Hessian aware quantization of neural networks with mixed-precision. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Other Strengths And Weaknesses: Please see the points above.
Other Comments Or Suggestions: N/A
Questions For Authors: Please clarify the novelty and the experimental design choices as explained above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We address the reviewer’ concerns and responses below. References [1-3] are from the review.
> The paper's novelty is somewhat incremental.
Our SSM quantization framework **introduces novel observations**, **open-sources low bit-width kernels**, and **explores mixed-precision** strategies to improve the generalization, which are distinct from *prior SSM quantization* work [4, 5]. We summarize the key novelty and contributions of our work:
- We identify three key properties in SSMs: channel order, channel persistence, and state persistence.
- Based on these, we propose sort-and-cluster and per-state-group quantization, validated on large-scale datasets.
- We explore W4AX mixed-precision quantization to boost robustness and generalization of low-bit SSMs.
- Our framework supports W4A8, W4A16, W4AX, and W8A8, enabling flexible deployment and speedups across platforms.
> Outperform only two previous methods.
To the best of our knowledge, the most **up-to-date**, **peer-reviewed**, and **strongest** *SSM baselines* were included by the Jan 30 deadline. We compare against and outperform *two and only two* latest state-of-the-art *SSM quantization* methods, MambaQuant and Quamba, which were accepted to ICLR 2025 on Jan 22. We will change “several state-of-the-art” to “two latest state-of-the-art” in our manuscript.
> Compared with 1-bit Bi-mamba
We compare our framework against 1-bit Bi-mamba on Mamba2 2.7B in terms of storage, performance, GPU hours, and cost versus the average accuracy of PIQA HS WG ARC-e ARC-c in Table R1. We estimate the A100 GPU hours for a fair comparison.
|Table R1|Storage|Tokens|GPU Hrs|Cost|Avg Acc|
|-|-|-|-|-|-|
|Bi-mamba-2.7B-W1A16| 0.55 GB |105B|7822|$32070|52.68|
|Quamba2-2.7B-W4A8|1.4 GB|0.18M|0.05|$0.205|61.36|
Although QAT-based Bi-Mamba offers better storage efficiency, it requires several orders of magnitude more tokens, GPU hours, and cost (more than 100,000$\times$) compared to our PTQ framework.
> The paper doesn't clearly specify the batch sizes for all accuracy evaluations. Statistical significance testing is absent.
We use a batch size of 16, a fixed random seed, and report the average accuracy over five runs (Section 5.1) for all experiments. We include standard deviation in Table R2 for Quamba2-8B-W4A8. Notably, our method outperforms all baselines on Mamba2 beyond the reported deviations.
|Table R2|LA| HS|PIQA|Arc-E|Arc-C|WG|Avg|
|-|-|-|-|-|-|-|-|
|Quamba2-8B-W4A8|68.6±0.10|77.1±0.23|79.1±0.21|75.0±0.24|46.0±0.39|68.7±0.28|69.1±0.17|
> The ablation studies focus primarily on the 8B model.
We conduct the same ablation study for Quamba2-2.7B-W4A8, and show the results in Table R3. We note that the group sizes of B/C for 2.7B are one, making it equivalent to per-tensor quantization. Our framework in the last row outperforms all other settings.
|Table R3|Weights||Had.|B/C|x|Acc.|
|-|-|-|-|-|-|-|
||PerG|GPTQ||PerSG|SnC||
|FP16|-|-|-|-|-|69.5%|
|W4A8|✓|||-||fail|
||✓||✓| - ||39.8% |
||✓|✓|✓|-||51.2%|
|W4A8 (Ours) |✓|✓|✓|-|✓|65.6%|
> Quamba2 better than non-SSM quantized models on edge?
We include the 4-bit Llama-3-8B [7] in Table R4 for reference. Due to time constraints, we report only the Time-To-Last-Token (TTLT) in seconds on an A5000 GPU, using 2K input tokens and 2K generated tokens, along with the average accuracy across six zero-shot tasks. We will include latency results on the Nano 8G in the next version of our manuscript.
|Table R4|Bit-width|Avg Acc|TTLT (2k+2k)|Memory|
|-|-|-|-|-|
|Llama-3-8B|FP16|70.4%|48.52|15.4G|
|Llama-3-8B-QServe|W4A8KV4|69.0%|24.63|5.7G|
|Mamba2-8B|FP16|70.7%|47.3|15.7G|
|Quamba2-8B (ours)|W4A8|69.1%|15.4|4.0G|
> Explain the search space and constraints. The 3:1 ratio of W4A8/A16 seems predetermined.
Our search space consists of N^2 configurations, where N denotes the number of layers. We fix the W4A8/A16 ratio and search for the precision for each layer to achieve the best accuracy. We will include more configurations in our manuscript.
> Evolutionary search for mixed-precision seen in other domains. Cite relevant on hardware-aware NAS targeting bit-width selection.
Our work to address the performance gap for low bit-width SSMs on large datasets by mixed-precision with generic speedups, which are different from CNNs [2, 3], Transformers [6], and prior SSM quantization work [4, 5]. We will cite and discuss the relevant work in our final version.
[4] Xu, et al. "MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation Methods." ICLR (2025).
[5] Chiang, et al. "Quamba: A Post-Training Quantization Recipe for Selective State Space Models." ICLR (2025).
[6] Zhao, et al. "Automatic mixed-precision quantization search of bert." IJCAI (2021).
[7] Lin, et al. "QServe: W4A8KV4 Quantization and SystemCo-design for Efficient LLM Serving." MLSYS (2025). | null | null | null | null | null | null |
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition | Accept (poster) | Summary: MoHAVE is a novel audio-visual speech recognition system that addresses the scalability challenges in traditional AVSR models. The paper introduces a sparse Mixture-of-Experts (MoE) framework combined with a hierarchical gating mechanism that dynamically routes audio-visual inputs to modality-specific expert groups. Here’s a brief summary:
MoHAVE leverages a sparse MoE architecture to scale model capacity while keeping computational costs low. Its hierarchical gating system consists of an inter-modal router that assigns weights to audio and visual expert groups based on input characteristics, and intra-modal routers that further select the top experts within each group. This design enables the model to adapt to varying noise conditions by shifting reliance between modalities—using more visual cues in high auditory noise and vice versa. The paper demonstrates that MoHAVE achieves state-of-the-art performance on robust AVSR benchmarks like LRS3 and in multilingual speech recognition and translation tasks, all while activating only a fraction of its total parameters during inference.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: A scalable sparse MoE framework tailored for audio-visual speech recognition.
A novel hierarchical gating mechanism that dynamically adjusts expert group utilization based on input context.
Superior robustness and accuracy under noisy conditions, setting new performance benchmarks on standard AVSR tasks.
Essential References Not Discussed: The key contribution is a MOE-based AVSR model, and cite mainstream prior works.
Other Strengths And Weaknesses: Overall, I think this is a good piece of work; the MoE technique is well-suited for AVSR, and previous studies have shown that simply fusing the two modalities can lead to models that overly depend on audio. From an AVSR perspective, the experimental results achieved in this paper are excellent, representing a robust algorithm. However, since my own research field does not focus on MoE, I cannot provide a particularly meaningful evaluation of the innovations in the MoE component.
Other Comments Or Suggestions: Na
Questions For Authors: Na
Ethical Review Concerns: Na
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper thoroughly and for recognizing the key strengths, including the **suitable hierarchical MoE architecture for AVSR, a novel gating mechanism, and strong experimental results demonstrating robust performances**.
We greatly appreciate your thorough evaluation from the perspective of AVSR, even though MoE lies outside your primary research field. As such, we would like to briefly highlight our contributions of MoHAVE to the MoE framework:
- **Novel hierarchical gating mechanism**: An inter-modal router dynamically allocates tokens to audio or visual expert groups based on input characteristics, and an intra-modal router further dispatches tokens to appropriate experts within these groups.
- **Adaptive and robust MoE**: Previous works for multimodal MoEs have relied on modality fusion strategies or assigning fixed roles to each expert. While they lack adaptability and robustness to dynamically changing noisy environments, our work addresses this limitation by dynamically adjusting the usage of expert groups.
We also kindly encourage you to go through other reviewers' comments as well as our responses, and if any additional concerns arise, please let us know through the discussion phase. We would be glad to provide any further clarification. | Summary: This paper proposes an adaptive hierarchical routing mechanism in mixture of experts model for audio-visual (AV) speech recognition and AV language X speech to English text translation. As compared to a hard routing of modalities into modality specific expert groups, this paper uses a combination of inter-modal and intra-modal routers while still keeping audio and video experts separate. This way the model can utilize A-only, V-only and AV data in one model. The paper also implements some load balancing techniques to balance the mixture weights.
For AVSR, LRS3 dataset is used and the model performance is reported under various noise conditions. For cross-lingual speech task, MuAViC benchmark is used. The model is compared agains Base AV MoE method and AV-HuBERT method. In the AVSR experiments, it is shown that the WER on noisy datasets improve over AV-HuBERT and that the load biasing is a crucial implementation detail that affect the final WER. In the X-En Speech-to-Text Translation, the large MoHAVE model outperforms XLAVS-R models for most of the test languages in terms of the BLEU score.
## update after rebuttal
I would like to keep my score after rebuttal.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Equations in the main text mostly seem to be correct. In Eq. (11), $E^{A}$ is used both as the set of experts and the number of experts in summation, hence the notation is a little confusing.
Experimental Designs Or Analyses: Went through the tables but have not checked the details of Figs. 4 and 5 (which have an expected pattern of relying less on audio and more on video in low SNR conditions).
One of the main contributions is presented as "robust AVSR benchmarks including multilingual tasks, delivering high accuracy while maintaining computational overhead. " However, the computational efficiency is only discussed in the appendix. It could have been useful to include that analysis in the main text and Fig. 4 or 5. could have been moved to the Appendix.
Supplementary Material: Skimmed through it. Read through Appendix A. 2. where the computational costs are discussed.
Relation To Broader Scientific Literature: In line with previous studies, load balancing is crucial for the success of the MoE implementation. The analysis of the mixture weights per modality was also supporting earlier observations around relying more on the visual component when the acoustic noise is heavy.
Since multimodal systems are becoming more popular, the proposed hierarchical multimodal MoE might be helpful for other studies in other multimodal applications.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: + Strengths: A variant of the MoE routing is proposed which might be useful for other multimodal applications. Results show some improvement over the AV-HuBERT baseline for AVSR and MoHAVE also can handle some multilingual tasks.
- Weaknesses: The notation in the equations could be impoved, and the paper might benefit from including the computational cost analysis in the main text.
Other Comments Or Suggestions: 1) Eq. 11, please check the notation as described above
2) One of the main contributions is presented as "robust AVSR benchmarks including multilingual tasks, delivering high accuracy while maintaining computational overhead. " However, the computational efficiency is only discussed in the appendix. It could have been useful to include that analysis in the main text and to save space Fig. 4 or 5. could have been moved to the Appendix. This is a minor suggestion.
Questions For Authors: - What was the reason for the following choice? "For sequences containing both audio and video, we exclude them from the load biasing loss calculation but incorporate them into the load balancing."
- How are the batches constructed during training? Do the batches contain a mix of audio-only datapoints, video-only datapoints and AV data points in a single batch?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Weakness 1: The notation in the equations could be improved
A1: Thank you for the detailed review. We will revise the notations, especially in Eq. (11), to clearly distinguish the set of experts from its cardinality.
> Suggestion 1: The computational cost analysis is only discussed in Appendix
A2: We agree with your suggestion that computational efficiency is indeed one of the main contributions of MoHAVE, especially given its sparse MoE architecture that activates only a fraction of parameters. To better highlight this contribution, we will move the computational cost comparison as well as its discussion currently presented in Appx. A.2 into the main body.
> Question 1: What was the reason for the following choice? *For sequences containing both audio and video, we exclude them from the load biasing loss calculation but incorporate them into the load balancing.*
A3: The load biasing loss is designed specifically to encourage modality specialization for a subset of experts. Audio-visual multimodal tokens do not inherently favor one modality-specific expert group over another. Therefore, these tokens are included in the load balancing, ensuring uniform expert group loads across every multimodal token.
> Question 2: Do the batches contain a mix of audio-only, video-only, and AV data points in a single batch?
A4: Yes, each batch is constructed by sampling a total of 320 seconds of data, structured as follows: 25% of the data points in the batch are audio-only (video dropped), another 25% are video-only (audio dropped), and the remaining 50% are multimodal, containing both audio and video. | Summary: The paper introduces MoHAVE, a novel Audio-Visual Speech Recognition (AVSR) framework leveraging a Mixture of Experts (MoE) architecture. By dynamically selecting modality-specific experts through a hierarchical gating mechanism. Experimental results on benchmark datasets demonstrate its effectiveness, outperforming existing models in challenging noisy environments.
Claims And Evidence: The main claims of the paper are:
1. MoE architecture for scaling AVSR systems that effectively scales AVSR models while maintaining computational efficiency
2. Hierarchical gating for adaptive expert utilization
3. Robust AVSR performance
These claims are generally well-supported by experimental results. However, a comparison with all relevant baseline models (e.g. CMA, UniVPM) is important to support claim 3.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-defined. The hierarchical Mixture-of-Experts (MoE) approach is a reasonable architectural choice for improving model scalability and the evaluation benchmarks (LRS3 and MuAViC) are widely used in the AVSR research community. However, the evaluation mostly relies on synthetic noise additions rather than real-world conditions. It would be interesting to show how the model performs on real-world data.
Theoretical Claims: The theoretical claims are well-supported particularly in the discussion of MoE routing mechanisms and hierarchical gating. The paper clearly defines the load balancing loss and load biasing loss to improve expert selection. The hierarchical gating strategy for inter-modal and intra-modal routing is mathematically justified, and the empirical results show effectiveness the approach.
Experimental Designs Or Analyses: The experimental design and analysis are well-structured with the use of two AVSR benchmarks: LRS3 and MuAViC. The authors compare their approach with multiple baseline models (e.g. AV-HuBERT, AV-MoE), perform ablation analysis (load biasing loss, hard routing, and number of activated experts), along with a computational cost analysis that show the efficiency of the proposed model. However, some baseline models (e.g. CMA, UniVPM) are not included in Table 1 for comparison, despite being mentioned in the related work and included in Table 2.
Supplementary Material: I reviewed the supplementary material, focusing on the experimental setup, including model descriptions and computational cost analysis (A.1, A.2), the LRS3 and MuAViC benchmark experiments (A.3, A.4), and additional results on expert group utilization, multilingual performance in clean environments, and variations of MoHAVE implementations (B.1, B.2, B.5). These sections provide valuable insights and further clarify the paper’s methodology
Relation To Broader Scientific Literature: The paper’s key contributions build upon prior work in Audio-Visual Speech Recognition (AVSR) using Mixture-of-Experts (MoE), including models like AV-HuBERT, AV-data2vec, and Auto-AVSR, which utilize self-supervised learning for audio-visual speech processing. However, MoHAVE advances this approach by incorporating a hierarchical MoE framework, enhancing scalability and robustness while maintaining computational efficiency.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths
1. Scalability without excessive computational cost.
2. Adaptive expert selection improves generalization across different noise conditions.
3. Comprehensive benchmarking across various AVSR datasets and multilingual settings.
Weaknesses
1. The model has been evaluated on synthetic data and not on real-world conditions
2. Some relevant baseline models (e.g. CMA, UniVPM) are missing from the comparison.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Why don’t you compare MoHAVE with the CMA (Kim et al., 2024) and UniVPM (Hu et al., 2023c) in Table 1?
2. Can MoHAVE handle asynchronous speech and lip movements (e.g. delays in video frames relative to audio)?
3. How do different languages affect expert selection and the hierarchical MoE routing?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Weakness 1: The model is evaluated on synthetic data and not on real-world conditions
A1: We acknowledge your concern regarding evaluation with synthetic noise data. While standard AVSR benchmarks such as LRS3 and MuAViC typically offer curated datasets with high-quality audio and clear visual information, **these benchmarks alone cannot fully represent real-world noisy conditions.** Therefore, following standard practice in robust AVSR research [1,2], we have introduced various noise conditions to evaluate our MoHAVE's robustness and adaptability.
Additionally, to better reflect real-world noise conditions, we conducted further evaluations by augmenting LRS3 with realistic background audio from the DEMAND dataset [3], which contains recordings from diverse indoor and outdoor environments, e.g., cafeteria. On this enhanced benchmark (at SNR=-10~0), MoHAVE consistently outperformed AV-HuBERT across various real-world settings including **cafeteria (WER: 6.4 vs. 8.6), restaurant (11.9 vs. 13.1), meeting room (4.5 vs. 5.7), and river (4.4 vs. 6.1)**, achieving an **average WER of 3.6% vs. 4.1%** across all 18 environments. These results further confirm MoHAVE’s performance under realistic audio-visual conditions.
> Weakness 2: CMA and UniVPM are missing from Table 1
A2: The CMA and UniVPM models in Table 2 are all built upon AV-HuBERT-Large, **matching the architecture and activated parameter count with the dense (non-MoE) baseline in Table 1.** Furthermore, both Table 1 and Table 2 evaluate under identical experimental setups and noise configurations (i.e., babble, speech, music, and natural noises), while Table 2 reports average results for music and natural. For detail, please refer to the table below.
Method|# Experts|Groups|Activated Params|Total Params|babble|speech|music|natural|N-WER|C-WER
-|-|-|-|-|-|-|-|-|-|-
UniVPM|-|-|478M|478M|9.3|4.1|3.6|3.6|5.2|1.2
CMA|-|-|500M|500M|8.1|2.9|3.8|3.6|4.6|1.5
We note that Table 1 primarily demonstrates the effectiveness and efficiency of MoHAVE, **by comparing different MoE variants**, including standard MoE, hard routing, and hierarchical MoE. In contrast, CMA and UniVPM in Table 2 utilize special modules for cross-modality, which are orthogonal to the MoE framework, focusing instead on audio-visual fusion or feature enhancement strategies independent of expert routing mechanisms. Yet, recognizing the importance of comprehensive comparisons, we have included the result of incorporated CMA + MoHAVE in Table 2. To improve clarity, we will revise by merging Table 1 and Table 2.
> Question 1: Can MoHAVE handle asynchronous speech and lip movements?
A3: We have not yet evaluated MoHAVE under audio-visual asynchronous conditions. Our current framework is optimized for scenarios where audio and video are temporally aligned, as is standard in most AVSR works. Prior works that address the audio-visual asynchrony have proposed solutions like external synchronization module [4], which explicitly model temporal offsets between audio and visual streams. While MoHAVE does not currently model asynchrony, we believe combining MoHAVE with methods that explicitly handle asynchronous inputs could be a valuable extension for future research.
> Question 2: How do different languages affect expert selection and hierarchical MoE routing?
A4: Thank you for this insightful question. Our analysis indicates language-dependent differences in expert allocation within MoHAVE. For example, Arabic tokens tend to be routed more frequently toward visual experts, whereas French or Spanish tokens rely more heavily on audio experts (please see this [**anonymized link**](https://figshare.com/s/a45cbf35654ae37b2ce9)). However, we also note that these trends vary by layer. Also, within each expert group, the intra-modal router’s load-balancing ensures a uniform expert utilization across data samples. Thus, there is no explicit language-specific expert selection within groups, consistent with observations found in [5]. We suppose that more detailed investigation into expert load distribution across languages and its relation to linguistic/paralinguistic characteristics would be valuable future work.
---
References:
[1] Hong et al. "Watch or listen: Robust audio-visual speech recognition with visual corruption modeling and reliability scoring." CVPR, 2023.
[2] Kim et al. "Learning video temporal dynamics with cross-modal attention for robust audio-visual speech recognition." SLT, 2024.
[3] Thiemann et al. "The diverse environments multi-channel acoustic noise database" Proceedings of Meetings on Acoustics, 2013.
[4] Li et al. "Unified cross-modal attention: robust audio-visual speech recognition and beyond." TASLP, 2024.
[5] Zoph et al. "St-moe: Designing stable and transferable sparse expert models." arXiv, 2022. | Summary: This paper enhances audio-visual speech recognition based on MoE with audio/visual hierarchical modeling. This paper attaches the audio-visual MoE fusion parts to the decoder part and controls the audio and video expert groups, respectively, based on the group-level load biasing loss so that each modality contributes to audio-visual speech recognition in a balanced manner. The experimental results show that this balanced (hierarchical) manner performs reasonably from the hard routing-based approaches. The method also achieved state-of-the-art performance in the noisy LRS3 benchmark and multilingual audio-visual speech recognition/translation.
## update after rebuttal
I checked the rebuttal, but I still could not fully convince myself that this paper has significant novelty from AV-MoE (Cheng et al.) and EVA (Wu et al.) in terms of the methodology, as the major concept is similar. The actual experimental comparisons will make their claims stronger. Thus, I want to keep my score as it is.
Claims And Evidence: - The proposed MoHAVE (Mixture of Hierarchical Audio-Visual Experts) shows performance improvements from hard routing-based approaches. However, its margin is small.
- The method shows the state-of-the-art performance on two public benchmarks (audio-visual noisy speech recognition based on LRS3 and multilingual audio-visual speech recognition/translation based on the MuAViC benchmark)
- The paper also shows how much each modality contributes by checking the MoE posterior values, but this does not have comparisons, and it is difficult to validate better interpretability.
Methods And Evaluation Criteria: - The paper uses two public benchmarks (audio-visual noisy speech recognition based on LRS3 and multilingual audio-visual speech recognition/translation based on the MuAViC benchmark). It is based on the established token (word/character) error rate and BLEU score, and its evaluation criteria are valid.
Theoretical Claims: - This paper does not have a theoretical component.
Experimental Designs Or Analyses: - I found that the straightforward approaches based on the hard-routing method are very competitive. The paper should provide the hard-routing method in the following experiments (e.g., Tables 2 and 3).
- The expert load distribution results (Figures 4 and 5) are intuitive and interpretable. However, there are no comparisons, and it is difficult to discuss whether this behavior is better or not. Is it possible to compute that of the hard-routing method? Then, we can discuss how the proposed MoHAVE is more reasonable.
Supplementary Material: - I checked B.5 since I'm interested in the encoder-level fusion used in prior studies (Cheng et al., 2024; Wu et al., 2024). The decoder-level fusion seems more effective, and I recommend the authors emphasize this result in the main document to make a better distinction (Cheng et al., 2024; Wu et al., 2024).
Relation To Broader Scientific Literature: - multi-modal processing (vision, text, and speech) has become very important in recent AI technologies. Also, MoE has become a very active research topic in ML now. So, this research that combines them would have a good broader scientific impact.
Essential References Not Discussed: The paper references prior AV speech recognition methods based on MoE (Cheng et al., 2024; Wu et al., 2024) and distinguishes its focus on general and speech video scenarios. However, this distinction is insufficient, as all approaches (MoHAVE, Cheng et al., 2024; Wu et al., 2024) are applicable to both scenarios. Given the methodological similarities, the paper should include experimental comparisons with these prior works to strengthen its claims.
Other Strengths And Weaknesses: Strengths
- showing the state-of-the-art performance in two public audio-visual speech recognition/translation benchmarks, including the multilingual setup
- the modality contribution is intuitive
Weaknesses
- the proposed method and its improvement are incremental.
- hard-routing methods seem to be strong, and the paper should have more comparisons with them
- AV ASR has already been studied in various literature (e.g., (Cheng et al., 2024; Wu et al., 2024)) and its contribution is marginal. The experimental comparisons with (Cheng et al., 2024; Wu et al., 2024) may somewhat mitigate this weakness.
Other Comments Or Suggestions: - Equation (2): $E_i(x)$ suddenly appears without any explanation. Also, it is confusing because $E$ is used as the number of experts in equation (1). I recommend the authors rewrite these equations.
- This is just a note. The TED organization changed its policy last year and did not want researchers to use the TED data for AI development. The MuAViC dataset is based on the TED data, and it would be difficult to publish the results based on this data in the future.
Questions For Authors: Figures 4 and 5: Can you compute and compare the expert load distribution in the hard-routing methods versus the proposed methods? If the proposed methods demonstrate greater interpretability, this would further substantiate the method’s interpretability claims.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Weakness 1: The proposed method and its improvement are incremental
A1: Thank you for your valuable comments. We would like to clarify that MoHAVE introduces several key innovations in both scalability and robustness for AVSR systems, which go beyond existing works:
MoHAVE is the **first AVSR framework that scales up to ~1B params**, through using a sparse MoE architecture to enable efficient scaling with low computational overhead. To mitigate the model’s inherent bias toward audio, we also introduced **expert group specialization**, followed by a **novel hierarchical gating mechanism** that dynamically routes tokens based on modality reliability and input characteristics. Unlike previous multimodal MoEs relying on modality fusion or assigning fixed roles to each expert, MoHAVE explicitly adjusts expert group usage dynamically—enhancing both adaptability and robustness.
MoHAVE achieves **state-of-the-art results across robust AVSR benchmarks** (Tables 2 and 3). While the improvements in Table 1 may seem modest on average, the gains under severe noise conditions are substantial. As shown in Appx. A.3 (Table 5), MoHAVE-Large achieves 5.0% WER on LRS3 with speech noise at SNR=-10—yielding a **56.1% relative WER improvement over AV-HuBERT-Large, 36.7% over AV-MoE-Large, and 25.4% over the hard-routing variant.** This indicates that MoHAVE correctly predicts over half of the words AV-HuBERT misses.
We believe these contributions pose a significant breakthrough in AVSR scalability and adaptive learning, and our hierarchical routing design offers broad potential for other multimodal MoE applications as well.
> Weakness 2: The paper should have more comparisons with hard routing
A2: We initially introduced hard routing into AVSR to utilize visual experts for noise robustness. However, it lacks adaptability since the allocation of expert groups must be manually fixed (50% audio / 50% visual in our implementation). This approach is sub-optimal depending on audio-visual input quality (as discussed in Fig. 4(b)), and the limitation becomes clearer under challenging environments. Under severe conditions (e.g., Table 5, SNR=-10, -5) hard routing cannot dynamically adjust expert usage, performing much worse than MoHAVE. In response to the reviewer’s suggestion, we additionally evaluated the hard routing model on MuAViC, as **provided below**. Here, hard routing substantially underperforms MoHAVE and even mAV-HuBERT.
Model (Task)|Ar|De|El|Es|Fr|It|Pt|Ru|Avg
-|-|-|-|-|-|-|-|-|-
Hard Routing (AVSR)|93.4|49.3|35.7|20.3|23.6|23.4|24.1|44.7|39.3
**MoHAVE** (AVSR)|92.9|47.3|35.3|18.7|21.2|21.6|21.9|40.6|**37.4**
Hard Routing (AVS2TT)|-|-|6.7|19.9|24.7|19.6|23.0|7.2|16.8
**MoHAVE** (AVS2TT)|-|-|11.4|22.3|27.1|22.1|25.1|9.2|**19.5**
**Regarding Fig. 4 and 5:** computing expert load distribution for hard routing would be trivial. By design, expert usage is manually set depending on the input: 100% audio experts for audio-only, 100% visual for video-only, and 50/50 for audio-visual (finding optimal split is heuristic). Unlike MoHAVE, there is no data-driven or noise-aware expert selection. Thus, hard routing would trivially display static distributions without dynamic behavior.
> Weakness 3: Experimental comparisons with (Cheng et al., 2024; Wu et al., 2024)
A3: Direct comparisons with AV-MoE (Cheng et al.) and EVA (Wu et al.) are unfortunately infeasible due to fundamental differences in target tasks and methods. Both AV-MoE and EVA primarily address audio captioning for visual contexts (e.g., narrating sports game scenes), while our work specifically targets typical AVSR tasks, where both audio and visual inputs directly involve human speech.
Moreover, AV-MoE employs a dense MoE; unlike sparse expert structures commonly used in modern LLMs or Transformers, AV-MoE’s "MoE" is actually implemented as weighting between unimodal and cross-modal adapters, rather than selecting sparse FFN experts. Specifically, AV-MoE uses two entirely separate MoEs for audio encoder and visual encoder, infeasible for processing multimodal tokens. Our approach, MoHAVE, fundamentally differs by employing a sparse multimodal MoE, dynamically routing tokens based on audio-visual inputs.
Closer to our work is EVA, which simply applies a sparse MoE structure into an audio-visual encoder. Although exact implementation details are unavailable (code/checkpoints unreleased), EVA’s structure aligns closely with our basic MoE implementation which we evaluated as *AV-MoE* in Table 1 (*AV-MoE-Base* and *AV-MoE-Large*), except ours is in the decoder. As demonstrated in our study (Table 9 in Appx. B.5), applying MoE at the encoder-level—like EVA—falls behind our multimodal decoder approach. Thus, EVA likely cannot achieve comparable robustness or efficiency.
> Suggestion 1: Rewrite the equations
A4: Thank you. We will revise the equations to clearly distinguish the expert set from its cardinality, defining $E_i$ as the output of the $i$-th expert. | null | null | null | null | null | null |
Scaling Embedding Layers in Language Models | Reject | Summary: This paper proposes SCONE, which is an extended n-gram embedded layer to improve model's performance. SCONE introduces contextualized embeddings for frequently-used n-grams. While these embeddings are learned from a small Transformer model, they can also be precomputed and stored to avoid additional latency. The experiment shows that SCONE achieves better performance without additional inference cost.
## Update after rebuttal
My score keeps unchanged since I have no misunderstandings with the authors.
Claims And Evidence: This paper makes systematic experiments on the inference cost and enhanced performance.
Methods And Evaluation Criteria: This paper leverages well-accepted evaluation criteria.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The experimental designs are reasonable and the results are solid.
Supplementary Material: No supplementary materials.
Relation To Broader Scientific Literature: This is an interesting and novel research topic. Most of research on LLM architecture is usually on the Attention and FFN Part. There are only a few research focusing on the simple input embedding layer. However, this paper is relatively a new research direction to enhance model's capability without additional inference overhead.
Essential References Not Discussed: All essential references are discussed.
Other Strengths And Weaknesses: I think the novelty of this paper is worth praising. There is no clear weaknesses in this paper.
Other Comments Or Suggestions: No other comments.
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your supporting review! We appreciate your recognition of the novelty and contributions of our work. We also welcome any further comments you may have. | Summary: The paper proposes a new method, SCONE, to expand the embedding layer. Instead of directly expanding the vocabulary size, which usually leads to the sparsity issue (long-tailed symbols/tokens receive sparse updates due to their data sparsity), the paper chose to expand the embedding layer by incorporate frequent n-grams (f-grams)embeddings for given inputs. The n-grams embeddings are precomputed by a separate model and can be offloaded to CPU and secondary memory during inference. This allows scaling the modelling capacity of embedding layer without incurring additional inference cost. Experiments on GPT-2 tokenizers and OLMo architecture demonstrate the proposed method has potential to scale either the number of cached n-grams or the model learning them without increasing inference costs.
Claims And Evidence: The claims are well-supported:
- Solid experiments across multiple model sizes (128M to 1.3B) and datasets
- Clear ablation studies on key parameters
- Convincing measurements of storage and latency impacts
- Strong results showing a 1B model with SCONE outperforming a 1.9B baseline
Some are not clear to me. But I feel they are more or less deserving deeper studies rather than this work which focuses on empirical speedup in inference time.
- The paper seems to assume expand embedding layer with f-grams can enhance model performance. But it doesn't thoroughly validate that additional n-grams are necessarily useful across all contexts. I feel in some context, using pure token embeddings should be enough for the task.
- The approach assumes n-grams only be added in embedding layers, but self-attention and feedforward layers also implicitly store bi-gram and tri-gram patterns. Directly offloading these computations could avoid repeated processing and yield further efficiency gains.
Methods And Evaluation Criteria: yes, they make sense
- Their f-gram selection process builds on established BPE approaches
- They use standard metrics (perplexity) on common benchmarks
- They measure both model performance and practical deployment factors
Theoretical Claims: no theoretical claims made though it could be interesting to figure out how much additional expressiveness is added with the f-gram embeddings and how they tradeoff the non-embedding computation with embeddings.
Experimental Designs Or Analyses: yes, they make sense
- Consistent training across baselines and variants
- Good ablation studies
- Appropriate scaling from small to large models
- Testing across diverse datasets
Supplementary Material: yes, appendix A and B
Relation To Broader Scientific Literature: The authors position their work in relation to three main areas of prior research:
- Contextualized Word Embeddings: They acknowledge previous work on incorporating context into word embeddings and highlight that their key novelty is allowing embeddings to be precomputed and offloaded without increasing inference-time FLOPS.
- Scaling of Vocabulary Size: Recent findings from Tao et al. (2024) show that larger models benefit from larger vocabularies but note that optimal vocabulary sizes are still much smaller than model sizes. This motivates the proposed approach of extending the embedding layer without changing the vocabulary size.
- Tokenization in Language Models: They discuss how their method assumes a predefined vocabulary but is not tied to any specific tokenization algorithm, making it compatible with various approaches.
- They also discussed MoE and memory layers in the appendix
Essential References Not Discussed: SCONE's approach to embedding expansion could benefit from situating itself within a broader context of research across multiple domains. The following references would provide important theoretical grounding and suggest potential applications beyond the paper's current scope:
- Recent work [1-4] points out that some implicit circuits or paths inside transformers might already capture n-grams. These works are relevant to understand how SCONE achieves the speedup theoretically. For example, is the offloading of frequent n-grams replacing some of the original n-gram pathways inside transformer (for example, jet bi-gram path or jet n-gram path in [4])? This represents a tradeoff between the in-context computation and the lookup table computation.
- Frequency-related performance degradation is not uncommon in general embedding-based models. The issue of sparsity when scaling vocabulary size is not unique to LLMs but also exists in other embedding systems. Solutions can include adaptively regularizing the embeddings or sparsifying them [5,6] in the recommender systems. Actually, SCONE might be more powerful in the domain of recommender models where embeddings take up an even larger portion of the entire model, which can be a better scenario for SCONE (as in LLMs it seems that scaling the model sizes might diminish the benefit of SCONE).
- Expanding vocabulary and scaling embedding layers is quite important for multilingual applications. Some work on expanding vocabulary [7,8] focuses on expanding embeddings for this purpose. It could be worthwhile to discuss how SCONE can help these domains. For example, instead of finding n-grams in English, can we find and embed the n-grams in a target language? This would show that SCONE can be an effective way to do cross-lingual transfer without incurring non-emebdding inference cost.
By incorporating these references and exploring connections to these adjacent fields, SCONE could establish stronger theoretical foundations while simultaneously demonstrating broader applications beyond language modeling, potentially strengthening both its academic contribution and practical impact.
[1] [Neurons in Large Language Models: Dead, N-gram, Positional](https://arxiv.org/abs/2309.04827)
[2] [Transformer Feed-Forward Layers Are Key-Value Memories](https://aclanthology.org/2021.emnlp-main.446/) (EMNLP 2021)
[3] [Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space](https://aclanthology.org/2022.emnlp-main.3/) (EMNLP 2022)
[4] [Jet Expansions of Residual Computation](https://arxiv.org/pdf/2410.06024)
[5] [λopt Learn to Regularize Recommender Models in Finer Levels](https://arxiv.org/pdf/1905.11596) (KDD 2019)
[6] [Learnable Embedding Sizes for Recommender Systems](https://arxiv.org/pdf/2101.07577) (ICLR 2021)
[7] [On the Cross-lingual Transferability of Monolingual Representations](https://aclanthology.org/2020.acl-main.421.pdf) (ACL 2020)
[8] [Improving Language Plasticity via Pretraining with Active Forgetting](https://arxiv.org/abs/2307.01163) (NeurIPS 2023)
Other Strengths And Weaknesses: #### Strengths
- The motivation for scaling embeddings is compelling and addresses a real bottleneck in language model efficiency
- The approach of replacing partial model computation with precomputed frequent n-gram embeddings is innovative and practical
- The method is conceptually simple and does not rely on particular architectural design of the model. Thus it has great generalizability and could potentially be used for improving performance in other domains beyond language modeling, particularly in recommender systems and cross-lingual transfer.
#### Weaknesses
I would not say these are weakness but rather like interesting directions to explore in the future.
- The focus on embedding offloading is somewhat narrow - the paper could explore offloading other non-embedding compute to CPU as well, such as implementing a query-answer datastore for frequent queries that could similarly speed up inference
- While the paper demonstrates good performance gains, it's not entirely clear whether SCONE can outperform alternative methods like Mixture of Experts (MoE) when focusing solely on inference speedup, though the authors do discuss MoE in the appendix
Other Comments Or Suggestions: some quotations are single commas rather than double commas
Questions For Authors: **Q1:** Can you explain in simple terms how the speedup is achieved? The paper states: "We demonstrate that a 1B parameter model with SCONE outperforms a baseline model requiring ∼2× more inference-time FLOPs."
**Q2:** Does SCONE impact sample efficiency during training? Or does the introduced f-gram embedding require more data to train effectively?
**Q3:** Counting n-grams on a large corpus is computationally expensive and can take days. Could you estimate the time complexity of your counting algorithm and report the runtime for computing n-grams over a typical pretraining corpus?
**Q4:** Looking at Figure 6, SCONE appears to work well with smaller models but seems to bring less improvement to perplexity as model size increases. Could you explain why the benefit margin diminishes with larger models?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our responses below, we’re happy to discuss further if needed. We’ve also included downstream evaluations in our response to Reviewer iJdC.
**1. Missing references**
We thank the reviewer for the insightful references. We have incorporated all the references in the following discussions, and added them to Related Work (Sec 5).
“**Implicit $n$-gram patterns in transformers.** Recent work analyzing the internal mechanisms of transformers has shown that these models often utilize implicit $n$-gram patterns for prediction (Geva et al., 2021; Geva et al., 2022; Voita et al., 2023). For instance, Chen et al., 2024 show that certain attention heads detect specific $n$-gram patterns, while MLPs can perform linguistic operations such as adding the “-ing” suffix. These findings underscore the importance of $n$-gram information in language modeling and offer a potential explanation for the effectiveness of SCONE. An interesting future direction is to examine how SCONE's f-gram embeddings interact with the transformer’s implicit $n$-gram patterns.”
“**Embedding sparsity in multilingual applications and recommender systems.** This work focuses on a common setting for training LLMs: language modeling on large-scale text corpora, primarily in English. However, scaling embedding layers presents challenges beyond this context, particularly due to frequency-related performance degradation caused by sparsity. Multilingual applications are one such scenario. Two phrases in different languages may refer to the same concept but correspond to different embeddings. Their embeddings should ideally be close. Recent work explores methods for learning transferable embeddings in cross-lingual settings (Artetxe et al., 2020; Chen et al., 2023). Another relevant example is scaling the embeddings for recommender systems (Chen et al., 2019; Liu et al., 2021), where embeddings often dominate the model's parameter count due to the high cardinality of user or item categories. For both scenarios, SCONE’s strategy, i.e., parameterizing large embedding tables using a neural network, provides a complementary approach to mitigate sparsity issues.”
**2. The focus on embedding offloading is somewhat narrow.**
We agree that exploring offloading beyond input embeddings could further reduce inference costs and our work can be viewed as a first step. A key research challenge along this direction is deciding what should serve as keys in such a system. We have added the following discussion to the last section:
“An interesting future direction is to extend SCONE beyond short $n$-grams to include longer and frequent queries. A key challenge would be designing effective keys for such queries. Using raw text as keys may lead to low hit rates, as semantically similar queries often differ at the surface level. Alternatively, using semantic embeddings as keys would require discretization methods to map continuous embeddings to a set of keys that supports efficient indexing.”
**3. Whether SCONE can outperform methods like MoE.**
Comparing with MoE is indeed an interesting future work. That said, SCONE offers a key advantage: while both aim to scale model capacity under fixed inference FLOPS, MoE requires all expert weights to reside on accelerators, since any token might activate any expert. In contrast, SCONE is designed for **fixed accelerators memory usage at inference**.
**4. Explain in simple terms how the speedup is achieved. (1B with SCONE v.s. ~2x inference FLOPS baseline)**
The 1B SCONE-enabled model outperforms the 1.9B baseline. The 1.9B baseline requires ~2x inference FLOPS. Thank you for pointing this out; we’ve clarified this in the introduction.
**5. Training sample efficiency of SCONE?**
No, SCONE does not reduce sample efficiency. All models are trained on the same number of tokens, and as shown in Figure 13, the improvements are consistent throughout training.
**6. Complexity of the counting algorithm & its runtime.**
As noted in Section 3.1, there is an efficient implementation for the counting algorithm that requires $n-1$ linear passes over the corpus ($n$ is the max f-gram length). On 1T tokens using 8 processes and $n=5$, processing took about 10 hours. We've included this in the revision.
**7. SCONE appears to bring less improvement to perplexity as model size increases.**
As perplexity decreases, achieving the same absolute reduction becomes more difficult due to the nature of language modeling. In Figure 6, we used a linear scale on the y-axis, which can make improvements at lower perplexity appear smaller. We’ve uploaded a new figure (at [this anonymous link](https://github.com/Anonymous-figure-upload/scone_author_response)) that uses a log y-axis (following recent scaling law studies, e.g., Figure 2 in Hoffmann et al., 2022). The improvements in log scale appear more consistent.
**8. Some quotations are single commas.**
We have changed the single comma quotations in line 567.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns and sharing the new evaluation results including the new figure on perplexity. Below are my comments.
Given the large portion of embeddings in large language models ( as we expand to multiple languages, the vocabulary size will lead to a even larger portion of parameters being the embeddings), the topic of this paper is quite relevant and timely, receiving a large audience as this method can reduce the inference cost significantly.
Evaluation-wise: The authors demonstrate good results using not only perplexity but also metrics on common benchmarks during rebuttal. This new result further convinces me that the paper deserves a clear accept.
Writing and presentation-wise: My other concerns was mainly among how the work can be well contextualized regarding the literature of scaling embedding-based models. The authors provided a revised related work section while also sufficiently discussing their method compared to methods like MoE during the rebuttal stage. I would suggest the authors add a discussion over the scope (bullet point 2 and 3 along with some of the related works) in the appendix.
N-grams-wise: N-grams are interesting themselves as both linguistic analysis targets and the basic component for language modeling. Although the early n-grams models are not as competent as the latest transformer architectures, they can be quite a useful tool for further scaling up the models [9]. Additionally, it is very likely that the LLMs themselves are good n-gram approximators as shown by the line of work on interpretability and theory of transfoemer [1-4,10-12]. I also believe the good work in this line is coming rather than a bold statement like "n-grams has been given up in LLM era".
Overall, I believe this paper deserves a clear accept given its relevance, novelty and good emprical results.
[9] Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens https://arxiv.org/abs/2401.17377
[10] A Mathematical Framework for Transformer Circuits https://transformer-circuits.pub/2021/framework/index.html
[11] Can Transformers Learn n-gram Language Models? https://arxiv.org/abs/2410.03001
[12] The Role of n-gram Smoothing in the Age of Neural Networks https://arxiv.org/abs/2403.17240
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up comments and appreciate the helpful suggestions.
* Regarding the suggestion to add a discussion over the scope.
We will extend the discussion of related work and promising future directions as suggested. This will include: (1) offloading embeddings for longer sequences beyond $n$-grams, (2) connections with implicit $n$-gram circuits in transformers, (3) potential applications of SCONE in multilingual and recommender system settings, and (4) the connections and distinctions among our approach, MoE, and memory layers. | Summary: This paper introduces a technique for expanding input embedding layers to improve the performance of language models. The experimental results show that the solution mentioned in this paper outperforms a 1.9B parameter baseline.
Claims And Evidence: I think the writing of this paper can be improved. Maybe all the claims made in the submission are supported by convincing evidence, but the authors must rewrite some parts of the paper. For example, the caption of Figure 1 (top) claims that with 10M f-grams, the 1.3B model matches the 1.9B baseline, while with 1B f-grams, the 1B model surpasses it. However, the legend of Figure 1 (top) is "+10M f-grams (0.6B f-gram model)", "+10M f-grams (1.8B f-gram model)", "+1B f-grams (0.6B f-gram model)", and "+1B f-grams (1.8B f-gram model)". It seems that no line in Figure 1 represents the 1.3B model.
Methods And Evaluation Criteria: Almost no existing open-sourced model uses the same architecture as that developed in this paper.
Theoretical Claims: This paper does not include any proofs for theoretical claims.
Experimental Designs Or Analyses: I think this paper should include more evaluation.
Supplementary Material: Yes, I review the supplementary materials.
Relation To Broader Scientific Literature: This paper considers f-gram embeddings for transformer-based models.
Essential References Not Discussed: I think this paper almost includes all related works.
Other Strengths And Weaknesses: Strengths:
1. This paper proposes a solution to expand the embedding layer.
2. The experimental results show the effectiveness and efficiency of the developed approach.
Weaknesses:
1. The manuscript's quality is lacking. Major parts should be rewritten, and the figures need to be polished.
2. The motivation of this paper is not clear. Especially, this architecture is not utilized by any open-weighted model. In addition, n-gram has been given up in the LLM era. I'm unsure why the authors employ n-grams to enhance the performance of large language models.
Other Comments Or Suggestions: Here are my comments:
1. lines 88-93, the authors claim "Our experiments with GPT-2 models (Radford et al., 2019) pre-trained on WebText (Peterson et al., 2019)
confirm these limitations: Only 7.3% of embedding vectors in a 2M vocabulary receive more than 100 updates over 100M training tokens, compared to 97.6% for a 32K vocabulary". However, it is known to all that the 7B model is always trained by 1T tokens (https://arxiv.org/pdf/2302.13971), therefore, it is not clear why embeddings receive very few updates when we train the 7B model.
2. Lines 391 and line 396, I feel confused that +10M Vf-gram (0.6B Af-gram) has different performance over c4-en, books, etc. The caption of Table 2 mentions that all models are trained for 200B tokens. In addition, where is the 1.3B model mentioned in the caption of Table 2?
3. The Perplexity of OLMo is important. However, what people really care about is the accuracy of models over downstream tasks. Could you evaluate your pre-trained model over downstream tasks such as Arc-Challenge, Arc-Easy, and Hellaswag?
4. The concept shares similarities with RAG in certain respects. However, according to Table 1, deploying this model requires substantial memory or disk space. Therefore, the model with f-gram embedding is hard to deploy at an edge device.
5. Given the vocabulary size of Gemma is 256,000, I'm concerned that the memory and disk requirements for the f-gram embedding layers for the Gemma model will surge significantly.
The writing issue:
1. the x-label of Figure 1 should be inference FLOPs, not inference-time FLOPs.
2. Is it possible to keep the line width consistent?
3. I guess the red line in Figure 5 is an error bar. However, the error bar always contains a vertical line.
4. As for Figure 6, the legend overlaps with the line.
Questions For Authors: See comments and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestions. Please find our response below.
**1. The manuscript's quality is lacking.**
We have carefully addressed the concerns regarding writing (see below). Please see the updated Figure 1 (and its caption), Figure 5, and Figure 6 at [this anonymous link](https://github.com/Anonymous-figure-upload/scone_author_response). We would like to clarify that these issues pertain to presentation and do not affect the validity of our main claims.
**1.1** Figure 1: the line of 1.3B model and the x-label.
The 1.3B model corresponds to the column with inference FLOPS of $6.20\times 10^{12}$. We have revised the caption to improve clarity.
We have changed the x-label, and other occurrences of “inference-time” FLOPS in the paper, to inference FLOPS.
**1.2** Line width not consistent.
We have adjusted the width of all figures to be $0.9\times$ the text column width.
**1.3** Figure 5: error bars.
We have updated Figure 5 to include vertical lines in the error bars.
**1.4** Figure 6: legend overlaps with the line.
We have adjusted the figure to eliminate any overlap between the legend and the line.
**2. The motivation is not clear. This architecture is not utilized by any open-weighted model. In addition, n-gram has been given up in the LLM era.**
Our motivation is to reduce inference FLOPS by trading computation for RAM or SSD, which are significantly cheaper and more abundant than accelerators. A natural target for this is the embedding layer, due to its inherent lookup-based structure. While we agree that the NLP community has largely moved away from traditional n-gram methods, recent work showed that $n$-grams still play important roles in transformers (due to space limit, please see the references in reviewer LBJ6’s review). We hope our work offers a new perspective on how n-grams can be leveraged to improve the efficiency of LLMs.
**3. … it is known to all that the 7B model is always trained by 1T tokens**
In lines 88–93, our intention is to highlight that increasing the vocabulary size can eventually degrade model performance. We use the update counts over 100M tokens to help explain this phenomenon. We have revised the manuscript to clarify this point:
“In Appendix C, we train GPT-2 models for 80B tokens with vocabulary sizes ranging from 32K to 2M, and observe performance degradation as the vocabulary size exceeds 512K. This degradation may be attributed to the increasing sparsity of updates per token as the vocabulary grows. …”
With 1T training tokens, the absolute number of updates each embedding receives will increase. However, the relative sparsity remains: larger vocabularies still result in fewer updates per embedding on average.
**4. Lines 391 and line 396, I feel confused that +10M Vf-gram (0.6B Af-gram) has different performance over c4-en, books, etc. … In addition, where is the 1.3B model mentioned in the caption of Table 2?**
Although lines 391 and 396 use the same f-gram configuration, they correspond to different main model sizes. Specifically, the results from lines 390 to 394 are based on the 1B model, while lines 395 to 399 correspond to the 1.3B model.
To clarify this distinction, we have added the following to the caption of Table 2:
“We train three baseline models of sizes 1B, 1.3B, and 1.9B. For the 1B and 1.3B baseline models, we apply our SCONE method with four different configurations, and present the results directly below each corresponding baseline model.”
**5. Could you evaluate your pre-trained model over downstream tasks?**
Yes, we evaluated the zero-shot performance of our models on PIQA, MMLU, HellaSwag, ARC-Easy, ARC-Challenge, and CommonsenseQA, following [the recent implementation in the OLMO codebase](https://github.com/allenai/OLMo/blob/main/olmo/eval/downstream.py). Due to space constraints, please see **our response to reviewer iJdC** for the full results. The downstream evaluation outcomes align with the perplexity trends.
**6. … deploying this model requires substantial memory or disk space.**
The configurations in Table 1 use large f-gram embedding layers to demonstrate that even the most resource-intensive setup explored in the paper remains feasible in certain server-based settings.
As shown in Figure 6, SCONE offers clear improvements even with much smaller embedding sizes. For example, with a 512K f-gram embedding layer—approximately 20× smaller than the 10M setting in Table 1—the perplexity of the 589M base model improves from 18.1 to 16.8 on WebText.
**7. The memory and disk requirements for the Gemma model will surge significantly.**
For a given embedding dimension, the storage usage of the f-gram embedding layers is only determined by the number of f-grams, which is a configurable parameter independent of the vocabulary size. Therefore, please note that a larger vocabulary does not lead to higher memory or disk requirements compared to smaller vocabularies.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal! I reviewed the author's rebuttal and the comments from other reviewers.
However, I have the following concerns:
1. I agree that the methodology mentioned in this paper can improve the accuracy of models over downstream tasks in some cases since this paper proposes a novel embedding/tokenizer solution. However, the improvement is limited from the results of 1.9B (56.75%) and 1B + 1B f-grams (57%).
2. I think the presentation of this paper should be improved. For instance, the authors should polish the figures and add some examples (as a response to **Reviewer iJdC**) to aid readers' understanding. This paper appears to be poorly prepared.
3. Since you offload the f-gram embedding layer to the storage, I do not think inference FLOPs is still a good metric. The users really care about end-to-end latency. Inference FLOPs are not equal to end-to-end latency.
4. Almost all papers shared by **Reviewer LBJ6** are before 2024 (only one paper was submitted in 2024), therefore, I still believe that n-gram is not an important technique in the LLM era.
In conclusion, I would not be surprised if this paper is accepted as a poster. However, if this paper is rejected by ICML, I recommend that the authors enhance the presentation of the paper before the next submission. Honestly, I consider this paper to be borderline. Thanks!
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up comments! For improving clarity and presentation, we’ll make sure [the polished figures](https://github.com/Anonymous-figure-upload/scone_author_response) are included in the revision. We will also include the example in the response to Reviewer iJdC.
Regarding the comment on inference FLOPs vs. end-to-end latency, we respectfully note that we did report end-to-end token generation speed at the bottom of Figure 1 (i.e. counting the latency of loading f-gram embeddings to GPU). The plot used the [vLLM package](https://github.com/vllm-project/vllm). The figure shows that: **1)** when f-gram embeddings are stored in CPU memory, the impact on latency is negligible, **2)** when they are stored on NVMe, the generation speed decreases by ~20% (e.g., for the 1B model the speed reduces from ~250 tokens per second to ~200 tokens per second).
We appreciate your thoughtful feedback and hope our clarifications support a potential adjustment of the score toward a borderline recommendation. | Summary: The paper presents a new method for scaling the vocabulary of LLMs. Given some base vocabulary, a set of frequent n-grams is calculated. When such a n-gram is seen, a small transformer (called an f-gram model) is applied to the n-gram embeddings to produce a new embedding. This embedding is then fed to the larger LLM in lieu of the original n-gram. During evaluation, the f-gram embeddings can be precomputed and cached in a separate table. The authors evaluate this method experimentally and find:
1. It improves evaluation perplexity.
2. Perplexity improves as a function of the number of embeddings.
3. Perplexity improves as a function of the f-gram model size.
Claims And Evidence: The claims are generally supported by evidence, but I'm not convinced that perplexity is the best evaluation metric. It will change depending on the tokenizer, and thus it's not a good apples-to-apples metric. It would be better to use MMLU/hellaswag etc.
Methods And Evaluation Criteria: No, see above.
Theoretical Claims: na
Experimental Designs Or Analyses: The design of experiments is sensible, but the evaluation metric is not good imo.
Supplementary Material: Na
Relation To Broader Scientific Literature: Na
Essential References Not Discussed: Na
Other Strengths And Weaknesses: Pros:
1. The idea is simple and natural.
2. The writing is clear.
3. There are a lot of good experiments.
Cons:
1. The evaluation metric (perplexity) is not a good metric. Because of this, I do not think that the evaluation is good. So I'm not convinced that the method works in practice.
Other Comments Or Suggestions: na
Questions For Authors: 1. Can you add numbers for standard LLM evals like MMLU, hellaswag and so on?
2. Can you clarify how the system works during inference? If I decode 3 tokens from the base vocabulary and find that they match an n-gram from the f-gram model, should the model recompute the k,v-cache for these three tokens with the embeddings from the matched n-gram?
EDIT: I have increased my score in response to the non-perplexity eval numbers.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Please find our response below.
**1. The evaluation metric (perplexity) is not a good metric. Can you add numbers for standard LLM evals like MMLU, hellaswag and so on?**
While perplexity is a commonly used metric for evaluating language models, we acknowledge that additional downstream evaluations can further strengthen our work. In response, we have incorporated zero-shot evaluations on standard benchmarks including PIQA, MMLU, HellaSwag, ARC-Easy, ARC-Challenge, and CommonsenseQA, following [the recent implementation in the OLMO codebase](https://github.com/allenai/OLMo/blob/main/olmo/eval/downstream.py). The results are shown below. Thank you for suggesting; we will add the benchmark results in the revision.
| Model | PIQA | HellaSwag | ARC-Easy | ARC-Challenge | Commonsense QA | MMLU_var | Avg |
|--------|---------|-----------|----------|---------------|----------------|----------|--------|
| 1B baseline | 73.57 | 60.93 | 69.47 | 31.76 | 48.73 | 37.61 | 53.67 |
| 1.9B baseline | 75.31 | 65.86 | 74.21 | 36.78 | 49.71 | 38.64 | 56.75 |
| 1B + 10M f-grams| 73.95 | 63.58 | 70.35 | 32.09 | 49.96 | 39.25 | 54.86 |
| 1.3B + 10M f-grams| 75.04 | 65.52 | 75.26 | 36.44 | 49.96 | 38.54 | 56.79 |
| 1B + 1B f-grams| 75.31 | 67.05 | 72.45 | 36.44 | 50.78 | 39.97 | 57.00 |
Applying SCONE, i.e., adding f-gram embeddings, does not increase inference FLOPS and requires only off-accelerator storage. Notably, the 1.9B baseline incurs roughly twice the inference FLOPS of the 1B model. The downstream evaluation results align with the perplexity trends and further reinforce our main claims.
**2. Can you clarify how the system works during inference? If I decode 3 tokens from the base vocabulary and find that they match an n-gram from the f-gram model, should the model recompute the k,v-cache for these three tokens with the embeddings from the matched n-gram?**
To clarify, SCONE does not require recomputing the (k,v) cache during inference. Instead, it fetches the f-gram embedding for each decoded token individually. For example, if the current context is [t0, t1] and the newly decoded token is t2, and we find a match for [t0, t1, t2] in the f-gram embedding layer, we use the f-gram embedding of [t0, t1, t2] as the input embedding for t2 in the main model. This f-gram embedding of [t0, t1, t2] corresponds to the embedding of the last token embedding in the output of the f-gram model (precomputed for [t0, t1, t2]) and only involves simple lookups during inference. If it helps, we can add this example in the revision. | null | null | null | null | null | null |
On the Out-of-Distribution Generalization of Self-Supervised Learning | Accept (poster) | Summary: This paper focuses on the out-of-distribution generalization of self-supervised learning. The authors first give one plausible explanation for SSL having OOD generalization, then analyze and conclude that SSL learns spurious correlations during the training process from the perspective of generation and causal inference. To address this issue, they further propose a post-intervention distribution (PID) grounded in the Structural Causal Model. Experiments verify the advantages of their method.
## Update After Rebuttal
Thank you for addressing my concerns. I maintain my positive score.
Claims And Evidence: It seems convincing.
Methods And Evaluation Criteria: The method and evaluation criteria make sense.
Theoretical Claims: I have checked the proofs of theoretical claims, and the entire theoretical derivation and claims seem appropriate.
Experimental Designs Or Analyses: I have checked the experimental designs and analyses, and the experimental results are impressive.
Supplementary Material: I have reviewed the supplementary.
Relation To Broader Scientific Literature: By analyzing the mini-batch construction during the SSL training phase, this paper gives one plausible explanation for self-supervised learning (SSL) having OOD generalization. Moreover, this paper also analyzes and concludes that SSL learns spurious correlations during the training process, which leads to a reduction in OOD generalization.
Essential References Not Discussed: The essential references seem sufficient.
Other Strengths And Weaknesses: Strengths:
1. This article provides a wealth of theoretical analysis, making the entire work more solid.
2. The experimental results are impressive.
Weaknesses:
1. While the theoretical aspects are robust, the practical implementation of these concepts, especially the integration of causal inference in SSL, might be complex and computationally intensive. This could limit its applicability in environments with constrained computational resources.
2. Some of the causal assumptions made may not hold in all real-world scenarios, which could affect the generalizability of the findings. A deeper exploration of these assumptions, including conditions under which they may not be valid, would provide a more comprehensive view of the method’s applicability.
3. Some key terms and variables used throughout the paper could be defined more clearly to avoid ambiguity, enhancing the paper’s accessibility to a broader audience.
Other Comments Or Suggestions: See weaknesses (above) and questions (below).
Questions For Authors: 1. Could you provide more detailed insights into how the Post-Intervention Distribution (PID) is specifically integrated into the self-supervised learning process? It would be beneficial to understand the operational steps or algorithms used to enforce PID constraints during mini-batch preparation.
2. You mentioned that the proposed method minimizes spurious correlations. Can you discuss any specific metrics or evaluation criteria used to measure the extent of spurious correlations before and after applying your method?
3. Could you discuss the scalability of your proposed method, particularly in terms of computational resources and time required as the dataset size increases? Is the method feasible for large-scale real-world applications where computational efficiency is critical?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Response to Weaknesses 1 & Questions 3**:
Thank you for pointing these out. The proposed method has two main phases with the following complexity analysis per mini-batch (batch size $B$, dataset size $D$):
**Step 1: Latent Variable Model Training:**
- **$q_\phi(s|x^+,x^{\rm label})$:** Each sample requires a forward pass with cost $O(C_\phi)$, totaling $O(B \cdot C_\phi)$.
- **$p_f(x^+|s,x^{\rm label})$:** Each sample incurs a cost $O(C_f)$, totaling $O(B \cdot C_f)$.
- **$g$ for $\lambda^e$:** Computed once per mini-batch with cost $O(C_g)$.
- **KL-Divergence:** Involves operations over the latent dimension $n$ and sufficient statistic dimension $k$, contributing $O(B\cdot n\cdot k)$.
- **Orthogonality Regularization:** Requires $O(n\cdot k^2)$, which is constant when $n$ and $k$ are small.
Thus, the training phase complexity is approximately: $
O\Big(B\cdot (C_\phi + C_f + n\cdot k) + C_g + n\cdot k^2\Big).
$
**Step 2: Algorithm 1**
- **Propensity Score Calculation:** For each sample, computing scores across the $D$ candidates costs $O(D\cdot n\cdot k)$, leading to a total of $O(D^2\cdot n\cdot k)$ for the mini-batch.
- **Matching Operation:** A brute-force matching over $D$ samples yields an additional $O(D^2)$.
Therefore, the sampling phase has an overall complexity of approximately: $
O(D^2\cdot n\cdot k).
$
**Step 3: Overall Complexity**
The combined complexity per mini-batch is: $
O\Big(B\cdot (C_\phi + C_f + n\cdot k) + C_g + n\cdot k^2 + D^2\cdot n\cdot k\Big).
$
The symbols $C_\phi$, $C_f$, and $C_g$ represent the computational cost for a single forward pass (or operation) of each respective network module. For specific computational resources and time, please refer to **Response to Weaknesses 2** of **Rebuttal for Reviewer 1Q1V**
----
**Response to Weaknesses 2**
Thank you for pointing this out. We provide a deeper exploration of **Assumption 3.3** and **Assumption 4.1** in our response.
For **Assumption 3.3**, we implicitly assume that the noise is independent of both $ s $ and $ x^{\rm label} $. However, in many practical scenarios, noise may be correlated with either the latent variables or the observed features—for example, sensor noise that correlates with lighting conditions in image data—which can interfere with the separation between causal and non-causal factors.
Regarding **Assumption 4.1**, the main concern lies in the potential mismatch between the true conditional distribution in real-world data and the assumed exponential family. If the actual distribution is more complex or exhibits behaviors that go beyond this family—such as multi-label or multi-instance characteristics—then the applicability of our method may be compromised.
----
**Response to Weaknesses 3**:
Thank you for pointing this out. In the final version, we will add a table to illustrate all terms and variables related to our method.
----
**Response to Questions 1**:
Thank you for pointing this out. We explain this issue through the following steps:
**Step 1: How do we implement PID**
According to *Definition 4.4* in the original submission, $s$ and $x^{\rm label}$ are conditionally independent given $ba(s)$. Based on this, if all pairs in a mini-batch share the same $ba(s)$, then within this mini-batch, $s$ and $ x^{\rm label} $ can be considered independent. Consequently, such a mini-batch can be viewed as being sampled from a **PID**.
**Step 2: How is this integrated into SSL**
In the training phase of SSL, a mini-batch is typically sampled from the training data prior to each iteration. In standard SSL, this mini-batch is randomly sampled. In contrast, our method constructs the mini-batch using **Algorithm 1** from the original submission. That is, our approach embeds into SSL by replacing the mini-batch sampling process with **Algorithm 1**, without altering any other part of the SSL training procedure.
According to **Algorithm 1**, the core criterion for selecting samples is to ensure that the $ ba(s) $ values of each pair are as similar as possible. This ensures that the resulting mini-batch has consistent $ ba(s) $ across all samples, thereby forming a PID.
----
**Response to Questions 2**:
Thank you for pointing this out. Instead of proposing specific metrics or evaluation criteria, we run a toy experiment on the COCO dataset [1] with two different experimental settings: 1) training and testing the SSL model on full images; 2) training and testing the SSL model on foreground images. Setting 2) can be thought of as not being subject to background semantic confounding. For the Top 1 classification accuracy, the results of SimCLR are 39.66 and 50.19, the results of SimCLR + Ours are 45.25 and 51.48. We observe that our method gives closer results in both settings and significantly outperforms SimCLR. Thus, it can be concluded that our method learns less spurious correlations.
[1] Microsoft coco: Common objects in context. ECCV, 2014. | Summary: This work propose a minibatch sampling strategy to select pairs of samples in the mini-batch to enhance the OOD geralization ability of SSL methods. By investigating on a causal perspective from the constructed SCM model, the method propose a Post-Intervention Distribution, which can be realized by balancing score.
Claims And Evidence: I do not find any evident errors in the claims.
Methods And Evaluation Criteria: The method seems convincing, essentially when the balancing condition holds, there is no changes of distribution in spurious features, therefore the SSL method will focus on the invariant features.
Theoretical Claims: The results of Theorem 3.4 should be correct that the loss minimizes the worst-case risk, this is a well-defined target in invariant learning literatures. I am not sure if Theorem 4.3 is correct as I am not familiar with the identifiability theory.
Experimental Designs Or Analyses: This work primarily addresses the OOD generalizability of SSL methods; however, the experiments do not include any OOD datasets, such as _Waterbirds and CMNIST_. Conducting experiments directly on OOD datasets would help evaluate the effectiveness of the proposed sampling strategy.
Supplementary Material: I read Appendix E and F.
Relation To Broader Scientific Literature: This work relates to the self-supervised learning literatures and domain generalization, as well as causal inference literatures.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. The biggest concern for me is the experiment setting, i.e., it does not involve any OOD dataset for evaluation, however, the main goal of this work is to enhance OOD generalization ability of SSL methods.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Why using a exponential family distribution to model $p(s \mid x^{label})$? Why not using a reversible neural nets which may achieve higher expressibity and simplify the design?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Response to Weaknesses 1 & Experimental Designs Or Analyses**:
Thank you for pointing this out. We clarify this issue through the following steps:
**Step 1: How the original submission constructs the OOD task**
The transfer learning task and the few-shot learning task can be regarded as OOD (out-of-distribution) tasks, as the training and test datasets in these tasks follow different data distributions. Meanwhile, in **Appendix C.3** of the original submission, we also provide evaluation results on two OOD datasets, namely the Colored-MNIST dataset and the PACS dataset.
**Step 2: Results on Waterbirds dataset and CMNIST dataset**
For Waterbirds, we follow the implementation in ZARE et al. (2023) "Evaluating and Improving Domain Invariance in Contrastive Self-Supervised Learning by Extrapolating the Loss Function". During training, waterbirds (landbirds) predominantly appear on water (land) backgrounds; however, the distribution is altered at test time. We report both the average and worst-group performance. The results in the table below demonstrate that our method yields consistent improvements, particularly enhancing performance for the worst-performing group.
| Method | Test Accuracy | Worst Group |
| ----------- | ------------- | ----------- |
| SimCLR | 76.2 | 19.2 |
| SimCLR+Ours | 78.0 | 24.9 |
| MAE | 74.9 | 17.6 |
| MAE+Ours | 77.2 | 22.1 |
For Colored-MNIST, we follow the implementation in Huang et al. (2024) "On the Comparison between Multi-modal and
Single-modal Contrastive Learning". The task is a 10-class digit classification, with 10% of the labels randomly reassigned. During training, images belonging to class ‘0’ (or ‘1’) are colored red (or green) with a probability of 77.5%, and another random color with a probability of 22.5%. For the test set, the coloring scheme is reversed relative to training, which allows us to evaluate the extent to which the model relies on color cues for classification. The results in the table below show that our method improves OOD test accuracy by nearly 10%.
| Method | Test Accuracy |
| ----------- | ------------- |
| SimCLR | 12.7 |
| SimCLR+Ours | 23.5 |
| MAE | 15.1 |
| MAE+Ours | 24.9 |
These results further demonstrate that our proposed method effectively improves the OOD generalization performance of SSL.
----
**Response to Question 2**:
Thank you for pointing this out. According to Assumption 3.3 in the original submission, both $ x^{\rm label} $ and $s$ can be obtained through an invertible neural network. Training such an invertible neural network typically requires training data in the form of $(x^{\rm label} _i, s _i, x _i^+) _{i=1}^N $. However, we did not adopt this mechanism directly because we did not have access to such training data. In particular, we were unable to provide the corresponding $ s_i $ for each pair. Therefore, we opted to use a **Learning Latent Variable Model** approach instead. | Summary: The paper explores the **out-of-distribution (OOD) generalization** of self-supervised learning (SSL). It analyzes how mini-batch construction in SSL training influences OOD generalization and argues that SSL models often learn **spurious correlations**, which hinder their ability to generalize to unseen distributions. To address this issue, the paper introduces a **post-intervention distribution (PID)** based on **Structural Causal Models (SCMs)**. This ensures that spurious variables and label variables remain independent, improving OOD generalization.
Furthermore, the authors propose a **mini-batch sampling strategy** that enforces PID constraints through a latent variable model. They provide theoretical proof of the identifiability of their method and validate it with empirical results. Experiments on various downstream OOD tasks demonstrate that their approach significantly enhances SSL’s generalization performance.
### **Strengths:**
1. **Novel Causal Perspective on SSL OOD Generalization**
- The paper offers a compelling causal analysis of why SSL struggles with OOD generalization and how spurious correlations arise.
2. **Innovative Mini-Batch Sampling Strategy**
- Unlike traditional batch sampling, the method ensures **spurious correlations are minimized**, leading to better OOD generalization.
3. **Strong Empirical Performance**
- The proposed method consistently improves performance across diverse benchmarks, including **unsupervised, semi-supervised, transfer learning, and few-shot learning tasks**.
Claims And Evidence: The claims in the paper are partially supported by the evidence in the experiment part. However, the reviewer is concerned about the lack of evaluation on mask-autoencoder based pre-training methods like [1,2] in the main paper. Although some results are provided in the supplementary, it can be worthwhile to add the comparison in the experiments in the main paper together with contrastive-based methods, as in the analysis and proof part, the authors formulate discriminated-based and generative-based methods with a unified framework.
[1] He, K., Chen, X., Xie, S., Li, Y., Doll´ ar, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16000–16009, 2022.
[2] Tong, Z., Song, Y., Wang, J., and Wang, L. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. Advances in neural information processing systems, 35:10078–10093, 2022.
Methods And Evaluation Criteria: The reviewer thinks the evaluation setting and metrics make sense for the claims.
Theoretical Claims: The reviewer checked partially of the proofs. Specifically, theoretical claims in Section 3 and 4.1 are checked.
Experimental Designs Or Analyses: The reviewer thinks that experimental designs are valid.
Supplementary Material: Yes. Section BCE were checked. And part of Section A was checked.
Relation To Broader Scientific Literature: The paper makes a theoretical and practical contribution by introducing a causal approach to improving OOD generalization in SSL. While the method is empirically validated, its assumptions, computational cost, and feasibility in large-scale applications could be explored further.
Essential References Not Discussed: The reviewer does not come up with essential related literature that is not discussed.
Other Strengths And Weaknesses: 1. Starting from Line 162, left column, the authors propose an assumption that "the semantic information within x+ is related only to xlabel, that is, s does not contain any causal semantics related to the task.". They provide two examples about this assumption. However, as natural images are not restricted to numbers/styles discussed in the two example, the reviewer is concerned about the assumption. More examples in ImageNet should be provided against it.
2. As additional training is required, the reviewer is concerned about the training efficiency of the proposed method. The authors should provide evaluation on it.
3. As more parameters (VAE) are introduced in the proposed method, the reviewer is concerned about the fairness of comparison. Some discussion should be provided.
Other Comments Or Suggestions: Please refer to Other Strengths And Weaknesses part.
Questions For Authors: Please refer to Other Strengths And Weaknesses part.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Response to Claims And Evidence**:
Thank you for pointing this out. In **Appendix C.1**, we report the results of MAE. Now, we present the results of VideoMAE.
We transfer the learned VideoMAE + Ours on Kinetics-400 [1] to downstream action detection dataset AVA [2]. Following the standard setting [3], we evaluate on top 60 common classes with mean Average Precision (mAP) as the metric under IoU threshold of 0.5. The result is shown as follows:
| Method | Backbone | Pre-train Dataset | Extra Labels | $T \times \tau $ | GFLOPs | Param | mAP |
| ----------------- | ------------- | ----------------- | ------------- | ---------------- | ------ | ----- | ---- |
| VideoMAE | ViT-L | Kinetics-700 | No | $16 \times 4 $ | 597 | 305 | 36.1 |
| VideoMAE | ViT-L | Kinetics-700 | Yes | $16 \times 4 $ | 597 | 305 | 39.3 |
| VideoMAE + Ours | ViT-L | Kinetics-700 | No | $16 \times 4 $ | 597 | 305 | 38.7 |
| VideoMAE + Ours | ViT-L | Kinetics-700 | Yes | $16 \times 4 $ | 597 | 305 | 42.1 |
In the above table, "Extra Labels" denotes that if the pre-trained models are additionally fine-tuned on the pre-training dataset with labels before transferred to AVA. $T \times \tau $ refers to frame number and corresponding sample rate. In the final version, we will add these results to the main body of our submission.
[1] The kinetics human action video dataset. arXiv preprint, 2017.
[2] Ava: A video dataset of spatio-temporally localized atomic visual actions. CVPR, 2018.
----
**Response to Weaknesses 1**:
Thank you for pointing this out. We provide some additional ImageNet-inspired examples and explanations to address the concern:
**Object vs. Background:**
Consider an ImageNet class like "Golden Retriever." The causal semantics for recognizing a Golden Retriever primarily reside in the dog’s shape, fur texture, and facial features. Although many images might have different backgrounds—such as parks, beaches, or urban settings—these background elements (which could be captured by $s$) are not causally responsible for the image being classified as a Golden Retriever. In this case, $x^{\rm label}$ would capture the object-specific features, while $s$ would account for non-causal variations like the background.
**Intra-Class Variability:**
Take another class, such as "Volcano." A volcano can be pictured under different weather conditions, from different angles, and with various surrounding landscapes. While these environmental or stylistic factors vary widely, the key causal semantics—such as the volcano’s structure, cone shape, and crater—remain consistent. Again, $s$ may vary (capturing changes in lighting, weather, or background) without affecting the causal information needed to identify a volcano.
----
**Response to Weaknesses 2**:
Thank you for pointing this out. We provide the model efficiency and memory footprint results of the proposed method trained on 8 NVIDIA Tesla V100 GPUs.
| Method | Training Time (Hours) | |Memory Footprint (GB) | |
| ----------- | --------------------- | -------- | --------------------------------- | -------- |
| | CIFAR-10 | ImageNet | CIFAR-10 | ImageNet |
| SimCLR | 10.4 | 101.9 | 23.3 | 221.6 |
| SimCLR+Ours | 12.7 | 106.2 | 29.7 | 230.7 |
| MAE | 13.8 | 115.5 | 26.9 | 244.9 |
| MAE+Ours | 16.4 | 122.2 | 31.2 | 252.2 |
For specific computation complexity, please refer to **Response to Weaknesses 1 & Questions 3** of **Rebuttal for Reviewer fDza**
----
**Response to Weaknesses 3**:
Thank you for pointing this out. To illustrate without loss of generality, we take SimCLR as a representative SSL method. First, our proposed **Algorithm 1** only modifies the mini-batch construction process during the training phase of SimCLR. Even though we train a VAE, it does not affect other components of SimCLR’s training pipeline, including the training objective, network architecture, and optimization algorithm. Second, training a VAE independently on ImageNet and using its feature extractor for evaluation yields an accuracy of **35.44%**, in contrast to SimCLR's **70.15%**. We then use the parameters of the VAE's feature extractor to initialize the feature encoder of SimCLR and retrain SimCLR from this initialization. The resulting accuracy is **68.71%**, which is **1.45%** lower than that of SimCLR trained from scratch. In comparison, SimCLR combined with our method achieves an accuracy of **73.32%**. These results demonstrate the fairness of our evaluation and confirm that the performance gain is not due to the additional VAE. | Summary: This paper explores whether self-supervised learning possesses out-of-distribution (OOD) generalization capabilities and investigates the reasons behind its potential failure. To address this, the authors propose a Post-Intervention Distribution (PID), grounded in the Structural Causal Model. PID enables accurate OOD generalization by disentangling spurious correlations between features and labels. The authors introduce a simple yet seemingly effective mini-batch resampling technique and provide a substantial number of supporting theorems. However, I find the number of theorems somewhat excessive. I suggest that the authors consolidate the most essential ones into key theorems and present them in the manuscripts for better clarity and impact.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes, all part
Relation To Broader Scientific Literature: This paper explores the relationship between self-supervised learning and OOD generalization.
Essential References Not Discussed: yes
Other Strengths And Weaknesses: Strengths:
1. Comprehensive experiments and theoretical justifications.
2. The proposed method is simple yet appears to be effective.
Weaknesses:
1. The authors should include additional OOD benchmark datasets in the main experiments, such as Colored-MNIST and PACS.
2. I find the concept of $x_{i}^{anchor}$ mentioned in line 90 somewhat confusing, particularly regarding why it can be directly transformed into the $x^{label}$ used in Equation (1).
3. How is the balancing score function specifically implemented? Is it learnable? Does it produce a scalar output?
Other Comments Or Suggestions: see weakness
Questions For Authors: see weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Response to Weaknesses 1**:
Thank you for pointing this out. Due to space limitations, we reported the experimental results of Colored-MNIST and PACS in **Tables 9** and **Tables 10** in **Appendix C.3** of the original submission. In the final version, we will move these results to the main body of the paper.
----
**Response to Weaknesses 2**:
Thank you for pointing this out. We explain this issue through the following steps:
**Step 1: How are augmented data pairs formed in SSL**
In D-SSL, each sample in a mini-batch undergoes stochastic data augmentation to generate two augmented views, e.g., for ${{x_i}}$, the augmented samples can be represented as $x^1_i$ and $x^2_i$. For G-SSL, $x_i$ is first divided into multiple small blocks, with some blocks masked, and the remaining blocks reassembled into a new sample, denoted as $x^1_i$. The original sample is then referred to as $x^2_i$. Thus, the augmented dataset in SSL (whether D-SSL or G-SSL) is represented as $X_{tr}^{aug} = ( {x_i^1,x_i^2} )_{i=1}^N$. $(x_i^1,x_i^2)$ forms the $i$-th pair.
The above statement can be found in the first paragraph of **Section 2** in the original submission.
**Step 2: How is the anchor formed in SSL**
The objective of D-SSL methods typically consists of two components: alignment and regularization. The alignment part is to maximize the similarity between samples that share the same pair in the embedding space, and the regularization part aims to constrain the learning behavior via inductive bias. It is noteworthy that “alignment” in D-SSL is often implemented based on anchor points, that is, viewing one sample in **a pair** as an anchor, the training process of such SSL methods can be seen as gradually pulling **the other sample in this pair** (a pair consists of two augmented samples) towards the anchor. Meanwhile, G-SSL can be regarded as implementing alignment of samples within a pair based on an encoding-decoding structure, by inputting sample $x^1_i$ into this structure to generate a sample, and making it as consistent as possible with sample $x^2_i$. The concept of anchor is also applicable to G-SSL, where $x^2_i$ is viewed as the anchor, and thus the training process of such SSL methods can be viewed as gradually constraining $x^1_i$ to approach $x^2_i$.
The above statement can be found in the second paragraph of **Section 2** in the original submission.
**Step 3: How does the anchor become a label in SSL**
Based on Step 2, regardless of whether it is G-SSL or D-SSL, the anchor can be regarded as a learning target. Specifically, SSL can be interpreted as follows: In a data augmentation pair, one sample (the anchor) is designated as the target. By constraining the other augmented sample in the feature space to move toward this anchor, consistency in feature representations is achieved. This dynamic adjustment causes samples within the same pair to become tightly clustered, thereby forming an effect similar to a local cluster center.
In traditional classification problems, the common approach is to first project samples into a label space and then constrain them to move toward their corresponding one-hot labels to achieve supervision. In contrast, SSL directly applies constraints in the feature space, which means that the anchor effectively takes on the role of a “label.” In this unsupervised setting, the anchor provides a supervisory signal similar to that of a label. Therefore, it can be argued that labels manifest differently across various spaces—in the feature space, the anchor represents this “implicit label.”
----
**Response to Weaknesses 3**:
Thank you for pointing this out. We clarify this issue through the following steps:
**Step 1: How we obtain $s$ for a given pair**
$s$ is a vector obtained based on the distribution $q_\phi^e(s \mid x^+, x^{\text{label}})$. In other words, when $\phi$ is given, $s$ is also determined. The learning process of $\phi$ is described in Section 4.1 of the original submission. For a given pair, e.g., $(x_i^+, x_i^{\text{label}})$, we sample once from $q_\phi^e(s \mid x_i^+, x_i^{\text{label}})$ to obtain $s_i$.
**Step 2: The computation of $ba(s_i)$ in Algorithm 1**
Based on Equation (5) in the original submission, we compute $ba(s_i)$. Specifically, this computation is performed with respect to the entire dataset. Given the full training dataset, the pair $(x_i^+, x_i^{\text{label}})$, and $s_i$, the $x_j^{\text{label}}$ involved in Equation (5) are traversed across the entire dataset. Then, according to Definition 4.5, we obtain $ba(s_i)$, which is a vector.
**Step 3: High-level explanation of Algorithm 1 and the identifiability of the spurious variable $s$**
A high-level explanation of **Algorithm 1** is provided in **Appendix F**, while a high-level explanation regarding the identifiability of the spurious variable $s$ is provided in **Appendix G**. | null | null | null | null | null | null |
Come Together, But Not Right Now: A Progressive Strategy to Boost Low-Rank Adaptation | Accept (poster) | Summary: The paper proposes a simple regularization strategy for LoRA fine-tuning that stochastically drops LoRA adapters according to a certain schedule.
The authors show that the proposed training strategy enhances linear mode connectivity (LMC) and adapter-wise dropout stability.
Furthermore, it usually improves fine-tuning over standard LoRA, DoRA and HiRA.
Finally, the authors present an analysis on robustness to pruning adapters, comparing linear fusion to ensembles, ablation studies on the dropout strategy and hyperparameters, using shapley values to determine importance of adapters, and custom generation with diffusion.
Claims And Evidence: There are a few claims that lack support:
- line 19 (right): "LoRA oftentimes rapidly overfits and converges to a local optimum near initialization." - no references or empirical evidence is presented
- line 75 (right): The authors claim that PEFT follows a hierarchical structure - What is meant by hierarchical structure? Is it meant that higher layers need more adapter ranks? If so, the analysis in Section 5.2 showed that applying higher dropout in higher layer adapters performs better, which directly contradicts the claim that higher layers need more adapters. Also the analysis on Shapley values in Section 5.1 shows that for text, lower layer adapters are more important, indicating that there is also a dependence on data modality.
- line 76 (right): The authors claim that CoTo accounts for the "hierarchical structure of PEFT" and is "tailored for PEFT", but no evidence is provided for that. In fact, the authors only consider LoRA-style methods, but PEFT comprises a lot more methods. Furthermore, the same dropout probability is applied to all adapter layers regardless of layer depth, i.e. adapters across layers are equally weighted.
- In line 416 (left) the authors claim that 75% first stage proportion leads to improved LMC and dropout stability, however the authors have not compared e.g. LMC for 50% vs 75% first-stage, therefore this claim was not verified. It was only verified for the efficiency vs performance trade-off.
- In line 430 (right), the authors claim that "Both theoretical and empirical analyses show that CoTo improves adapter-wise dropout stability and linear mode connectivity, leading to better generalization, interpolation, model merging, and pruning" - however, the theoretical results of this work only show that the CoTo objectiev is equivalent to a weighted sum of objectives, which does not tell anything about other findings, which are purely empirical.
- In line 434 (right): "Shapley value analysis confirms that CoTo increases the marginal contribution of individual adapters." - This is not clear from looking at Figure 7, since not colorbars are shown and scale of different plots are unknown.
Methods And Evaluation Criteria: The proposed training strategy is very simple and straightforward which I consider a strength of the work.
It makes sense and stochastic regularization is an important topic for PEFT methods that has not been explored in a lot of detail.
Theoretical Claims: Theorem 3.1 shows that training via CoTo is equivalent to a progressively shifted objective weighted by some factor.
I have not checked the proof for Theorem 3.1 in detail due to time constraints.
Experimental Designs Or Analyses: I commend the authors for applying CoTo to different LoRA variants and the amount of experiments that were provided. However, I believe that a comparison to other stochastic regularization techniques such as Dropout [1] and Stochastic Depth, and [2] should be included. This forsters understanding of the training strategy and puts the proposed strategy into perspective to others.
[1] Lin et al., LoRA Dropout as a Sparsity Regularizer for Overfitting Control, arXiv 2024
[2] Wang et al., LoRA Meets Dropout under a Unified Framework, ACL 2024
The authors increase the learning rate for CoTo compared to competitors. In [1] it was shown that higher learning rate can be beneficial in the case of LoRA optimization. To enable a fair comparison to LoRA, I would recommend to also run LoRA with this same learning rate to account for that difference.
[1] Hayou et al., Efficient Low Rank Adaptation of Large Models, ICML 2024
The results on diffusion models and shapley values are entirely qualitatively. Based on this the authors claim significant improvements of CoTo over competitors, which is not convincing. In fact by merely looking at the generated images in Figure 5 it is very hard to tell which one is of better quality. The same goes for shapley values when no color bar is provided. I recommend to either remove claims on significant improvements or report quantitative measure with variance estimates to verify them.
Why does the selection of DTD for pruning analysis ensure generalizable insights?
On the multitask experiments, the authors use generative language models for language understanding tasks leading to very low scores in general. On such tasks BERT-style models such as DeBERTa-v3 [1] are the state-of-the-art usually reaching over 90% average accuracy. This model has also commonly been used in for LoRA-style fine-tuning [2,3,4], why do the authors use generative models here?
[1] He et al., DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing, ICLR 2023
[2] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022
[3] Zhang et al., AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning, ICLR 2023
[4] Meng et al., PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models, NeurIPS 2024
In Section 4.2.1, the authors aim to answer the question whether CoTo makes linear fusion closer to an ensemble, but this question is never clearly answered. From my understanding the reported results only show that CoTo is usually better than LoRA*. To give a definitive answer I believe it needs to be looked at the magnitude of P for LoRA* vs LoRA-CoTo*, as if the magnitude of P is larger than there is a larger gap between ensemble and linear fusion.
Supplementary Material: I have looked over the appendix of the paper.
Relation To Broader Scientific Literature: In general, there are plenty of methods that build on LoRA-style parameter-efficient fine-tuning.
Many of them investigate discarding more parameters, different initialization schemes, adaptive rank allocation, etc.
This paper investigates stochastic regularization for different LoRA variants, which provides a better understanding on how to best train LoRA-style methods.
Essential References Not Discussed: There is more related work [1,2,3] on adaptive rank allocation shows that for certain downstream tasks, naturally patterns emerge where more ranks are allocated to certain weights in higher layers and sometimes adapters are disregarded entirely. This could be used as additional support for dropping LoRA adapters stochastically during training.
[1] Liu et al., Alora: Allocating low-rank adaptation for fine-tuning large language models., NAACL 2024
[2] Paischer et al., One Initialization to Rule them All: Fine-Tuning via Explained Variance Adaptation, arXiv 2024
[3] Meo et al., Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates. arXiv 2024
Other Strengths And Weaknesses: **Strengths**
The proposed training strategy is simple and intuitive.
The authors conduct plenty of experiments and analyses demonstrating advantages of the proposed training strategy for different LoRA variants.
**Weaknesses**
The authors show learning curves for LoRA variants vs LoRA-CoTo variants in Figure 9 (Appendix B2) showing that CoTo requires longer training until convergence. This should also be mentioned in the main text to put the method in perspective.
In Table 4, Ensemble-CoTo performs consistently worse on average than the standard Ensemble method, do the authors have an intuition why this is the case?
Other Comments Or Suggestions: Tables would be a bit nicer to read if subsequent rows were Method and Method-CoTo, separated by horizontal lines, otherwise one needs to jump back and forth across separated grouped rows.
The notion of adapters vs layers is a bit confusing. In the introduction the authors refer to LoRA adapters as "adapters", but then use "layers" throughout the remainder of the manuscript. This can be confused with a layer that contains multiple of adapters.
You could add a straight line in figure 4 that shows the pre-merge performance for LoRA, so the reader does not need to switch back and forth between table and plot.
-- POST REBUTTAL --
I would like to thank the authors for the added analysis which I believe improve the depth and rigor of the work. As all of my concerns have been properly addressed, I recommend acceptance of this work.
Questions For Authors: Have you looked at what P (used in LoRA*) learns during optimization? It seems that a good optimum would be to learn to invert one of the adapters for merging (i.e. inverting the fusion), such that eventually no fusion/ensembling is occuring at all.
No other remaining questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer uABN for the insightful feedback, which helped improve both our analysis and presentation. Below, we respond to the main concerns point by point. Additional results are available via https://anonymous.4open.science/r/coto, with new content labeled as Tab. rX and Fig. rX.
#### Claims And Evidence
> We agree with these concerns and will revise the claims for clarity.
>
> - L19,r: Fig. r4 shows that models with the same initialization tend to converge to similar solutions, regardless of method or learning rate. We will also cite works supporting lazy training dynamics.
>
> - L75-76,r: The term “hierarchical structure” was unclear. We will revise it to *layer-wise importance differences*, note that layer importance is modality-dependent, and specify “layer-wise LoRA-style PEFT methods” instead of the broader term PEFT.
>
> - L416,l: Fig. 6-7 in the manuscript show pruning and marginal contributions at 25% training. Fig. r1 provides additional merging and pruning results over training steps.
>
> - L430,r: As shown in [r1], the dropout objective is equivalent to adding a regularization term. Similarly, CoTo’s objective is a weighted sum of sub-objectives, where the weight ($w_6$ in Fig.2) of the standard objective gradually dominates as training progresses.
>
> [r1] On the Regularization Properties of Structured Dropout.
>
> - L434,r: Fig. r5 has been updated with unified colorbars.
#### Experimental Designs
[E.1] Comparison to other stochastic techniques
> Stochastic depth is designed for pretraining and not applicable to LoRA. [2] introduces a KL loss between dropout and non-dropout outputs during PEFT but is not open-sourced or directly applicable. Thus, we compare CoTo with Dropout [1] and a variant using our progressive strategy. As shown in Fig. r2 and Fig. r3, CoTo consistently outperforms naive dropout in general, merging and pruning tasks.
[E.2] Run LoRA with the same learning rate
> - In Tab. 5 of ablation studies, we add LoRA-Pro results with 1e-4 and 2e-4 for fair comparison (55.8 ± 0.7, 40.9 ± 1.1). On classification tasks, CoTo also consistently outperforms LoRA on general, merging, and pruning performance across five learning rates (Fig. r6).
> - The higher learning rate for CoTo offsets slower convergence from stochastic training. Full learning rate settings are provided in Appendix A.3.
[E.3] Remove claims or report quantitative measures
> We will remove “significant” and move diffusion results to Appendix. Additional quantitative results (Tab. r2) and qualitative examples will be included.
[E.4] Why DTD for pruning analysis
> DTD has relatively low zero-shot performance (~44%), making the LoRA's gain more visible. Results on all vision tasks are provided in **Fig. r7** for completeness.
[E.5] Why use generative models instead of BERT-style models like DeBERTa-v3
> - We follow LoRA-LEGO, which evaluates only generative LLMs, as they are more advanced and suitable for merging. BERT models for classification rely on task-specific poolers and classifiers, making merging less meaningful (e.g., not generalizable to OOD tasks). In contrast, generative models support instruction tuning across diverse tasks via a unified prompt format [r2], offering a more rigorous testbed.
>
> [r2] Finetuned Language Models are Zero-Shot Learners.
>
> - As suggested, we have also added results on DeBERTa-v3 in Tab. r1, where CoTo still shows consistent improvements.
[E.6] Analysis of the magnitude of P
> We optimize the alignment matrix P by minimizing the proposed $\Delta_{upper}$ using Adam (500 steps, lr=0.01) and analyze both the magnitude of P and the difference $\|\Delta W_f - \Delta W_m\|_2$ before and after applying P. Results are shown in Fig. r8. For LoRA, the magnitude of P is larger, and the difference (both before and after applying P) is smaller than in CoTo. This indicates that CoTo improves linear mode connectivity not by adapter-wise alignment but by addressing layer-wise misalignment.
#### Essential References
> Thanks for highlighting these works. We will cite and discuss them in the revision.
#### Weaknesses
[W.1] Fig. 9 shows that CoTo requires longer training until convergence
> The slower loss decrease in Fig. 9 is due to CoTo’s stochastic activation, which delays early convergence but improves generalization. Notably, CoTo with LoRA and DoRA reaches lower loss at 900 and 2k steps, respectively. Also, as each step in CoTo is faster (due to adapter skipping), total training time is reduced (Tab. 6). Any delay from stochasticity can be mitigated by adjusting the learning rate or activation schedule (Fig. r1).
[W.2] Intuition why Ensemble-CoTo performs worse
> Ensemble-CoTo improves on DeBERTa (Tab. r1) but underperforms on LLaMA, possibly due to non-convergence. We will further investigate this and update the results if new findings emerge.
#### Comments
> Thanks, we will revise accordingly.
#### Questions
> Please see our response to E.6.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing most of my comments and adding additional very insightful results, I have a few more remaining questions and comments.
> L19,r: Fig. r4 shows that models with the same initialization tend to converge to similar solutions, regardless of method or learning rate. We will also cite works supporting lazy training dynamics.
The original claim was that "adapters converge to a local optimum near initialization", Fig r4 shows that same initialization tend to converge to similar solutions, however what is not supported is that the reached optimum is close to initialization. To verify this, you would need to compute the distance between initialization and trained adapter and this should be relatively small, which is what I doubt. Another remak is that tSNE projections are very sensitive to hyperparameters, so they usually dont tell much.
> Stochastic depth is designed for pretraining and not applicable to LoRA. [2]
Can you elaborate why it is not applicable? Just because it is designed for pretraining does not mean it cannot be applied to fine-tuning.
> Why use generative models instead of BERT-style models like DeBERTa-v3
I appreciate adding results for DeBERTav3. It seems that LoRA without CoTo consistently outperforms LoRA with CoTo, while merging performance still improves. I agree though that investigating the effect of merging on OOD tasks is not as meaningful with such models compared to generative ones, still they hold sota performance in classification. I am satisfied with these results.
If the authors can address the first to points properly, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: We are deeply grateful to the reviewer for the thoughtful follow-up and the opportunity to clarify the remaining concerns. As part of our response, we have added new results in Fig. r9 and Tab. r3.
### Validity of the “Convergence Near Initialization” Claim
> We appreciate the reviewer for rigorously examining every claim in our paper. To further substantiate our original claim that "adapters converge to a local optimum near initialization", we have
> - replicated the analysis from Fig. r4 five times using different initializations and reported the following table, which summarizes the **average $\ell_2$ distance between the initialized and final LoRA adapter weights** across learning rates. Each value is averaged over 5 seeds. There are some key observations:
> - For both LoRA and LoRA-CoTo, **the final weights remain much closer to their initialization** than to weights trained from different seeds, even at initialization, supporting convergence near initialization in line with lazy training dynamics.
> - LoRA-CoTo shows slightly larger distances from initialization, indicating **broader exploration** during training, yet its final weights are more consistent across seeds, suggesting a more **consistent convergence path**.
> - While we agree with the reviewer that t-SNE visualization is sensitive to hyperparameters, t-SNE that has been widely adopted for cross-checking clusters corroborates our claim. As shown in Fig. r9 (an updated version of Fig. r4 with varying perplexities), the final adapters form five distinct clusters centered around their initializations.
>
> | Method| Comparison|5e-5|1e-4 (default) |5e-4|1e-3| Note|
> | --- | --- | --- | --- | --- | --- | --- |
> | Both| Init vs. Init (diff. seeds)|1.155±0.002| 1.155±0.002| 1.155±0.002 | 1.155±0.002 ||
> | LoRA| Init vs. Final (same seed)|0.476±0.016 |0.445±0.018|0.757±0.010| 1.315±0.015 ||
> | LoRA-CoTo | Init vs. Final (same seed)|0.610±0.004 |0.789±0.004| 1.251±0.024| 1.637±0.025 |Slightly larger |
> | LoRA| Final vs. Final (diff. seeds) |1.703±0.031| 1.810±0.035|2.348±0.030| 3.117±0.027 ||
> | LoRA-CoTo | Final vs. Final (diff. seeds) |1.380±0.004| 1.533±0.006|2.142±0.030 | 2.631±0.022 |Slightly smaller |
>
> - added an analysis where we **perturb the initialization** in a very small range by adding small uniform noise to each adapter and examine **whether the adapted LoRAs converge within this range**. For three seeds with three perturbations each, we compute the average $\ell_2$ distance between final weights. The results below show that when initialization points are close, the **final weights also remain tightly clustered**, further supporting the claim of convergence near initialization.
>
> | Method| Comparison (diff. perturbations) |5e-5| 1e-4| 5e-4|1e-3|
> | --- | --- | --- | --- | --- | --- |
> | Both| Init vs. Init |0.020±0.000 |0.020±0.000 |0.020±0.000 |0.020±0.000 |
> | LoRA| Final vs. Final |0.049±0.005 |0.065±0.006|0.593±0.016|1.712±0.014|
> | LoRA-CoTo|Final vs. Final |0.040±0.009 |0.055±0.004| 0.208±0.007|0.515±0.029|
### Applicability of Stochastic Depth in the Context of LoRA Fine-tuning
> We thank the reviewer for always inspiring us to dig further and make our results more solid.
> - We would like to humbly clarify that our intended message was that Stochastic depth designed for pre-training skips entire Transformer layers—namely, it operates at the **inherent Transformer parameters**—is inconsistent with our fine-tuning setup, which tunes **adapters only**. We apologize for any confusion.
> - Part of our core contributions, i.e., (1) identifying the deficiencies of current LoRA optimization with uneven distribution across layers and (2) proposing a progressive training strategy on adapters to address such deficiencies, is exactly the bridge to close such inconsistency.
> - Based on our above contributions, we have followed the reviewer's thought to transfer the structured, layer-wise dropout strategy of stochastic depth to our setting.
> - We apply the same linear decay schedule across adapter layers as in Stochastic depth, i.e., a linear decay in activation probability from $p = 1$ (first layer) to $p = 0.5$ (last layer). We denote this variant as `Stochastic Depth for LoRA`, where early adapter layers are always active while later adapter layers are occasionally skipped.
> - We have compared CoTo against `Stochastic Depth for LoRA` on *DTD* and *UCF101* classification tasks and summarized the results in Tab.r3.
> - `Stochastic Depth for LoRA` indeed improves merging and pruning, confirming that structured adapter skipping is indeed effective—an insight shared by CoTo.
> - CoTo still outperforms `Stochastic Depth for LoRA` consistently across generalization, merging, and pruning, and it goes beyond stochastic depth by introducing a **progressive activation schedule** over training steps, which stabilizes layer-wise optimization and enhances generalization. | Summary: The paper introduces CoTo, which integrates structured dropout with LoRA fine-tuning, demonstrating improved generalization and enhanced performance in model merging and pruning. Similar to stochastic depth, the proposed method freeze LoRA adaptor for certain layers with a certain probability and such probability decreases over the entire training process.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The proposed method is simple and not new. It is very intuitive and makes sense for improving generalization.
Theoretical Claims: I did not check the correctness of Theorem 3.1.
Experimental Designs Or Analyses: I have checked the validity of the experimental design. I believe it is natural to compare LoRA-variants without CoTo and those with CoTo.
Supplementary Material: I have checked Section A.3 for the details of how hyper-parameters are chosen.
Relation To Broader Scientific Literature: Researchers in this field may get interested in the paper as it improves generalization of PEFT methods.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The proposed method shows consistent improvements compared to baselines.
2. The experiments are extensive, including VLM, and LLM.
3. The paper also provides extensive studies, including how CoTo affects model merging, and pruning. It provides interesting insights to the researchers in the field.
Weaknesses:
1. As the authors mentioned in related works, there are many works that tested applying structured dropout and stochastic depth. The novelty of the paper is limited, i.e., applying structured dropout to PEFT methods.
Other Comments Or Suggestions: While the overall novelty of the paper is somewhat limited, i.e., mainly combining existing techniques such as stochastic depth and structured dropout within the PEFT framework, the strength of the experimental results helps compensate for this. The experiments are extensive, and demonstrate significant improvements over baselines. Therefore, I am inclined to recommend acceptance of the paper.
Questions For Authors: I do not have other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Xi9N for the thoughtful review and for highlighting both the strengths and limitations of our work. We especially appreciate the recognition of our experimental rigor and the inclination toward acceptance. Below, please find our responses to the main concerns, and let us know if any issues remain. Additional experiments conducted during the response period (see Fig. **r**X & Tab. **r**X) are available in this anonymous link: https://anonymous.4open.science/r/coto.
#### [Weakness] As the authors mentioned in related works, there are many works that tested applying structured dropout and stochastic depth. The novelty of the paper is limited, i.e., applying structured dropout to PEFT methods.
> We fully understand the concern regarding novelty. Indeed, from a structural perspective, **our method may appear similar to prior work** on structured dropout and stochastic depth. We acknowledge this and appreciate the reviewer’s valuable observation.
>
> - We welcome the opportunity to clarify CoTo's distinct contributions and novelty within the PEFT landscape. While there are superficial similarities, CoTo's core innovation goes beyond simply applying existing ideas. As discussed in **Appendix C** and summarized in **Table 8**, while CoTo, dropout, and stochastic depth all involve deactivation during training, they differ fundamentally in objective:
>
> - **Dropout** aims to prevent overfitting by randomly deactivating individual neurons or weights.
>
> - **Stochastic depth** is primarily designed for efficient pretraining of full-model by randomly skipping entire layers.
>
> - **CoTo** is designed as a training paradigm for the **layer-wise PEFT method**, aiming to **balance the utilization of LoRA adapter layers**. It prevents adapters from dominating and promotes better merging and pruning by employing **structured, layer-wise deactivation coupled with a progressive activation strategy**.
>
> In essence, CoTo synthesizes concepts from regularization (like dropout) and efficient training (like stochastic depth) but repositions and adapts them into a coherent strategy specifically targeting the unique dynamics and optimization challenges of layerwise PEFT.
>
> - To empirically demonstrate this distinction, we provide additional experiments (**Anonymous Figs. r2 and r3**) on image classification comparing CoTo against direct applications of dropout to LoRA adapters:
>
> - Standard Dropout (fixed $p$).
>
> - Progressive Dropout (linearly decaying $p$).
>
> **Fig. r2** shows LoRA adapter merging robustness under multiple seeds, and **Fig. r3** evaluates structured pruning across adapter components. The results clearly show that **naive applications of dropout strategies yield limited benefits** and do not effectively improve merging or pruning robustness. CoTo consistently and significantly outperforms these variants in terms of accuracy, merging performance, and pruning robustness, highlighting that its effectiveness stems from its PEFT-specific design, not merely from applying dropout principles.
>
> Finally, we note that **CoTo is fully compatible with standard dropout**, which is already included in our main experiments. We will revise the manuscript to better highlight these conceptual and empirical distinctions, helping readers better position CoTo within the broader landscape of PEFT techniques.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I am keeping my original rating. | Summary: The paper introduces a training strategy to progressively deactivate adapters during training to ensure better optimization across all layers, enhancing model performance and efficiency. Extensive experiments across various models and tasks demonstrate its effectiveness in boosting LoRA's capabilities, including improved generalization, better model merging, and efficient pruning.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense for the problem.
Theoretical Claims: Yes, I checked the correctness of the proofs for theoretical claims.
Experimental Designs Or Analyses: Yes, I checked the soundness of all experimental designs.
Supplementary Material: Yes, I reviewed the supplementary material.
Relation To Broader Scientific Literature: Please refer to Strengths and Weaknesses.
Essential References Not Discussed: Please refer to Strengths and Weaknesses.
Other Strengths And Weaknesses: Strengths:
1. CoTo provides a novel approach to addressing the uneven distribution of LoRA updates across layers.
2. The strategy is supported by both theoretical insights and empirical evaluations, showcasing consistent performance improvements.
3. CoTo is compatible with various LoRA variants and extends its benefits to tasks such as model merging and pruning, demonstrating broad applicability.
Weaknesses:
1. The paper's notation is a bit unclear. What exactly does "a single LoRA layer" refer to? And what does "adapter" refer to? In transformer models, a single layer typically contains multiple weight matrices, such as the attention weight matrix and the projection weight matrix. Does "one adapter" mean all the LoRA modules added to these matrices in one layer?
2. Following up on the previous question, could you explain in more detail how the adapter drop method works?
3. In Table 1, the improvement in performance seems marginal. How do the experiments show that CoTo can enhance generalization? Are any held-out tasks set aside to test this?
4. Line 160-164, why does a decreasing learning rates facilitates the exploration of new local optima? Is this an assumption? What's the strategy for decreasing the learning rate?
Other Comments Or Suggestions: Please refer to Strengths and Weaknesses.
Questions For Authors: Please refer to Strengths and Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer VYdq for the thoughtful and constructive comments. We address each concern in detail below and remain open to any further questions.
[W1] Notation is a bit unclear. What exactly does "a single LoRA layer" refer to? And what does "adapter" refer to? Does "one adapter" mean all LoRA modules added to these matrices in one layer?
> We greatly appreciate the reviewer for bringing our attention to this notation, which indeed benefits from further clarification.
>
> - Adapter: As stated in Lines 11–13 (right) of our manuscript, we use the term adapter to refer to the trainable parameters introduced by LoRA.
> - A single LoRA layer = a single adapter denotes **the collection of all LoRA modules inserted into a single Transformer layer**.
> - The exact inserting locations of LoRA within each layer vary by task, following state-of-the-art practices, as detailed in the "Target Module" of Tab. 7.
> - Consequently, activation decisions are applied collectively to all LoRA modules within a layer, rather than to individual modules.
[W2] Explain in more detail how the adapter drop method works
>Absolutely. Thank you again for the question. The adapter drop mechanism is a key component of our training strategy, described in Sec. 3.1 and illustrated in **Fig. 1**, and we appreciate the opportunity to clarify it further.
>
>At each training step, we sample a Bernoulli variable $\delta_l\sim \text{B}(p)$ for each layer to determine **whether the entire layer of adapters is activated**. This sampling is implemented using the `TrainerCallback` function at each step. The activation probability $p$ increases linearly from 0 to 1 during the first three-quarters of training and remains fixed at 1 thereafter. When deactivated, **all LoRA modules within a transformer layer** are excluded from both forward and backward computation. As shown in Fig. 1, this strategy enables extensive layer-skipping during the early stages of training and gradually transitions to full activation of all adapters.
[W3] Table 1 performance improvement seems marginal
> Thank you for this insightful question. While the improvements in Tab. 1 may appear modest in absolute terms, we note that:
>
> - Tables 1 and 2 report results on standard benchmarks, where performance is already strong. CoTo still achieves consistent gains while maintaining lower cost (Sec. 5.3). Compared to prior methods, such as DoRA vs. LoRA and HiRA vs. DoRA, CoTo provides comparable or even greater improvements, setting new state-of-the-art results in several settings.
> - In addition to generalization accuracy gains, CoTo significantly enhances the **merging and pruning performance** of LoRA adapters. This is another major benefit of our method.
>
> CoTo improves generalization at both the **sample level** and the **task level**:
>
> - Tab. 1–3 evaluate performance on the test sets of in-domain tasks, demonstrating sample-level generalization. In Tab. 3, we train on MetaMathQA and test on GSM8K, which differ in domain, reflecting robustness under domain shift.
> - Tab. 4 includes explicitly **held-out tasks** in the merging experiments. Following the LoRA-LEGO protocol, we train LoRA adapters on a subset of tasks (in-domain) and evaluate merged adapters on both seen (ID) and unseen (OOD) tasks. CoTo consistently improves performance on these held-out tasks, demonstrating strong task-level generalization.
>
> We will revise the manuscript to more clearly highlight these generalization settings and the role of held-out tasks in our experimental design.
[W4] Line 160-164, why does a decreasing learning rates facilitates the exploration of new local optima...
> We acknowledge that the original statement "Coupled with decreasing learning rates, this strategy facilitates the exploration of new local optima while preserving properties established in earlier stages" was confusing and appreciate the opportunity to clarify.
>- The "strategy" referenced here is the proposed CoTo strategy, which is responsible for facilitating the exploration of new local optima.
>- What we intended to deliver is that "decreasing learning rates" coupled with the lazy training property of neural networks (cf. Line 158 - 160) is **responsible for "preserving properties established in earlier stages"**, such as linear mode connectivity and dropout stability.
> - Specifically, as supported by prior work [r1, r2], using a higher learning rate during the early training phase promotes broad exploration of the loss landscape. Subsequently, gradual learning rate decay encourages convergence to flatter, more generalizable minima. In our empirical setting, we adopt a cosine decay learning rate schedule, which we believe helps maintain early-stage properties.
> We will revise the manuscript to better reflect this intent.
>
> [r1] SGDR: Stochastic Gradient Descent with Warm Restarts.
>
> [r2] A Second look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. Most of my concerns are addressed. After considering the insights from other reviewers, I increased my score to 3. | Summary: This paper proposes CoTo, a training strategy for LoRA that progressively deactivates adapters during training to promote balanced optimization across layers. CoTo enhances generalization, model merging, and pruning while reducing training time, demonstrating performance improvements across vision-language models (CLIP), LLMs (LLaMA), and diffusion models (SDXL).
## update after rebuttal
I am still not convinced by the reply to Q2 regarding the theoretical analysis. I am keeping my score unchanged.
Claims And Evidence: Most claims are well-supported by empirical evidence, including performance gains, improved model merging, and pruning efficiency across multiple benchmarks. However, the theoretical analysis is limited to fully connected networks, and layer-wise optimization balance lacks direct gradient-based validation.
Methods And Evaluation Criteria: The methodology and evaluation use diverse benchmarks across vision-language models, LLMs, and diffusion models. However, layer-wise optimization balance lacks gradient-based validation, and the choice of a linear activation schedule is not compared to alternatives.
Theoretical Claims: The theoretical analysis may not fully generalize to transformers in attention layers.
Experimental Designs Or Analyses: The experiments cover various models and LoRA variants, with clear results supporting CoTo's improvements. However, layer-wise optimization balance is not directly validated using gradient-based metrics.
Supplementary Material: I review the appendix except for the proof details.
Relation To Broader Scientific Literature: The paper relates to the PEFT literature, building on LoRA and its variants while introducing stochastic adapter activation to improve layer-wise optimization. It also connects to Linear Mode Connectivity (LMC) for model merging and extends dropout and stochastic depth methods to adaptively adjust activation for balanced optimization.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Lack of Mathematical Analysis on Gradient Distribution and Convergence: The paper does not provide a detailed mathematical analysis of gradient distribution or training convergence, leaving uncertainty about the optimization stability and convergence behavior of CoTo.
Narrow Scope of Adapter Merging Tests: The experiments on CoTo's adapter merging are limited to LLaMA-2 and LLaMA-3. Existing works highlight the importance of adapter merging across different tasks/domains, but this paper's scope is too narrow to generalize.
Other Comments Or Suggestions: NA
Questions For Authors: Why use a linear increase in activation probability?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Np5j for the valuable comments. Below, please find our responses to each concern, and let us know if any issues remain. All experiments during the response period (Fig. **r**X & Tab. **r**X) are accessible in this anonymous link https://anonymous.4open.science/r/coto.
#### [Q1] Direct gradient-based validation for layer-wise optimization balance
> - **Gradient-based validation**
>
> To directly validate layer-wise optimization balance, we compute the **average gradient magnitudes of layers** from both the text and vision encoders on vision classification tasks. For comparability, gradients are normalized such that their sum across layers equals one. The results below indicate that CoTo leads to a **more balanced gradient distribution** across layers:
>
> | | Lower Layers (0-4) | Middle Layers (5-8) | Higher Layers (9-12) |
> | ------------------ | ------------------ | ------------------- | -------------------- |
> | LoRA (text) | 27.59% | 32.35% | 40.06% |
> | LoRA-CoTo (text) | 28.48% | 33.53% | 37.99% |
> | LoRA (vision) | 24.06% | 32.59% | 43.35% |
> | LoRA-CoTo (vision) | 26.74% | 33.90% | 39.35% |
>
> - **Other validations already presented in our manuscript**
> - Fig. 6 shows that CoTo-trained models consistently outperform LoRA, regardless of whether the lower, middle, or higher layer-adapters are pruned.
> - Fig. 7 presents Shapley value analyses, revealing that CoTo enhances the marginal contributions of lower and middle layers.
>
> Together, these results support the effectiveness of CoTo in promoting balanced layer-wise optimization.
#### [Q2] The theoretical analysis may not fully generalize to transformers in attention layers.
> For clarity and simplicity of notation, we formulate our method using fully connected networks, where $W_l$ represents the total parameters of layer $l$. However, since our analysis (e.g., Theorem 3.1) considers activation at the layer level (as described in Sec. 3.1), it does not rely on the internal structure of each layer. By the Law of Large Numbers, the CoTo objective can be reasonably approximated as a weighted sum of sub-objectives, making the analysis broadly applicable — including to transformer-based architectures.
#### [Q3] The experiments on CoTo's adapter merging are limited to LLaMA-2 and LLaMA-3. Existing works highlight the importance of adapter merging across different tasks/domains, but this paper's scope is too narrow to generalize.
> We respectfully clarify below the **broad scope** of adapter merging evaluations, which we have deliberately designed to validate CoTo's high generalizability.
> - **Architectural scope includes LLaMA-2/LLaMA-3/CLIP/SDXL/DeBERTa V3**
> - As recognized by your summary, we evaluate adapter merging on CLIP (Fig. 4), LLaMAs (Fig.4, Tab. 4), and SDXL (Fig. 5), covering **widely adopted architectures** in text, vision, and multimodal domains.
> - We also evaluate CoTo on the **BERT-style architecture** DeBERTa V3 (Tab. r1). Despite the challenges of merging BERT-style models -- requiring additional fine-tuning of task-specific poolers and classifiers -- CoTo still outperforms.
> - We fully agree that adapter merging across different tasks/domains is critical. Aware of this, Sec. 4.2.2 ("LoRA Merging with Different Tasks") merges 7 adapters trained on 7 in-domain (ID) tasks, and evaluates the cross-task merged adapter on both ID and 2 out-of-domain (OOD) tasks. The consistent superiority with CoTo over baselines advocates our **broad scope in cross-task adapter merging**.
> - Besides, our adapter-merging evaluations span diverse task types, including vision classification (Fig. 4), commonsense reasoning (Fig. 3), natural language understanding (Tab. 4), and customized generation via diffusion models (Fig. 5), further validating **CoTo's generalizability across tasks/domains.**
#### [Q4] Why use a linear increase in activation probability?
> We select linear increase as it offers a **favorable trade-off between simplicity and performance**, preserving the core benefits of the proposed CoTo.
> - Simplicity: Unlike other activation probability scheduling functions (e.g., sine and exponential), linear increase **requires no additional hyperparameters** given the first phase spanning 75% of the training duration (cf. Line 410 (left)).
> - Performance: Following the reviewer's great suggestion, we compare **linear**, **sine**, and **exponential** activation probability functions. As evidenced in Fig. r1, linear increase strikes a delicate balance between generalization, merging, and pruning performance. | null | null | null | null | null | null |
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | Accept (poster) | Summary: The authors propose a mechanism, Pasta to train large language models (LLMs) to identify and express semantic independence and directly optimizes for both response quality and decoding latency. This framework has two key components, the Pasta-Lang, an annotation language that allows LLMs to express semantic independence, and it also has an interpreter orchestrate parallel decoding on the fly. This two-stage framework achieved state-of-the-art on a benchmark, AlpacaEval, speeding up and improving the quality of generation.
Claims And Evidence: The experiments generally support the claims of the work. However, with only specifying that the work is evaluated on one dataset, the contributions may be limited.
Methods And Evaluation Criteria: The work only used a benchmark dataset, which may present generalization risks for the proposed method.
Theoretical Claims: The theoretical claims make sense. There’s not much theoretical proofs provided, but more empirical settings.
Experimental Designs Or Analyses: The experiments are adequate. Evaluations on the speed and quality are both provided with enough details.
Supplementary Material: The supplementary materials are satisfactory.
Relation To Broader Scientific Literature: There’s a generally thorough literature review provided in the paper, and the authors discussed how their research is built on prior work.
Essential References Not Discussed: I am not aware of any other essential references not discussed.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the encouraging feedback from the reviewer!
> However, with only specifying that the work is evaluated on one dataset, the contributions may be limited.
We would like to clarify that AlpacaEval actually is a suite of 5 different benchmarking datasets: Self-Instruct [1], Open-Assistant [2], Vicuna [3], Koala [4], and hh-rlhf [5].
References:
[1] Wang, Yizhong, et al. Self-Instruct: Aligning Language Models with Self-Generated Instructions. ACL. 2023. https://arxiv.org/abs/2212.10560
[2] Köpf, Andreas. OpenAssistant Conversations -- Democratizing Large Language Model Alignment. NeurIPS. 2023. https://arxiv.org/abs/2304.07327
[3] Chiang, Wei-Lin, et al. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. 2023. https://lmsys.org/blog/2023-03-30-vicuna/
[4] Geng, Xinyang, et al. Koala: A Dialogue Model for Academic Research. 2023. https://bair.berkeley.edu/blog/2023/04/03/koala/
[5] Bai, Yuntao, et al. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. 2022. https://arxiv.org/abs/2204.05862 | Summary: This work presents a novel framework that allows the model to learn semantic boundaries in the sequence space, so that semantically independent chunks could be decoded in parallel, or async manner. First, they proposed an XML-based markup to be able to structure a flat sequence into structured chunks that can be used by the decoding controller. Specifically, decoding could switch from usual decoding into async mode, and then synchronize chunks back and use previous chunks as context to decode the final part of the sequence.
Authors design training pipeline so that the model can learn when to generate control tokens to switch into async mode and back. Training pipeline involves both supervised and preference finetuning. Their experimental results show that they method is able to maintain or improve the quality of model responses while making decoding faster due to ability to process async chunks in parallel.
Claims And Evidence: * the overall idea of allowing the model to split the chain of thought into a set of semantically independent chunks is sound and has clear motivation given recent advances in reasoning with long chain of thoughts.
* the benefits of splitting decoding chunks into parallel executions that can synchronize later is an interesting idea, however, the practical utility of such method is questionable especially on the efficiency side. Efficient and optimal async inference of such blocks (while keeping consistent kv cache of the sequential prefix) might be overly complicated usecase unless all major foundational language models would use that.
Methods And Evaluation Criteria: * experimental testbed uses public benchmarks for performance, which is good, but "theoretical speedup" on decoding side is a bit less convincing.
Theoretical Claims: * presented speed-ups (geomean theoretical speedup) usually comes with quality degradation, which is a bit concerning given that in theory such async chunks might help to improve reasoning abilities by reducing hallucinations. In that sense the hypotheses suggested by authors do not hold very strong in the experimental part, unless I misunderstood something.
Experimental Designs Or Analyses: Experimental design is described clearly. The presented method shows robust improvements in quality and speed compared to other async decoding methods.
Supplementary Material: No
Relation To Broader Scientific Literature: I'm not very familiar with async decoding literature, but related work section looks like it well represents related work, and goes even further into decoding efficiency discussion about methods such as speculative decoding.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Overall the biggest concern i have is the potential impact of such approach in the community, I don't see such approach being used widely, but I could be biased. However, I think the proposed method makes sense, and I don't see any flaws in the experimental design!
Other Comments Or Suggestions: * it would help to emphasize that LC winrates in alpaca eval shall be interpreted as "higher-better" , same for all other metrics in the experiments.
Questions For Authors: Do you think such approach could bring improvements in model's ability in long chain of thought reasoning? If yes, what would be an essential experiment to do in context of this work to showcase that?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the helpful comments!
> the benefits of splitting decoding chunks into parallel executions that can synchronize later is an interesting idea, however, the practical utility of such method is questionable especially on the efficiency side. Efficient and optimal async inference of such blocks (while keeping consistent kv cache of the sequential prefix) might be overly complicated usecase unless all major foundational language models would use that.
Our results directly address efficiency. Our implementation achieves practically significant speedups up to 1.9x and on average reaches 78.6% of the theoretical speedup (an optimistic estimate of maximum possible speedup). These results clearly demonstrate real efficiency gains.
Further, our interpreter design is model-agnostic. Any foundation model can reuse our implementation as long as it adopts the same control tokens to orchestrate asynchronous decoding.
> experimental testbed uses public benchmarks for performance, which is good, but "theoretical speedup" on decoding side is a bit less convincing.
We emphasize that: 1) The left plot in Fig. 3 presents measured wallclock speedup. 2) Theoretical speedup is helpful because end-to-end evaluation entangles the quality of our training algorithm's with the quality of our interpreter implementation. Theoretical speedup lets us examine how effectively our training algorithm promotes speedup, independent of our interpreter implementation.
> presented speed-ups (geomean theoretical speedup) usually comes with quality degradation, which is a bit concerning given that in theory such async chunks might help to improve reasoning abilities by reducing hallucinations. hypotheses suggested by authors do not hold very strong in the experimental part, unless I misunderstood something.
Could the reviewer please clarify why async chunks might help reduce hallucinations and specify what "hypotheses suggested" refers to?
We acknowledge the quality-speedup tradeoff and have been transparent about it throughout our paper, including in the abstract. Our contribution is achieving the best quality-speedup tradeoffs among existing asynchronous decoding methods.
> it would help to emphasize that LC winrates in alpaca eval shall be interpreted as "higher-better" , same for all other metrics in the experiments.
We will add this clarification in the paper.
> Do you think such approach could bring improvements in model's ability in long chain of thought reasoning? If yes, what would be an essential experiment to do in context of this work to showcase that?
We believe mathematical reasoning benchmarks are promising application scenarios for PASTA. However, we also emphasize that DeepSeek R1 (first open source long CoT model) was released only ~8 days prior to ICML deadline. We will include the results of using Pasta with long chain of thought reasoning models in the final paper.
---
Rebuttal Comment 1.1:
Comment: thanks for addressing my comments!
> Our contribution is achieving the best quality-speedup tradeoffs among existing asynchronous decoding methods.
this makes sense
given the authors desire to add more experiments relevant to long CoT reasoning, Im increasing my score | Summary: The paper addresses LLM inference speed and proposes a method to enable asynchronous, parallel decoding of partial LLM responses. The main idea is to fine-tune the model to output special tags for partial answers that are independent of each other and that can then be decoded in parallel. The critical ingredients of the method are:
- Obtaining a dataset which contains markup identifying independent parts of answers; this is achieved by distillation from a powerful LLM (Gemini)
- Training the model to actually produce the independent parts in an independent manner, where the important detail is adjusting position encodings
- Implementing a decoder that can realize the new parallel decoding ability into lower wall-clock time during inference.
Claims And Evidence: The paper claims superior results in both quality and speed compared to existing asynchronous decoding methods; the one considered for comparison is APAR. The authors mention SoT, another relevant baseline, in L234 as a comparison point but I did not find any results with it. After the SFT stage with annotated examples, the model performs worse than APAR in both speedup and quality; after rejection sampling training, the authors achieve higher output quality and higher speed-up compared to APAR, validating the pareto-dominant claim of the abstract.
It should be noted that here, "quality" is measured by win-rates according to LLM-as-a-judge. The authors took some precaution and selected the hyper-parameters of their algorithm with Gemini (Fig. 4, Fig. 5) while performing final evaluations with GPT-4, but comparing models on benchmarks with objective metrics would be appreciated to remove bias.
Methods And Evaluation Criteria: Yes, quality and speed are indeed the main criteria to evaluate methods that improve decoding speed.
Regard the overall method, I like the idea of a model-driven parallelization strategy, and I can indeed see how for many questions, in particular in combination CoT, answers contain some independent and some dependent parts. What's unfortunate is that the method requires a dedicated training strategy that appears to be pervasive, requiring not only SFT data but also further dedicated fine-tuning stages. It's not clear whether existing fine-tuned models can be easily adapted to follow the format.
The method also relies on specifics regarding the handling of position encodings during parallel decoding. While, in the end, the model does predict the expected length of the response itself, the decision to have it output a multiple of 10 seems arbitrary and to be selected because it turned out to work best. It's unclear to me what the impact of this choice would be if, for example, decoding would be performed with top-p sampling instead of greedy. I am a bit surprised by the noise in Figure 4 and would have expected oracle positions to perform best; here, they perform badly in terms of quality. I would appreciate a general comment regarding noise in the paper's evaluations.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Experimental setup looks good to me; the criteria for the preference tuning stage with BonBon are sensible (finding a trade-off between speed and quality).
I did not like that the analysis promises future improvements from further rounds of BonBon (5.3, "Continuous Improvements") but stops at 2 rounds, while this style of training has trouble achieving benefits after more than 2 iterations (e.g., https://arxiv.org/abs/2308.08998).
Supplementary Material: No
Relation To Broader Scientific Literature: APAR, the main point of comparison, uses a similar technique to achieve decoding speed-ups but relies on hand-crafted heuristics to identify independent sub-sequences in the LLM output. Instead, this paper asks Gemini to produce an annotated dataset to bootstrap the model's ability to announce parallel sections The method presented here offers more flexibility in structuring the output, and via sampling and preference optimization, the model can be further tuned towards higher speed and quality.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: While the writing is mostly clear, I found it a bit confusing that the related work is spread throughout the paper. There is an initial discussion in the introduction, then section 2, and finally there is a dedicated related work section which discusses work that, in my opinion, are not very related to this paper. I would prefer a joint discussion in a single section; for example, I don't think the postulated dichotomy is very relevant to the method overall apart from describing related work.
For the "tokens" parameter of the <promise> tag, it took a while to find out what it actually means. It first appears in Fig 1, the main figure presenting the method, without explanation, then in section 3 where is referred to an integer attribute (which makes it appear from Fig. 1 as the model would get them wildly wrong), then finally on page 7 in the discussion of the ablations we learn that it represents 10x the expected token count. Maybe I missed an earlier explanation, but it would be good to clarify this early, e.g., in the figure caption or in the "Annotations." paragraph of the intro.
I would suggest the authors refrain from labeling their method as "pareto-optimal" without adding the qualifier "among the methods that were compared" (e.g., L128 right).
Other Comments Or Suggestions: - Fig. 1 caption, "It wait"
- L135 right, "Pasta-Lang is XML-like"
- L254 left, "BonBon training" (iterations?)
Questions For Authors: - Do you find that the estimated token counts in the promises are accurate?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewers careful reading and insightful comments!
> The authors mention SoT, another relevant baseline, in L234 as a comparison point but I did not find any results with it.
We have included here the updated Figure 3 with the SoT results (please see [rebuttal doc](https://anonymous.4open.science/r/icml-rebuttal-2025-8A52/ICML_Rebuttal.pdf) Figure 1). Notably, we do not observe any speedup by SoT, when applied to Baseline-SFT. We believe that SoT, as a prompt-based method requires the base model to have strong instruction-following ability to perform well, and
validated this hypothesis by applying SoT to the stronger official instruction-finetuned Gemma-IT model from Google. With this generous implementation, SoT achieves a 1.61x
speedup while dropping its win rate by 12%. In contrast, our round 2 Pasta-BoN model with quality weight 2 achieves 1.62x speedup with only a 5% drop to win rate.
> comparing models on benchmarks with objective metrics would be appreciated to remove bias.
We thank the reviewer for this suggestion. We will include an objective evaluation benchmark in the final paper.
> Regard the overall method, I like the idea of a model-driven parallelization strategy, and I can indeed see how for many questions, in particular in combination CoT, answers contain some independent and some dependent parts. What's unfortunate is that the method requires a dedicated training strategy that appears to be pervasive, requiring not only SFT data but also further dedicated fine-tuning stages. It's not clear whether existing fine-tuned models can be easily adapted to follow the format.
Our post-training process mirrors RLHF (SFT + preference optimization). PASTA extends RLHF to improve not just quality but also latency. As RL methods take an increasingly central role in LLM training [1], PASTA is well-positioned for integration with these training algorithms.
> the decision to have it output a multiple of 10 seems arbitrary and to be selected because it turned out to work best.
We considered the following candidates of position ID adjustment techniques:
- Const: assume a fixed async chunk length,
- Pred-1: have the LLM predict async chunk length in multiples of 1,
- Pred-10: have the LLM predict async chunk length in multiples of 10.
In addition to our empirical analysis, we have clear rationales for selecting Pred-10. Having the LLM predict async chunk length (rather than using a fixed constant) makes position ID adjustment learnable. We chose multiples of 10 over exact token count prediction (Pred-1) because Pred-1 creates an unrealistic training condition—it predicts length perfectly during training, so the model never encounters position ID collisions. During inference, however, prediction errors inevitably cause such collisions. Pred-10 predicts async chunk length with error due to rounding, intentionally creating imperfect but realistic training conditions that better prepare the model for inference-time position ID collisions.
> It's unclear to me what the impact of this choice would be if, for example, decoding would be performed with top-p sampling instead of greedy.
We swept the following sampling parameters {T=0.5, T=1} x {TopP=0.2, TopP=0.4, TopP=0.8} for each of our 5 round 2 models. Results show similar quality/speedup trade-off as greedy sampling. Please see [rebuttal doc](https://anonymous.4open.science/r/icml-rebuttal-2025-8A52/ICML_Rebuttal.pdf) Figure 2).
> I am a bit surprised by the noise in Figure 4 and would have expected oracle positions to perform best; here, they perform badly in terms of quality. I would appreciate a general comment regarding noise in the paper's evaluations.
The SFT stage shows slightly higher noise (std over 3 runs is 0.6% win rate) while the BoN stage exhibits much less noise. In Figure 3, we report performance across 10 BoN models, providing clear evidence of stability and consistency.
We hypothesize this difference is because we do not optimize for a specific quality/speedup trade-off during SFT, which means the resulting model may land at different points on the same Pareto frontier. Whereas for the BoN stage, we do explicitly optimize for a specific quality/speedup trade-off, making the optimization problem more constrained and the solution more stable.
> Do you find that the estimated token counts in the promises are accurate?
Inspecting the outputs from 5 round 2 models, we find the average relative error to be 14.5%, computed as average(abs(prediction - ground_truth)/ground_truth).
> I did not like that the analysis promises future improvements from further rounds of BonBon (5.3, "Continuous Improvements") but stops at 2 rounds, while this style of training has trouble achieving benefits after more than 2 iterations.
We will include results with 3 rounds of training and update our analysis accordingly in the final paper.
[1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. | Summary: This paper proposes PASTA, a method for accelerating LLM inference via promises. It's based on PASTA-lang, a markup-based language that expresses which parts of a prompt might be semantically independent. They write a specification for PASTA-lang, use Gemini-Flash to annotate an instruction-tuning dataset with asynchronous tags in PASTA-lang, and finally finetune a model on PASTA data. The finetuned model achieves significant inference-time speedups on tasks that can be decomposed into semantically independent chunks.
Claims And Evidence: Yes, the benchmarking results are very compelling, as well as the training and inference time measurements.
Methods And Evaluation Criteria: Yes, the method makes a lot of sense, and the evaluation is thorough and includes Pareto curves for various tradeoffs.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The experimental design was thorough including various ablations, speed analyses, and description of the finetuning process.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper is a nice contribution to the LLM inference literature, specifically under the umbrella of asynchronous decoding. It does require special tuning on a specific dataset, so it is not adaptable to the general case, but it's a nice proof-of-concept.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- Method is simple and works well
- Configuring attention mechanisms in this setup is tricky and important, and has been done well in this case ("Training") in Section 4
- Although they only use the SlimOrca dataset, the finetuning setup seems general enough that it might be reusable in other domains
- Experimental analysis is thorough and interesting
Weaknesses:
- How generalizable is this method? Does finetuning on SlimOrca in PastaLang produce generalization capabilities that would help for other unrelated tasks/domains?
- How does PASTA compare to speculative decoding? Could they be used in parallel, with speculative decoding active within each asynchronous chunk?
Other Comments Or Suggestions: n/a
Questions For Authors: - Why doesn't Figure 3 compare directly to skeleton-of-thought?
- Even though it's synchronous, shouldn't standard speculative decoding be a suitable baseline as well?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging feedback and thoughtful comments! Here're our responses.
> How generalizable is this method? Does finetuning on SlimOrca in PastaLang produce generalization capabilities that would help for other unrelated tasks/domains?
We designed our method and evaluation to accommodate a wide variety of tasks/domains. Specifically, SlimOrca was derived from the FLAN collection of instruction-following dataset, covering 1800+ tasks[2]. Furthermore, our evaluation also covers a wide range of representative real world interaction with LLM including translation/summarization/explanation/creative writing/mathematical problem solving/coding. As [1] notes:
“[AlpacaEval] operates on a fixed set of 805 instructions chosen to be representative of user interactions on the Alpaca web demo.”
Therefore, our results demonstrate generalizations across a considerable range of tasks/domains already.
> How does PASTA compare to speculative decoding? Could they be used in parallel, with speculative decoding active within each asynchronous chunk?
> Even though it's synchronous, shouldn't standard speculative decoding be a suitable baseline as well?
We indeed believe PASTA and speculative decoding are complementary techniques that compose well. As you mentioned, speculative decoding can accelerate each asynchronous chunk in PASTA, creating a multiplicative speedup effect. We will include the results of using speculative decoding in the final paper.
> Why doesn't Figure 3 compare directly to skeleton-of-thought?
We include here the updated Figure 3 with skeleton-of-thought results (please see [rebuttal doc](https://anonymous.4open.science/r/icml-rebuttal-2025-8A52/ICML_Rebuttal.pdf) Figure 1). As described in Section 5, we do not observe speedup by SoT, when applied to Baseline-SFT. We believe that SoT, as a prompt-based method, requires the base model to have strong instruction-following ability to perform well, and validated this hypothesis by applying SoT to the stronger official instruction-finetuned Gemma-IT model from Google. With this generous implementation, SoT achieves a 1.61x speedup while dropping its win rate by 12%. In contrast, our round 2 Pasta-BoN model with quality weight 2 achieves 1.62x speedup with only a 5% drop to win rate.
References:
[1] Dubois, Y., et al. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024
[2] Longpre, S., et al. The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. arXiv preprint arXiv:2301.13688, 2023. | null | null | null | null | null | null |
Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization | Accept (poster) | Summary: The paper analyzes how RoPE enables periodic attention patterns and then analyzes the limitation of RoPE in that regard.
The authors argue that the limitation arises from spectral damage prevalent when RoPE is used with typical DL architectures.
The authors propose FoPE, which is based on RoPE, but while RoPE treats each dimension as a single-frequency function, FoPE models each dimension as a Fourier Series, consisting of a dominant frequency component and several harmonic components.
Additionally, FoPE clips low frequencies because the authors argue these are undertrained.
In experiments, the authors show that FoPE maintains a better performance for increased context lengths.
Claims And Evidence: The primary claim—that FoPE improves length extrapolation—is well-supported by empirical analysis. The experiments effectively showcase FoPE’s advantages over RoPE, and the inclusion of comparisons with YARN strengthens the argument.
Methods And Evaluation Criteria: The evaluation methods and criteria appear appropriate. The experiments test models up to 1.2B parameters, a reasonable scale for assessing generalization. The comparison with YARN adds credibility.
Theoretical Claims: I briefly reviewed the theoretical claims presented in the main paper and did not identify any obvious issues.
Experimental Designs Or Analyses: The experimental design seems to be valid and sound. Models up to 1.2B parameters are tested, which is already a reasonable size to support generalization to practical settings. The comparison optionally includes YARN, which strengthens the argument.
As I am not deeply familiar with related approaches, I don't know if a comparison to additional methods would be appropriate.
A comparison to non-transformer architectures that provide favourable length extrapolation capabilities would be interesting in addition (e.g. see Figure~7 in [1]).
That said I have one concern regarding the hyperparameters D and sigma. The authors rightly conduct a sensitive analysis which shows that the parameters seem to be important. However, there is no clear guidance on how to select this parameter a-priori.
[1] xlstm: Extended long short-term memory, M Beck et al. Advances in Neural Information Processing Systems 37
Supplementary Material: There is no supplementary material
Relation To Broader Scientific Literature: The paper is closely related to research on positional embeddings for transformers and the broader literature on length extrapolation. The study builds on RoPE, a widely used technique in modern deep learning models.
Essential References Not Discussed: I am not aware of any essential references that are missing.
Other Strengths And Weaknesses: Strengths:
- Clear motivation with according analysis of the approach (spectrum damage)
- Evalution includes comparision to length extrapolation technique (yarn)
Weakness:
- No guidance for hyperparameter selection (D and sigma) although the methods seem to be sensitive to these parameters.
- No comparison to non-transformer LLM architectures that provide more favorable length generalization properties like RWKV, xLSTM or Mamba (e.g. see Figure~7 in [1])
[1] xlstm: Extended long short-term memory, M Beck et al. Advances in Neural Information Processing Systems 37
Other Comments Or Suggestions: -
Questions For Authors: - How would you select the sigma and D hyperparameter for a new large-scale models where a hyperparameter search is not feasible because of the scale/cost of the model training? (The sensitivity of these parameters could affect the real-world usability of FoPE)
- Could clipping low frequencies have unintended consequences in certain applications?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the valuable comments from the reviewer, we want to deliver further explanations accordingly.
# Comparison with more baselines on more benchmarks
It is a nice suggestion to compare FoPE with more baseline methods, thus we supplement the following experiments:
- We validate the effectiveness of FoPE by evaluating it on 10+ additional benchmarks and comparing it with 3 more baselines. These supplementary experiments are conducted on OLMo-1.2B training on C4 datasets, with all experimental settings kept consistent with Sec 5.2 (line 293-329).
|methods|avg acc|basic_arithmetic|social_iqa|winogrande|openbook_qa|sciq|hellaswag|piqa|commonsense_qa|arc_easy|
|-|-|-|-|-|-|-|-|-|-|-|
|NoPE|42.14|25.67|43.71|51.86|29.80|76.70|41.83|68.83|31.61|51.40|
|ALiBi|42.93|24.97|42.53|53.12|31.40|77.70|43.06|69.42|**33.42**|**53.68**|
|KERPLE|43.22|25.03|43.81|**54.07**|32.40|**78.20**|43.65|69.64|32.92|52.46|
|FIRE|42.38|25.60|42.63|49.88|33.40|77.00|42.75|69.31|32.92|50.35|
|RoPE|42.98|24.60|43.45|51.54|**33.60**|77.10|43.36|**70.13**|33.01|52.98|
|FoPE|**43.37**|**26.17**|**44.12**|53.20|32.20|77.80|**43.83**|70.08|32.92|53.33|
|methods|avg acc|mmlu_stem|mmlu_social_sciences|mmlu_humanities|mmlu_other|
|-|-|-|-|-|-|
|NoPE|25.81|25.47|25.38|28.01|24.39|
|ALiBi|25.99|25.93|27.20|26.44|24.39|
|KERPLE|26.16|26.02|25.97|27.82|24.82|
|FIRE|26.04|26.34|**27.41**|26.25|24.16|
|RoPE|26.68|27.01|27.26|27.87|24.53|
|FoPE|**27.57**|**27.30**|27.31|**29.89**|**25.79**|
|methods|avg ce loss|natural_qs_open|trivia_qa_wiki|arc_easy|
|-|-|-|-|-|
|NoPE|1.4513|1.4334|1.6129|1.3077|
|ALiBi|1.4242|1.3879|1.6057|1.2789|
|KERPLE|1.4284|1.4149|1.5878|1.2825|
|FIRE|1.4580|1.4365|1.6258|1.3118|
|RoPE|1.4225|1.4114|1.5973|1.2588|
|FoPE|**1.3941**|**1.3818**|**1.5736**|**1.2272**|
- **While FoPE is primarily designed for length generalization and long-context tasks, it also performs comparably or even surpasses all baseline methods**. But on MMLU, all methods have weak performance, perhaps caused by the training data. In the future work, we may conduct more experiments on datasets containing math data.
- We also conduct evaluation similar as Sec 5.2 (See https://anonymous.4open.science/r/FoPE-Supplementry-Material/Supplementary_Main_Experiments.pdf). **FoPE still demonstrate the best accuracy on passkey retrieval and the second-best ppl on C4 (only slightly behind ALiBi).**
- As time is limited, we have not conducted experiments on RWKV, xLSTM and Mamba. But it is really a nice suggestion to consider the length generalization capabilities of models with different architectures. We would like to cite these papers in related work in our future revisions.
# Clarification of the hyper-parameters' influence on FoPE
We agree with the reviewer that hyper-parameters optimization is crucial for the large-scale training, but we would like to make some clarifications:
1.**In a wide range of parameters we tested, FoPE consistently demonstrates a significantly better performance than RoPE**.
2. **The performance of FoPE is not highly sensitive to hyper-parameters** (see ablation in Fig 6 and Sec 5.5), although an elaborate hyper-parameter selection of FoPE may lead to a better performance.
Thus, **even without a careful selection of hyper-parameters, FoPE is still competent for replacing RoPE in Transformers**.
Additionally, we emphasize that hyper-parameter tuning is an inevitable procedure before pre-training nowadays, many other hyper-parameters also need to be selected (i.e. hidden_dim, num_heads, ...). As for FoPE, the ablation studies in Fig 6 and Sec 5.5 suggested that:
- The optimal $\sigma$ for FoPE increases as hidden_dim and num_layers grow, which is expected, as larger models suffer more from spectral damage.
- The best $D$ tends to be slightly bigger than the head_dim of each attention head. This is because too few frequency components cannot represent the spectrum damage well, , while too many exceed the model’s representational capacity.
But it is a nice suggestion to have a better selection strategy, we would like to continually research this problem.
# Influence of clipping low frequencies
It is an insightful suggestion to consider the negative influence of clipping low frequencies, we want to make some clarification below:
- "Clipping low frequencies" is quite similar to the "interpolation" used by many extrapolation methods like YARN[1]. As this operation is mainstream solution for LLMs' length generalization, we suppose "Clipping low frequencies" does not have significant negative influence in most tasks.
- **Also, FoPE has a consistent performance on diverse benchmarks, .** Thus, the answer to this problem is still unclear. But we would like to continually consider this issue.
---
[1] YaRN: Efficient Context Window Extension of Large Language Models. ICLR 2024. | Summary: The paper introduces the Fourier positional embedding (FoPE) based on Fourier series. The authors begin by analyzing the rotary positional embedding (RoPE) method in the frequency domain. Further analysis of the feed-forward network yields additional information that linear layers produce spectrum leakage (mix of frequencies) and the activation functions lead to spectrum distortion due to their harmonics generation. These two types of spectrum effects are causes for spectrum damage for context length generalization, hindering RoPE’s effectiveness. Then, the attention mechanism is shown to produce spectrum damage on low frequencies for *undertrained* components. Following this analysis, FoPE is proposed as a multi-frequency representation for each dimension based on Fourier Series, instead of a single frequency like RoPE. The proposed method also includes zeroing the low frequency components to control for undertrained frequencies. Next the authors compare FoPE against RoPE and ALiBi on pre-training, and against YARN extension on continual pre-training of OLMo, and against RoPE on fine-tuning SmolLM-1.7B. They measure perplexity and passkey retrieval to test the context length generalization of the resulting models. In most cases, FoPE shows better generalization when increasing the context length at test time. Further, ablation analysis is performed to test both the zeroing of frequency and without the Fourier series, and various analysis related to other aspects of the language model.
## update after rebuttal
Thanks to the authors for clarifying the various points. After a thoughtful consideration, primarily given the additional comparisons, I've decided to update my score.
Claims And Evidence: The claims made in the submission are mostly supported. A formal definition of *undertrained* components of attention is missing, so it is unclear when does spectrum damage based on this definition matters. An empirical analysis of the potential Spectrum Damage on a well-trained model is lacking.
Methods And Evaluation Criteria: The derived methods seem sound. The selection of the low frequency threshold is not analyzed. What is the impact on the method if an incorrect or poor threshold selected? The evaluation criteria for the method seems correct. The datasets could be selected to test generalization further (e.g., perplexity on SCROLLS).
Theoretical Claims: The derivations in the paper seem correct. Please, see comments above about undertrained components.
Experimental Designs Or Analyses: The proposed method is evaluated on a pre-training on C4 and Books datasets. The results show that ALiBi performs better than FoPE. In the text, the authors suggest that there is an “issue” with how ALiBi considers the linear declined attention, putting a considerable effort to learn the short-distance information. However, it is well-known that the distribution of mutual information between tokens behaves as a power-law distribution [1]. Namely, this is a property of the data, and the model exploits such prior knowledge (and not the other way around). In addition, pre-trained language models have better perplexity than the ones showed in Section 5.2, and the models are much bigger in size today (using more train tokens) than those used in the paper.
The passkey retrieval results in section 5.2 show that the bigger the model, the better the retrieval results. If the 3 FoPE model sizes have the same embeddings and trained with the same context length, then why are the smaller models unable to generalize in the same way as the biggest model? Their curves decay much faster. Also, existing models can solve tasks beyond 8k context length today. It is unclear if they can generalize similarly as FoPE or even better. This suggests that there may be an issue when training the models or with the experiment itself.
Last, it is unclear what is the maximum sequence length for section 5.4? Also, what is the performance of the baseline model *before* fine-tuning?
Other positional embedding methods have not been compared.
[1] “Critical Behavior in Physics and Probabilistic Formal Languages”, Lin and Tegmark, 2017
Supplementary Material: No
Relation To Broader Scientific Literature: The contribution of FoPE and proposes that context length generalization is of some value to the community. Current LLMs have context lengths that are very large (100k+ tokens), thus the significance is low. The theoretical analysis is well executed but limited (i.e., how do residual connections influence the frequencies? How does the LayerNorm or the specific activation functions?).
Essential References Not Discussed: Not aware of such references.
Other Strengths And Weaknesses: * Theory and derivations are valuable in the paper.
* The experiments and results are not conclusive. It’s hard to really conclude the value of FoPE versus RoPE or other not-compared embedding methods.
Other Comments Or Suggestions: The descriptions in the text of the experiments related to Figure 6 look disconnected to the Figure itself.
Questions For Authors: See questions in sections above.
* Can you learn the threshold value for the low frequencies?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the elaborate comments, we will address them according to their proposed order.
# Clarification for "undertrained components" and the impact of poor threshold
- **We have jointly defined the "undertrained components" and "floor frequency" in Fig 2, Sec 3.3 and Sec 4, using both visualization and formula.**
- Also, **these two concepts are well-known in the area of length generalization**, mentioned by many papers [1, 2].
- As for the impact of poor threshold, we add ablations on 60M OLMo trained with 512 context length (similar setting as Sec 5.2). **Our findings suggest that selecting a floor frequency no-less than $2\pi/L$ is necessary, but a higher threshold does not have a significant influence.**
|threshold (f=$2\pi/L$)|0|0.5f|0.75f|f|1.25f|1.5f|
|-|-|-|-|-|-|-|
|**ce loss on 8192 length**|6.86|6.54|5.97|5.84|5.86|5.85|
# Empirical analysis of the Spectrum Damage on well-trained models
- In Sec 5.4, our experiments are based on SmolLM-1.7B, a famous model well-trained by HuggingFace for academic purposes.
- In Sec 5.6, Fig 3 and Fig 7, we have a detailed empirical analysis of the influence of Spectrum Damage on LLaMA-2-7B.
- To further illustrate this phenomenon, we conduct another empirical analysis on LLaMA-2-7B (See https://anonymous.4open.science/r/FoPE-Supplementry-Material/Empirical_Analysis_on_Spectrum_Damage.pdf). We force every token only has one frequency component in the 1st layer. Then, we compare the spectrum of the 1st and 2nd layer after DFT on the attention map. **Many other frequency components appear in the 2nd layer, which demonstrates the happening of Spectrum Damage.**
# Advantage of FoPE over ALiBi and the well-known negative influence of linear declined attention
- **Our results in Sec 5.2 clearly show FoPE's advantages compared to ALiBi, it is unclear why the reviewer got the opposite conclusion.**
- We agree that "the mutual information between tokens behaves as a power-law distribution", and we appreciate this perspective.
- However, this property is only useful for short context modeling, but not for long-context modeling. For example, **the "linear declined attention" hinders the retrieval of long-distance information, thereby lacking the crucial ability for long-context modeling.**
- **Not only do our results in Fig 1 demonstrate this phenomenon, but many well-known work also have the similar viewpoint [2, 3, 4]** that "linear declined attention" and ALiBi is fall short in long-context applications.
# Clarification for Sec 5.4 (the maximum sequence length and base model performance)
- We have provided the maximum sequence length in Sec 5.4 (line 328 and 373).
- The performance of SmolLM-1.7B-base is as follows, which do not influence our conclusions:
|length|gov_report|multi_news|trec|triviaqa|samsum|
|-|-|-|-|-|-|
|0-4k|9.10|5.06|42.0|68.53|17.97|
|4-8k|9.50|7.06|51.0|69.21|16.70|
|8k+|8.02|6.01|38.0|65.81|20.17|
# Advantage and clarification of "smaller models' performance decay faster than larger ones"
- Firstly, this phenomenon implies FoPE is a scalable method, which should be considered as an advantage.
- Secondly, it is not odd that larger models have better generalization.
# FoPE is valuable for many reasons, although current LLMs achieve 100k+ context length
- It is well-known that current Transformer-based LLMs achieve 100k+ context length using extrapolation as YARN [1]. **But in Fig 4, we have shown FoPE can also be used for extrapolation and has better performance than YARN.**
- With FoPE, models achieve better length generalization trained with a much shorter context length. **Thus, FoPE can significantly improve the training efficiency of models (the longer length, the much slower training).** This is valuable for saving time and money.
# Comparison with more baselines on more benchmarks
Please check our reply to Reviewer LYp1 for detailed results.
# Influence of residual connections, LayerNorm and specific activation functions
- We have modeled the influence of activation functions in Sec 3.2.
- Considering the residual connections and LayerNorm only deliver linear transform in frequency domain, we did not include their analysis in our paper. But for clarity, we will contain them in future revisions.
# Generalization ability of FoPE on non-training data
- **We have evaluated the OOD generalization in Sec 5.2.** (trained on Gutenberg Books and evaluated the ppl on C4)
- Our evaluations on Sec 5.4 include datasets from SCROLLS (See Sec 5.4).
# Clarification for Figure 6
The description is clearly presented just under Fig 6 in Sec 5.5.
---
[1] YaRN: Efficient Context Window Extension of Large Language Models. ICLR 2024.
[2] FIRE: Functional Interpolation for Relative Positions Improves Long Context Transformers. ICLR 2024.
[3] CLEX: Continuous Length Extrapolation for Large Language Models. ICLR 2024.
[4] Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs. NeurIPS 2024. | Summary: This paper analyses the limitations of Rotary Position Embedding (RoPE) in extending language model context length using Discrete Signal Processing theory. It identifies spectral damage from linear layers, activation functions, and insufficient frequency training as key issues affecting RoPE’s periodicity. To address this, the authors propose Fourier Position Embedding (FoPE), which constructs a Fourier Series and removes harmful frequency components to enhance length generalisation.
Claims And Evidence: The claims are generally supported and shows performance (length generalisation) gains across benchmarks. However, it does not address the computational cost of FoPE, which is important for understanding the trade-off between performance and efficiency.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate. FoPE enhances length generalisation by improving position encoding, addressing spectral issues in long-context attention. The evaluation on diverse benchmarks (e.g., GovReport, MultiNews, TREC) effectively tests performance across varying context lengths. However, including more complex reasoning tasks would provide a more comprehensive assessment.
Theoretical Claims: No.
Experimental Designs Or Analyses: While the paper demonstrates performance improvements on long contexts, it doesn't provide insights into how the Fourier Position Embedding method impacts computation costs of training and inference.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper extends Rotary Position Embedding (RoPE) by analyzing its frequency-domain properties using Discrete Signal Processing (DSP) and identifying spectral distortions that hinder long-context generalization. Building on prior work in positional embeddings (Su et al., 2021; Press et al., 2022) it introduces Fourier Position Embedding (FoPE), which filters harmful frequency components to improve attention’s periodicity.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1) Frequency-domain analysis of RoPE using Discrete Signal Processing (DSP), offering new insights into spectral distortions affecting long-context generalization.
2) Practical impact with Fourier Position Embedding (FoPE), demonstrating strong improvements in context extension across model scales and tasks.
Weaknesses:
1) Computational cost of FoPE is unclear, especially for large-scale models.
Other Comments Or Suggestions: N/A
Questions For Authors: 1) Could the authors include benchmarks on reasoning tasks like MMLU and GSM8K to demonstrate greater task diversity?
2) The authors are asked to provide a comparison of computational analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the detailed comments from the reviewer, we would like to make several clarification below.
# Computation cost of FoPE is similar to RoPE
We agree with the reviewer that the efficiency and computation cost are essential for Position Embedding, thus:
- FoPE keeps the rotary matrix having the similar shape as RoPE.
- FoPE keeps the similar pipeline as RoPE: ① pre-compute the rotary matrix and save the matrix in cache before training; ② take out the matrix for rotation during training.
Therefore, FoPE is as efficient as RoPE, independent of the model scale. (Reviewer FqQv also acknowledges that FoPE is lightweight and effective in "Other Strengths And Weaknesses")
# Evaluation on more tasks and baselines
It is a nice suggestion to evaluate FoPE on more diverse tasks, thus we supplement the following experiments:
- We validate the effectiveness of FoPE by evaluating on 10+ more benchmarks and comparing with more 3 more baselines. These supplementary experiments are conducted on OLMo-1.2B training on C4 datasets, keeping all of the settings similar to the experiments in Sec 5.2 (line 293-329). We do not show the results on GSM8K as all methods achieve nearly 0 acc, which may be caused by the lack of math data in C4.
| methods | avg acc | basic_arithmetic | social_iqa | winogrande | openbook_qa | sciq | hellaswag | piqa | commonsense_qa | arc_easy |
|-|-|-|-|-|-|-|-|-|-|-|
| NoPE | 42.14 | 25.67 | 43.71 | 51.86 | 29.80 | 76.70 | 41.83 | 68.83 | 31.61 | 51.40 |
| ALiBi | 42.93 | 24.97 | 42.53 | 53.12 | 31.40 | 77.70 | 43.06 | 69.42 | **33.42** | **53.68** |
| KERPLE | 43.22 | 25.03 | 43.81 | **54.07** | 32.40 | **78.20** | 43.65 | 69.64 | 32.92 | 52.46 |
| FIRE | 42.38 | 25.60 | 42.63 | 49.88 | 33.40 | 77.00 | 42.75 | 69.31 | 32.92 | 50.35|
| RoPE | 42.98 | 24.60 | 43.45 | 51.54 | **33.60** | 77.10 | 43.36 | **70.13** | 33.01 | 52.98 |
| FoPE | **43.37** | **26.17** | **44.12** | 53.20 | 32.20 | 77.80 | **43.83** | 70.08 | 32.92 | 53.33 |
| methods | avg acc | mmlu_stem | mmlu_social_sciences | mmlu_humanities | mmlu_other |
|-|-|-|-|-|-|
| NoPE | 25.81 | 25.47 | 25.38 | 28.01 | 24.39 |
| ALiBi | 25.99 | 25.93 | 27.20 | 26.44 | 24.39 |
| KERPLE | 26.16 | 26.02 | 25.97 | 27.82 | 24.82 |
| FIRE | 26.04 | 26.34 | **27.41** | 26.25 | 24.16 |
| RoPE | 26.68 | 27.01 | 27.26 | 27.87 | 24.53 |
| FoPE | **27.57** | **27.30** | 27.31 | **29.89** | **25.79** |
| methods | avg ce loss | natural_qs_open | trivia_qa_wiki | arc_easy |
|-|-|-|-|-|
| NoPE | 1.4513 |1.4334 | 1.6129 | 1.3077 |
| ALiBi | 1.4242 |1.3879 | 1.6057 | 1.2789 |
| KERPLE | 1.4284 | 1.4149 | 1.5878 | 1.2825 |
| FIRE | 1.4580 | 1.4365 | 1.6258 | 1.3118 |
| RoPE | 1.4225 | 1.4114 | 1.5973 | 1.2588 |
| FoPE | **1.3941** | **1.3818** | **1.5736** | **1.2272** |
- **Although FoPE is primarily designed for length generalization and long-context tasks, it also performs comparably or even surpasses all baseline methods**. But on MMLU, all methods have weak performance, perhaps caused by the training data. In the future, we may conduct more experiments on datasets containing reasoning and math data.
- We also conduct evaluation similar to those in Sec 5.2 (See https://anonymous.4open.science/r/FoPE-Supplementry-Material/Supplementary_Main_Experiments.pdf). **FoPE still demonstrates the best accuracy on passkey retrieval and the second-best ppl on C4 (only slightly behind ALiBi).** | Summary: ## update after rebuttal
I read the latest clarification by the authors, and understand q and k in Re[qk*e^{i{m-n}\theta}] are not exactly the 2-dim vector [q_x, q_y]^T but an implicit complex number. I raised my score back.
******
This paper points out that in RoPE, different dimensions correspond to different frequencies,
and suggests viewing the interaction between queries and keys from the perspective of non-Uniform DFT (Eq.2 - Eq.4).
(However, there is an unclear point here: while RoPE is motivated by complex number rotation, its implementation is actually based on vector rotation--—they are not equivalent. The authors' analysis is based on the former, but the actual analysis object is the latter, and the authors' implementation is also the latter. Therefore, at minimum, they should explain this gap.)
Based on this **deconstructive analysis**,
the authors identify two issues:
1. If we consistently view RoPE from a signal perspective, we cannot ignore the spectral leakage caused by activation functions and linear layers;
2. Considering that low frequencies correspond to stable components in long contexts, while the text used in training is not very long, these low-freq components are insufficiently trained.
The authors propose two improvements to RoPE:
1. Allow feature dimensions that previously corresponded to a *single* frequency to now correspond to *multiple* frequencies, while still maintaining a *primary* frequency, enabling more flexible adjustment of the disturbed spectral information.
2. Set the insufficiently trained dimensions directly to 1.
In experiments, the authors primarily validate the benefits of FoPE for length extrapolation capabilities. Experimental scenarios include pre-training, continual pre-training, and fine-tuning.
Claims And Evidence: I have concerns about the following claim:
The authors view the interaction between queries and keys from the perspective of non-Uniform DFT (Eq.2 - Eq.4),
and they hope this perspective can be maintained throughout the model, thus proposing two improvements.
The starting point of these approaches is understanding RoPE's formula as Eq.2.
However, **it should be noted that RoPE's actual implementation is not Eq.2.**
Eq.2 can be considered as the heuristic starting point of RoPE,
while the actual implementation rotates each pair of dimensions by a certain angle, which is not equivalent to Eq.2,
therefore, in fact **we cannot write a strict Fourier form equation**.
It can be said that:
**RoPE has both a "heuristic approach" and a "practical approach",
and the authors' analysis is based on the former "heuristic approach" but improves upon the "practical approach".**
I would like to hear other reviewers' and AC's opinions on this point.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no specific theoretical claims, but the paper's analysis has certain theoretical aspects. Please see my concerns in (Claims And Evidence).
Experimental Designs Or Analyses: Yes.
PS: I have not personally conducted similar experiments and have insufficient understanding of the experimental details.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper provides insights into understanding Transformer architecture and solving long-range dependency problems from a signal analysis perspective.
Essential References Not Discussed: Not sure.
Other Strengths And Weaknesses: Other Strengths:
1. Provides a deconstructive analysis of existing models;
2. The proposed method is lightweight and effective;
3. Although I am not very familiar with the experimental approaches in this direction, I find the authors' writing of the experimental section to be very fluent.
Other Weaknesses:
- I still have concerns about the signal-based explanation. First, there are the concerns raised in (Claims And Evidence). Additionally, I am concerned about: in the authors' analysis, attention with RoPE becomes an inverse Fourier transform form, which essentially treats the input as a frequency domain response. However, if this is the case, how can we analyze multiple stacked blocks from a signal perspective?
Other Comments Or Suggestions: What is the typical range of values for $\omega_m = 1/{\theta}^{(2m/M)}$ in Eq.2? This should be explicitly stated.
Questions For Authors: Please refer mainly to the (Claims And Evidence) section,
and also see the (Other Weaknesses) and (Other Comments Or Suggestions) sections
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the insightful comments from the reviewer, we are going to make clarifications for the concerns above.
# Clarification for the "heuristic approach" and "practical approach" of RoPE/FoPE
It seems this is the major concern of the reviewer, we acknowledge the importance of clarifying this point:
1. **The Rotary Matrix implies the real part of the complex number in Matrix Space, which is also the main difference between the "heuristic approach" and "practical approach" of RoPE.** The derivation is in Sec 3.4 of Roformer's original paper [1], we also provide a derivation under our understanding:
In the 2D case of RoPE, the vectors $q$ and $k$ can be viewed either as 2D plane vectors ($\mathbf{q}=(q_x,q_y)^T$) or as vectors in the complex plane ($\mathbf{q}=\|\mathbf{q}\|e^{i\theta_q}$, $\|q\|=\sqrt{q_x^2+q_y^2}$, $\theta_q = \arctan{\frac{q_x}{q_y}}+k\pi$ ). The inner product of the two vectors is equal to $\langle\mathbf{q},\mathbf{k}\rangle=Re[\mathbf{qk}^*]$ (the proof is at the end of this reply).
2. **As taking the real part does not change the frequency-domain property of a vector, thus: ① RoPE's actual implementation still achieves Eq.2; ② we can still write a strict Fourier form equation.**
3. **As a result, our theoretical analysis and implementation are well-aligned.** To provide better clarity, we will explicitly include a derivation in the future revisions.
4. **Our further empirical results also demonstrate that the actual implementation of RoPE properly achieves its theoretical motivation, also well-aligning with our frequency-domain analysis (See https://anonymous.4open.science/r/FoPE-Supplementry-Material/Empirical_Analysis_on_Spectrum_Damage.pdf).** In this experiment, we force every token only has one frequency component in the first layer. Then, we compare the spectrum of the first and second layer with DFT on attention map (aligned with the derivation in Sec 2.2).
# Relationship between fourier/signal modeling and multiple stacked blocks
- Firstly, we want to clarify that, **as the Fourier Transform is a linear transform [2], it does not change the main properties of models.** Thus, the multiple stacked blocks are still in charge of modeling more diverse/complex and higher-order functions/signals, which improves the expression ability of models.
- Secondly, based on our modeling from a signal/frequency-domain perspective, **different layers actually process signals of different frequency distribution.** This is useful for improving the expression ability of models. However, **RoPE wrongly regards the frequency distribution as same in different layers, leading to drawbacks in length generalization**.
- Lastly, our modeling in frequency-domain can analyze the periodicity of attention mechanism, providing insights into the drawback of RoPE in length generalization.
# Typical range of values for $\omega_m$
We have shown these statistics in Appendix D.3 (line 727-728). The frequencies of RoPE are between (0, 1] and samples more densely near 0.
# Comparison with more baselines on more benchmarks
We also conduct more supplementary experiments, please check our reply to Reviewer LYp1 for detailed results.
---
## Reference
[1] Jianlin Su, et al. RoFormer: Enhanced Transformer with Rotary Position Embedding.
[2] Oppenheim, et al. Signals and Systems.
---
## Additional Proof
$$\langle f_q(\mathbf{x_m},m),f_k(\mathbf{x_n},n)\rangle$$
$$=\langle \mathbf{q}e^{im\theta},\mathbf{k}e^{in\theta}\rangle$$
$$=\text{Re}[\mathbf{qk}^*e^{i(m-n)\theta}]$$
$$=\text{Re}[\|q\|\|k\|e^{i[\theta_q-\theta_k+(m-n)\theta]}]$$
$$=\|q\|\|k\|\cos(\theta_q-\theta_k+(m-n)\theta)$$
$$=\|q\|\|k\|\left[ \cos(\theta_q-\theta_k)\cos(m-n)\theta - \sin(\theta_q-\theta_k)\sin(m-n)\theta\right]$$
$$=\|q\|\|k\|\left[(\cos\theta_q\cos\theta_k+\sin\theta_q\sin\theta_k)\cos(m-n)\theta-(\sin\theta_q\cos\theta_k-\cos\theta_q\sin\theta_k)\sin(m-n)\theta \right]$$
$$=(q_xk_x+q_yk_y)\cos (m-n)\theta - (q_yk_x-q_xk_y)\sin (m-n)\theta$$
$$=[q_x, q_y][\cos (m-n)\theta, -\sin (m-n)\theta; \sin (m-n)\theta, \cos (m-n)\theta][k_x, k_y]^T$$
$$=[q_x, q_y][\cos m\theta, \sin m\theta; -\sin m\theta, \cos m\theta][\cos n\theta, -\sin n\theta; \sin n\theta, \cos n\theta][k_x, k_y]^T$$
$$=(\mathbf{R_{\theta,m}q})^T(\mathbf{R_{\theta,n}k})$$
$$=\mathbf{q^T R_{\theta,m-n}k}$$
---
Rebuttal Comment 1.1:
Comment: > Re: Clarification for the "heuristic approach" and "practical approach" of RoPE/FoPE
**The "heuristic approach" and "practical approach" of RoPE are indeed not equivalent, as we can clearly see from your Additional Proof derivation.**
In this derivation, you attempted to derive the "practical approach" formula from the "theoretical heuristic approach".
The third line
$$\textrm{Re}[{\mathbf{q}\mathbf{k}^{*}}e^{\mathrm{i}(m-n)\theta}]$$
is correct, but the next line is incorrect. Here, $\mathbf{q}\mathbf{k}^{*}$ is a real number, equal to
$\|\mathbf{q}\| \|\mathbf{k}\| \cos(\theta_q - \theta_k)$,
or it can be written as
$\mathrm{Re}[\|\mathbf{q}\| \|\mathbf{k}\| \mathrm{e}^{i(\theta_q - \theta_k)}]$. Therefore, after the third line, you can only write:
$$
\mathrm{Re}[ \underbrace{\mathrm{Re}[\|\mathbf{\theta_q}\| \|\mathbf{\theta_k}\| \mathrm{e}^{i(\theta_q - \theta_k)}]}_{\textrm{a real number}} \mathrm{e}^{\mathrm{i}(m-n)\theta}],
$$
And this is not equal to the fourth line's formula. The difference lies in whether you accidentally let $\theta_q$ and $\theta_k$ affect the angle in the complex plane.
Therefore, your "practical approach" and "theoretical heuristic approach" are not equivalent.
I'm not saying this to negate your contribution - even at the heuristic level, your analysis and attempts are meaningful.
**However, there is indeed an overclaim here, and it's quite crucial, so I will discuss this carefully with the AC and other reviewers.**
Additionally, **there seem to be another overclaim** in the paper.
The authors analyze RoPE from the perspective of NUDFT, but according to Wikipedia [https://en.wikipedia.org/wiki/Non-uniform_discrete_Fourier_transform], the number of frequencies in NUDFT needs to reach N.
This point should also be clarified.
> Re: Relationship between fourier/signal modeling and multiple stacked blocks
I didn't quite understand this part.
What I mean is, since the authors want to view the Transformer from a signal perspective throughout,
can the output of an attention layer with RoPE be considered as the result of inverse NUDFT? If so, how should this be connected with the subsequent V and more self-attention layers?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's elaborate comments on our derivation. But there are several essential clarification we want to present:
> Re: $\mathbf{qk^*}$ is real number, equal to $|q||k|\cos(\theta_q-\theta_k)$ (or $Re[|q||k|e^{i(\theta_q-\theta_k)}]$
We would clarify two crucial misunderstandings:
- $\mathbf{q}$, $\mathbf{k^*}$ and $\mathbf{qk^*}$ are all complex numbers, but not real numbers.
- In the third line of our formula, it is $Re[\mathbf{qk^*}e^{i(m-n)\theta]}$, but not $Re[<\mathbf{q, k^*}>e^{i(m-n)\theta}]$.
Thus, **the third line of our formula calculates the "multiplication" but not the "inner product" between $\mathbf{q}$ and $\mathbf{k}$.** This would lead to a totally different results, as $\mathbf{qk^*} = |q||k|e^{i[\theta_q-\theta_k]}$, but $<\mathbf{q, k^*}> = Re[\mathbf{q(k^*)^*}]=Re[\mathbf{qk}]=|q||k|\cos(\theta_q+\theta_k)$.
We understand that the reviewer want to present the "inner product" of $q$ and $k$ in attention mechanism, but the "inner product" have been presented in the second line of our derivation $<\mathbf{q}e^{im\theta}, \mathbf{k}e^{in\theta}>$.
**Overall, we are still confident that our derivation is correct. We welcome further discussion with the Reviewer FqQv, as well as the other reviewers and ACs.**
Additionally, we would clarify some ambiguous definition in our first version rebuttal. That is, the original format of a complex number should be $\mathbf{q}=|q|e^{i\theta_q}$ ($|q|=\sqrt{q_x^2+q_y^2}, \theta_q=\arctan{\frac{q_x}{q_y}}+k\pi$), while the vector format of a complex number should be $\mathbf{\vec{q}}=[q_x, q_y]^T$. Accordingly, we rewrite the derivation at the end of this reply (without changing any of its original meaning).
> Re: Your "practical approach" and "theoretical heuristic approach" are not equivalent.
We would like to respectfully clarify that, **we did not claim the equivalence between the "practical approach" and "theoretical approach" of RoPE/FoPE**. The expressions we used are "RoPE **implicitly achieves** Periodic Attention based on NUDFT" and "the practical approach and theoretical approach of RoPE/FoPE are **well-aligned**". **These expressions are deliberately chosen and clearly distinct from claiming strict equivalence.**
**Also, our derivation explains the actual relationship between the "practical approach" and "theoretical approach" of RoPE/FoPE.** That is, these two approaches are geometrically aligned. In implementation, RoPE transforms complex-number operations in the complex plane into vector operations in the 2D real plane.
> Re: The number of frequencies in NUDFT needs to reach N, this point should also be clarified.
The number N is not essential for NUDFT, and can be set to any reasonable number. This can be supported by the official implementation of NUDFT from MATLAB (https://www.mathworks.com/help/matlab/ref/double.nufft.html) and other repo (https://mind-inria.github.io/mri-nufft/nufft.html).
> Re: Can the output of an attention layer with RoPE be considered as the result of inverse NUDFT? If so, how should this be connected with the subsequent V and more self-attention layers?
We did not claim that "**the output of an attention layer with RoPE** could be considered as the result of inverse NUDFT", but we claim that "**the attention score/weight of a self-attention with RoPE (without softmax)** could be considered as the result of inverse NUDFT". **This is also be claimed by the title in our paper, this transform implicitly achieves a Periodic Attention, which have the potential to be extended periodically for better length generalization.**
Thus, the subsequent V is a weighted average of the former V, where the weight is the **Periodic Attention Score**. Still, the V vector implies the coefficients of frequency components.
As for the influence of passing coefficients across layers, we have answered in our former rebuttal. That is, "the multiple stacked blocks are still in charge of modeling more diverse/complex and higher-order functions/signals, which improves the expression ability of models."
---
**Additional Derivation**
$< f_q(\mathbf{x_m},m),f_k(\mathbf{x_n},n)>$
$=\langle \mathbf{q}e^{im\theta},\mathbf{k}e^{in\theta}\rangle$
$=Re[\mathbf{qk}^*e^{i(m-n)\theta}]$
$=Re[\|q\|\|k\|e^{i[\theta_q-\theta_k+(m-n)\theta)]}]$
$=\|q\|\|k\|\cos(\theta_q-\theta_k+(m-n)\theta)$
$=\|q\|\|k\|\left[ \cos(\theta_q-\theta_k)\cos(m-n)\theta-\sin(\theta_q-\theta_k)\sin(m-n)\theta\right]$
$=\|q\|\|k\|\left[(\cos\theta_q\cos\theta_k+\sin\theta_q\sin\theta_k)\cos(m-n)\theta
-(\sin\theta_q\cos\theta_k-\cos\theta_q\sin\theta_k)\sin(m-n)\theta \right]$
$=(q_xk_x+q_yk_y)\cos(m-n)\theta - (q_yk_x-q_xk_y)\sin(m-n)\theta$
$=[q_x, q_y] [\cos(m-n)\theta, -\sin(m-n)\theta; \sin(m-n)\theta, \cos(m-n)\theta] [k_x, k_y]^T$
$=[q_x, q_y] [\cos m\theta, \sin m\theta; -\sin m\theta, \cos m\theta][\cos n\theta, -\sin n\theta;\sin n\theta, \cos n\theta][k_x, k_y]$
$=\mathbf{\vec{q}}^T\mathbf{R}_{\theta,m-n}\mathbf{\vec{k}}$ | null | null | null | null | null | null |
Goal-Oriented Skill Abstraction for Offline Multi-Task Reinforcement Learning | Accept (poster) | Summary: The paper proposes Goal-Oriented Skill Abstraction (GO-Skill), a novel method for offline multi-task reinforcement learning (MTRL) that learns a unified policy to solve multiple tasks using pre-collected, task-mixed datasets. It introduces a skill extraction process using a goal encoder, vector quantization (VQ), and a skill decoder to build a discrete skill library from offline trajectories, followed by a skill enhancement phase to address class imbalance. A high-level skill-based policy, implemented as a Prompt Decision Transformer, dynamically combines these skills to tackle specific tasks. Main findings include superior performance on the MetaWorld benchmark (MT30 and MT50 setups), with average episode returns of 3159.1 (Near-Optimal) and 2123.8 (Sub-Optimal) on MT50, outperforming baselines like MTDT and PromptDT.
Claims And Evidence: The claim that GO-Skill enhances knowledge transfer and task performance via reusable skills is well supported by experiments on MetaWorld MT30 and MT50, showing higher average episode returns.
Methods And Evaluation Criteria: The GO-Skill method, involving goal-oriented skill extraction and hierarchical policy learning, is appropriate for offline MTRL, leveraging offline data to abstract reusable skills.
Theoretical Claims: The paper does not present formal theoretical claims or proofs. It includes serveral equations defining the loss objective to optimize the proposed model’s performance and basic formulation of model framework. As such, there are no proofs to verify.
Experimental Designs Or Analyses: The experiment compare serveral Decision Transformer-style RL models in MTRL and demonstrates its effectiveness with statistical significant improvement. The qualitative result of ablation study is well-designed and clear.
Supplementary Material: Appendix A (environment details) and additional visualizations (Figure 9) were reviewed and align with the main text’s claims.
Relation To Broader Scientific Literature: The paper omits discussion of "LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning" by Liu et al. (NeurIPS 2023), which is highly relevant to its contributions in knowledge-sharing for multi-task RL. LIBERO provides a benchmark for evaluating knowledge transfer—both declarative and procedural—in lifelong RL settings.
Given the paper’s focus on sharing knowledge across RL tasks, referencing LIBERO could contextualize its approach against a standardized benchmark. This omission counld limits the ability to assess how the proposed method compares to or builds upon established knowledge transfer frameworks in RL.
Essential References Not Discussed: To my knowledge, no essential references are missing. See the section "Relation To Broader Scientific Literature".
Other Strengths And Weaknesses: Strengths:
1. the paper is well written and clear.
2. Originality in goal-oriented skill abstraction, strong empirical results on MetaWorld, and clear visualizations (e.g., Figure 7). The skill enhancement phase addressing imbalance is a practical innovation.
Weaknesses:
1. Skill Extraction need addtional training cost to mitigate the skill imbalance problems.
2. Over-reliance on MetaWorld limits domain diversity, such as LIBERO.
Other Comments Or Suggestions: See above.
Questions For Authors: 1. What does “dynamic transfer” mean in Section 3.1.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Skill Extraction need addtional training cost to mitigate the skill imbalance problems.
As detailed in Appendix B.2, we ensure that the total number of training iterations remains consistent across all methods. Specifically, the sum of stage 1 iterations (Skill Extraction) and stage 2 iterations (Skill Enhancement and Skill-Based Policy Learning) in GO-Skill is equal to the total iterations of the baselines. Importantly, since Skill Enhancement and Skill-Based Policy Learning are entirely independent learning processes, they are trained in parallel during the second stage of GO-Skill. As a result, no extra training time or iterations are introduced compared to the baselines. This design allows us to incorporate skill balancing mechanisms without increasing overall computational cost or compromising fairness in comparison.
> Over-reliance on MetaWorld limits domain diversity, such as LIBERO.
Thank you for suggesting the LIBERO benchmark. However, we would like to clarify that LIBERO is fundamentally a multi-task imitation learning benchmark rather than a MTRL benchmark. The dataset only provides a limited number of expert demonstrations (50 trajectories per task) and lacks reward information (only a binary success signal). This setup does not fully capture one of the key advantages of GO-Skill: suboptimal trajectories from certain tasks can still facilitate skill learning and potentially benefit other tasks leveraging the same skill. We have analyzed this phenomenon in Section 5.4 of the main paper.
Nevertheless, we adapted our approach to the LIBERO setting by employing an imitation learning paradigm, removing the return-conditioning in DT-based models, and comparing GO-Skill, PromptDT, and MTDT under this framework. Additionally, since LIBERO is a vision-based environment, we utilized R3m\[1] to encode image observations into vector representations. We use the LIBERO90 setting, and the experimental results are as follows:
| Method | LIBERO |
| -------- | ---------------- |
| MTDT | \$9.9 \pm 3.5\$ |
| PromptDT | \$17.6 \pm 3.7\$ |
| GO-Skill | \$29.4 \pm 4.0\$ |
From the experimental results, it can be seen that the explicit knowledge sharing provided by skill abstraction can also help agent to learn multiple tasks quickly in a multi-task imitation learning scenario.
Furthermore, based on suggestions from other reviewers, we also conducted few-shot experiments on *Cheetah-Vel* and *Ant-Dir*, following the PromptDT setup. The results are shown below:
| Method | Cheetah-Vel | Ant-Dir |
| -------- | ------------------ | ------------------ |
| MTDT | \$-158.6 \pm 9.8\$ | \$168.3 \pm 20.7\$ |
| PromptDT | \$-39.5 \pm 4.2 \$ | \$398.9 \pm 39.1\$ |
| GO-Skill | \$-40.8 \pm 3.6 \$ | \$411.2 \pm 32.5\$ |
As shown in the table, GO-Skill achieves comparable performance to PromptDT. This is expected because these environments are relatively simple, and the high similarity between different tasks causes both GO-Skill and PromptDT to reach the performance ceiling of these benchmarks.
\[1] Nair, Suraj, et al. R3m: a universal visual representation for robot manipulation. CoRL 2022.
> What does "dynamic transfer" mean in Section 3.1.
"Dynamic transfer" refers to the MDP’s transition dynamics, i.e., how the state evolves. The key distinction here is that this dynamic transition describes the **state change after H steps of decision-making**, rather than a single-step transition. If this phrasing has caused any misunderstanding, we will revise it to "H-step dynamic transfer" in the revised manuscript to clarify the meaning.
**Thank you once again for your valuable feedback. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.** | Summary: This paper introduces GO-Skill, a novel hierarchical framework for offline multi-task reinforcement learning. The approach decouples learning into two components:
- A low-level action predictor ("skill-decoder transformer") that generates actions based on a given skill prompt
- A high-level skill predictor ("skill-based decision transformer") built upon the PromptDT architecture that selects appropriate skills to serve as prompts for the skill-decoder
The main innovation lies in the skill representation: embeddings that capture state variations over a fixed horizon $H$. The authors evaluate their approach on the MetaWorld benchmark, demonstrating consistent improvements over relevant baseline methods.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors demonstrate GO-Skill's greater performance across multiple MetaWorld benchmarks (MT30, MT50) under different data conditions through relevant comparisons with previous methods. The ablation studies validate the contribution of each component (goal-oriented representation, vector quantization, skill enhancement). The paper provides interesting qualitative analysis, with visualizations of skill distributions proving method's ability to learn transferable skills, and examples of skills depicted in Appendix. The experimental methodology is sound: the paper uses multiple random seeds for statistical significance and provides detailed per-task results in the supplementary material.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. The MetaWorld benchmark provides a diverse set of robotic manipulation tasks suitable for evaluating multi-task reinforcement learning. The authors evaluate their approach under different conditions (including varying numbers of tasks and dataset quality) and compare against relevant state-of-the-art approaches. The ablation studies effectively isolate the contributions of different components of the GO-Skill architecture.
Theoretical Claims: Not Applicable (there are no theoretical claims).
Experimental Designs Or Analyses: The experimental design is solid and well-organized. The authors test GO-Skill by comparing it with six different baseline approaches across four test conditions (MT30/MT50 combined with Near-Optimal/Sub-Optimal data). They also include several ablation studies that examine the importance of different parts of their system, such as how the skill model is designed, how they handle the problem of skill imbalance, and how different parameter settings affect performance. To help understand how their method works, they include helpful visualizations showing how skills are distributed and how well their system adapts to new tasks during fine-tuning. I also found the skill visualizations from Appendix C particularly interesting.
Supplementary Material: Upon reviewing all the appendix, I would recommend adding a reference to Appendix C in the results section, as it contains valuable visualizations that enhance understanding of the learned skills.
Relation To Broader Scientific Literature: The paper differentiates itself from previous methods that mainly work on low-level action imitation. Instead, this work puts emphasis on higher-level skill abstraction, which takes inspiration from how humans learn. This approach allows the system to identify reusable skills across different tasks.
Essential References Not Discussed: To the best of my knowledge, all the releva related works are cited and discussed.
Other Strengths And Weaknesses: Strengths:
- The paper is well-structured and clearly written, making complex technical concepts accessible.
- The approach has a strong conceptual motivation inspired by human learning.
- The skill visualization provides intuitive understanding of what the model is learning.
- The method appears to effectively leverage sub-optimal data by extracting useful skill fragments.
Weaknesses:
- The approach has only been evaluated on a single benchmark (MetaWorld), which limits understanding of its generalizability to other domains.
- The trajectory-level state difference representation of skills may not capture all relevant aspects of useful skills.
- The fixed skill horizon is a limitation that the authors acknowledge, as different skills naturally operate at different timescales.
- The evaluation doesn't include locomotion tasks like "Cheetah-vel" and "Ant-dir" that were studied in related work like PromptDT.
Other Comments Or Suggestions: Minor issues:
- Line 132, 2nd column: Technically, the goal encoder should be a function from $\mathcal{S}$ to $\mathcal{Z}$ ($\mathcal{S} \rightarrow \mathcal{Z}$).
- In Algorithm 2: The "get prompt" line appears outside the "for each task" loop, which seems counterintuitive since prompts would typically be task-dependent.
Questions For Authors: 1. In Algorithm 2, why is the "get prompt" line not included in the "for each task" loop? Since prompts are typically task-specific, this seems counterintuitive. Could you clarify how prompts are handled across different tasks?
2. Have you considered how your approach could be extended to learn skills with different temporal horizons? Some manipulations (like grabbing) might occur over shorter horizons than others (like transporting).
3. How might your approach perform on locomotion tasks like those studied in the PromptDT paper (e.g., "Cheetah-vel" and "Ant-dir")? These environments have different dynamics from manipulation tasks, which could test the generality of your skill abstraction method.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Perform on locomotion tasks "Cheetah-vel" and "Ant-dir"
We followed the experimental setup of PromptDT and conducted few-shot experiments on the *Cheetah-Vel* and *Ant-Dir* environments. The detailed results are as follows:
| Method | Cheetah-Vel | Ant-Dir |
| -------- | ------------------ | ------------------ |
| MTDT | \$-158.6 \pm 9.8\$ | \$168.3 \pm 20.7\$ |
| PromptDT | \$-39.5 \pm 4.2 \$ | \$398.9 \pm 39.1\$ |
| GO-Skill | \$-40.8 \pm 3.6 \$ | \$411.2 \pm 32.5\$ |
As shown in the table, GO-Skill achieves comparable performance to PromptDT. This is expected because these environments are relatively simple, and the high similarity between different tasks causes both GO-Skill and PromptDT to reach the performance ceiling of these benchmarks.
Furthermore, based on suggestions from other reviewers, we also conducted experiments on another multi-task imitation learning benchmark *LIBERO*. Due to space constraints, we kindly refer you to our response to **Reviewer FkAC** for detailed information. For convenience, we recommend using **Ctrl+F** to search for "**LIBERO benchmark**" to quickly locate the relevant response.
> The trajectory-level state difference representation of skills may not capture all relevant aspects of useful skills.
We appreciate your thoughtful observation. Indeed, no single representation can capture all aspects of useful skills. GO-Skill adopts a goal-based representation to define skills in a task-agnostic manner. As shown in our ablation study in Section 5.3, this representation outperforms action-based alternatives. We believe this is because multiple action sequences can lead to the same state transition, and abstracting skills based on goals allows the agent to generalize across these variations, leading to more transferable and reusable skills.
We agree that goal-based representation may not always be optimal, but our framework is flexible and can accommodate alternative skill representations. We thank you for highlighting this direction, and we plan to further explore more expressive representations as part of our future work.
> How GO-Skill could be extended to learn skills with different temporal horizons?
Thank you for raising this important point. While the current GO-Skill implementation adopts a fixed temporal horizon for simplicity and consistency, the framework itself is **fully compatible with variable-length skills**.
To support variable-length skills, the primary change would be in the **skill decoder**, which can be extended to learn a terminate signal to determine when to exit a skill. The key challenge lies in **how to extract and define variable-length skills** during the offline skill discovery phase. As mentioned in our Future Work Section, this is a central focus of our ongoing research. We are currently exploring approach by applying thresholds on state changes to identify skill. When the state change exceeds a certain threshold, it indicates the need to switch to a new skill.
We believe enabling dynamic skill lengths will further enhance the expressiveness and adaptability of GO-Skill, and we look forward to sharing these results in future work.
> Add a reference to Appendix C in results section.
Thank you for your suggestion and for recognizing the value of the visualization results. We will add a reference to Appendix C in the results section in the revised manuscript.
> "get prompt" appears outside the "for each task" loop in Alg.2.
Thank you for pointing out this mistake. This is indeed a writing error, and we will correct it in the revised manuscript.
Additionally, to clarify the prompt selection process: similar to PromptDT, we construct a Prompt Set for each task with the top-N highest return trajectories (N=4 in our experiments). Each time "get prompt" is called, a trajectory is randomly sampled from the corresponding task’s Prompt Set.
> Goal encoder should be $\mathcal{G}: \mathcal{S} \rightarrow \mathcal{Z}$
Thank you very much for pointing out the error. We will correct it in the revised manuscript.
**Thank you once again for your valuable feedback. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.**
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your responses and the additional results & analysis. They definitely address all the interrogations I had. Furthermore, I completely agree with the suggested changes.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your thoughtful feedback and encouraging response. We are glad our response helped address your concerns. We will incorporate the suggested changes into the final version and believe they also help to resolve related points raised by other reviewers. We sincerely appreciate all the constructive comments and the opportunity to further improve the paper.
Best regards,
All authors | Summary: This paper presents Goal-Oriented Skill Abstraction (GO-Skill), an approach aimed at enhancing knowledge transfer in offline multi-task reinforcement learning (MTRL). GO-Skill extracts reusable skills from task-mixed offline datasets through goal-oriented representations combined with vector quantization, creating a discrete skill library. To handle imbalances among skill categories, the method includes a refinement phase. These skills are then utilized via a hierarchical policy, demonstrating performance improvements on MetaWorld benchmark tasks compared to several baselines.
Claims And Evidence: All the claims and architectural choices are well-motivated and validated by ablations.
Methods And Evaluation Criteria: The methods and evaluation metrics used are appropriate for offline multi-task reinforcement learning problems. Using the widely accepted MetaWorld benchmark provides valid grounds for comparison with baselines. However, results are reported using average return rather than success rate, which is the standard metric in this domain. I'd recommend including the success rate as the primary comparison metric.
Theoretical Claims: The paper does not include explicit theoretical claims or proofs.
Experimental Designs Or Analyses: Overall, the experimental setup is thorough, clearly comparing GO-Skill to several relevant baseline methods across multiple scenarios (MT30, MT50, near-optimal and sub-optimal datasets).
However, an important baseline from existing literature, such as data-sharing methods described in "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning" by Yu et al., is notably missing.
Furthermore, the paper contains only one category of MTRL benchmark, Metaworld, which is not sufficient to justify all the components introduced in this paper.
Supplementary Material: I went over the supplementary materials briefly.
Relation To Broader Scientific Literature: The authors effectively place their work within the existing offline RL and MTRL literature, building on prior approaches such as Decision Transformers, Prompt-DT, and vector quantization, clearly situating their contribution.
Essential References Not Discussed: The authors overlooked important related offline MTRL baselines, particularly the conservative data-sharing approach proposed by Yu et al., which is essential for the proper contextualization of their results.
Other Strengths And Weaknesses: ## Strengths
- Intuitive and well-motivated skill abstraction inspired by human learning.
- Strong empirical evidence of improved performance against baselines and ablations.
## Weaknesses:
- The method is quite complex and it is not clear whether all the components are necessary in general. I'd suspect for different MTRL task suites, not all the proposed contributions would be necessary.
- Lacks success rate, which limits the comparability of the results.
- Only a single environment setup is used — Metaword.
Other Comments Or Suggestions: More environments should be introduced, at least for the baseline comparison.
Questions For Authors: 1. How is the hierarchical skill abstraction fundamentally different from previously proposed methods in offline RL and multi-task learning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
>Overlook offline MTRL baseline CDS.
Thank you for pointing out the omission of the CDS, which is indeed a significant contribution to offline MTRL. We will cite and discuss CDS more thoroughly in the revised manuscript.
CDS focuses on a conservative data-sharing strategy, selectively utilizing data from other tasks to benefit learning on a given target task. In contrast, GO-Skill approaches **knowledge sharing at a higher level of abstraction** by discovering reusable skills that can be naturally composed across tasks. Moreover, these two methods could be integrated, where the high-level skill-based policy in GO-Skill can be enhanced by applying conservative filtering to determine which skill-based transitions to share across tasks as proposed in CDS, leading to more robust policy learning. We believe this hybrid direction holds promise and represents an exciting avenue for future work.
Unfortunately, CDS’s codebase and experimental data are not publicly available, making a direct comparison difficult. Moreover, reimplementing such a method without official reference results for validation could risk an unfair or inaccurate evaluation. Nonetheless, we are currently emailing the authors to request the necessary resources and guidance for reproduction. If successful, we will experimentally explore the potential complementarity between CDS and GO-Skill, and we are excited to investigate the results in the final version.
>Only a single environment setup is used — Metaword.
Based on suggestions from other reviewers, we have conducted experiments on two additional benchmarks beyond MetaWorld. One is the locomotion task suite (*Cheetah-Vel* and *Ant-Dir*) used in PromptDT, and the other is *LIBERO*, a multi-task imitation learning benchmark.
Due to space constraints, we kindly refer you to our response to **Reviewer FkAC** for detailed information. For convenience, we recommend using **Ctrl+F** to search for "**LIBERO benchmark**" to quickly locate the relevant response.
>The method is quite complex and it is not clear whether all the components are necessary in general. I'd suspect for different MTRL task suites, not all the proposed contributions would be necessary.
As noted above, we also evaluate GO-Skill on additional benchmarks. The results demonstrate the general applicability of GO-Skill, supporting that it is not tailored to a specific environment.
In addition, our method is not conceptually complex. It consists of two main components: the skill model, which is shared across all tasks and generates actions to interact with the environment, and the skill-based policy, which completes specific tasks by composing different skills. In our ablation study, we systematically analyze both the skill model architecture and the methods for handling skill imbalance, demonstrating that each component contributes to overall performance.
> Lacks success rate, which limits the comparability of the results.
We chose to report return as the primary evaluation measure in MetaWorld because it provides a more informative metric of task performance. For example, in the 'drawer-close' task, the environment considers the task successful once the drawer is closed past a certain threshold — even if it is not fully closed. However, the agent can continue to receive additional rewards for closing the drawer further, indicating a higher degree of task completion. In such cases, success rate acts as a binary threshold, while return reflects the quality and completeness of the agent’s behavior.
In addition, we conducted additional experiments and report the success rate of GO-Skill as follows:
|MT30 Near-Optimal|MT30 Sub-Optimal|MT50 Near-Optimal|MT50 Sub-Optimal|
|-|-|-|-|
|$85.5\pm1.0$|$62.8\pm1.4$|$82.8\pm1.0$|$58.2\pm1.4$|
> How is the hierarchical skill abstraction fundamentally different from previously proposed methods?
Our **skill-based MTRL framework** offers a fundamentally different perspective from prior methods. Specifically, most prior multi-task RL methods achieve generalization through **implicit knowledge sharing** through various parameter-sharing strategies. In contrast, GO-Skill adopts an **explicit knowledge sharing** approach by directly extracting reusable skills from offline data. A key advantage of this explicit skill abstraction is its ability to better leverage suboptimal data. In offline settings, a suboptimal skill in one task may serve as an optimal skill for another, allowing GO-Skill to effectively master useful behaviors across tasks. This leads to more efficient skill acquisition and enhances generalization across diverse multi-task scenarios.
**Thank you once again for your valuable feedback. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.** | Summary: This paper proposes a method for offline multi-task reinforcement learning. The main idea is to an approach based on goal-oriented skill abstraction to better learn to extract and reuse skills to transfer to new tasks. Technically, the method utilizes vector quantization to form a discrete skill library. The idea is reasonable, and the paper shows improved performance.
# update after rebuttal
The rebuttal helps address some of the concerns and I remain broadly positive with the paper.
Claims And Evidence: The claims are supported with experimental evidence.
Methods And Evaluation Criteria: The methods are plausible and and the evaluation criteria follow the standard practice.
Theoretical Claims: The paper does not make significant theoretical claims.
Experimental Designs Or Analyses: The experimental and analyses are reasonable and appear correct.
Supplementary Material: Details of the experiments (datasets/hyperparameters) etc. are presented in the supplementary material.
Relation To Broader Scientific Literature: The proposed method shows improved performance compared with existing multi-task reinforcement learning methods on standard benchmark datasets.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The method is plausible and well justified.
The paper shows improved performance on standard benchmark datasets.
Weaknesses:
Technical components of the paper are quite standard, so better justification would be beneficial.
Other Comments Or Suggestions: The main comparison results shown in Fig. 5 are presented without actual numbers. The y-axis starting from 1.5 makes it difficult to judge the difference of different settings.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Technical components of the paper are quite standard, so better justification would be beneficial.
We appreciate your comment and acknowledge that certain technical components of our framework, such as vector quantization and transformer-based policy modeling, are standard techniques. However, these choices are made to **validate the effectiveness of our framework**, rather than serving as core contributions. Alternative implementations, such as diffusion-based policies or different discretization methods, could also be applied without altering the fundamental ideas of our approach.
The key contribution of GO-Skill lies in proposing **a novel skill-based MTRL framework** that enables **explicit knowledge sharing** through skill learning and composition, inspired by how humans generalize through modular skills. Furthermore, we introduce **Goal-Oriented Skill Abstraction**, a new method for defining and extracting reusable skills based on goal transitions rather than action sequences or rewards. This task-agnostic representation enhances skill transferability and enables effective skill discovery even from sub-optimal data.
In Section 5, we justify different components through an ablation study. The results show that while standard techniques like VQ contribute to performance improvements, Goal-Oriented Skill provides the most significant boost, highlighting its critical role. In addition, all ablated variants of GO-Skill outperform the baselines, demonstrating the effectiveness of our skill-based framework.
We hope this clarifies that our contributions are primarily conceptual and architectural, and that our use of standard components supports, rather than limits, the novelty of our approach.
> The main comparison results shown in Fig. 5 are presented without actual numbers. The y-axis starting from 1.5 makes it difficult to judge the difference of different settings.
We appreciate your valuable feedback on Fig.5. In the revised manuscript, we will include a table presenting the exact numerical results corresponding to Fig.5 for clarity. Below is the table summarizing the results:
| Method | MT30 Near-Optimal | MT30 Sub-Optimal | MT50 Near-Optimal | MT50 Sub-Optimal |
| ----------- | ----------------- | ---------------- | ----------------- | ---------------- |
| MTDT | \$2342 \pm 149\$ | \$1690 \pm 64\$ | \$2290 \pm 121\$ | \$1682 \pm 116\$ |
| MH-MTDT | \$2434 \pm 123\$ | \$1799 \pm 69\$ | \$2356 \pm 158\$ | \$1786 \pm 84\$ |
| PromptDT | \$2497 \pm 138\$ | \$1780 \pm 86\$ | \$2315 \pm 101\$ | \$1787 \pm 105\$ |
| MH-PromptDT | \$2587 \pm 105\$ | \$1843 \pm 103\$ | \$2478 \pm 57\$ | \$1841 \pm 35\$ |
| MTDIFF-P | \$2458 \pm 109\$ | \$1756 \pm 96\$ | \$2387 \pm 91\$ | \$1781 \pm 89\$ |
| HarmoDT | \$2874 \pm 84\$ | \$2136 \pm 68\$ | \$2745 \pm 91\$ | \$1921 \pm 104\$ |
| GO-Skill | \$3059 \pm 70\$ | \$2168 \pm 73\$ | \$3159 \pm 71\$ | \$2124 \pm 78\$ |
| MH-GO-Skill | \$3096 \pm 76\$ | \$2175 \pm 67\$ | \$3181 \pm 68\$ | \$2166 \pm 57\$ |
The reason for setting the y-axis starting from 1.5k is to better highlight the performance differences among different methods while maintaining a consistent y-scale across all four settings for easy cross-comparison. In addition, we would greatly appreciate any suggestions you may have for an improved visualization standard, and we will gladly incorporate the necessary adjustments in the revised manuscript accordingly.
**Thank you once again for your valuable feedback. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.** | null | null | null | null | null | null |
Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection | Accept (oral) | Summary: This paper explores the use of foundation models (FMs) for one-shot subset selection, focusing on fine-grained image datasets. The authors find that FMs outperform traditional information extractors (IEs) in fine-grained tasks but struggle with noisy, coarse-grained datasets. To address this, they propose RAM-APL, a multi-FM framework that combines intra-class (via Ranking Mean, RAM) and inter-class (via Accuracy of Pseudo-class Labels, APL) feature analysis for improved subset selection. The method is evaluated on fine-grained datasets like Oxford-IIIT Pet and CUB-200-2011, demonstrating its effectiveness.
Claims And Evidence: The authors demonstrate RAM-APL's superiority on fine-grained datasets like Oxford-IIIT Pet and CUB-200-2011, with ablation studies validating its components. However, the claim that FMs underperform on noisy, coarse-grained datasets could benefit from broader experiments. Overall, the evidence is solid but could be further reinforced.
Methods And Evaluation Criteria: The proposed RAM-APL method addresses fine-grained subset selection by leveraging multiple foundation models and combining intra-class and inter-class feature analysis, which appears reasonable for the task. The evaluation criteria, including accuracy on datasets like Oxford-IIIT Pet and CUB-200-2011, are appropriate for assessing performance. However, the evaluation could be expanded to include more diverse datasets, particularly those with noisy or coarse-grained characteristics, to better validate the method's robustness and generalizability. Overall, the methods and evaluation criteria are suitable, though broader validation could strengthen the findings.
Theoretical Claims: The paper does not present any explicit theoretical claims or proofs, which focuses on the empirical evaluation of the proposed RAM-APL method.
Experimental Designs Or Analyses: The experimental design is well-structured, with appropriate benchmark datasets (e.g., Oxford-IIIT Pet, CUB-200-2011) and a comprehensive set of baseline methods for comparison. The inclusion of ablation studies and parameter analyses is good, as they provide valuable insights into the contributions of different components of the proposed RAM-APL method. However, the ablation experiments in Tables 1, 2, and 3 are limited to sampling rates of 1%, 50%, and 70%, which raises some concerns. The 1% sampling rate seems unconventional and may not reflect practical scenarios, while the 70% rate sometimes underperforms, as seen in the results. A more balanced evaluation, including the 10% and 30% sampling rates used in Figure 3, would provide a clearer understanding of the method's performance across a wider range of realistic settings.
Supplementary Material: The supplementary material provides detailed experimental setups, results, and analyses for both single-model and multi-model studies. It includes comprehensive comparisons with 12 baseline methods, visualizations of the RAM metric, and an exploration of feature relationships between different foundation models.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader literature on subset selection and foundation models.
Essential References Not Discussed: From my perspective, this article does not omit any essential references.
Other Strengths And Weaknesses: There are no other strengths and weaknesses. Here I summarize the strengths and weaknesses I answered in the previous parts.
Strengths:
1. Clarity of Methodology: The paper presents the proposed method, RAM-APL, in a straightforward and concise manner, making it easy to understand and follow.
2. Comprehensive Experimental Setup: The experiments are well-designed and cover a range of datasets and scenarios, providing a thorough evaluation of the method's effectiveness.
Weaknesses:
1. Lack of Analysis on FM Performance in Coarse-Grained Tasks: While the paper highlights that foundation models (FMs) underperform in coarse-grained, noisy datasets, it does not provide a detailed analysis or explanation for this behavior. Additional experiments or theoretical insights could help clarify why FMs struggle in these scenarios.
2. Limited Sampling Rate Evaluation in Ablation Studies: Tables 1, 2, and 3 only compare results at 1%, 50%, and 70% sampling rates. The 1% rate is unconventional and may not reflect practical use cases, while the 70% rate sometimes underperforms. It would be more informative to include results for 10% and 30% sampling rates, as shown in Figure 3, to provide a more balanced and realistic evaluation of the method's performance.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see the weaknesses of the ``Other Strengths And Weaknesses'' part, where there is a summary.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedbacks! We address your questions in the following responses.
___
**W1: A detailed analysis of why FM as IE underperform on coarse-grained datasets with noisy labels.**
A1: We sincerely appreciate the reviewer's insightful comments. Due to the character limit in the rebuttal, we kindly refer the reviewer to our response to Reviewer aGx6's "W1&Q1: A deeper discussion of why FM as IE performs poorly on coarse-grained datasets with noisy labels."
___
**W2: Limited sampling rate evaluation in ablation studies.**
A2: We sincerely appreciate the reviewer’s insightful feedback and acknowledge the importance of conducting ablation studies across diverse sampling rates.
To address this concerns, we have conducted additional Ablation Studies at 10% and 30% sampling rates. The revised Tables 1, 2, and 3 are provided below.
Table 1. Ablation study based on Pet.
| Method | Information Extractor (IE) | 1% | 10% | 30% | 50% | 70% |
|-----------|---------------------------|----------|----------|-----------|-----------|-----------|
| MIN |Model-TD| 5.6±0.7 | 14.6±0.5 | 26.4±1.6 | 40.3±2.6 | 55.2±2.7 |
| MIN | CLIP | 5.6±0.2 | 15.4±1.0 | 29.3±2.4 | 45.9±1.8 | 56.3±0.7 |
| MIN | DINOv2 | 6.2±0.1 | **15.5±0.7** | 32.0±1.4 | 46.8±2.0 | **60.5±2.9** |
| RAM | CLIP+DINOv2 | 5.9±0.3 | 15.1±0.5 | **33.1±2.3** | 47.1±1.4 | 56.5±2.7 |
| RAM-APL | CLIP+DINOv2 | **6.5±0.4** | 15.2±1.2 | 32.4±2.9 | **47.5±1.9** | 58.7±2.2 |
Across various sampling rates, RAM consistently outperforms MIN (CLIP as IE), while RAM-APL further improves performance, reaching levels comparable to DINOv2. Though RAM-APL (CLIP+DINOv2) demonstrates overall superior performance, its effectiveness at 70% sampling can be improved. In future work, we aim to enhance our method’s effectiveness at high sampling rates to further improve its practical utility.
Table 2. Comparison of the performance of our method using different numbers of foundation models as information extractors.
| DINOv2 | CLIP | SigLIP | EVA-CLIP | 1% | 10% | 30% | 50% | 70% |
|----------|------------|------------|-----------|----------|----------|-----------|-----------|-----------|
| ✓ | | | | 5.9±0.3 | 15.4±1.1 | 31.6±2.3 | 47.7±1.1 | 57.9±4.1 |
| | ✓ | | | 5.7±0.4 | 15.0±0.2 | 27.9±1.2 | 43.6±1.9 | 57.0±0.4 |
| | | ✓ | | 6.6±0.3 | 14.1±1.0 | 28.8±1.1 | 43.9±1.7 | 55.1±2.6 |
| | | | ✓ | 5.4±0.3 | 15.0±0.6 | 30.2±2.5 | 44.4±2.3 | 56.6±1.8 |
| ✓ | ✓ | | | 6.5±0.4 | 15.2±1.2 | 32.4±2.9 | 47.5±1.9 | **58.7±2.2** |
| ✓ | | ✓ | | 5.9±0.3 | 16.2±0.1 | 31.4±3.2 | 45.0±1.3 | 58.6±1.2 |
| ✓ | | | ✓ | 6.0±0.6 | 16.0±0.9 | **35.8±2.9** | 46.5±1.8 | 54.9±3.5 |
| | ✓ | ✓ | | 6.4±0.2 | 15.1±0.4 | 29.8±1.6 | 45.9±1.3 | 56.2±2.7 |
| | ✓ | | ✓ | 5.9±0.3 | 15.5±0.7 | 31.4±1.7 | 44.2±2.2 | 55.9±1.8 |
| | | ✓ | ✓ | **6.7±0.4** | 16.2±0.6 | 34.7±0.3 | 45.7±0.8 | 56.6±2.4 |
| ✓ | ✓ | ✓ | | 6.2±0.8 | 15.6±0.5 | 33.2±1.4 | **48.3±1.1** | 57.6±0.1 |
| ✓ | ✓ | | ✓ | 6.0±0.4 | **17.5±1.0** | 35.2±1.8 | 47.9±1.5 | 55.6±2.1 |
| ✓ | | ✓ | ✓ | 6.1±0.3 | 16.8±0.6 | 34.4±2.1 | 47.0±2.0 | 55.1±1.6 |
| | ✓ | ✓ | ✓ | 6.1±0.2 | 16.1±0.3 | 33.9±1.4 | 46.8±1.5 | 55.1±0.5 |
| ✓ | ✓ | ✓ | ✓ | 6.5±0.2 | 16.8±1.1 | 34.0±2.7 | 46.3±0.5 | 56.9±1.1 |
We observe that leveraging multiple foundation models outperforms using a single model. The optimal balance of computational efficiency, memory usage, and performance is achieved with DINOv2 + CLIP. For the highest overall accuracy, DINOv2+CLIP+EVA-CLIP is recommended. These findings validate the benefits of multi-model selection, and the results will be included in the supplementary material.
Tabel 3. Comparison of feature fusion strategies.
| Fusion Method | 1% | 10% | 30% | 50% | 70% |
|----------------|----------|----------|-----------|-----------|-----------|
| Concatenate | 5.9±0.4 | 16.3±0.4 | 31.7±1.3 | 47.7±3.0 | 57.8±1.2 |
| Ours| 6.5±0.4 | 15.2±1.2| 32.4±2.9 | 47.5±1.9 |58.7±2.2|
We observe that Our strategy outperforms Concatenate, especially at higher sampling rates, which are crucial for practical applications. To maximize the performance of multi-model method at high sampling rates, we adopt the Ours fusion strategy. The findings and new results will be included in the supplementary material. | Summary: The paper makes comparisons between traditional information extractors (IEs) and a single foundation model (FM) on a series of datasets to explore scenarios in which a single FM would be advantageous as an IE. It reveals that a single FM performs poorly on coarse-grained image datasets with noisy labels and performs well on fine-grained image datasets with clean and noisy labels. The paper introduces a one-shot subset selection approach (called RAM-APL) tailed for fine-grained datasets, which ingeniously maps the misaligned features extracted by an ensemble of FMs into a unified distance ranking space, considering both intra-class distribution and inter-class distribution of samples. Experiments demonstrate the SOTA selection performance of RAM-APL on three fine-grained image datasets.
Claims And Evidence: The claims regarding the foundation model insights and the effectiveness of RAM-APL in the paper are supported by strong empirical evidence. However, additional analysis on fine-grained datasets with noisy labels could further strengthen the claims.
Methods And Evaluation Criteria: The proposed approach is reasonable, but the evaluation is limited to fine-grained datasets. A broader range of datasets would help confirm RAM-APL’s robustness.
Theoretical Claims: The paper does not present formal theoretical proof, as it is largely focused on empirical evaluation. The conceptual framework of the RAM-APL method is well-explained, and the reliance on empirical analysis is justified. No significant issues were found in the presentation of the algorithmic ideas.
Experimental Designs Or Analyses: To explore scenarios in which a single FM would be advantageous as an IE, the paper employs systematic and rigorous experiments to analyze the impact of various factors (such as coarse-grained and fine-grained, labels that are clean or noisy, and balanced or unbalanced class distributions) and provides in-depth discussions of the results. Besides, the experimental design is robust in validating the effectiveness of RAM-APL, and the chosen datasets (Oxford-IIIT Pet, Food-101, and CUB-200-2011) are well suited for evaluating subset selection methods in the context of fine-grained image classification. The paper compares RAM-APL against multiple baseline methods, showing clear improvements.
However, it would be beneficial to include additional experiments to explicitly examine the performance of subset selection methods on fine-grained datasets with noisy labels. The paper highlights the strengths of FMs as IEs on both fine-grained datasets with clean and noisy labels, so more explicit comparisons between RAM-APL and other methods on fine-grained datasets with noisy labels would further strengthen the effectiveness of RAM-APL.
Supplementary Material: The supplementary material is detailed, providing valuable insights into the methodology and additional experimental results. However, the paper would benefit from including code or links to a code repository to facilitate reproducibility.
Relation To Broader Scientific Literature: The paper effectively situates itself within the subset selection literature, particularly focusing on feature-based subset selection. The use of FMs for subset selection is both relevant and timely, especially as research increasingly relies on large pre-trained models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- The paper is well-written and clear and presents a novel contribution to subset selection in fine-grained image classification.
- The paper systematically compares FMs and traditional IEs through rigorous experiments, offering practical insights into their relative performance across diverse image datasets.
- The paper is highly original in combining multiple FMs for subset selection, a novel approach that significantly improves performance in fine-grained datasets.
- The paper provides comprehensive empirical results, strengthening its practical impact.
Weaknesses
1. The paper provides convincing results on three classical image fine-grained datasets. However, the paper does not compare the performance of RAM-APL with other methods on fine-grained image datasets with noisy labels. Experiments on fine-grained image datasets with noisy labels are important to further demonstrate the effectiveness of RAM-APL.
2. The paper lacks a deeper discussion of why FM as IE performs well on fine-grained datasets with noisy labels and poorly on coarse-grained datasets with noisy labels. For example, which specific types of images or classes does FM as IE perform better on, and which classes does it not perform well on? Understanding these nuances would help in understanding the advantages of FM as an IE and would help in adapting RAM-APL to other domains.
Other Comments Or Suggestions: - Consider including a code release or a link to a public repository for reproducibility purposes.
Questions For Authors: Please see [Weaknesses] 1-2 in the Other Strengths And Weaknesses. If authors address them, reviewer would like to change the rating.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive feedbacks! We address your questions in the following responses.
___
**W1: Evaluation on fine-grained image datasets with noisy labels.**
A1: We sincerely appreciate the reviewer’s insightful suggestion. We acknowledge the importance of evaluating the effectiveness of our approach with other selection methods on fine-grained image datasets with noisy labels.
To address this concerns, we conducted additional experiments (as detailed in the tables below) on the Oxford-IIIT Pets dataset with 20% symmetric label noise and with 40% symmetric label noise. Subsets are sampled following the same experimental setup described in the manuscript.
Dataset: Oxford-IIIT Pets dataset with 20% symmetric label noise
| Method | IE | 1% | 10% | 30% | 50% | 70% | 100% |
|------------|---------------|----------|----------|-----------|-----------|-----------|----------|
| Random | - | 4.9±0.7 | 10.0±1.0 | 16.7±0.6 | 25.3±0.4 | 33.4±2.6 | 42.7±1.8 |
| Harding | Model-TD | 5.0±0.1 | 8.1±1.8 | 15.3±1.8 | 20.7±0.5 | 33.1±0.4 | 42.7±1.8 |
| KCG | Model-TD | 5.3±1.4 | 7.8±0.8 | 15.4±1.1 | 22.3±1.7 | 32.2±1.5 | 42.7±1.8 |
| CD | Model-TD | 5.2±0.4 | 6.6±0.8 | 13.7±0.2 | 22.4±1.6 | 32.1±1.4 | 42.7±1.8 |
| Margin | Model-TD | 4.7±0.1 | 9.0±0.7 | 16.3±0.5 | 23.9±0.6 | 33.5±1.2 | 42.7±1.8 |
| Forgetting | Model-TD | 5.9±0.6 | 11.5±0.9 | 18.7±1.3 | 29.5±0.9 | 36.9±0.4 | 42.7±1.8 |
| GraNd | Model-TD | 4.3±0.2 | 7.8±0.8 | 15.6±0.7 | 22.9±1.5 | 32.4±2.1 | 42.7±1.8 |
| Cal | Model-TD | 6.2±0.8 | 12.2±0.6 | 22.2±2.6 | 29.4±1.3 | 38.7±0.8 | 42.7±1.8 |
| Glister | Model-TD | 4.8±0.2 | 10.5±1.1 | 17.2±1.0 | 26.1±2.6 | 34.7±2.2 | 42.7±1.8 |
| GC | Model-TD | 5.2±0.6 | 12.8±1.5 | 20.3±1.3 | 27.0±0.6 | 32.9±0.9 | 42.7±1.8 |
| MDS | Model-TD | 3.8±0.4 | 9.8±0.3 | 17.1±0.6 | 24.3±1.7 | 30.7±3.1 | 42.7±1.8 |
| MIN | Model-TD | 5.6±0.2 | 11.6±0.4 | 19.8±1.4 | 28.0±2.2 | 35.3±2.5 | 42.7±1.8 |
| **Ours** | **CLIP+DINOv2** | **6.7±0.3** | **16.7±0.3** | **32.5±1.8** | **46.0±1.6** | **56.7±0.7** | 42.7±1.8 |
Dataset: Oxford-IIIT Pets dataset with 40% symmetric label noise
| Method | IE | 1% | 10% | 30% | 50% | 70% | 100% |
|------------|---------------|----------|----------|-----------|-----------|-----------|----------|
| Random | - | 5.1±0.5 | 8.0±0.6 | 12.6±0.6 | 15.0±0.3 | 19.1±0.5 | 23.0±0.6 |
| Harding | Model-TD | 4.4±0.2 | 6.3±0.5 | 11.1±0.9 | 13.1±0.6 | 18.2±1.3 | 23.0±0.6 |
| KCG | Model-TD | 4.9±0.8 | 6.3±0.5 | 9.9±1.2 | 14.3±1.1 | 18.1±0.9 | 23.0±0.6 |
| CD | Model-TD | 4.8±0.8 | 6.3±0.5 | 10.3±0.8 | 14.0±0.3 | 17.7±1.2 | 23.0±0.6 |
| Margin | Model-TD | 4.1±0.3 | 7.0±0.8 | 11.1±0.9 | 14.3±0.9 | 19.0±0.8 | 23.0±0.6 |
| Forgetting | Model-TD | 5.4±0.8 | 10.2±1.6 | 12.9±0.8 | 17.2±0.4 | 21.4±0.9 | 23.0±0.6 |
| GraNd | Model-TD | 4.4±0.9 | 6.7±1.0 | 10.2±0.5 | 14.5±1.6 | 18.8±1.2 | 23.0±0.6 |
| Cal | Model-TD | 5.4±0.3 | 10.6±1.1 | 14.9±1.1 | 18.9±1.0 | 22.2±1.2 | 23.0±0.6 |
| Glister | Model-TD | 5.2±0.3 | 7.6±1.1 | 12.4±0.8 | 18.3±0.8 | 21.8±1.6 | 23.0±0.6 |
| GC | Model-TD | 4.9±0.7 | 9.7±1.1 | 12.8±0.7 | 15.4±0.8 | 20.5±1.7 | 23.0±0.6 |
| MDS | Model-TD | 3.9±0.2 | 7.2±0.3 | 12.0±0.2 | 15.0±1.5 | 18.5±0.8 | 23.0±0.6 |
| MIN | Model-TD | 5.3±0.4 | 9.4±0.7 | 14.3±0.7 | 18.3±0.6 | 20.9±0.6 | 23.0±0.6 |
| **Ours** | **CLIP+DINOv2** | **6.1±0.3** | **15.0±1.2** | **30.4±1.7** | **44.8±0.1** | **42.6±0.8** | 23.0±0.6 |
("IE" means information extractor, "Model-TD" denotes the model trained on the full set for 10 epochs.)
We observe that RAM-APL consistently outperforms all baselines across different sampling rates on each noisy fine-grained dataset, demonstrating its effectiveness.
Your suggestion has been highly valuable. Through experimental analysis, we have identified the significant advantages of designing selection algorithms based on foundation models for noisy datasets, which motivates us to explore more effective foundation model-based denoising approaches in future work. The above experimental results and discussions will be included in the supplementary material.
___
**W2: A deeper discussion of why FM as IE performs poorly on coarse-grained datasets with noisy labels.**
A2: We sincerely appreciate the reviewer's insightful comments. Due to the character limit in the rebuttal, we kindly refer the reviewer to our response to Reviewer aGx6's "W1&Q1: A deeper discussion of why FM as IE performs poorly on coarse-grained datasets with noisy labels."
___
**S1: Code release.**
A3: We thank the reviewer for this suggestion. We will release the full implementation code in a public repository upon paper acceptance to ensure reproducibility.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their positive response and detailed rebuttal. The authors have addressed my concerns. | Summary: To investigate whether foundation models (FMs) can truly replace task-specific information extractors (IEs) in subset selection, this paper examines the effectiveness of FMs as IEs for one-shot subset selection. Through extensive experiments across a set of image datasets, this paper identifies the strengths and limitations of FMs as IEs: they excel on fine-grained image datasets but underperform on coarse-grained datasets with noisy labels. To capitalize on the complementary strengths of multiple FMs and overcome limitations in existing feature-based selection methods, this paper introduces RAM-APL, which maps misaligned features from multiple FMs into a unified distance ranking space, considering intra-class and inter-class distributions. The selection methods are evaluated on three fine-grained classification datasets.
Claims And Evidence: Yes. The claims in this paper are generally supported by clear experimental results, particularly in demonstrating that RAM-APL improves subset selection on fine-grained image datasets.
Methods And Evaluation Criteria: Yes. The methodology is well-motivated for subset selection, and the benchmark datasets are appropriate.
Theoretical Claims: Yes. Since this paper is largely data-driven, there are no formal proofs for the theoretical claims, but the empirical justification is sound.
Experimental Designs Or Analyses: Yes. The experimental design is rigorous and well-structured:
1. The evaluation considers three fine-grained image datasets (CUB-200-2011, Oxford-IIIT Pets, Food-101), making the conclusions well-supported in the targeted domain.
2. A range of baselines is compared, including random selection, single-FM approaches (DINOv2, CLIP, et al.), and other subset selection methods.
3. The ablation study analyzes hyperparameters (\alpha, \beta) and the effect of different FM combinations, showing that DINOv2 + CLIP provides the best results.
However, a few concerns:
1. This paper does not assess whether the selected subsets generalize across different model architectures. A key question is whether subsets selected by RAM-APL would maintain their effectiveness when applied to architectures beyond ResNet.
2. This paper claims FMs are ineffective on coarse-grained datasets with noisy labels but does not analyze why in depth. A more detailed study (e.g., feature visualizations or error analysis) would help substantiate this finding.
Supplementary Material: Yes. The supplementary material is well-structured, providing additional insights into the methodology and extended experimental results.
Relation To Broader Scientific Literature: This paper relates to subset selection approaches but differs by leveraging multiple FMs to form a unified ranking space.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. This paper structure is reasonable, with each component of the proposed method clearly explained, making it easy to understand and implement.
2. This paper introduces a well-motivated and innovative approach to subset selection. RAM-APL effectively harnesses the complementary advantages of multiple foundation models, addressing the variability in FM performance across datasets and selection methods. The empirical evaluations are thorough and provide strong evidence supporting the effectiveness of the proposed method.
Weaknesses:
1.This paper does not assess whether the selected subsets generalize across different model architectures. Would the subsets selected by RAM-APL retain their effectiveness when applied to architectures beyond ResNet?
2. While this paper argues that FMs perform poorly on coarse-grained datasets with noisy labels, it lacks an in-depth analysis of the underlying reasons. Incorporating feature visualizations or error analysis could provide stronger empirical justification for this claim.
3. The analysis is somewhat limited, primarily focusing on accuracy. Additional insights, such as diversity or difficulty analysis of the selected subsets, would enhance the evaluation.
Other Comments Or Suggestions: See the Weakness part.
Questions For Authors: 1. Would the subsets selected by RAM-APL retain their effectiveness when applied to architectures beyond ResNet?
2.Why do FMs struggle with coarse-grained datasets containing noisy labels but perform well on fine-grained image datasets?
3. How do RAM and APL strategies influence the distribution of representations in selected subsets?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive feedbacks! We address your questions in the following responses.
___
**W1&Q1: Cross-architecture generalization of RAM-APL.**
A1: We sincerely appreciate the reviewer’s insightful question regarding the cross-architecture generalization of RAM-APL. We acknowledge the importance of evaluating whether our selected subsets remain effective across different model architectures beyond ResNet.
To address this concerns, we conducted additional experiments on the Oxford-IIIT Pets dataset (Pets) using MobileNet-V3 as the target model. The results, presented in the table below, compare RAM-APL against five strong baselines that maintain identical architectures for their information extractors (IE) and target models.
MobileNet-V3 (MBV3)
| Method | IE→Target Model | 10% | 30% | 50% |
|----------------------|--------------------------|-------------|-------------|-------------|
| Random | MBV3 → MBV3| 10.9±1.1 | 42.1±3.6 | 61.6±1.9 |
| Forgetting | MBV3 → MBV3| 13.3±0.8 | 42.0±2.0 | 61.0±2.3 |
| GC | MBV3 → MBV3| 12.4±1.7 | 40.4±0.2 | 61.3±1.5 |
| MDS | MBV3 → MBV3| 11.9±0.7 | 39.8±2.0 | 62.1±3.3 |
| MIN | MBV3 → MBV3| 11.9±1.8 | 38.4±0.8 | 61.0±1.4 |
| **RAM-APL (Ours)** | (CLIP+DINOv2)→ MBV3 | **13.6±0.3** | **45.7±0.9** | **62.3±1.4** |
We observe that RAM-APL consistently outperforms all baselines across different sampling rates, indicating its strong cross-architecture generalization ability.
Your suggestion has been highly valuable, inspiring us to further explore multi-model subset selection in broader cross-architecture settings in future work. The above experimental results and discussion will be included in the supplementary material.
___
**W2&Q2: A deeper discussion of why FMs struggle with coarse-grained datasets containing noisy labels but perform well on fine-grained image datasets.**
A2: We sincerely appreciate the reviewer's insightful comments. Due to the character limit in the rebuttal, we kindly refer the reviewer to our response to Reviewer aGx6's "W1&Q1: A deeper discussion of why FM as IE performs poorly on coarse-grained datasets with noisy labels."
___
**W3&Q3: How do RAM and APL influence the distribution of representations in selected subsets?**
A3: We sincerely appreciate the reviewer’s insightful question regarding the influence of RAM and APL strategies on the distribution of representations in the selected subsets. We acknowledge the importance of analyzing how these strategies shape the feature space and their impact on sample diversity and representativeness.
To address this concern, we conducted additional experiments and analyzed the feature distributions of different selection strategies. Specifically, we examined the average cosine distance between data pairs within the selected subsets, which provides insights into intra-class and overall diversity. The results are summarized in the table below:
Table. Average cosine distance of data pairs in the subset
| Method | IE | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Whole subset |
|-----------|-------------|-----------|-----------|-----------|-----------|-----------|--------------|
| Min | CLIP | 0.1617 | 0.1795 | 0.1176 | 0.1509 | 0.1327 | 0.2680 |
| RAM | CLIP+DINOv2 | 0.1695 | 0.1919 | 0.1259 | 0.1611 | 0.1392 | 0.2767 |
| RAM-APL | CLIP+DINOv2 | 0.1659 | 0.1986 | 0.1317 | 0.1597 | 0.1399 | **0.2787** |
From these results, we observe that RAM and RAM-APL lead to a more diverse feature distribution in the selected subset compared to Min-based selection. The whole-subset average cosine distance is highest under RAM-APL (0.2787), indicating that it selects more diverse samples overall, improving coverage of the feature space. Moreover, the per-class distances suggest that RAM-APL encourages a balance between inter-class and intra-class diversity, with slightly higher values in harder-to-distinguish classes.
Furthermore, the t-SNE visualizations in Figures 9-11 (https://anonymous.4open.science/r/RAM-APL-DED5/README.md) further confirm these findings. Compared to Min-based selection, which tends to concentrate samples within certain regions of the feature space, RAM and RAM-APL distribute samples more broadly across the space, ensuring better representational coverage. This suggests that our approach enhances model performance by capturing a more comprehensive representation of the dataset.
Your suggestion has been highly valuable in strengthening our analysis. The above results and discussions will be included in the supplementary material to provide a clearer understanding of the impact of our proposed selection strategies.
---
Rebuttal Comment 1.1:
Comment: After reading the response, the authors have addressed my concerns. Thus, I support accepting this paper. | Summary: This paper investigates one-shot subset selection using Foundation Models (FMs) to reduce deep learning training costs by improving efficiency. Traditional Information Extractors (IEs) rely on models pre-trained on the target dataset, introducing dataset dependency. The paper addresses two key questions: (1) Can FM-based subset selection outperform traditional IE-based methods across diverse datasets? (2) Do all FMs perform equally well for subset selection? Experimental results show that FMs excel on fine-grained datasets but underperform on coarse-grained datasets with noisy labels. Based on these findings, the authors propose RAM-APL (RAnking Mean-Accuracy of Pseudo-class Labels), a novel method that leverages multiple FMs to enhance subset selection performance on fine-grained datasets. Extensive experiments validate the superiority of RAM-APL on three fine-grained datasets.
Claims And Evidence: The main claims of the paper are supported by experimental data. For instance, the superiority of FMs as IEs on fine-grained datasets and the effectiveness of the RAM-APL method are validated through experiments. However, some conclusions (e.g., the limitations of FMs on coarse-grained datasets) lack deeper explanations.
Methods And Evaluation Criteria: The proposed RAM-APL method significantly improves subset selection performance on fine-grained datasets by leveraging the feature extraction capabilities of multiple FMs. The method is well-designed, and the evaluation criteria (e.g., prediction accuracy) are appropriate for the subset selection task. The experimental results demonstrate that RAM-APL outperforms SOTA methods on multiple datasets, validating its effectiveness.
Theoretical Claims: The paper does not provide rigorous theoretical proofs but validates the effectiveness of FMs for subset selection through experiments. The experimental design is sound, and the results support the advantages of FMs on fine-grained datasets. However, the paper lacks a deeper theoretical analysis of the RAM-APL method, such as why it effectively leverages the complementary strengths of multiple FMs. The authors are encouraged to supplement the paper with relevant theoretical analysis to enhance the credibility of the method.
Experimental Designs Or Analyses: The experimental design of Single-model Study is comprehensive, covering multiple datasets (e.g., CIFAR-10, CIFAR-10N, Oxford-IIIT Pet) and different FMs (e.g., DINOv2, CLIP). The results demonstrate that FMs perform well on fine-grained datasets but struggle on coarse-grained datasets with noisy labels. The RAM-APL method significantly improves performance on fine-grained datasets by combining the feature extraction capabilities of multiple FMs.
A limitation of the experimental design is the lack of in-depth analysis of why FMs underperform on coarse-grained datasets. For example, is this due to the feature distribution or noise levels in the datasets?
Supplementary Material: The supplementary material provides detailed methodological explanations and additional experimental results, enhancing the credibility of the paper.
Relation To Broader Scientific Literature: The paper clearly situates itself within the existing literature. Traditional subset selection methods rely on IEs pre-trained on the target dataset, which introduces dataset dependency. By introducing FMs, the paper proposes a dataset-agnostic subset selection method, expanding the scope of subset selection research. However, the paper does not sufficiently discuss the relationship with existing FM-related work, such as FM applications to few-shot learning or noisy datasets.
Essential References Not Discussed: The paper cites a wide range of related literature but omits some key works. For example, FM applications on noisy datasets (e.g., “CLIPCleaner: Cleaning Noisy Labels with CLIP” by Chen Feng et al., 2024) are highly relevant. The authors are encouraged to include relevant references and discuss the implications.
Other Strengths And Weaknesses: Strengths:
1. The findings of the effectiveness of FMs for subset selection are interesting.
2. The proposed RAM-APL method significantly improves subset selection performance on fine-grained datasets.
3. The experimental design is comprehensive, covering multiple datasets and FM combinations.
4. The supplementary material provides detailed experimental explanations and results, enhancing the paper's credibility.
Weaknesses:
1. The paper lacks a theoretical explanation for the underperformance of FMs on coarse-grained datasets with noisy labels.
2. The paper lacks a theoretical analysis of the RAM-APL method, explaining why it effectively leverages the complementary strengths of multiple FMs.
3. The discussion of related FM literature is insufficient.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Why do FMs underperform on coarse-grained datasets with noisy labels? Is this related to feature distribution or noise levels in the datasets?
2. Can the authors provide a theoretical analysis of the RAM-APL method, explaining why it effectively leverages the complementary strengths of multiple FMs?
3. Have the authors considered applying the RAM-APL method to other tasks, such as few-shot learning or various noisy datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback! We greatly appreciate your insightful questions, which have deepened our analysis of the findings and inspired further exploration for future work. Below, we address your questions in sequence.
___
**W1&Q1: A deeper discussion of why FM as IE performs poorly on coarse-grained datasets with noisy labels.**
A1: We sincerely appreciate the reviewer's insightful comments regarding the theoretical understanding of foundation models (FMs) on noisy datasets. We have conducted extensive additional analyses to explain this phenomenon, with key findings visualized in Figures 1-8 (https://anonymous.4open.science/r/RAM-APL-DED5/README.md).
Our empirical investigation reveals:
**In coarse-grained datasets (CIFAR-10N-worse, Figures 1-4)**
- FM-extracted features exhibit:
- Weak inter-class separation for visually similar categories (e.g., dog/cat in CIFAR-10N-worse);
- Substantial overlap between clean and noisy samples' feature distributions.
This explains FMs' limited effectiveness as information extractors for coarse-grained noisy data.
**By contrast, in fine-grained datasets (Oxford-IIIT Pet with 40% symmetric label noise, Figures 5-8):**
- FM-extracted features exhibit:
- Compact clustering of correctly-labeled samples;
- Strong inter-class separation for visually similar categories;
- Smaller overlap between clean and noisy samples in feature space.
- Features from models trained on full noisy set show:
- Loose clustering of correctly-labeled samples;
- Significant overlap between clean and noisy samples in feature space.
This leads to the selection of more noise samples (visible in dark red points) and substantially weaker performance of traditional information extractors (IEs) compared to FMs.
**Key Inference:**
The comparative analysis reveals that FMs serve as superior information extractors when their features demonstrate:
- Tighter clustering of correctly-labeled samples;
- Reduced overlap between clean and noisy distributions.
We will incorporate these analyses in the supplementary material to strengthen our empirical analysis and contributions.
___
**W2&Q2: Theoretical Analysis of Multi-FM Complementarity in RAM-APL.**
A2: We thank the reviewer for this important question. RAM-APL's effectiveness in leveraging multiple FMs stems from two fundamental principles:
1.Feature Space Orthogonality:
Our analysis reveals that different FMs learn nearly orthogonal feature representations (Figure 6 in the Suppl.), with:
$$
\text{cossim}⟨M_i(x), M_j(x)⟩ ≈ 0 \quad \forall i \neq j
$$
This orthogonality demonstrates that each FM (e.g., $ M_i, M_j $) captures distinct, complementary aspects of the data $x$.
2.Bias Reduction via Ensemble Consensus:
The ensemble mechanism could mitigate individual FM biases and preserve robust cross-model agreements. Table 2 in the manuscript demonstrates RAM-APL's performance gains when combining CLIP and DINOv2 versus individual FMs, confirming the benefits of multi-FM integration.
___
**W3: Expanded Discussion of FM Literature.**
A3: We sincerely appreciate the reviewer for this constructive suggestion. We will significantly strengthen our discussion of FM literature by incorporating CLIPCleaner (Chen et al., ACM MM 2024). The key insights are:
Both our work and CLIPCleaner leverage CLIP's zero-shot capabilities. Differently, CLIPCleaner, a single-FM method, focuses on noisy label cleaning via prediction probabilities. Our RAM-APL, a Multi-FM approach, specializes in subset selection for clean and noisy fine- rained data using visual deep features.
We'll discuss CLIPCleaner in Section 2 of the revised manuscript.
___
**Q3: Extensions to Few-shot Learning and Noisy Data.**
A4: We sincerely appreciate the reviewer's insightful question regarding the broader applicability of RAM-APL. By leveraging the strong feature discriminability of foundation models and mitigating biases through ensemble consensus, RAM-APL shows a strong ability to distinguish noisy datasets. While our current work focuses on standard subset selection for fine-grained datasets, its theoretical framework and algorithmic design naturally extend to:
- Noisy Few-shot Learning:
Enhances robustness in few-shot scenarios by effectively identifying label-feature mismatches in small support sets.
- Noisy Label Scenarios:
Particularly effective for fine-grained noisy data (as demonstrated in our experiments). Moving forward, we plan to develop more effective denoising strategies tailored to such datasets within the RAM-APL framework.
We will include this extended analysis in the supplemental materials to better position RAM-APL's broader applicability. | null | null | null | null | null | null |
You Get What You Give: Reciprocally Fair Federated Learning | Accept (poster) | Summary: The paper tackles the free-rider problem in a multi-agent federated learning environment by introducing MShap, a Shapley value-based, budget-balanced payment mechanism to enhance fairness and data gains. This mechanism achieves a Nash Equilibrium without requiring knowledge of agents' private cost functions. Experiments were conducted using the MNIST, FashionMNIST, and CIFAR-10 datasets.
## update after rebuttal:
I read the rebuttal of the authors as well as the questions of the reviewers. I maintain my score.
Claims And Evidence: The main claims of the paper are:
(1) MShap gives reciprocal fairness by rewarding agents in proportion to their contribution.
The payment mechanism is designed to align each agent’s utility with their Shapley value, which guarantees fairness by construction.
The paper provides theoretical proofs and experimental results that confirm this claim.
(2) MShap achieves Nash equilibria
The paper proves it using Kakutani’s fixed-point theorem.
Methods And Evaluation Criteria: The paper uses Shapley value to achieve fairness, game theory for Nash equilibrium, and best-response dynamics for computation.
The datasets MNIST, FashionMNIST, CIFAR-10 have been used to evaluate the claims of the paper.
Theoretical Claims: For example, theorem 4.3 on the existence of the Nash equilibrium. The proof makes use of Kakutani’s fixed-point theorem.
Experimental Designs Or Analyses: The paper uses 3 datasets for experiments. It designs distributed protocol (FedBR-Shap) that "relies exclusively on gradient information, eliminating the need for sharing actual data points". Comparisons to baselines such as FL without payments and a welfare-maximizing mechanism have been done.
Supplementary Material: Yes, in particular section C for the proof of the existence of Nash equilibrium. It looks straightforward.
Relation To Broader Scientific Literature: The paper introduces reciprocal fairness by building on prior work on mechanism design such as Karimireddy et al., 2022; Murhekar et al., 2023. The use of Shapley values aligns with prior work on agent's contribution and data valuation such as Wang et al.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths: The paper is written well and combines existing ideas in game theory and shapley values to address fairness in federated learning.
Some weaknesses are lack of real-world validations, e.g., on healthcare etc.
Shapley value estimation can be computationally expensive.
There is no discussion on scalability when thousands of agents are involved
Other Comments Or Suggestions: N/A
Questions For Authors: (1) The paper makes the claim that no other mechanism can simultaneously Pareto-dominate MShap in both data share and total welfare.
However, only two baselines have been used for comparison. More complex baselines are missing.
(2) Datasets on real-world use cases, e.e., healthcare, could show how the results could generalize to new situations
(3) A formal complexity analysis of the Shapley value computation in FL settings is missing
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We respond to your questions below.
**Q:** The paper makes the claim that no other mechanism can simultaneously Pareto-dominate MShap in both data share and total welfare. However, only two baselines have been used for comparison. More complex baselines are missing.
**A:**
We would like to point out that our claim of "no other mechanism can simultaneously Pareto-dominate $\mathcal{M}^{Shap}$ in both data share and total welfare" is formally proven in Theorem 4.8. Since the claim is theoretically proven, it holds for *all* comparative baselines. Nevertheless, to empirically demonstrate the guarantees of our mechanism, we have included comparisons to two existing mechanisms for federated learning. We also conducted many new experiments, for which we request the reviewer to kindly refer to the response to reviewer Dvs1.
**Q:** Datasets on real-world use cases, e.e., healthcare, could show how the results could generalize to new situations
**A:** Thank you for the suggestion. As per your suggestion, we added one benchmark on lumpy skin disease prediction [4], where the input is the information of patients (regions, detection metrics, etc), and the output is whether to be identified as lumpy skin disease. We observe that there are unbalanced labels in the training datasets (much more negative data than positive data). Intuitively, positive data points are more helpful for forecasting disease, and our experiment also verifies this point.
We have two agents, each with 2000 data points. 70% of agent 1's data is positive, and 30% of agent 2's is positive. We train the model with varying numbers of samples from the two agents and learn closed-form accuracy functions given by:
$$
a_1(\textbf{s}) = 1- \frac{1}{0.0129s_1 + 0.0050s_2 + 1},
a_2(\textbf{s}) = 1- \frac{1}{0.0148s_1 + 0.0146s_2 + 1}.
$$
Observe that the weight of $s_1$ is larger than that of $s_2$, which indicates agent 1's dataset is more valuable than agent 2. We ran best response dynamics for the three mechanisms and found the following results.
* The results of fraction shares are: (i) FedBR: (30.5%, 18.1%); (ii) FedBR-BG: (30.6%, 18.1%); (iii) FedBR-Shap: (38.1%, 55.4%).
* The accuracies of the two agents are: (i) FedBR: (93.5%, 95%); (ii) FedBR-BG: (93.5\%, 93.5\%); (iii) FedBR-Shap: (96\%, 96.25\%).
Thus, our mechanism outperforms the two baselines in terms of both data sharing and accuracy. Importantly, Agent 1 has a smaller fraction than Agent 2 in the equilibrium of our mechanism, which contrasts with the other methods. This aligns with the fairness guarantee of our mechanism: An agent with low-quality data should contribute more data points to balance the benefit (s)he receives from other agents.
The above results, along with the results of many new experiments, are detailed in **[Experiments](https://www.dropbox.com/scl/fi/7q6m19rkdml94zo3zwrl8/submission8749_rebuttal.pdf?rlkey=797frtrobzqx62qdblefp08oy&e=1&st=9qc3dzov&dl=0)**.
**Q:** A formal complexity analysis of the Shapley value computation in FL settings is missing
**A:** Thank you for this question. We note that in the worst case, computing the Shapley value can take exponential time in the number of clients. For this reason, we turn to Monte Carlo estimation for computing the Shapley value approximately, as is common [1,2,3]. As shown in these works, to compute the Shapley value for $n$ agents within an error of $\varepsilon\in(0,1)$ and a confidence of $1-\delta$, it suffices to use $m = \frac{2n}{\varepsilon^2}\cdot\log(\frac{2n}{\delta})$. We will mention this formal analysis in our paper. We use $\varepsilon = \delta = 0.1$ in our new experiments.
**References:**
[1] Addressing The Computational Issues of the Shapley Value With Applications in The Smart Grid, Sasan Maleki. 2015
[2] Towards Efficient Data Valuation Based on the Shapley Value, Jia et al. 2019
[3] Efficient Sampling Approaches to Shapley Value Approximation, Zhang et al. 2023
[4] Lumpy Skin Disease Dataset, Afshari Safavi, Ehsanallah. 2021 | Summary: This paper proposes a payment-based mechanism for improving fairness in federated learning systems, allow Nash Equilibria that are fairer and incentivize strategic participants to share more data. Their approach is designed to ensure reciprocity, i.e. each agent receives exactly as much utility as their fair Shapley contribution towards the federation. The authors provide many useful theorems and properties, including the existence of efficient Nash Equilibrium, budget balancing and best-response convergence. Finally, they perform empirical evaluations with some popular image datasets and show strong performance compared to two reasonable baselines.
Post-rebuttal Updates:
The authors addressed my questions in a satisfactory manner. I will keep my score and recommend this paper be accepted.
Claims And Evidence: The paper supports all claims made, and provides intuition as well as proofs for them.
Methods And Evaluation Criteria: The proposed method is a natural construction to balance reciprocity and enjoys strong properties. The empirical evaluation is also a useful addition to the mostly theoretical paper.
Theoretical Claims: Yes, I checked the theoretical proofs for budget balanced mechanisms and reciprocity, and for the admittance of Nash Equilibria. I also tried to check the proofs for the convergence of BR dynamics, and these seem correct to the best of my understanding.
Experimental Designs Or Analyses: I checked the experimental design and results, and they appear sound. The evaluation compares against two reasonable baselines, and the results show the strong performance of the proposed approach on all tasks. The evaluation metrics also convey useful information in a condensed form, which helps in better evaluating the contribution.
Supplementary Material: Yes, I read the appendix at the end of the paper with the proofs and experiment details.
Relation To Broader Scientific Literature: The paper proposes a useful payments-based mechanism for improving fairness in federated learning. This has many applications in real federated learning systems, and could encourage participation by stakeholders (particularly due to the guarantee of individual rationality), while ensuring there are no free-riders. The results show strong adherence to the reciprocity metric, which is evident by construction. I see it having applications in areas where acquisition costs are more homogenous, as that will prevent the feeling of unfairness because of varying input cost for the same return.
Essential References Not Discussed: The paper positions itself very well in related work.
Other Strengths And Weaknesses: I understand the benefits of not relying on the individual cost functions, however, I don't fully understand why not considering costs is a feature. To me, it seems like a simplification of the reciprocity requirement. The cost is part of the total utility an agent derives, so a perfectly balanced system could also balance this cost out, while ensuring no misreporting. I understand this is hard to do, but why would this not be better if it is possible?
Other Comments Or Suggestions: Page 2 "in metrics of reciprocity, data gain, accuracy gain, data gain, and" data gain is repeated
In the introduction, reciprocity is denoted with $\beta$, but later on, $r$ is used (Section 3.1)
Questions For Authors: 1. There could be large costs (in terms of payments due) for someone participating with less data. Since the payment is decided only after training the model, is there any bound on how large this payment might be when agent i commits data s_i? Agents might have limited budgets which may affect their participation in the system.
2. On a related note, how is the payment system communicated to the agents? Are agents able to estimate their expected payments locally? It would be useful to walk through an example of how agents compute their best response share s_i, since this would depend on the payment they expect.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We respond to your questions below.
**Q:** I understand the benefits of not relying on the individual cost functions, however, I don't fully understand why not considering costs is a feature. To me, it seems like a simplification of the reciprocity requirement. The cost is part of the total utility an agent derives, so a perfectly balanced system could also balance this cost out, while ensuring no misreporting. I understand this is hard to do, but why would this not be better if it is possible?
**A:** We agree that a system that ensures costs are not misreported would be ideal. However, we consider that not relying on costs is a feature, because a fair mechanism under our definition of fairness (i) avoids the burden of verifying costs, (ii) retains its guarantees even if agents misreport costs, (iii) does not penalize high quality, low cost agents while still incentivizing high quality, high cost agents. In contrast, the above features will not hold for a "fair" mechanism where fairness is defined to incorporate costs.
Nevertheless, we agree that fairness can be defined in various ways, and alternative definitions of reciprocity could include agents costs. One such definition could be the minimum ratio of an agent’s total utility to their contribution to the federation’s welfare:
$
\min_{s \in \mathit{NE}(\mathcal M)} \min_{i \in N} \frac{u_i(s)}{\varphi^W_i(s)} = \frac{a_i(s) + p_i(s) - c_i(s_i)}{\varphi^A_i(s) - c_i(s_i)}
$
Under this definition, our mechanism $\mathcal{M}^{Shap}$ still achieves reciprocity 1. Further, note that $\min_{s \in \mathit{NE}(\mathcal M)} \min_{i \in N} \frac{a_i(s) + p_i(s)}{\varphi^A_i(s)} \leq 1$ for any weakly-budget balanced mechanism $\mathcal{M}$. Thus, we have:
\begin{align*}
\min_{s \in \mathit{NE}(\mathcal M)} \min_{i \in N} \frac{a_i(s) + p_i(s) - c_i(s_i)}{ \varphi^A_i(s) - c_i(s_i)} &\leq \min_{s \in \mathit{NE}(\mathcal M)} \min_{i \in N} \frac{a_i(s) + p_i(s)}{\varphi^A_i(s)},
\end{align*}
This implies that other baseline mechanisms will have even lower reciprocity under the new definition than under the current definition of reprocity (as reported in Table 1). Moreover, all our guarantees on data, welfare, and accuracy will continue to hold under this definition.
**Q:** There could be large costs (in terms of payments due) for someone participating with less data. Since the payment is decided only after training the model, is there any bound on how large this payment might be when agent i commits data s_i? Agents might have limited budgets which may affect their participation in the system.
**A:** At a sample vector $s$, the payment to agent $i$ is given by $p_i = \varphi_i^A(s) - a_i(s)$. If agent $i$ has to pay, i.e., $p_i < 0$, we obtain that an upper bound on $p_i$ is $a_i(s)$. That is, in the worst case, an agent will essentially have to pay $a_i(s)$ to obtain the data from other agents. That said, we agree that studying this problem when agents have limited budgets is a very interesting question for future work.
**Q:** On a related note, how is the payment system communicated to the agents? Are agents able to estimate their expected payments locally? It would be useful to walk through an example of how agents compute their best response share s_i, since this would depend on the payment they expect.
**A:** The payment scheme is published upfront. The share of agents are updated during best response dynamics by communication with the server, i.e., through evaluations on models to compute the Shapley share. However, if the accuracy functions of the agents are common knowledge and have a common test set (e.g. when small number of organizations with standard classification tasks), then agents can estimate their payments locally. | Summary: Summary:
The paper studies a method for incentivizing data contributions in
collaborative/federated learning, while also satisfying fairness criteria.
The authors propose evaluating the contributions from each agent via the Shapley value,
based on the value agents derive from the data, and design a payment scheme where agents
get paid or need to pay based on how much value they are deriving from the aggregate data.
The authors show that a stochastic version of best response dynamics (BRD) converges to a
Nash equilibrium (NE), and provide guarantees on the efficiency of any NE.
They supplement this with experiments on three datasets.
Incentives in FL is an important and timely problem at the intersection of machine
learning and game theory. However, the novelty of the work is unclear, as several key ideas appear to be adapted from previous work (e.g., Murhekar et al.) without clear explanation of what is fundamentally new. The practicality of using payments in a federated learning scheme is not well-motivated. Moreover, several aspects of the paper, including the definition of reciprocity, the baselines for efficiency, and the need for a stochastic BRD are not properly motivated. For these reasons, I cannot recommend the paper for acceptance.
Detailed comments:
1. The paper would benefit from a more thorough discussion of when payments would be practical in a federated learning scheme. Are there real-world scenarios where payments have been successfully implemented in FL?
- This point is particularly important since some agents may need to pay in addition to contributing data. The authors should provide a compelling use case to support the practicality of such a scheme.
- While I acknowledge that some prior works (e.g., Murhekar et al.) have proposed similar frameworks, this does not eliminate the need for the authors to clearly justify their setting.
2. The novelty of the work is not entirely clear. Several ideas seem to be inspired by Murhekar et al. (e.g., budget-balanced payments, the BRD scheme), and it is unclear which aspects are fundamentally new in this paper.
- It appears that the primary contribution is adapting the techniques from Murhekar et al. to also account for fairness through the Shapley value. Is this understanding correct?
- The concept of rewarding agents based on their contribution is not new in collaborative learning and has been previously explored (see [1, 2, 3] below). Moreover, prior works have also employed the Shapley value in collaborative learning schemes to determine agent rewards.
- It would be helpful if the authors clearly summarized the key novelties and differences in the proof techniques early on in the paper under the "Our Contributions" section.
3. The reciprocity term in Definition 3.3 does not appear to include the costs. Excluding costs from this fairness criterion seems arbitrary, especially since costs are being considered when defining the utility. The authors should clarify the rationale behind this choice.
4. Regarding the BRD scheme, it is not clear what the set R_t represents in stochastic BRD. Why is a stochastic version of BRD necessary instead of using regular BRD?
5. In Definitions 3.6 and 3.7, considering NE in a mechanism without payments as a baseline is somewhat unconventional. Generally, a baseline or comparator is expected to be a stronger benchmark, where the objective is for the algorithm to approximate or approach the baseline's performance.
- Here, the baseline appears weaker since it does not allow payments, which explains why the ratio between performance is greater than 1 in Theorem 4.9.
- A more reasonable baseline might be one that maximizes data/accuracy gain when agents are not strategic.
- Furthermore, Theorem 4.9 is difficult to interpret, and additional discussion on how far the proposed method is from optimality would be beneficial.
6. It would strengthen the paper to provide a more comprehensive discussion of related work, particularly including the following references:
- [1] Cai et al., 2015, "Optimum statistical estimation with strategic data sources"
- [2] Chen et al., 2020, "Truthful Data Acquisition via Peer Prediction"
- [3] Chen et al., 2023, "Mechanism Design for Collaborative Normal Mean Estimation"
Claims And Evidence: See above.
Methods And Evaluation Criteria: See above.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: See above.
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We respond to the questions below.
**1.** Indeed, there are blockchain-based mechanisms for FL that involve payments based on contributions, such as FedToken [1] and FedCoin [2]. FedCoin uses a "proof of Shapley" protocol, while FedToken distributes tokens based on performance. Both require an initial budget, unlike our budget-balanced approach, which penalizes poor data quality and rewards high-quality data. Also in IoT, BOppCL [3] incentivizes vehicles in intelligent transportation systems, rewarding those with more useful data via cryptocurrency.
**2.1.** Although we work on the same data-sharing framework as of [4, 5] our specific problem objective and solution concepts are fundamentally different. Our primary contribution is a fair mechanism for FL that admits Nash equilibria with strong guarantees on data shared, accuracy, and welfare.
- Objective: [5] focuses on welfare maximizing mechanisms, while not worrying about fairness or data gain. Our goal is to design fair mechanisms, first and for most, that simultaneously has strong guarantees on data sharing, accuracy, and welfare (Thm 4.8 and Thm 4.9).
- Solution: [5] only use costs to determine payments, whereas we use contributions towards model accuracy. Thus the two mechanisms are fundamentally different.
- Techniques: The proofs of guarantees of our mechanism are novel. The proof of convergence of *stochastic* BRD generalizes the proof of [5] on BRD. Lastly, we use the standard (and one of the only) proof technique of using the Kakutani fixed point theorem to prove the existence of Nash equilibrium. The same technique is used by several other works, e.g. [4,5].
**2.2.** We request that you kindly revisit Sec 1.1 on Page 2 where we clarify this point: "We remark that rewarding agents according to their contribution levels has been well motivated and studied in FL (Wang et al., 2019; Sim et al., 2020; Zhang et al., 2020; Yu et al., 2020). However, the crucial difference is that our focus is to design a mechanism that incentivizes *strategic agents*, i.e., agents who strategize their data contributions based on the rewards they get from the federation so that desirable fairness and welfare guarantees are achieved at NE."
**3.** We do not include costs in the definition of reciprocal fairness, because a fair mechanism under our definition of fairness (i) avoids the burden of verifying costs, (ii) retains its guarantees even if agents misreport costs, (iii) does not penalize high quality, low cost agents while still incentivizing high quality, high cost agents, unlike [5]. In contrast, the above features will not hold for a "fair" mechanism where fairness includes costs. We also request you to read our response to Reviewer KV1s for a related discussion.
**4.** The set $R^t$ represents a random subset of $k$ agents chosen in round $t$. In round $t$, we perform BR dynamics (i.e., update the data shares) only for agents in $R^t$. A stochastic version is not necessary in general, but is practically useful in situations with large number of clients since it reduces computational and communication overheads in each round. Note that the convergence of stochastic BRD (Theorem 4.4) also implies the convergence of regular BRD, by setting $k=n$.
**5.** We agree that calling a no-payment mechanism a "baseline" is misleading and will avoid this term. Our goal was to quantify the data and accuracy gains enabled by our incentive mechanism with payments as compared to the mechanism without payments.
Also note that a mechanism maximizing total data or accuracy would require all agents to share their data fully. But with high costs, no budget-balanced mechanism can acheive this without large payments to the agents, which is not budget-balanced. Thus, a more meaningful baseline considers both accuracy and cost, like welfare maximization. In this context, we do provide theoretical guarantees against the welfare-maximizing baseline. Specifically, in Thm 4.8, we prove that a welfare-maximizing mechanism cannot achieve Pareto-dominating data contributions from individual agents compared to our mechanism. In fact, this is one of the unique and compelling features of our mechanism.
**6.** Thank you for highlighting the references. Although their primary focus is not on federated learning, we will include a discussion on aspects related to our work such as incentive mechanisms and avoiding free-riding.
**References:**
[1] FedToken: Tokenized Incentives for Data Contribution in Federated Learning, Pandey et al. 2022
[2] FedCoin: A Peer-to-Peer Payment System for Federated Learning, Liu et al. 2020
[3] BOppCL: Blockchain-Enabled Opportunistic Federated Learning in Intelligent Transportation Systems, Li et al. 2023
[4] Mechanisms that Incentivize Data Sharing in Federated Learning, Karimireddy et al. 2022
[5] Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization, Murhekar et al. 2023 | Summary: In this paper, $\mathcal{M}^{\text{Shap}}$ is proposed, a budget-balanced payoff mechanism for federated learning scheme: the _data-sharing game_ among strategic agents. The authors theoretically designed and elaborated that their proposed mechanism ensures _reciprocal fairness_: each agent's payoff is equal to its Shapley value contribution, admitting efficient Nash equilibria as well as achieving strong performance guarantees. Empirical results show superior fairness and efficiency compared to baselines on vision benchmark datasets.
## Update after rebuttal
---
I updated the score to **accept** because the authors made a great effort to address all of my concerns.
I have summarized below how my evaluation changes after the rebuttal:
* The proposed method is theoretically sound and intriguing, but the empirical validation was not satisfactory.
* During the rebuttal period, the empirical weaknesses are clearly resolved: the authors have faithfully performed additional experiments (**twice**) and the results are acceptable.
* In addition, the authors have also agreed to narrow the scope of their paper based on the point raised by the review. I hope that this reconciliation will increase the visibility of the contributions of the proposed method in the field.
Claims And Evidence: ### Main claim
---
$\mathcal{M}^{\text{Shap}}$ is a i) reciprocally fair FL mechanism that admits ii) Nash equilibria iii) with strong guarantees in performance under data-sharing scheme, iv) outperforming efficiency-focused baselines.
### Breakdown
---
i) *Reciprocal Fairness*: authors provided Theorem 4.7, which guarantees that the proposed mechanism is (fully) reciprocally fair, with sound derivations and acceptable assumptions (e.g., individually rational clients).
ii) *Nash Equilibria*: authors also provided Theorem 4.3, which guarantees the existence of Nash equilibrium under common assumptions (concave payoff and convex cost), which is supported by satisfying conditions of Kakutani’s fixed-point theorem
iii) *Performance Guarantee*: authors showed in Theorem 4.8 that the proposed mechanism achieves (weak) Pareto optimality at Nash equilibrium, and also guarantees data & accuracy gains by providing related lower bounds in Theorem 4.9
iv) *Empirical Superiority*: in Table 1, the proposed mechanism consistently outperforms existing baselines across a few of vision benchmarks with a moderate number of agents (i.e., $n=100$).
Methods And Evaluation Criteria: - The experimental results assume identical payoff functions learned from local training, which may not hold in practical FL settings, especially under statistical heterogeneity (i.e., non-IIDness). The variations of real-world data quality are not fully explored in this regard.
- While the simulated federated setting contains an acceptable number of clients with a proper client sampling ratio, the benchmark datasets are limited to vision classification tasks.
- Existing baseline methods seems lacking; there already are incentive mechanisms similarly designed for FL (even using Shapley value), but they are not directly compared, except (Murhekar et al., 2023)
Theoretical Claims: The theoretical expositions are well structured. I appreciate the authors' efforts since all claims are easy to follow, especially thanks to the accompanying 'implications' subsections.
Experimental Designs Or Analyses: * The evaluation setting is a bit questionable. As stated in Appendix D.1, lines 882-883, the reported metrics are the performances of _a global model_ on a _global test dataset_. Please correct me If I understood wrong: I understood that the meaning of 'global' refers to the 'server-side holdout model or dataset'. The global evaluation scheme is valid when if we'd like to evaluate the global model's performance/generalization on unseen dataset or agents. However, the target of this paper is to evaluate the utility-fairness tradeoff _as a result_ of participating in the FL process. In this regard, I believe the local evaluation scheme should be adopted. Instead of assigning an identical local test dataset (c.f line 881), let each client has their own test set and evaluate the trained global model $\theta^t$ on these local test set.
* One of the main difference between FL and distributed learning is the existence of the statistical heterogeneity, i.e., non-IID nature between local distributions. While many non-IID simulation methods exist, it seems that current experimental design does not reflect this heterogeneity simulations (McMahan et al., 2018; Hsu et al., 2019). Without this, I think we can hardly believe that the empirical validation can really be extended to the _federated learning_ setting in practice.
> (McMahan et al., 2018) Communication-Efficient Learning of Deep Networks from Decentralized Data
> (Hsu et al., 2019) Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
Supplementary Material: I reviewed all the sections of the supplementary material.
Relation To Broader Scientific Literature: The proposed method can contribute to achieving the global welfare of federated or collaborative machine learning systems, where many participants (i.e. agents) are willing to contribute their data for local model updates, but they are also potentially free-riders. Thus, prevention of such malfare is important in practice, and it is guaranteed in terms of _reciprocal fairness_ in the proposed method.
Essential References Not Discussed: > (Xu et al., 2021) Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
> (Zeng et al., 2022) Incentive Mechanisms in Federated Learning and A Game-Theoretical Approach
* These to references directly used Shapley value, as the same in the proposed method, for the incentive mechanism design in federated settings, but they're neither discussed nor compared.
> (Chaudhury et al., 2022) Fairness in Federated Learning via Core-Stability
* This paper defines _core-stable fairness_, which is not identical to _reciprocal fairness_, but is closely related in its similarity of fairness concepts, e.g., the existence of similar fair equilibria in the collaborative setting.
Other Strengths And Weaknesses: Strength
- The proposed method is well-supported by structured theories and designs.
- The targeted problem is undoubtedly important for the overall welfare of the distributed learning system.
Weakness
- While theoretically sounding, the proposed method is not appealing in practical view. It requires full synchronization for acquiring $T_i, i\in[n]$, which limits its applicability in practice (e.g., cross-device FL setting, where massive number of clients exist in the system). In addition, it also requires another communication for $T_i^\epsilon$, as well as doubled local computations to obtain $T_i$ and $T_i^\epsilon$, for the calculation of the difference of Shapley shares.
- It is heartbreaking that the empirical validation of the proposed method seems lacking, questioning the statement of 'outperformance' of the proposed method compared to baselines.
- While it's inevitable to use approximations (e.g., Monte-Carlo) to calculate the Shapley value in a reasonable time, using only 5 permutations is concerning as it can introduce an approximation error. Any ablation studies on this or theoretical justification would be appreciated.
Other Comments Or Suggestions: * Please consider narrowing down the scope of the paper into _cross-silo FL_ setting (as in Zeng et al., 2022), where only a moderate number of reliable clients participate in collaboration. It's due to the statefulness design of the proposed method, such as i) the needs of full client participation as in lines 955-956 (i.e., line 7 of Algorithm 1) and ii) the existence of memory of contribution vector $\boldsymbol{s}^t$ - which typically does not hold in cross-device setting as we cannot assume repetitive participation of clients in this setup (please see Table 1 of Kairouz et al., 2019)
* Please change the global evaluation setting into local evaluation setting, and report corresponding results. In my humble opinion, it's more fair to evaluate how each client is benefited from participating in FL. If my thought is not aligned with the authors' intention, please generously enlighten me.
* Please repeat experiments with different random seeds and provide evaluation results with standard deviation.
* Please consider conducting more experiments with smaller numbers of clients, e.g. $n=2, 10, 50$. For example, with $n=2$ and a simple quadratic model on a synthetic dataset (e.g., linear/logistic regression task), it would be more visible to understand the edge of theoretical claims provided. Regarding this, please refer to the experimental setup of (Chaudhury et al., 2022)
* Please add more baselines, especially the ones in _Essential References Not Discussed_ section.
* In eq. (5), the numerator should be $\min _{\boldsymbol{s} \in \mathrm{NE}(\mathcal{M})} \Vert \boldsymbol{s} \Vert_1$.
> (Zeng et al., 2022) Incentive Mechanisms in Federated Learning and A Game-Theoretical Approach
> (Kairouz et al., 2019) Advances and Open Problems in Federated Learning
> (Chaudhury et al., 2022) Fairness in Federated Learning via Core-Stability
Questions For Authors: * In the pseudocode of Algorithm 1, what is the meaning of 'the central server runs FedAvg'? Is it equivalent to simple averaging of local updates, i.e., $\theta^t \leftarrow \sum_{i \in R^t} T_i / k$? If so, please discard the phrase, and if not, please enlighten me.
* Can we say that $\mathcal{M}^0$ is equivalent to the `FedAvg`, as well as utilitarian setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback on our experiments and for appreciating our theoretical contributions.
We conducted several new experiments based on your suggestions. We ran each method thrice with different seeds, and observed that our mechanism consistently outperformed baselines in data gain, accuracy, and reciprocity. All results are in: **[Experiments](https://www.dropbox.com/scl/fi/7q6m19rkdml94zo3zwrl8/submission8749_rebuttal.pdf?rlkey=797frtrobzqx62qdblefp08oy&e=1&st=9qc3dzov&dl=0)**. We address the main weaknesses/suggestions below.
**Q:** Experiments ignore statistical heterogeneity/ local evaluation
**A:** We perform new experiments with non-IID agents following Ghosh et al. [1]. We have 30 clients partitioned into 3 groups of 10 each. We equally partition the images in our benchmarks into three groups and rotate the images by 10, 90, and 180 degrees respectively, giving datasets $A_1, A_2, A_3$. We then map dataset $A_j$ to group $j$, and split it into local training and test sets. Thus, this experiment accounts for non-IIDness due to the rotated images and uses local evaluation. We observe improved data gain (3x), accuracy, and reciprocity (2x) even in this non-IID setting with local evaluation. Finally, we note that identical payoffs and IID test-data is assumed in prior work [2-4].
**Q:** Non-vision experiments
**A:** As suggested, we implement a quadratic model on a synthetic 2-classification dataset, where inputs $X\in \mathbb{R}^{10}$ has ten features and $y\in \{0, 1\}$. We randomly generate matrix $W$, vector $b$ and number $c$ by $W_{i,j}, b_i, c\sim \mathcal{N}(0, 1)$. We have 2 agents. For non-IID distributions, we sample 1000 points uniformly for each such that 80% and 20% of agent 1's and agent 2's data have positive labels. We observe significant improvements (~30x data gain).
We also request you to kindly read our response to Reviewer ir71 for experiments on a real-world healthcare dataset.
**Q:** Theoretical justification for approximating Shapley value
**A:** We approximate SV using standard Monte Carlo estimation [5-7]. For $n$ agents, $m = \frac{2n}{\varepsilon^2} \ln\frac{2n}{\delta}$ samples ensure an error of $\varepsilon$ and confidence $1-\delta$. In new experiments, we set $\varepsilon = \delta = 0.1$. We will include this analysis in our paper.
**Q:** Additional references
**A:** Thank you for pointing out the suggested references on Shapley value in FL. We will cite them. However, our work differs in both problem setting and solution. For example, Xu et al. (2021) measure contributions via Shapley value of cosine similarities in gradient updates, while Chaudhury et al. (2022) ensure coalition stability assuming full data contribution. In contrast, we study strategic data sharing with monetary rewards. Due to the differences, we did not use them as baselines. Zeng et al. (2022) survey FL incentives and mention Wang et al. (2019), whose FL Shapley share metric aligns with our Appendix D.2 setup, and we will cite this work.
**Q:** Consider narrowing the scope to cross-silo FL.
**A:** Our theoretical results are general, but we agree on focusing on the cross-silo setting (Zeng et al., 2022) with a moderate number of agents.
**Q:** "the proposed method... requires full synchronization for acquiring $T_i$, $i\in[n]$... it also requires another communication for $T_i^\varepsilon$, as well as doubled local computations to obtain $T_i$ and $T_i^\varepsilon$..."
**A:** We agree that synchronization is a challenge in our method, but this is true in almost all cross-device FL protocols with many clients. Based on your suggestion, we will focus on cross-silo FL. We note that our protocols account for scalability by sampling only $k$ agents per round for sample updates. Since we compute $T^i_\varepsilon$ only for these $k$ clients instead of all $n$, the total computations are reduced to $n+k$ from $2n$.
**Q1:** Yes, they are equivalent, and we will clarify this.
**Q2:** FedBR computes the NE of the zero-payment mechanism $\mathcal{M}^0$ is the zero-payment mechanism. Each round of FedBR employs FedAvg to train the model, so they are "equivalent" at equilibrium but not identical.
**References**
[1] An Efficient Framework for Clustered Federated Learning, Ghosh et al. 2020
[2] Mechanisms that Incentivize Data Sharing in Federated Learning, Karimireddy et al. 2022
[3] Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization, Murhekar et al. 2023
[4] Incentivizing Honesty among Competitors in Collaborative Learning and Optimization, Dorner et al. 2023
[5] Addressing The Computational Issues of the Shapley Value With Applications in The Smart Grid, Sasan Maleki. 2015
[6] Towards Efficient Data Valuation Based on the Shapley Value, Jia et al. 2019
[7] Efficient Sampling Approaches to Shapley Value Approximation, Zhang et al. 2023
[8] A Principled Approach to Data Valuation for Federated Learning, Wang et al. 2020
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate for the efforts and detailed responses from the authors.
### Why Not Lower Score
---
As all my concerns are clearly addressed - especially the empirical concerns - I'd like to champion this paper. Both the theoretical and the empirical justifications now seem sound to me, and I've raised the score to **accept**.
Based on the rebuttals, please consider adding or supplementing the following in the revised manuscript:
- Please adjust the scope (and accordingly polish the abstract and possibly the title) to a specific FL setup, i.e., cross-silo FL setting, to emphasize the applicability of the proposed method.
- Please clearly state the experimental setup, so that the proposed method seems to be clearly justified by experiments.
- Please also add non-vision experiments to emphasize the feasibility of the proposed methods in various practical (cross-silo) FL scenarios.
- Please state the limitation of the synchronization cost, which is nevertheless mitigated by authors' contribution through the subsampling scheme. Please also mention that this is acceptable in the cross-silo FL scenario since the total number of clients (i.e., $n$) is typically moderate.
I hope that these will help improve the presentation and communication of the research results.
### Why Not Higher Score
---
I think there still is room for improvement in following perspectives.
- For the reproducibility, the pseudocode of the proposed method (i.e., Algorithm 1 on page 18) could've been improved. For example, instead of providing "The central server computes (formula) as described in Section 5.1, ...", "Agent i computes (formula)", the authors can assign equation numbers to related formulae and state as: "The central server computes eq. (6)", "Agent i computes eq. (7)", for instance. Likewise, if the presentation is improved, succinct pseudocode can be moved to the main text, emphasizing easy implementation of the proposed method along with its theoretical justification.
- The empirical results (including those in the rebuttal) seem to have no standard deviations(or standard errors), although the authors stated that they used a new empirical setup with local evaluation. This is questionable if the authors only conduct a single run for each experiment: the number of clients is moderate in all experimental setups, thus there is little computational burden.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for increasing the score and for championing the paper! Also, thank you very much for the additional suggestions, especially the one on improving the pseudocode. We will incorporate all of them in our final version of the paper.
In our earlier response, we had reported the mean and median of results obtained by running each experiment three times. We also include the standard deviations in this updated [document](https://www.dropbox.com/scl/fi/ta78qamz9agmh5moo0mgg/submission8749_rebuttal_updated.pdf?rlkey=7ik5xq9ckkr9yvxl64ut9v9u3&st=zc8xq8lt&dl=0.
). We will include experimental results averaged over a larger number of runs in the final version of the paper. | null | null | null | null | null | null |
Finite-Time Global Optimality Convergence in Deep Neural Actor-Critic Methods for Decentralized Multi-Agent Reinforcement Learning | Accept (poster) | Summary: This paper analyzes the global convergence properties of an actor-critic algorithm for decentralized multi-agent reinforcement learning (MARL). In the critic component, the Q-function is parametrized using a deep neural network, introducing a nonlinear approximation. Each agent employs temporal difference learning, with a gossip mechanism to aggregate information across agents. The actor component approximates the policy gradient, where the advantage function is estimated using sampled temporal difference values, and the score function is computed via auto-differentiation. The gossip process is also integrated into the actor update. The main theoretical result establishes a global convergence rate of $\mathcal{O}(N^2/\epsilon^3)$ under certain assumptions on the environment and policy parametrization. Numerical experiments validate the effectiveness of the proposed algorithm.
The major innovation of this paper is to encoporate convergence analysis of consensus based optimization to existing convergence analysis of policy gradient or actor-critic algorithm, extending the setting from single agent RL to multi agent RL.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I think Lemma 4.2 is not correct. The authors use the stationary distribution $\nu(\theta)$ to take the expectation. This is for infinite horizon ergodic RL without discount (the objective average of rewards through the trajectory). For objective with a discount factor, the distribution should be the cumulative discounted distribution.
As a consequence, (if I am correct), please check how this problem affect the statements and proofs of the theorems.
Experimental Designs Or Analyses: The expreiments looks good to me.
Supplementary Material: Proof and additional numerical results.
Relation To Broader Scientific Literature: Nan
Essential References Not Discussed: Nan
Other Strengths And Weaknesses: Strength: the paper is clearly written, with solid theoretical analysis and numerical results.
Weakness: I have a personal concern about Assumption 5.5 where the authors assume a uniform upper bound for the score function and uniform positive lower bound for the Fisher information matrix. I am not sure if there exists (at least some simple) examples that satisfies these assumptions.
Other Comments Or Suggestions: Nan
Questions For Authors: Nan
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Comment 1:** I think Lemma 4.2 is not correct. The authors use the stationary distribution $\nu(\theta)$ to take the expectation. This is for infinite horizon ergodic RL without discount (the objective average of rewards through the trajectory). For objective with a discount factor, the distribution should be the cumulative discounted distribution.
**Response:** Thanks for catching this. We confirm that the citation in Lemma 4.2 is a typo and we should use cumulative visitation distribution. Fortunately, this typo will not affect the overall analysis in our paper. Denoting $\zeta(\cdot)$ as the state visitation distribution, we know that under Assumption 5.2, the stationary distribution $\nu(\cdot)$ not only exists but also satisfies the proportional relation $\nu(\cdot) \propto \zeta(\cdot)$ (See [1]). Therefore, by modifying Lemma 4.2 to
$\nabla_{\theta^i}J(\theta)=\mathbb{E}[\nabla_{\theta^i}\log\pi_{\theta^i}^i(a^i|s)\cdot\text{Adv}_{\theta}(s,a)|s\sim\zeta(\theta)],$
where $a\sim\pi_{\theta}(s)$, we only need to adjust $\kappa$ in Lemma 5.4 to a different $\kappa'$. With this modification, the subsequent analysis remains valid. We will correct **Lemma 4.2** in our revision.
> **Comment 2:** I have a personal concern about Assumption 5.5 where the authors assume a uniform upper bound for the score function and uniform positive lower bound for the Fisher information matrix. I am not sure if there exists (at least some simple) examples that satisfy these assumptions.
**Response:** Thanks for your insightful comment. Most related works in this area assumed $||\nabla_\theta\log\pi_\theta(a|s)|| \le M_g$, where $M_g$ is a positive constant [2-4] instead of $1$. Our intention in using $1$ was just to simplify the notations like [5,6]. But we agree with the reviewer that this simplification may lead to some unnecessary confusion. Upon carefully checking our analysis, we have identified the occurrences of $M_g$ in the following equations: Eqs. (17,26,29,32,34). We confirm that this simplification does not affect our main conclusion, as the notation $\mathcal{O}(\cdot)$ appropriately absorbs all instances of $M_g$. We thank the reviewer again for the careful reading. We will remove this somewhat unnecessary simplification and revise **Assumption 5.5** and the related analysis in our revision.
[1] Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction (Vol. 1, No. 1, pp. 9-11). Cambridge: MIT press.
[2] Fatkhullin, I., Barakat, A., Kireeva, A., & He, N. (2023, July). Stochastic policy gradient methods: Improved sample complexity for fisher-non-degenerate policies. In International Conference on Machine Learning (pp. 9827-9869). PMLR.
[3] Ding, Y., Zhang, J., and Lavaei, J. (2022). On the global optimum convergence of momentum-based policy gradient. In International Conference on Artificial Intelligence and Statistics, pages 1910–1934. PMLR.
[4] Yuan, R., Gower, R. M., and Lazaric, A. (2022). A general sample complexity analysis of vanilla policy gradient. In International Conference on Artificial Intelligence and Statistics, pages 3332–3380. PMLR.
[5] Xu, T., Wang, Z., & Liang, Y. (2020). Improving sample complexity bounds for (natural) actor-critic algorithms. Advances in Neural Information Processing Systems, 33, 4358-4369.
[6] Hairi, F. N. U., Liu, J., & Lu, S. (2022). Finite-Time Convergence and Sample Complexity of Multi-Agent Actor-Critic Reinforcement Learning with Average Reward," in Proc. ICLR, Virtual Event, April 2022. Proc. ICLR.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I think you misunderstood my concern. I agree that there is no essential difference between an upper bound of 1 or an absolute constant $M_g$. But my question is: whether is it possible that the score function has a uniform upper bound $M_g$ for any $\theta$. The same question holds for the reference papers you mentioned, although they have been published.
Other parts of the reply look good. I decide to keep my score.
---
Reply to Comment 1.1.1:
Comment: > **Question:** Thanks for the reply. I think you misunderstood my concern. I agree that there is no essential difference between an upper bound of 1 or an absolute constant $M_g$. But my question is: whether is it possible that the score function has a uniform upper bound $M_g$ for any $\theta$. The same question holds for the reference papers you mentioned, although they have been published. Other parts of the reply look good. I decide to keep my score.
**Response:** Thanks for your follow-up question. We agree that the uniform upper bound is not always satisfied for every policy and action [1]. However, we make the following two clarifications:
1. Our parameterization (in Line 199 left column, Page 4) ensures that the policy considered in this paper follows a Gaussian policy, since we use neural networks for the mean and the standard deviation parameterizations.
2. Within the Gaussian policy class, although the assumption does not always hold, it can be satisfied by additionally assuming *"the bound on sampled actions and on the mean parameterization" $\hat{Q}(\cdot;W)$* [1] (cf. Page 18), which is indeed true in many systems in practice (e.g., with clipping). Therefore, our assumption can be satisfied in many scenarios.
[1] Fatkhullin, I., Barakat, A., Kireeva, A., & He, N. (2023, July). Stochastic policy gradient methods: Improved sample complexity for fisher-non-degenerate policies. In International Conference on Machine Learning (pp. 9827-9869). PMLR.
We will add the above justifications in our revision. | Summary: This paper investigates a multi-agent neural actor-critic method, establishing the first theoretical global optimality guarantee with a finite-time convergence rate of $O(1/T)$. The authors further present numerical results demonstrating the effectiveness of this algorithm in applications involving large language models.
Update after rebuttal:
As mentioned in my Official Comment, I believe all of my concerns can be addressed based on the method proposed by the authors, along with the insights provided by Reviewer wTko. Therefore, I am updating my score to a 3.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, please refer to Question section.
Experimental Designs Or Analyses: Yes, please refer to Question section.
Supplementary Material: I did not check supplementary material carefully.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Please refer to Question section.
Other Strengths And Weaknesses: Strengths:
The paper establishes the first theoretical convergence bound for multi-agent actor-critic methods utilizing neural networks as function approximators. The authors provide numerical results that demonstrate the algorithm's success in applications involving large language models.
Weakness:
Please refer to Question section.
Other Comments Or Suggestions: Please refer to Question section.
Questions For Authors: 1. My main concern lies in Lemma 4.2. It is improper to include results from [1] since they consider average reward MDPs where $\gamma=1$. Moreover, Lemma 4.2 is problematic because, in a discounted reward setting, the policy gradient theorem requires states to be drawn from the visitation distribution (e.g., Eq. (3) and (4) in [2] and Theorem 2.1 in [4]) rather than the stationary distribution. This raises concerns about the validity of all theoretical results presented in this paper.
2. Are $|S|$ or $|A|$ finite? If so, it seems meaningless to use neural networks to approximate the policy and the value function. Previous analyses of actor-critic algorithms with neural networks typically assume that at least one of these sets is infinite, as seen in [2] and [3].
3. In the right column of Line 167, where does $|S| + |A|$ come from? It likely arises from the softmax layer, but in that case, $|S| \times |A|$ should appear in the dimension of $\theta$.
4. The rationale behind Assumption 5.1 is unclear to me. I attempted to refer to the source provided after this assumption for clarification but could not find it. I would appreciate more details on the justification for this assumption or references to previous work that supports it. A similar issue arises with Assumption 5.8.
5. In addition, Assumption 5.8 appears to be a Lipschitz condition rather than a smoothness condition.
6. The bound in Theorem 5.10 increases with $N$, which contradicts common expectations. The purpose of using a multi-agent algorithm is to enhance speed; however, Theorem 5.10 suggests that performance may worsen as the number of agents increases.
7. The numerical results in this paper do not support the theoretical findings. Experiments should at least reflect the trends of key factors in the theoretical results, such as the number of agents, iterations, and the size of the neural network.
8. In addition, I found the paper to be poorly written, with a confusing logical flow. For instance, the definition of the stationary distribution is introduced as early as Eq. (6) but only established by Assumption 5.2. Additionally, some notations remain undefined, such as $A_{ij}$ in Assumption 5.1. Is it the same as $A_{i,j}$in Definition 4.1?
References:
[1] Zhang, Kaiqing, et al. "Fully decentralized multi-agent reinforcement learning with networked agents." International conference on machine learning. PMLR, 2018.
[2] Gaur, Mudit, et al. "On the Global Convergence of Natural Actor-Critic with Two-layer Neural Network Parametrization." arXiv preprint arXiv:2306.10486 (2023).
[3] Gaur, Mudit, et al. "Closing the gap: Achieving global convergence (last iterate) of actor-critic under markovian sampling with neural network parametrization." arXiv preprint arXiv:2405.01843 (2024).
[4] Tian, Haoxing, Alex Olshevsky, and Yannis Paschalidis. "Convergence of actor-critic with multi-layer neural networks." Advances in neural information processing systems 36 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Comment 1:** My main concern lies in Lemma 4.2…
**Response:** Thanks for your comments. Please refer to our response to **Comment 1 of Reviewer DPFS**.
> **Question 2:** Are $|\mathcal{S}|$ or $|\mathcal{A}|$ finite? Besides, in right column of Line 168, the dimension seems to be derived from softmax…
**Response:** Thanks for your question. This confusion is caused by a typo. The $|\mathcal{S}|$ and $|\mathcal{A}|$ in Line 168 right column should represent **dimension of each state** and **dimension of each action**, respectively. This implies that the dimension of each $\theta^i$ is at most $m(Dm+2d)$. Therefore, we do not need to assume a finite $|\mathcal{S}|$ or $|\mathcal{A}|$, and it is not related to Softmax layer as well. We hope this clarifies the confusion. We will fix it in our revision.
> **Comment 3:** The rationale behind Assumption 5.1 and 5.8 is unclear to me. Besides, Assumption 5.8 appears to be a Lipschitz condition.
**Response:** Thanks for your comments. We'd further clarify as follows:
1. Assumption 5.1 is regarding the consensus matrix: The communication only occurs between neighbor agents in the graph, and their neighbor "is taken seriously with at least a weight of $\eta$". This ensures consensus convergence. Assumption 5.1 is a common assumption in the consensus optimization literature (e.g., see Assumption 3 in [R2], Assumption 1 in [R3]).
2. We agree with reviewer that we have made a typo in Assumption 5.8, and the terms "L-smooth" should be changed to "Lipschitz continuous." We thank the reviewer for catching this and will fix it in the revision.
> **Comment 4:** The issue about bound in Theorem 5.10 increases with $N$.
**Response:** Thanks for your comments. We'd like to further clarify as follows:
1. We have checked and confirmed that the bound in Theorem 5.10 increasing with $N$ is correct. We note that this result is not surprising, since similar results also occur in many related works in the MARL literature [R2,R4-R5].
2. We respectfully disagree with the reviewer's assertion that *"The purpose of using a multi-agent algorithm is to enhance speed."* In our humble opinion, the need for modeling with multi-agent systems comes from the underlying real-world applications (e.g., autonomous swarm, drones, robotics networks, etc.), which may or may not be related to computation speedup. In other words, "computation speedup" may not and should not be the only purpose of using multi-agent systems. In fact, due to the loss of a central server in fully decentralized MARL systems, the computation speed could be negatively affected. Thus, in this sense, the $N$-dependence result in Theorem 5.10 can be viewed as a "price" to pay for the full decentralization.
On the other hand, we note that computation speedup in multi-agent systems often occurs in scenarios with centralized servers (e.g., federated RL with multiple agents coordinated by a server). In these scenarios, the reviewer is correct that linear convergence speedup with respect to $N$ is highly desirable. However, the focus of this paper is on decentralized MARL.
> **Comment 5:** The numerical results in this paper…
**Response:** Thanks for your comments. However, we suspect that the reviewer might have missed our experimental results in the appendix. Indeed, all results regarding the trends of the key factors can be found in the appendix. More specifically, as stated in **Sec. 6.1.(2)**, "*More numerical results of ablation studies, which verify Theorem 5.10, can be found in Appendix B.1.*" Please see **Fig. 5-10** in **Appendix B.1** for more numerical results.
> **Question 6:** In addition, I found the paper to be poorly written…
**Response:** Thanks for your comments. We'd like to clarify as follows:
1. We agree that $\nu(\theta)$ indeed occurs before its formal definition. However, we did this largely because of the necessity of introducing MSBE. To mitigate any potential confusion, we have explicitly reminded the reader after Eq. (6) with the following statement in **Line 176, right column**: "*$x$ follows stationary distribution $\nu(\theta)$, which will be introduced in Lemma 5.4*".
2. Regarding $A_{ij}$ and $A_{i,j}$, thanks for catching this inconsistency. This is indeed a typo and $A_{i,j}$ should be corrected to $A_{ij}$.
[R1] Sutton & Barto. Reinforcement learning: An introduction, Cambridge: MIT press, 1998.
[R2] Hairi et al. Finite-Time Convergence and Sample Complexity of Multi-Agent Actor-Critic Reinforcement Learning with Average Reward, ICLR 2022.
[R3] Nedic & Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, Information Processing Systems, 2009.
[R4] Chen et al. Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis, ICML 2022.
[R5] Hairi et al. Sample and communication efficient fully decentralized marl policy evaluation via a new approach: Local td update. arXiv preprint 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses. Most clarifications were helpful; however, I remain unconvinced on the following two points:
On the visitation distribution, I feel like the discrepancy between the visitation distribution and the stationary distribution appears to be more fundamental than a simple oversight. Suppose $p_t(\cdot)$ is the probability distribution over the set of states after $t$ transitions according to a fixed policy $\pi$ starting at a fixed state $s_0$. The visitation distribution $d$ is them defined as (up to a constant factor of $1-\gamma$ as compared to [1])
$$d(s) = \sum_{t=0}^{\infty} \gamma^t p_t(s).$$
If we assume $d$ is proportional to the stationary distribution $\mu$, i.e., there exists a constant $c$ such that $\mu(s) = c d(s), \forall s$. Since $\mu$ satisfies $\mu^T = \mu^T P$ where $P$ is the transition matrix, we should expect
$$d^T = d^T P.$$
Computing each element on the left-hand side using the definition of $d$ we obtain: $d(s') = \sum_{t=0}^{\infty} \gamma^t p_t(s')$. On the right-hand side, the corresponding element is:
$$\[d^T P\](s') = \sum_s d(s) p(s'|s) = \sum_{t=0}^{\infty} \sum_s \gamma^t p_t(s) P(s'|s) = \sum_{t=0}^{\infty} \gamma^t p_{t+1}(s') = \frac{1}{\gamma} (d(s')-p_0(s')).$$
For these expressions to be consistent, we must have: $p_0(s') = (1-\gamma)d(s'), \forall s'$ However, this condition is clearly problematic since $\gamma \neq 1$ and, by definition, $p_0(s') = 1$ if $s = s_0$ otherwise $p_0(s') = 0$. If my reasoning is incorrect, I would appreciate further clarification.
On the dimension of $\theta^i$, I understand the $mDm$ comes from stacking up all $W^h$. What about $2dm$?
Reference:
[1] Gaur, Mudit, et al. "On the Global Convergence of Natural Actor-Critic with Two-layer Neural Network Parametrization." arXiv preprint arXiv:2306.10486 (2023).
---
Reply to Comment 1.1.1:
Comment: **Response to Comment 1:** Thanks for your insightful follow-up comments! Upon carefully reading multiple times, we have confirmed that the reviewer's previous analysis is *correct*. However, the scenario the reviewer analyzed is *different* from our setting. Specifically, in the reviewer's previous comment, the analysis corresponds to the scenario where the state transition kernel under the behavior policy for collecting data is the *same* as that under the target policy in learning. In what follows, we use $P_{\pi}(s\rightarrow s',1)$ to denote the 1-step state transition kernel in the on-policy setting.
In contrast, our work considers the scenario where the behavior policy is **different** from the target policy. Also, we follow the behavior policy commonly used in the literature (e.g., [R1-R3]), for which the 1-step state transition kernel $\tilde{P}_{\pi}(s\rightarrow s',1)$ is written as
$$\tilde{P}\_{\pi}(s\rightarrow s',1):=\mathbb{P}_{\pi}(s\rightarrow s',1)+(1-\gamma)\mathbb{I}(s'=s_0),$$
where $\gamma \in (0,1)$ and $\mathbb{I}(s'=s_0)$ represents the indicator function of the event *"the next state is the initial state $s_0$"*. In the literature, this kernel is sometimes referred to as the "restart kernel".
Next, and we will show that, under this restart kernel, the stationary distribution $\mu(s)$ is **proportional** to the visitation measure $\eta(s)$ (Note: to be more rigorous, we call $\eta(s)$ as a "visitation measure" rather than "visitation distribution", because it's possible that $\eta(s)>1$ for some $s$ and hence not being a proper distribution). The proof is as follows:
Note that, for $\gamma<1$, the visitation measure at state $s'$ is defined as follows and can be written in a recursive form:
$$\eta(s'):=\sum_{t=0}^\infty\gamma^t\mathbb{P}\_{\pi}(s_0\rightarrow s',t)=\mathbb{I}(s'=s_0) + \gamma\sum_s\eta(s)\mathbb{P}_{\pi}(s\rightarrow s',1).$$
It then follows from the definition of $\eta(s)$ that:
$$\sum_{s'}\eta(s')=\sum_{s'}\sum_{t=0}^\infty\gamma^t\mathbb{P}\_{\pi}(s_0\rightarrow s',t)=\sum_{t=0}^\infty\sum_{s'}\gamma^t\mathbb{P}\_{\pi}(s_0\rightarrow s',t)=\sum_{t=0}^\infty\gamma^t = \frac{1}{1-\gamma}.$$
Now, we **define** the following distribution $\mu(s)$ by normalizing $\eta(s)$:
$$\mu(s):= \frac{\eta(s)}{\sum_{s'} \eta(s')} = (1-\gamma)\eta(s).$$ Note that $\mu(s)$ is the "proper" visitation distribution.
In what follows, we will prove that **$\mu(\cdot)$ is indeed the stationary distribution under kernel $\tilde{P}_{\pi}$**. Hence, the visitation measure is *proportional* to the stationary distribution. As a result, the use of stationary distribution in the policy gradient calculation remains valid. To this end, we first note that:
$$\sum_s\mu(s)\tilde{P}\_{\pi}(s\rightarrow s',1)=\gamma\sum_s\mu(s)\mathbb{P}\_{\pi}(s\rightarrow s',1) + (1-\gamma)\sum_s\mu(s)\mathbb{I}(s'=s_0),$$
which follows from the definition of $\tilde{P}_{\pi}(s\rightarrow s',1)$. Then, by using the definition $\mu(s):=(1-\gamma)\eta(s)$, we can further re-write the above equation as:
$$\sum_s\mu(s)\tilde{P}\_{\pi}(s\rightarrow s',1)=(1-\gamma)\left(\gamma\sum_s\eta(s)\mathbb{P}\_{\pi}(s\rightarrow s',1)+\mathbb{I}(s'=s_0)\right)=(1-\gamma)\eta(s')=\mu(s').$$
This shows that $\mu(\cdot)$ is the **stationary distribution under kernel $\tilde{P}_{\pi}$** and the proof is complete.
Based on the above insight, we now justify the **correctness** of our revised Lemma 4.2. First according to Sec. 13.2 in [R4], we know that
$$\nabla_\theta J(\theta) = \sum_s \eta(s)\sum_a\nabla\pi_\theta(a|s)Adv_\theta(s,a).$$ Multiplying and dividing the right-hand-side by $1-\gamma$ and using $\mu(s)=(1-\gamma)\eta(s)$, we have that
$$\nabla_\theta J(\theta) = \frac{1}{(1-\gamma)}\sum_s (1-\gamma)\eta(s)\sum_a\nabla\pi_\theta(a|s)Adv_\theta(s,a),$$ which implies that $$\nabla_\theta J(\theta) \propto \sum_s\mu(s)\sum_a\nabla\pi_\theta(a|s)Adv_\theta(s,a).$$ This is exactly what we claimed in the revised Lemma 4.2.
We thank the reviewer again for these valuable discussions, which strengthens the clarity of our work. We will add the above discussions in the revision to avoid similar doubts.
---
**Response to Question 2:** Thanks for your follow-up question. We'd like to point out that $2dm$ comes from the first and the last layer of the DNN. Specifically, the dimension of $H$ for the first layer $x^{(0)}=Hx$ is $dm$, where x is the $d$-dimensional input. Similarly, the dimension of parameter $b$ in the last layer is $m\cdot 1 \le md$. Therefore, the dimension of $\theta^i$ is at most $m(Dm+2d)$.
---
[R1] Konda. Actor-critic algorithms. PhD thesis, 2002.
[R2] Xu et al. Improving sample complexity bounds for (natural) actor-critic algorithms, 2020.
[R3] Chen et al. Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis, 2022.
[R4] Sutton & Barto. Reinforcement learning: An introduction, 1998. | Summary: This work provides the the first actor-critic algorithm with deep Q-net and deep policy-net for fully decentralized MARL problem, and provides the first global convergence result for such algorithm.
Claims And Evidence: The claim is clear as summarized above, which is supported by theoretical proof (I believe the general proof logic is correct) and experimental results (look comprehensive and convincing).
Methods And Evaluation Criteria: The experimental settings on simulation and application to RLHF look reasonable and comprehensive. The criterion of reward in the experimental results and the criterion of function value gap in Theorem 5.10 are standard and reasonable.
Theoretical Claims: I am familiar with existing actor-critic proof analysis. I believe the proof logic is in general similar to that of the existing actor-critic proof analysis and thus correct, though there might be fixable computation errors that I did not find. However, the main issue is in the choice of hyperparameter choice $B$ which may invalidate Algorithm 2, as elaborated in question (1) in "Questions For Authors".
Experimental Designs Or Analyses: I briefly scanned the problem settings, hyperparameter values and figures of the experimental results in both the main text and Appendix B. The general process looks clear to me. If possible, it may be better to provide more details, such as the figure of the simulation grid (including the landmark locations), and the math formulation of multi-agent RLHF (see the Question 13 in detail).
Supplementary Material: I read Appendix A up to Eq. (24) to learn about the proof technique with deep Q-net.
Relation To Broader Scientific Literature: This work extends the existing theoretical foundation of actor-critic methods equipped with neural networks from single-agent to multi-agent with new decentralization techniques, and also extends the existing multi-agent actor-critic analysis works from tabular and linear function approximation to deep Q-network.
Essential References Not Discussed: The lits about MARL stops at 2022. Are you sure there are no more after 2022? For example, (Chen et al., 2022) is cited 35 times in Google Scholar. Are some of them related?
Other Strengths And Weaknesses: This theoretical result is significant and novel since it is about multi-agent actor-critic algorithms with deep policy network and deep Q-net, which have many practical successes but lack theoretical foundation in existing works. The analysis technique by relating to centralized deep Q-net evaluation (Algorithm 3) is novel. The experiments look comprehensive. The presentation is clear and I can understand well.
The question (1) below in "Questions For Authors" is a major issue which may invalidate Algorithm 2. Also, based on Lemma 5.7, convergence to stationary policy in existing MARL works seems not a limitation as that can imply $\mathcal{O}(\sqrt{\epsilon_{\rm bias}})$-global optimality. Therefore, the global convergence is not novel compared with existing MARL works, especially given that (Chen et al., 2022) actually provides similar global convergence. There also remain some points to be clarified as listed below.
Other Comments Or Suggestions: (1) The introduction said works including (Chen et al., 2022) only ensures the convergence to some stationary solution. However, (Chen et al., 2022) also obtains the global convergence rate of decentralized natural actor-critic algorithm. You could reword it.
(2) In Theorem 5.10, use $e^{-\Omega(\log^2 m)}$ or $\exp[-\Omega(\log^2 m)]$.
(3) Right after Eq. (8), "we can easily get $-\frac{\langle\nabla J(\theta_t), d_t\rangle}{d_t||} \leqslant-\frac{1}{3}||\nabla J(\theta_t)||+\frac{8}{3}||e_t||$". At the final step of Eq. (8), change $-\frac{8}{3}||e_t||$ to $+\frac{8}{3}||e_t||$. Actually this could be strengthened to
$$-\frac{\langle\nabla J(\theta_t), d_t\rangle}{||d_t||}=-\frac{\langle\nabla J(\theta_t)-d_t, d_t\rangle}{||d_t||}-||d_t||\le ||\nabla J(\theta_t)-d_t||-||d_t-e_t||+||e_t||=-||\nabla J(\theta_t)||+2||e_t||$$ without discussing two cases, but I feel fine if this improvement is not done in the rebuttal.
(4) Right before Eq. (22), "since both $\overline{W}_t$ and $\overline{V}_t$ belong to $\mathcal{B}(B)$".
Questions For Authors: (1) (**The major reason for my rejection:**) In Algorithm 2, should the radius $B\le\mathcal{O}(\epsilon)$? If not, the error term (6) of Theorem 5.10 is larger than $B=\Theta(m^{1/32}D^{-6})\ge\mathcal{O}(\epsilon)$. If yes, the weights $W^i(k)$ of all iterates $k$ belong to a very tiny neighborhood $\mathcal{B}(B)$ around the initial parameter $W(0)$, so the output $W^K$ as convex combination of $W^i(k)$ also belongs to this neighborhood. In both cases, it seems that we cannot achieve $\mathcal{O}(\epsilon)$ convergence error even if $\epsilon_{\rm critic}=\epsilon_{\rm actor}=0$. How to explain?
(2) The introduction lists two major technical barriers before the research quesiton, followed by 3 technical challenges after the question. Would the "two major technical barriers" better be "two major limitations of existing works"?
(3) Why are $H$ and $b$ fixed in DNN?
(4) Many other decentralized optimization lits assume that the second largest singular value of the gossip matrix $A$ lies in (0,1). I am curious how does it relate to your Assumption 5.1?
(5) In Assumption 5.3, is there lower bound for reward, e.g., 0 or $-r_{\max}$?
(6) Could you cite at least one paper that provides Lemma 5.4? If there is not such a paper, you could prove it. Also, can we ensure that $\kappa,\rho$ do not rely on the policy?
(7) What's the intuition behind Eq. (14) of Fact A.3? (e.g. from the gradient of the MSPBE objective in Eq. (7))? Does Assumption 5.9 hold for all stationary points $W^*$ satisfying Fact A.3, or there exists one $W^*$ satisfying the Q error bound of $\epsilon_{\rm critic}$?
(8) In Corollary 5.11, is the sample complexity $\Omega(N^2\epsilon^{-3})$?
(9) What is $W_t^i$ in Fact A.2? Does "for the outer loop indexed by $t$" mean "at the $t$-th outer loop of Algorithm 1"?
(10) Should $1^{\top}$ be $\frac{1^{\top}}{N}$ in Eq. (20)?
(11) Is there any theoretical result demonstrating that the total number of DNN parameters can be far less than the total number of state-action pairs?
(12) In the experimental figure results (except Figure 4), does reward mean $\sum_{t=0}^{T-1}\gamma^t r_t$ on the current episode of finite horizon $T$?
(13) In the RLHF experiment, did you use bandit (only 1 time point) or MDP? Is the objective simply the average of policy optimization objective (expected reward with KL penalty) between the two agents? You might provide some math formulations if applicable.
(14) The lits about MARL stops at 2022. Are you sure there are no more after 2022? For example, (Chen et al., 2022) is cited 35 times in Google Scholar. Are some of them related?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Due to the space limitation, we could only respond to a subset of more critical comments in this rebuttal. But we are happy to continue to complete our responses to your remaining comments in the discussion stage when new space opens up.
> **Comment 1:** The introduction said works including (Chen et al., 2022) only ensures…
**Response:** Thanks for pointing out a related work. After reviewing [1], we acknowledge that their dec-NAC approach indeed guarantees global convergence. However, [1] is based on linear function approximation, while we consider nonlinear function approximation. Thus, our work remains the first in the MARL literature that achieves global convergence under **nonlinear function approximation**. We will clarify this in **Sec. 1** (technical barrier) and **Sec. 2** (global convergence) in the revision.
> **Question 2:** Technical typos and Grammar mistakes.
**Response:** Thanks for catching: (1) In Theorem 5.10, we agree that it should be $\exp(-\Omega(\log^2m))$. (2) We will replace "*due to*" with "*since*". (3) Indeed, there is a typo, and the sample complexity should be $\Omega(N^2\epsilon^{-3})$. (4) the correct value is $\frac{1^\top}{N}$. We thank the reviewer’s careful reading and will fix these mistakes in the revision.
> **Question 3:** In Algorithm 2, should the radius $B<\mathcal{O}(\epsilon)$? If not… How to explain?
**Response:** Thanks for your question and we'd like to clarify as follows:
**1)** Rigorously, the reviewer's statement should be written as $B=\mathcal{O}(\epsilon)$, which we fully agree with. However, this does **not** necessarily imply $B\leq\epsilon$, since the Big-O notation hides constant factors.
**2)** We confirm that the updated parameter $W^i$ for each agent $i$ remains within the projection ball centered at $W(0)$ in each critic loop according to Alg.2. However, this does **not** affect the correctness of Theorem 5.10. In our analysis, we are only required to show the estimated value function $\hat{Q}(\cdot;W^i)$ converges to $Q(\cdot;W^*)$, rather than the convergence of $W^i$. This analytical approach is also used by [2].
> **Question 4:** Why are $H$ and $b$ fixed in DNN?
**Response:** To characterize the accuracy of the DNN, we follow the convention in the literature (e.g., [2,4]), which characterized the DNN capability under a fixed $H$ and $b$ setup. In our analysis, it is used for deriving **Eq. 23** in Appendix.A.
> **Question 5:** In Assumption 5.3, is there lower bound for reward?
**Response:** Thanks for catching this. We indeed want to assume $r_t^i \in [0, r_{\max}]$. We will fix this in the revision.
> **Question 6:** Could you cite at least one paper that provides Lemma 5.4…?
**Response:** Thanks for your question. Lemma 5.4 is a mild condition widely used in the literature (e.g., [1,3]). $\kappa,\rho$ are dependent on the policy since they characterize the mixing time of the MDP under the policy.
> **Question 7:** What's the intuition behind Eq. (14) of Fact A.3…?
**Response:** Thanks for your questions and we'd like to further clarify:
1. **Intuition behind Eq. (14):** The condition in Eq. (14) is analogous to the notion of stationary point in optimization, since $\delta\cdot\nabla_WQ$ serves as the gradient of $W$. Thus, Eq. (14) means that *"there is no descent direction at $W^*$"*.
2. **Understanding of Assumption 5.9:** This assumption states that for all policy $\theta$, there exists some stationary $W^*$ satisfying the definition in Fact A.3, such that the $\epsilon_{\text{critic}}$ condition holds.
> **Question 8:** What is $W_t^i$ in Fact A.2? Does "for the outer loop indexed by $t$" mean "at the $t$-th outer loop of Algorithm 1"?
**Response:** Thanks for your questions and we'd further clarify. Your understanding is correct: $W_t^i$ is $W^i$ after $K$ iterations in Alg.2 during its $t$-th leveraging in Alg.1. Since Alg.2 is a component of Alg.1, we refer to the $t$-th round of Alg.1 as "*the outer loop indexed by $t$*". Fact A.2 implies that the consensus process does not impact the average value. We will add these clarifications in the revision.
> **Question 9:** Is there any theoretical result demonstrating that the total number of DNN parameters can be far less than the total number of state-action pairs?
**Response:** The answer is yes, which has been shown in [2,3] for single-agent RL. In this work, we generalize this insight to MARL. Please also refer to **Response** to **Question 2 of Reviewer HEYR**.
[1] Chen et al. Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis, ICML 2022.
[2] Cai et al. Neural temporal-difference and Q-learning provably converge to global optima, NeurIPS 2019.
[3] Gaur et al. Closing the gap: Achieving global convergence (last iterate) of actor-critic under markovian sampling with neural network parametrization, ICML 2024.
[4] Gao et al. Convergence of adversarial training in over-parametrized neural networks, NeurIPS 2019.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with most of your responses except the following:
**Question 3 (major reason for rejection):** Since $\widehat{Q}(\cdot; W)$ is an $L$-Lipschitz continuous function of $W$ for a constant $L>0$, $||W^i-W(0)||\le \mathcal{O}(\epsilon)$ implies $||\widehat{Q}(\cdot; W^i)-\widehat{Q}(\cdot; W(0))||\le L\mathcal{O}(\epsilon)$. Hence, if you want $||\widehat{Q}(\cdot; W^i)-Q(\cdot; W^*)||\le L\mathcal{O}(\epsilon)$, we have $||\widehat{Q}(\cdot; W(0))-Q(\cdot; W^*)||\le L\mathcal{O}(\epsilon)+\mathcal{O}(\epsilon)$, which means the initial $W(0)$ should also be $\mathcal{O}(\epsilon)$-optimal, which seems an unrealistic assumption. How to explain?
**Question 6:** Would Lemma 5.4 better be an assumption?
---
Reply to Comment 1.1.1:
Comment: **Response to “major reason”:** Thanks for your question. But it seems the reviewer has some misunderstanding of the Big-O notation $\mathcal{O}(\cdot)$, which represents "growth rate of scaling" rather than some "static value."
Specifically, let us first recall the **formal definition** of $\mathcal{O}(\cdot)$, which is stated as follows: $f(n)=\mathcal{O}(g(n))$ means that $\exists C>0$ independent of $n$, $\forall n>n_0, |f(n)|\le C g(n)$ [AR1]. Therefore, when talking about $f(\epsilon)=\mathcal{O}(\epsilon)$, we mean that $f(\epsilon)$ is **a member of the family of functions** whose growth rates are upper bounded by $C\epsilon$ for some constant $C>0$, i.e., we are talking about "a function" rather than "a static value." Also, the symbol "$=$" here actually means "$\in$" rather than *"equal."* Also because of this "membership" meaning of the Big-O notation, it is **inaccurate** to write "$f(\epsilon)<\mathcal{O}(\epsilon)$". See the classic textbook [AR1] for further details about the Big-O notation. Also, in the Big-O notation, $C>0$ can be arbitrarily large, as long as it is **independent** of $\epsilon$. So, this implies that $f(\epsilon)=\mathcal{O}(\epsilon)$ **doesn't mean** the value of $f(\epsilon)$ is close to $\epsilon$.
Therefore, although the reviewer's conclusion $|| W(0)-W^* || = \mathcal{O}(\epsilon)$ is correct, it **doesn't mean** that $W(0)$ is $\epsilon$-distance away from $W^*$. $|| W(0)-W^* || = \mathcal{O}(\epsilon)$ just means that as $\epsilon \rightarrow 0$, $|| W(0)-W^* ||$ will also shrink to $0$ **not slower than a linear fashion** with some slop $C>0$. But please note that the slope $C$ could also be huge.
[AR1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to algorithms, 2022.
---
**Response to issues about Lemma 5.4:** Thanks for your follow-up question. We acknowledge that in the references we mentioned before, this condition is considered as an assumption, and agree that we can directly assume this condition. However, we'd also like to point out that the existence of stationary distribution is the corollary of irreducible and aperiodic MDP. Therefore, according to Assumption 5.2, we do have Lemma 5.4.
---
**The following Responses are for remaining questions and comments in the first round.**
> **Question 1:** Does reward mean of finite horizon $T$?
**Response:** The answer is yes. The reward in those figures are computed using $\sum_{t=0}^{T-1}\gamma^t r_t$ with $T=10$ and $\gamma=0.99$. Here, $T$ has been set with a finite value to reflect the setting of MPE environment.
> **Question 2:** Many other decentralized optimization lits assume that the second largest singular value of the gossip matrix $A$ lies in (0,1). I am curious how it relates to your Assumption 5.1?
**Response:** Thanks for your question. While the second-largest eigenvalue condition is commonly used (e.g., [3]), our $\eta$-assumption, derived from [4], is also widely used in the literature (e.g., [6,7]). These two assumptions are alternative conditions and both characterize a well-behaved consensus matrix $A$, ensuring the convergence rate of local values in reaching consensus.
> **Question 3:** The literature about MARL stops at 2022. Are you sure there are no more...
**Response:** Thanks for your question. We have checked the papers that cite (Chen et al., 2022). We summarize these works as follows:
**1)** Aside from those already discussed in our paper, we identified two relevant works [7,8], both of which employ linear approximation and neither established any global convergence result.
**2)** Refs. [1-6,9,10] primarily focused on federated RL, robust decentralized RL, and multi-agent Markov games, which are not directly related to our work.
We will cite these works in the revision.
[1] Neural temporal-difference and Q-learning provably converge to global optima.
[2] Convergence of adversarial training in over-parameterized neural networks.
[3] Randomized gossip algorithms.
[4] Distributed subgradient methods for multi-agent optimization.
[5] Improving sample complexity bounds for (natural) actor-critic algorithms.
[6] Finite-Time Convergence and Sample Complexity of Multi-Agent Actor-Critic Reinforcement Learning with Average Reward.
[7] Sample and communication efficient fully decentralized marl policy evaluation via a new approach: Local td update.
[8] Learning to coordinate in multi-agent systems: A coordinated actor-critic algorithm and finite-time guarantees.
[9] Closing the gap: Achieving global convergence (last iterate) of actor-critic under markovian sampling with neural network parametrization.
[10] Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis. | Summary: Summary:
Goal of the paper is to develop a decentralized MARL (dec-MARL) Actor-critic (AC) algorithm, with DNN critic that achieves global optimality
Technical challenges to developing dec-MARL:
- AC methods from single-agent RL are inadequate for MARL due to distributed nature
- Even if first challenge was alleviated, the compounding of error from non-linear estimators is big resulting in inaccurate method
- Gradients driven from descent lemma are not sufficient to achieve global convergence
Paper contributions:
- Development of an AC based dec-MARL algorithm that achieves a global optimality convergence rate of O(1/T).
- The algorithm uses the following components: DNN critic that is trained using TD learning, actor trained using policy gradient based method, a gossiping technique for decentralized agents to communicate.
- The paper also presents some empirical experiments to show performance of the proposed algorithm.
Claims And Evidence: The paper claims the following:
- The first AC-based decentralized algorithm for MARL with DNN (non-linear function approximation) based critic
- Unlike past literature, they achieve global convergence
Evidence:
- They provide proof of theorem for the rate of achieving global convergence
- They also provide ablation study and experiments to show the algorithm's performance
Methods And Evaluation Criteria: The paper introduced experiments on two Benchmarks, one simple MPE environment (Simple Spread) and an LLM-based task. Both are reasonable benchmarks and the first is used extensively in literature.
However, the evaluation criteria are missing a comparison of the method to existing MARL algorithms to show performance differences.
Theoretical Claims: The theoretical claims appear to be reasonable, although I did not verify the proofs. Additionally, as mentioned in my comment 3 below, the structure and presentation of the paper make it unclear which contributions are from previous work and which are from this work.
Experimental Designs Or Analyses: I didn’t check in detail the validity of experimental designs or theoretical analysis.
One comment on the analysis is, the authors argue that based on the results increasing the depth of the DNN improves performance, however, that is not the case in Figure. 7.
Supplementary Material: I did not check in detail due to my main concern above.
Relation To Broader Scientific Literature: I am aware that several actor-critic-based algorithms, such as MAPPO, MATD3, and COMA, have been previously introduced for MARL and are widely used in practice.
Many of the introduced algorithms for MARL in the past lack theoretical foundations which is the focus of this paper.
The paper also mentions improvements over past works in dec-MARL:
- past only dealt with linear function approximation for critics while their algorithm deals with a broader class of non-linear functions.
- Their convergence rate is still reasonable despite the non-linearity and global optimality guarantee
- They discuss some findings for practical use from the analysis such as: improvement from the use of TD over Q-learning and effect of depth on DNN on performance. Which provided different results than past literature
Essential References Not Discussed: See above comment on MAPPO, MATD3 etc.
It would be great to discuss the differences with these in related works.
Other Strengths And Weaknesses: ## Strengths:
- Theoretical analysis of the presented algorithm which is lacked in MARL literature
- The use and analysis of non-linearity in critic is relevant ro MARL applications due to large state and action spaces.
- The paper presented some interesting insights for practical use
## Weaknesses:
- Abstract unclear for which setting the global convergence is for
- Clarify the “non-competing” nature of the rewards, otherwise, the statements are wrong
- Paper flow in introducing contributed assumptions and lemmas is not clear (Ex. Lemma 7 is not clear if it is from the referenced paper or a contribution of this work)
### Main concern:
Unless I misunderstood, agents learn independently from others, but the algorithm uses all agents’ actions. Hence, the setting does not seem fully decentralized. In particular, in lines 6 and 7 from Alg. 2, the critic has access to global actions which are commonly used in algorithms that are usually centralized.
This affects the relevance of the considered problem, please see my question below.
Other Comments Or Suggestions: I didn’t find typos.
Questions For Authors: 1. How do your obtained results compare to the state of the art of 1-3 relevant settings, such as decentralized minimization, or decentralized min-max games?
2. What were the main challenges in extending the XXX (convergence and complexity) results from the above settings?
3. (related to the main concern) Could you give examples of applications that satisfy the considered setting where the critic has access to all agents’ actions, but agents (nodes) operate independently?
4. What are the benefits of using a decentralized algorithm over Centralized training decentralized execution algorithms, and more broadly what is the motivation behind proposing a new decentralized algorithm?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments! We address the reviewer's comments point by point as follows:
> **Comment 1:** The evaluation criteria are missing a comparison of the method to existing MARL algorithms to show performance differences. It would be great to discuss the differences with these in related works (such as MAPPO, MATD3, and COMA).
**Response:** The algorithms mentioned by the reviewer all follow a centralized training decentralized execution (CTDE). By contrast, our approach is based on consensus-based **decentralized** training. Since our work executes learning without any centralized task, it fundamentally differs from these related works and hence is not directly comparable.
> **Comment 2:** The structure and presentation of the paper make it unclear which contributions are from previous work and which are from this work (such as Lemma 5.7).
**Response:** We would like to clarify that Lemma 5.7 is directly cited from a prior work. We have explicitly stated, “*These two assumptions provide the following key lemma (Agarwal et al., 2021).*” To further improve clarity, we will underline all citations in the next revision.
> **Comment 3:** The authors argue that based on the results increasing the depth of the DNN improves performance, however, that is not the case in Figure 7.
**Response:** We confirm that our theorem indeed suggests that increasing the depth of the DNN benefits performance. Although it may not be easy to see this from Fig. 7, the overall trend aligns with our theoretical predictions: deeper networks $D$ generally perform better. To further explain this phenomenon, we note that certain theoretical conditions may not strictly hold in numerical experiments, which could account for the inconspicuous gap between our theoretical results and empirical observations.
> **Comment 4:** Abstract unclear for which setting the global convergence is for. Besides, clarify the “non-competing” nature of the rewards, otherwise, the statements are wrong.
**Response:** We confirm that our work considers the **cooperative** setting, where multiple agents collaborate to maximize the long-term global cumulative reward. We acknowledge that our convergence result is specifically derived for this non-competing scenario. We thank the reviewer for pointing out this and will clarify this in **Abstract** in our revision.
> **Question 5:** Agents learn independently, but the algorithm uses all agents’ actions. In particular, the critic has access to global actions. This affects the relevance of the considered problem. Could you give examples of applications that satisfy the considered setting where the critic has access to all agents’ actions, but agents operate independently?
**Response:** One example can be found in the carrier sensing multiple access (CSMA) protocol for wireless random access networks, where participating devices (agents) can listen to packet signals transmitted over the channel and be aware of the actions other devices have taken. In general, applications may include any system where the agents can observe one another (this is also needed for consensus) for information sharing, e.g., robotics, drones, vehicles, etc. In addition, we’d like to point out that joint action is widely assumed in the MARL literature [1-3].
[1] Zhang et al. Fully decentralized multi- agent reinforcement learning with networked agents, ICML 2018.
[2] Zeng et al. Learning to coordinate in multi-agent systems: A coordinated actor-critic algorithm and finite-time guarantees. In Learning for Dynamics and Control Conference, 2022.
[3] Wai et al. Multi-agent reinforcement learning via double averaging primal-dual optimization, NeurIPS 2018.
> **Question 6:** How do your obtained results compare to the state of the art of 1-3 relevant settings? Besides, What were the main challenges in extending the results from the above settings?
**Response:** We appreciate the reviewer’s suggestions. While these are indeed interesting new directions for future work, establishing a global optimality theory for these new decentralized optimization problems is highly non-trivial and deserves dedicated papers. To provide some high-level intuition, we believe that the new handling of nonlinear approximation and consensus techniques: **(1)** will be important to the analysis for reasons similar to those illustrated in our Fig. 1; **(2)** can potentially be combined with other techniques, such as momentum, to further enhance performance.
> **Question 7:** What are the benefits of using a decentralized algorithm over CTDE algorithms, and more broadly what is the motivation behind proposing a new decentralized algorithm?
**Response:** CTDE algorithms require that there must exist an entity for executing centralized tasks. Such a requirement may render the algorithms only applicable to a limited number of real-world scenarios. A fully decentralized algorithm on the other hand does not require a central entity. | null | null | null | null | null | null |
NegMerge: Sign-Consensual Weight Merging for Machine Unlearning | Accept (poster) | Summary: This paper presents NegMerge, which enhances the process of forgetting by negation for machine unlearning. NegMerge computes a final task vector by merging task vectors derived from multiple fine-tuned models, during which it preserves elements with consistent signs across the task vectors and masks those with inconsistent signs to zero. The results demonstrate that NegMerge achieves state-of-the-art performance.
Claims And Evidence: The paper claims that the effectiveness of current methods is highly sensitive to hyperparameter selection, which is presented in Figure 1.
Methods And Evaluation Criteria: Yes, it follows the previous literature.
Theoretical Claims: In Section E, Theoretical Analysis, the paper provides a partial theoretical analysis.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, it is located in the Appendix section of the paper.
Relation To Broader Scientific Literature: The primary contribution of this paper lies in the application of model merging techniques to the field of machine unlearning.
Essential References Not Discussed: The related work described in this paper primarily encompasses model merging and machine unlearning.
Other Strengths And Weaknesses: 1. One of the motivations of this paper is that "hyperparameter sets that preserve retain set performance tend to exhibit poor unlearning performance, and vice versa." The reasoning behind why this motivation leads to the use of model merging to address issues in machine unlearning is not clearly explained. In other words, it is not well-articulated why model merging techniques are particularly suitable for the direction of machine unlearning.
2. The issue that "unlearning performance is highly sensitive to the hyperparameter settings used for obtaining fine-tuned models" is presumably widespread in the field of machine unlearning. How does this paper mitigate this phenomenon, or which aspect of model merging technology alleviates this issue?
3. In Table 2, several metrics including Acc D_r, Acc D_f, and MIA are presented. It is not clearly stated whether higher values are better or lower values are preferable for these indicators.
4. In Table 3, the "Conflict" ablation version is inferior to the "All" version. Does this suggest that the method of selecting parameters using conflicting signs is not very effective?
Other Comments Or Suggestions: none.
Questions For Authors: Please see the "Other Strengths And Weaknesses" part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive questions. We have done our best to address the concerns raised, as detailed below.
### **Weakness 1: Why Model Merging for Machine Unlearning?**
This is because models fine-tuned with different hyperparameters tend to specialize in either unlearning or retaining (as shown in Figure 1). Therefore, while searching for effective hyperparameters is a prerequisite for approaching optimality, we argue that merging them provides a way to combine their strengths and resolve the trade-off that a single model cannot, which also holds in unlearning. As shown in Figure 1(a), no single model achieves high performance on both objectives.
We believe model merging is effective for unlearning because it provides access to multiple optimization trajectories, which allows us to identify parameters that are related to the forget set (e.g., via sign consensus). This information is not available from a single model. By selectively negating these parameters while preserving others, we can more precisely remove unwanted information without degrading performance on the retain set. As shown in Table 1, the merged model consistently outperforms all single models. We will clarify this point in the final paper.
### **Weakness 2: Mitigating Hyperparameter Sensitivity via Merging**
Due to the sensitivity issue, obtaining a model with strong unlearning performance inevitably requires a hyperparameter search process. However, rather than selecting a single best model from this search, we mitigate the sensitivity by merging all fine-tuned models obtained from diverse hyperparameter configurations. This strategy is inspired by the model soups framework (Wortsman et al., 2022), which shows that averaging the weights of independently fine-tuned models can improve performance and robustness. The key idea is that different hyperparameter choices introduce independent variations, and merging helps cancel out noise from any single run. As a result, the final model is less sensitive to specific hyperparameter choices, while still capturing the shared structure needed for effective unlearning. We will highlight this point in the final paper.
### **Weakness 3: Clarifying Metrics in Table 2 (Acc Dr, Acc Df, MIA, etc.)**
Each metric (Acc Dr, Acc Df, Acc Dtest, MIA) is favorable when its value is equal to that of the Retrain model (“≃”), indicating that the ideal Unlearn model needs to behave similarly to the Retrain model. This notation will be clarified in the caption of Table 2.
### **Weakness 4: The “Conflict” Ablation in Table 3**
The weaker performance of the "Conflict" method suggests that using conflicting-sign parameters is not effective. As shown in Section 4.3, it degrades unlearning performance. In contrast, the "Consensus" method, which uses only sign-consistent elements, performs best, indicating that consistent signs better capture changes tied to the forget set, while conflicting signs likely reflect noise from different training setups. | Summary: The paper proposes a novel framework for machine unlearning that takes advantage of multiple fine-tuned models (10 to 30) to localize the parameters that have the same sign across all task vectors. The idea is that these parameters are characteristic of the task, while others might be noise. The procedure results in a new task vector which is then subtracted from the zeroshot model. The authors include results on CLIP unlearning and on CIFAR with ResNets.
Claims And Evidence: The claims of the paper are clear and convincing.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria do make sense for the problem at hand. The evaluation is thorough and well documented on the selected benchmarks. The paper, however, does omit experiments on machine unlearning of LLMs, which is an important topic.
A minor issue is the lack of retain performance on table 4.
Theoretical Claims: There are no formal theoretical claims. The paper does include an appendix about a theoretical analysis but it is more about intuition rather than formal claims.
Experimental Designs Or Analyses: The experimental design is sound. The “Avg gap” metric is not clear from the text.
Supplementary Material: I reviewed all parts but at not great detail.
Relation To Broader Scientific Literature: The paper misses some related works that can better place the work in the broader scientific literature. See weaknesses.
Essential References Not Discussed: * Machine unlearning of LLMs is not discussed
* See Weaknesses for some more references.
Other Strengths And Weaknesses: ## Strengths
* The idea of leveraging multiple fine-tuned models in machine unlearning is original. The core of the idea has been seen before, e.g., model soups etc, but afaik this is the first work to apply these concepts to machine unlearning.
* The paper is easy to follow and well written
* The experimental validation is thorough, ablations on the number of models are done (see also below for suggestion)
## Weaknesses
1. Lack of LLM experiments. Machine unlearning for language is an important aspect of the literature (if not the most important) and is not discussed or used in experiments
2. The paper could be further improved if more intuition is provided. Specifically, L146-158 (step 2\) provides some intuition for why the proposed method works but it is very limited. Imo, the method is interesting because it reveals that certain directions are characteristic of the task and hence appear in all models regardless of configurations (as mentioned in the paper), while everything else is noise. Hence, connecting to literature such as \[1,2\] would be very interesting. Similarly, the proposed method performs some kind of task localization (especially in the task addition section) and, therefore, works on this area should also be cited \[3,4,5\]. I believe more intuition and connecting to these areas would improve the paper.
\[1\] Morcos, Ari, Maithra Raghu, and Samy Bengio. "Insights on representational similarity in neural networks with canonical correlation." Advances in neural information processing systems 31 (2018).
\[2\] Kornblith, Simon, et al. "Similarity of neural network representations revisited." International conference on machine learning. PMLR, 2019\.
\[3\] Wang, Ke, et al. "Localizing task information for improved model merging and compression." arXiv preprint arXiv:2405.07813 (2024).
\[4\] He, Yifei, et al. "Localize-and-stitch: Efficient model merging via sparse task arithmetic." arXiv preprint arXiv:2408.13656 (2024).
\[5\] Panigrahi, Abhishek, et al. "Task-specific skill localization in fine-tuned language models." International Conference on Machine Learning. PMLR, 2023\.
Other Comments Or Suggestions: 1. See weaknesses
2. It would be useful to include a graph of consensus as a number of available checkpoints. This can be done per layer, since previous works have shown that different types of layers (e.g. attention vs mlp layers) have different levels of importance. For x models, there are (30 choose x) combinations, hence subsampling could work.
3. NTK reference is Jacot et al. The cited one Ortiz-Jimenez et al applies NTK ideas in task arithmetic. Please fix.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our method’s originality. Below, we address concerns on LLM experiments and related work.
### **Weakness 1: Lack of LLM experiments**
We agree that unlearning in LLMs is an important direction. We believe our method can generalize to transformer-based LLMs given its demonstrated effectiveness on transformer architectures, although the current study focuses on image classification tasks. We appreciate the suggestion and will include LLM experiments and discussion on our method’s applicability to LLMs in the final version of the paper.
### **Weakness 2: Additional references and deeper intuition (CCA, task localization)**
#### **1. Connection to Canonical Correlation Analysis (CCA)**
As the reviewer suggested, CCA-based methods [1,2] have been used to analyze similarity between layers of neural networks. Indeed, it would be interesting to adapt such a viewpoint to identify which layers are most effective for unlearning. Our work, however, starts from the assumption that certain parameters—rather than entire layers—are more central to the given task. We thus focus on parameter-level unlearning, where we isolate specific parameters that are consistently implicated in the task across multiple models. We agree that layer-level unlearning via CCA is a promising direction, and will add discussion.
#### **2. Relation to Task Localization / Merging Methods**
We appreciate the additional pointers to task localization methods [3,4,5]. In our Related Work section, we cite some approaches (e.g., TIES-Merging, AdaMerging, MagMax) that also address how to combine or localize tasks. However, these methods focus largely on task addition, attempting to merge models trained on different tasks to maximize the differences that define each new task. For instance, the Tall Mask method [3] seeks “taller” vectors for merging.
In contrast, our goal is to identify parameters shared across models trained on similar tasks. Rather than negating the most diverse features, we look for the parameters that share a consistent effect (sign) across models. These shared parameters most effectively capture the task vectors that are attributed to the forget set and are therefore the prime candidates for unlearning. We will further clarify this difference in the revised paper.
### **Suggestions 1: Lack of retain set (Dr) performance in Table 4**
We have added the full Table 4 results (please see Reviewer aohJ, Weakness 4) and will clarify further in the final version.
### **Suggestions 2: “Avg gap” metric not clearly defined**
Following SalUn’s evaluation protocol, we report the Avg. Gap, which is the average difference between the Unlearn and Retrain models across Dr, Df, Dtest, and MIA. We will add further clarification in the final version.
### **Suggestions 3: Sign consensus analysis**
We conducted several analyses using ViT-B/32 and ViT-L/14 on the Cars dataset with 30 models. First, we examined the block-wise consensus ratio. As shown in the tables below, the consensus ratio increases in deeper blocks, in both architectures. This suggests that more parameters are being negated as the blocks go deeper. Here, we grouped blocks into three ranges and reported the average consensus ratio per range. Each block corresponds to one transformer block.
**ViT-B/32:**
| Block Range | Consensus Ratio |
|-------------|------------------|
| 0–3 | 76.55 |
| 4–7 | 81.64 |
| 8–11 | 87.08 |
**ViT-L/14:**
| Block Range | Consensus Ratio |
|-------------|------------------|
| 0–7 | 66.52 |
| 8–15 | 73.62 |
| 16–23 | 76.68 |
Additionally, we compared the Stem layer (i.e., the patch embedding), MLP layers (c_fc and c_proj weights/biases), and Attention layers (query, key, value, and output projection weights/biases). The standard deviation refers to the variability across different layers within each category (MLP or attention). Surprisingly, the Attention layers consistently showed 100% sign consensus in both architectures. We hypothesize that this is because the query-key-value mechanism plays a fundamental role in capturing contextual relationships, which remain stable across different fine-tuning configurations.
**ViT-B/32:**
| Layer | Consensus (%) | Std Dev |
|-----------|----------------|---------|
| Stem | 1.75 | 0.00 |
| Attention | 100.00 | 0.00 |
| MLP | 44.87 | 0.31 |
**ViT-L/14:**
| Layer | Consensus (%) | Std Dev |
|-----------|----------------|---------|
| Stem | 0.36 | 0.00 |
| Attention | 100.00 | 0.00 |
| MLP | 22.41 | 0.23 |
We will include these findings, which we believe enhance the paper’s depth and quality. We sincerely thank the reviewer for the helpful suggestion that led to this.
### **Suggestions 4: NTK reference**
We will ensure that our next revision will correct it.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. The sign consensus analysis per layer is really interesting and can be featured in the paper. Since your analysis reveals that layers become progressively more important wrt depth, works that have also focused on this aspect (in general such as [6, 7] or model merging specific) should be included as references.
Regarding your comment on **Weakness 2**:
* My point on the CCA works was not for the authors to create a new baseline where the entire layers are removed/modified towards unlearning, but merely to connect to these works for more intuition. One of my major concerns with this paper is lack of intuition and, therefore, citing relative literature can help.
* I disagree with your comment on task localization. First, TIES does not "localize" any tasks; each parameter is treated individually and its final value only depends on the values for the same parameter coming from the task vectors. In other words, for T task vectors, there are T values per parameter and the final value depends on these T scalars. Second, Ada-Merging also does not localize any tasks; rather it optimizes the scalings per task and per layer towards *a single objective* which is minimizing the average across tasks loss. Finally, and more importantly, the sentence *"the Tall Mask method [3] seeks “taller” vectors for merging."* does not make any sense - what are "taller" vectors?
Despite the difference in settings (addition vs negation, multiple fine-tuned tasks on same rather than different models), [3,4,5] are very relevant to your work since they identify a subset of parameters important to each task (e.g. param 1 from layer 2 *and* param 3 from layer 5 are important for task 1 etc).
[6] Yosinski, Jason, Jeff Clune, Yoshua Bengio, and Hod Lipson. "How transferable are features in deep neural networks?." Advances in neural information processing systems 27 (2014).
[7] Neyshabur, Behnam, Hanie Sedghi, and Chiyuan Zhang. "What is being transferred in transfer learning?." Advances in neural information processing systems 33 (2020): 512-523.
---
Reply to Comment 1.1.1:
Comment: ### **1. Analysis per Layer**
Thank you for the insightful suggestion. We agree that understanding layer-wise behavior is crucial, and appreciate the pointers to [6,7].
Interestingly, our sign-consensus analysis reveals that deeper layers undergo more negation, even though our method wasn’t explicitly designed to be layer-aware. This aligns with prior observations: [6] and [7] show that lower layers capture general features while deeper ones specialize, and are more sensitive to perturbations.
We also found a related result in [8]—they observe that shallow layers are more affected by unlearning, as they encode general knowledge. While they don't perform layer-wise unlearning, they suggest it as a future direction.
We’ll incorporate this discussion into the paper—thank you for helping us make this connection clearer.
[8] Wang, Qizhou, et al. "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond." The Thirteenth International Conference on Learning Representations.
### **2. CCA**
Thank you for the clarification. We agree that citing CCA-based works [1,2] can help clarify the intuition behind our approach and will update the paper accordingly.
### **3. Task Localization**
Thank you for your valuable feedback. As you rightly pointed out, TIES and Ada-Merging are not parameter localization methods, and we acknowledge that our previous wording may have been confusing. We will revise the corresponding parts to clarify this point.
Our method and [3,4,5] are both parameter localization methods, with different strategies. For example, [3] masks out parameters with small magnitudes, [4] (Dataless Localization) retains the top-k% parameters based on magnitude, and [5] selects the top-s parameters showing the largest changes during fine-tuning. In contrast, we identify parameters whose directionality is consistent across models trained on the same dataset, thereby localizing those most associated with the forget set.
We agree that [3,4,5] are highly relevant, and we will cite them accordingly.
Regarding the phrase “taller vectors,” we intended to refer to components with larger absolute values that are retained through masking. We agree that the term was unclear and will revise or remove it to avoid confusion. | Summary: This paper proposes a new approach to machine unlearning focusing on instance-based unlearning, where instead of forgetting samples corresponding to a specific class, it can forget samples throughout all the classes while preserving model performance on the rest of the samples, i.e., retain-set.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes. The proposed approach in Section~3.2.
Supplementary Material: Supplementary material is not provided. However, the appendix is provided, which I have checked briefly.
Relation To Broader Scientific Literature: The proposed approach could provide an alternative to the important parameter-based unlearning approach.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: **Strenghts**
1. The paper is well written, and the problem setup is clear.
2. The proposed approach could provide an alternative to the important parameter-based unlearning approaches, which suffer from performance degradation on the retain-set samples while enabling unlearning on the forget-set samples.
**Weaknesses**
1. One of the key weaknesses of this paper is limited evaluation. The authors do not provide any empirical evaluation on the class unlearning setup. Furthermore, the authors only provide instance unlearning on CIFAR10 with Resnet. I believe it is crucial to include comparisons involving various datasets, CIFAR100, ImageNet-1K, and TinyImageNet is crucial. Moreover, the comparison involving different vit architecture involving Swin transformer, ViT-B/16, ViT-L/16 using aforementioned datasets is also necessary.
2. In Table~2, if we check the column Acc $(D_{f})$, it is almost similar to Acc $(D_{r})$, which raises a crucial question: is this approach really enabling instance unlearning?
Other Comments Or Suggestions: N/A.
Questions For Authors: Please refer to the weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive comments on the clarity of our paper and for recognizing the potential of our method as an alternative to parameter-based unlearning approaches. The reviewer has expressed concerns regarding the generalizability of our work and questions regarding instance-wise unlearning. We hope the following feedback addresses these concerns.
### **Weakness 1: Limited evaluation: class unlearning, more datasets, and different architectures**
We provide additional results for class-wise unlearning and 50% random data forgetting on CIFAR-10 using ResNet18, using only the forget set. As shown below, our method consistently surpasses baseline methods. Please refer to our response to Reviewer aohJ (Weakness 3) for results on the 50% random forgetting setting.
**Class-wise Unlearning Results:**
| Method | Dr | Df | Dtest | MIA | Avg.Gap ↓ |
|------------------------|-------|-------|-------|-------|-----------|
| Retrain | 100.0 | 0.0 | 92.5 | 100.0 | 0.0 |
| Random Labeling | 83.0 | 10.1 | 70.9 | 99.5 | 12.3 |
| SalUn | 86.5 | 10.8 | 74.1 | 100.0 | 10.7 |
| Task Arithmetic (Best) | 95.3 | 0.1 | 80.6 | 100.0 | 4.2 |
| Ours | 96.8 | 0.8 | 81.8 | 99.8 | 3.7 |
Additionally, we present new results on 10% random data forgetting conducted on TinyImageNet and Swin-T. With this, we now evaluate instance unlearning across three datasets—CIFAR-10, TinyImageNet, and CUB—using three architectures: ResNet-18, VGG-16, and Swin-T. Our approach consistently outperforms baseline methods across various datasets and architectures, including TinyImageNet and Swin-T. Please find our new results below.
On the **TinyImageNet** dataset:
| Method | Dr | Df | Dtest | MIA | Avg.Gap ↓ |
|------------------------|-------|-------|-------|------|-----------|
| Retrain | 100.0 | 63.6 | 63.7 | 63.8 | 0.0 |
| Random Labeling | 76.4 | 76.1 | 58.1 | 32.3 | 18.3 |
| SalUn | 73.6 | 73.8 | 56.7 | 30.6 | 19.2 |
| Task Arithmetic (Best) | 76.9 | 73.6 | 59.4 | 29.3 | 18.0 |
| Ours | 76.0 | 71.9 | 58.6 | 31.3 | 17.5 |
Using **Swin-T**:
| Method | Dr | Df | Dtest | MIA | Avg. Gap ↓ |
|------------------------|------|------|-------|-----|-------------|
| Retrain | 100.0| 97.8 | 97.7 | 4.7 | 0.0 |
| Random Labeling | 100.0| 100.0| 97.7 | 0.6 | 1.6 |
| SalUn | 99.0 | 99.0 | 96.4 | 3.4 | 1.2 |
| Task Arithmetic (Best) | 98.5 | 97.8 | 96.0 | 4.0 | 1.0 |
| Ours | 98.8 | 97.8 | 95.9 | 4.6 | 0.8 |
### **Weakness 2: Is instance unlearning truly happening?**
In the standard classifier unlearning scenario, the Retrain model is trained from scratch using only the retain set and excluding the forget set, and has been regarded as the (ground-truth-like) upper bound of performance, as none of the forget set is used during training. The goal is for the Unlearn model to closely match the Retrain model's performance on each of Dr, Df, Dtest, and MIA individually, which has been the standard evaluation protocol in this scenario. In instance-level unlearning, classification accuracy on the forget set often remains similar, as observed in the corresponding table. This is presumably due to the generalization effect: even though the Retrain model is not trained on the forget set, it can still perform well on it by leveraging class-level information learned from the retain set.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. However, it only partially addresses my concerns.
**Class-wise unlearning** - It can be seen that the proposed approach achieves really good forget-set accuracy. However, the MIA score is really, so one can simply argue that the proposed model is not actually forgetting or unlearning properly. In case of instance unlearning I may have considered that it might be the effect of generalization. However, with class unlearning that same justification will not work and MIA should also come down along with the forget set accuracy. Also for retrain model, MIA should be less, and not 100 [1].
[1] Foster, Jack, Stefan Schoepf, and Alexandra Brintrup. "Fast machine unlearning without retraining through selective synaptic dampening." In Proceedings of the AAAI conference on artificial intelligence, vol. 38, no. 11, pp. 12043-12051. 2024.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. Our MIA evaluation follows Salun [2], which adopted the MIA-Efficacy metric from the paper - Model Sparsity Can Simplify Machine Unlearning [3]. Notably, this metric is not an attack success rate but rather similar to an attack failure rate. Higher MIA-Efficacy implies better unlearning, as it measures how much less information the model retains about the forget data. For details, please refer to Appendix C.3 of [3]. Notably, both [2, Table A2] and [3, Figure 5] report MIA-Efficiency scores close to 100 in the class-wise unlearning setting.
We also acknowledge the reviewer’s point and have re-evaluated MIA using the logistic regression-based MIA, similarly following the suggested paper Fast Machine Unlearning [1] (please see the rightmost column in the table below). The updated MIA values are significantly lower and better align with the intuition that unlearning has been successful. We will include the updated results and relevant details in the revised paper.
**Class-wise Unlearning Results:**
| Method | Dr | Df | Dtest | MIA [2] | MIA [1]
|------------------------|-------|-------|-------|-------|-------|
| Retrain | 100.0 | 0.0 | 92.5 | 100.0 | 0.00 |
| Random Labeling | 83.0 | 10.1 | 70.9 | 99.5 |0.04
| SalUn | 86.5 | 10.8 | 74.1 | 100.0 |0.00
| Task Arithmetic (Best) | 95.3 | 0.1 | 80.6 | 100.0 |0.02
| Ours | 96.8 | 0.8 | 81.8 | 99.8 |0.16
[1] Foster, Jack, Stefan Schoepf, and Alexandra Brintrup. "Fast machine unlearning without retraining through selective synaptic dampening." In Proceedings of the AAAI conference on artificial intelligence, vol. 38, no. 11, pp. 12043-12051. 2024.
[2] Fan, Chongyu, et al. "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation." The Twelfth International Conference on Learning Representations.
[3] Jia, Jinghan, et al. "Model sparsity can simplify machine unlearning." Advances in Neural Information Processing Systems 36 (2023): 51584-51605. | Summary: This paper treats the unlearning problem as a task arithmetic, where they conduct task vector by finetuning on the forget set, then subtract it from the original weight. To avoid sensitivity to hyperparameter selection, the authors create a finetuned model pool by various hyperparameter settings and aggregate them into a final task vector by considering their signs. They claim that the proposed method requires similar or fewer computational complexity than existing methods, and achieves superior performance compared to state-of-the-art methods.
Claims And Evidence: The evidence is not enough to convince the effectiveness of the proposed method. When compared with existing unlearning methods, the authors only conducted an experiment on 10% random data forgetting on CIFAR-10. However, SALUN [1] has demonstrated its effectiveness on <10,20,30,40,50>% Random Data Forgetting on CIFAR-10, CIFAR-100, and Tiny-Imagenet, whereas Boundary Unlearning [2] has demonstrated its effectiveness on class-wise unlearning setting.
[1] Fan, Chongyu et al. “SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation.” ArXiv abs/2310.12508 (2023): n. pag.
[2] Chen, Min et al. “Boundary Unlearning: Rapid Forgetting of Deep Networks via Shifting the Decision Boundary.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 7766-7775.
Methods And Evaluation Criteria: The proposed methods are reasonable for unlearning. Also, evaluation criteria are the most popular benchmark and metrics, which are used in current state-of-the-art models.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The baselines are verified to be effective on class-wise unlearning and various scenarios of instance-wise unlearning, however, the authors did not conduct sufficient experiment settings, making the comparison less convincing.
Supplementary Material: I read through all of the supplementary material.
Relation To Broader Scientific Literature: Sign consistency is an interesting approach to understand the contribution of each parameter in a model. It is feasible to apply to model pruning, model compression, model editing, etc.
Essential References Not Discussed: No essential reference is not discussed.
Other Strengths And Weaknesses: Strengths:
- Unlearning is an important problem in practice.
- The proposed methods are intuitive and well-presented.
- The paper is well-written and easy to follow.
Weaknesses:
- No theoretical claims were provided to understand the effectiveness of the proposed method rigorously.
- The inability to conduct experiments on ViT-L/14 raises concerns about the feasibility of applying these to large models and real-world applications.
- The experiments are weak to make the claims. When compared to existing unlearning methods, the authors only conduct experiments on the CIFAR10 dataset with the configuration of 10% Random Forgetting. It should be done on 50% random forgetting and class-wise unlearning, where current methods have demonstrated their effectiveness in those settings.
- In the study of sparsity, only performance on the forget set has been reported. The performance on retain set is also important, so it should not be ignored from the experiment analysis.
Other Comments Or Suggestions: There are a few typos can be improved.
Questions For Authors: - Is the proposed method efficient when the pool size is large?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive feedback on the clarity and intuitiveness of our method. We hope to address the concerns regarding generalizability and scalability in our responses below.
### **Weakness 1: No theoretical claims were provided**
We provide a theoretical claim for our method. We would like to note that an informal version was presented in Appendix E.
**Theorem.** Consensus-based merging of multiple task vectors yields a robust and effective unlearning direction by inducing sparsity and guiding the model toward a low-loss region.
**Lemma 1.** As the number of task vectors $\tau_k$ increases, the merged vector $\tau_{merged}$ becomes sparser.
**Proof.** Consensus merging keeps each element only if all task vectors agree in sign, acting like an AND operation. With more vectors, agreement decreases, increasing sparsity.
**Lemma 2.** If $\theta\_{ft}$ is centered around $\theta\_{ori}$, then $\theta^*\_{unlearn} = \theta\_{ori} - \tau\_{merged}$ lies closer to $\theta\_{ori}$.
**Proof.** Given $\tau\_{merged} = \bar{\theta}_{ft} - \theta\_{ori}$,
we have $\theta^*\_{unlearn} = \theta\_{ori} - (\bar{\theta\}_{ft} - \theta\_{ori})$.
If $\bar{\theta}_{ft} \approx \theta\_{ori}$, then: $\theta^*\_{unlearn} \approx \theta\_{ori}$.
Lemmas 1 and 2 imply that consensus merging improves robustness (via sparsity) and effectiveness (by staying near $\theta_{ori}$ under linear mode connectivity), yielding a reliable unlearning direction.
While our theoretical analysis may offer a reasonably plausible justification, its effectiveness was most clearly demonstrated through strong empirical results across diverse datasets and architectures. We consider a more rigorous theoretical analysis as future work.
### **Weakness 2: Inability to conduct experiments on ViT-L/14**
ViT-L/14 results for Task Arithmetic were actually included in Table 1. For Linear Task Arithmetic, we ran a scaled-down experiment on ViT-L/14 using six models on the Cars dataset to complete it within the rebuttal period. As shown in the table below, ours outperforms the baselines, achieving the lowest forget accuracy (Df ↓) while maintaining comparable retain accuracy (Dr).
| Method | Df ↓ | Dr |
|--------------------|-------|-------|
| Single Best Model | 28.68 | 71.45 |
| Uniform Merge | 34.68 | 71.96 |
| Ours | 24.47 | 71.66 |
### **Weakness 3: Evidence is not enough (50% random forgetting and class-wise unlearning)**
We extend Table 2 with additional experimental results under 50% random data forgetting and class-wise unlearning, using only the forget set. These experiments were conducted on the CIFAR-10 dataset with the ResNet18 architecture. As shown below, our method outperforms baselines in unlearning performance. For class-wise unlearning and 10% forgetting on TinyImageNet and Swin-T, please see our response to Reviewer zcBE (Weakness 1).
**50% Random Data Forgetting Results:**
| Method | Dr | Df | Dtest | MIA | Avg.Gap ↓ |
|------------------------|-------|-------|-------|------|-----------|
| Retrain | 100.0 | 92.1 | 91.7 | 19.3 | 0.0 |
| Random Labeling | 99.8 | 99.9 | 94.7 | 2.2 | 7.0 |
| SalUn | 99.6 | 99.6 | 94.2 | 4.4 | 6.3 |
| Task Arithmetic (Best) | 98.4 | 97.9 | 92.6 | 5.6 | 5.5 |
| Ours | 96.8 | 96.5 | 91.5 | 6.3 | 5.2 |
### **Weakness 4: Lack of retain set (Dr) performance in Table 4**
In Section 4.2, we mentioned that the retain set (Dr) accuracies for all methods remain around 60%, following the implementation details of Ilharco et al. (2022) for the CLIP unlearning scenario. Below is the extended table (Table 4) including both Df and Dr results. We will provide further clarification in the final version.
| # | Cars (Df↓ / Dr↑) | DTD (Df↓ / Dr↑) | SUN397 (Df↓ / Dr↑) |
|-----|------------------|------------------|---------------------|
| 5 | 30.5 / 60.4 | 28.8 / 60.5 | 49.7 / 60.5 |
| 10 | 26.6 / 59.9 | 27.8 / 60.4 | 48.7 / 60.6 |
| 15 | 26.0 / 60.0 | 27.7 / 60.4 | 47.7 / 60.5 |
| 20 | 26.1 / 60.1 | 27.1 / 60.4 | 47.8 / 60.6 |
| 25 | 26.6 / 60.2 | 27.3 / 60.5 | 47.0 / 60.4 |
| 30 | 27.4 / 60.4 | 27.2 / 60.5 | 47.2 / 60.6 |
### **Suggestions 1: Minor typos**
We appreciate the reviewer’s notice. We will carefully proofread again before the final submission.
### **Questions 1: Is the proposed method efficient when the pool size is large?**
Although our method scales with the number of fine-tuned models, it does not incur additional overhead, since all methods—including single-task arithmetic—share the same underlying model pool. Comparable cost increases also arise in single-task arithmetic due to its broader hyperparameter search space. This is discussed in detail in Section 3.3, Analyses on Computational Cost. | null | null | null | null | null | null |
GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance | Accept (poster) | Summary: This paper introduces GuidedQuant, a novel quantization framework that integrates gradient information from the end-to-end loss into the quantization objective while explicitly modeling inter-weight dependencies. The authors first identify a critical limitation of existing quantization methods: they either treat all hidden features equally or, when incorporating end-to-end loss, fail to capture the cross-interaction between weights.
To address these challenges, the paper proposes guided quantization, which leverages gradient information from the end-to-end loss to reweight layer-wise quantization errors adaptively. Furthermore, the authors introduce Layerwise Non-Uniform Quantization (LNQ), a novel non-uniform scalar quantization method that employs a closed-form solution for optimizing the codebook and a coordinate descent algorithm for optimizing the assignment matrix.
Claims And Evidence: - The authors theoretically derive an objective function that accounts for the importance of each layer in Section 3.1.
- They provide a proof in Appendix A to support Proposition 3.1, which states that minimizing their proposed objective function is equivalent to minimizing a second-order error estimate incorporating the Fisher information matrix, including inter-weight interaction terms.
- Their claims are also supported by evaluation results presented in Section 5 and Appendices C and D.
Methods And Evaluation Criteria: The proposed methods are well-justified for the following reasons.
- They address the limitations of existing quantization methods, which fail to account for the varying importance of layer-wise outputs in relation to end-to-end loss.
- They also mitigate another drawback of end-to-end loss-based approaches, which tend to overlook inter-weight dependencies.
- The guided quantization framework is adaptable and can be integrated with various existing quantization methods.
However, the proposed methods lack novelty.
- The objective function is derived from changes in the end-to-end loss, which is similar to previous work.
- The proposed approach to addressing increased computational and memory costs is relatively straightforward. More ablation studies are needed to either substantiate the claim that a simple method is sufficient or to explore ways to mitigate accuracy degradation in 2-bit quantization.
Additionally, evaluation criteria is somewhat limited.
- The results are presented only on LLaMA 2 models using perplexity on the Wikitext-2 and C4 datasets. To better support the effectiveness of the proposed methods, evaluations on additional benchmark datasets (e.g., MMLU, zero-shot evaluations) should be included.
- Since uniform scalar quantization is widely used, further comparisons with this approach would provide a more comprehensive assessment of the proposed method’s effectiveness.
Theoretical Claims: In Section 4.2, the closed-form solution for optimizing the codebook is theoretically well-motivated; however, it lacks a formal proof or derivation.
Experimental Designs Or Analyses: - The experimental designs are well established to support the paper's main claim and address potential concerns about the proposed methods' weaknesses.
- However, authors mentioned they used perplexity values reported in previous papers for several baselines. However, perplexity can vary depending on evaluation setups and is rarely reproducible across different setups. If the authors aim to claim state-of-the-art performance, they should evaluate baseline methods in the same environment to ensure a fair comparison.
- Moreover, while the proposed methods achieve the best results, the accuracy improvement appears marginal, particularly in 3-bit and 4-bit quantization. Broader evaluations across diverse architectures and tasks would strengthen empirical validation.
Supplementary Material: I reviewed all sections of the supplementary material.
Relation To Broader Scientific Literature: The key contributions of this paper relate to the literature on LLM post-training quantization (PTQ), which aims to minimize layer-wise reconstruction error. While adopting the concept of layer-wise error minimization, this paper introduces a new objective function that accounts for both layer-wise importance and inter-weight interactions.
Essential References Not Discussed: None
Other Strengths And Weaknesses: I have discussed the strengths and weaknesses of this paper in the preceding sections.
Other Comments Or Suggestions: - Can you clarify how GPU cost in Table7 and Table8 is measured?
- In Section 5.1, you mentioned applying GuidedQuant to non-uniform scalar quantization and vector quantization. However, Table 3 includes GPTQ, which uses uniform scalar quantization. If you applied a non-uniform scalar quantization method for the GPTQ baseline, could you provide a more detailed explanation? If not, further clarification on this would be helpful.
Questions For Authors: Comments and suggestions section covers up my questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback. We address below your questions and concerns.
---
**Q1. The objective is derived from changes in the end-to-end loss, which is similar to previous work & Concern on the simplicity of the approximation method.**
We believe the simplicity of our approach should not be mistaken for a lack of novelty. On the contrary, we view the simplicity of our approach as a strength rather than a limitation, as it achieves strong and robust performance gains through a simple and efficient method (see Figure 2 and Tables 3-5 of the paper).
That said, we agree that exploring alternative grouping strategies, such as incorporating sophisticated clustering algorithms for more accurate Hessian approximation, is a promising direction for future research.
---
**Q2. Evaluations on zero-shot and few-shot downstream tasks**
Please refer to **our response to Q1 from Reviewer WqtU**, where we evaluate our method alongside baselines on eight zero-shot downstream tasks and the MMLU benchmark under a 5-shot setting. The results show that LNQ combined with GuidedQuant consistently matches or surpasses baseline performance, with especially notable improvements under extreme quantization scenarios.
---
**Q3. Comparison with uniform scalar quantization methods**
We note that comparisons with uniform scalar quantization methods are included in our experiments. Specifically, Table 3 includes the baselines of weight-only uniform scalar quantization methods such as GPTQ and QuIP, and Table 5 includes comparisons for weight-and-activation uniform scalar quantization methods like QuaRot and SpinQuant. Our method consistently outperforms these baseline approaches in both weight-only and weight-and-activation settings, and shows effectiveness even when used with a uniform quantization scheme as well (SpinQuant + GuidedQuant (Ours) in Table 5).
---
**Q4. Evaluations on different models**
Please refer to **our response to Q3 from Reviewer NTd9**, where we conduct more experiments on Llama-3-8B and Llama-3-70B under a weight-only scalar quantization setting. The results show that our method consistently outperforms the baseline methods in these settings as well.
---
**Q5. Derivation for closed-form solution in Section 4.2**
The closed-form solution $\mathbf{c}^\ast$ in Section 4.2 (Eq. 10) is the standard least squares solution
$\mathbf{c = A^\dagger y}$ for the problem $\min_{\mathbf{c}} \\| \mathbf{y - A c} \\|^2$, where $\mathbf{y = X w}$ and $ \mathbf{A =X P}$. We assume that $\bf{A^\top A}$ is invertible, hence $\mathbf{A^\dagger = (A^\top A)^{-1} A^\top}$. This is a common assumption in quantization.
In practice, $\bf{A^\top A}$ is not always invertible. To address this, we add $\bf \lambda I$ to $\bf{A^\top A}$, with a small $\lambda = 10^{-7}$, as commonly done in prior work.
---
**Q6. Clarification on the perplexity evaluation setups and ensuring a fair comparison**
To ensure a fair comparison, we use the perplexity values reported in previous papers only when the evaluation setup strictly matches ours, i.e., same calibration and evaluation dataset, context length, and a calibration data size that is equal to or larger than ours (giving the baselines an advantage). When any of these criteria were not met, we reproduced the baseline methods under the exact same setup as our experiments. We will clarify this in the revision.
---
**Q7. Clarification on measuring GPU cost in Table 7 and 8**
In Tables 7 and 8, we report the end-to-end runtime for each quantization method, along with the number and type of GPUs used (indicated in parentheses). For example, "1×R6A" refers to the runtime measured using a single RTX 6000 Ada GPU, while "4×R6A" indicates that the method was run using four RTX 6000 Ada GPUs.
---
**Q8. Marginal improvements in quantization results with higher bits**
The performance gains from our method are especially prominent in more challenging quantization scenarios. In particular, our approach demonstrates clearer advantages in lower-bit settings or when applied to models that are inherently harder to quantize, such as Llama-3 models. In these settings, our method yields more substantial improvements. We will include these discussions and corresponding results in the revision.
---
**Q9. Non-uniform scalar quantization format for the GPTQ method**
Non-uniform scalar quantization maps weights to non-uniform grids. GPTQ is designed to minimize the layer-wise output reconstruction error for a fixed quantization grid, thus it can be extended to non-uniform grids by also optimizing the choice of the grid.
The GPTVQ 1D baseline does this; it applies the GPTQ algorithm to optimize assignments for a fixed quantization grid while optimizing the non-uniform codebook using gradient descent. We included GPTVQ 1D as a baseline in Table 3 in the paper and showed that both LNQ and LNQ with GuidedQuant consistently outperform it across all evaluated models and bit-widths. | Summary: The paper introduces GuidedQuant, a layer-wise quantization method that also considers its impact on the final loss. Through derivation, the authors propose a new objective that reweights the Hessian using the loss with respect to the layer outputs. To mitigate computational costs, they approximate the Hessian by averaging it over groups. Additionally, the paper presents an EM-based algorithm (LNQ) to learn quantization scalars.
The proposed methods are evaluated on LLaMA2 across various quantization settings, where GuidedQuant consistently outperforms existing approaches in nearly all scenarios.
**Update after rebuttal**: My latest reply reflected my final update.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. However, it would be great to evaluate on newer models and on some downstream tasks, as perplexity sometimes doesn't reflect the downstream performance.
Theoretical Claims: Yes. I checked the proof of proposition 3.1 and I think there are no issues.
Experimental Designs Or Analyses: Yes.
However, I have question on how is the caching is calculated in table 8. For gradient respect to outputs, I thought the approximate cost for a LLaMA2-7B model is
1024 (batch size) * 4096 (sequence length) * 4000 (dimension) * 32 (number of layers) * 7 (number of modules per layers) * 2 (if store in 16bits) / 10 ** 9 = 7516 GB, the same question also extends to Hessian caching.
Supplementary Material: I didn't check the supplementary material.
Relation To Broader Scientific Literature: The findings can be applied to layer-wise pruning as well.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: [Strength]
* The paper introduces an useful improvement to equip layer-wise quantization with awareness of final output loss.
* The paper is well-written and easy to follow.
* The experimental results are strong and generalize to different quantization methods.
[Weakness]
I only have some minor comments which can be found in "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses".
Other Comments Or Suggestions: N/A
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive review. We address below your questions.
---
**Q1. Results on newer models**
Please refer to **our response to Q3 from Reviewer NTd9**, where we conduct more experiments on Llama-3-8B and Llama-3-70B under a weight-only scalar quantization setting. The results show that our method consistently outperforms the baseline methods in these settings as well.
---
**Q2. Evaluations on zero-shot and few-shot downstream tasks**
Please refer to **our response to Q1 from Reviewer WqtU**, where we evaluate our method alongside baselines on eight zero-shot downstream tasks and the MMLU benchmark under a 5-shot setting. The results show that LNQ combined with GuidedQuant consistently matches or surpasses baseline performance, with especially notable improvements under extreme quantization scenarios.
---
**Q3. Clarification on the gradient / Hessian cache size**
We note that GuidedQuant only requires the Hessian averaged within groups for each layer. Since this averaged Hessian can be computed using the group-wise averaged gradient values, there is no need to cache the full gradient with respect to the output tensor. Instead, we only store one averaged scalar gradient value per group, per layer.
For example, in the case of the Llama-2-7B model with $g=4$ number of groups, each layer only needs to cache 4 scalar values per layer for averaged gradients.
The total gradient cache size is therefore computed as:
- $1024$ (batch size) $\times$ $4096$ (sequence length) $\times$ $4$ (number of groups) $\times$ $32$ (number of layers) $\times$ $7$ (number of modules per layers) $\times$ $2$ bytes (stored in 16-bits) = $7.5 \text{GB}$.
This averaging based approximation also applies to the Hessian computation, substantially reducing the storage cost compared to storing the full Hessian without approximation. This is how the cache sizes reported in Table 8 in the paper are derived. We will clarify this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal contents. My concerns are addressed and I will keep my rating as "accept".
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer i4yb,
Thank you for your response and for your positive review. We truly appreciate the time and effort you have spent providing valuable feedback.
Best regards, Authors. | Summary: This paper introduces GuidedQuant, a post-training quantization framework for large language models (LLMs) that integrates gradient information from the end-to-end loss into the quantization objective while explicitly modeling inter-weight dependencies. The authors claim that GuidedQuant improves the performance of state-of-the-art quantization algorithms across scalar and vector quantization methods. Additionally, they propose a non-uniform scalar quantization algorithm, LNQ, which outperforms existing methods in this category. The paper presents experimental results on Llama-2 models.
Claims And Evidence: - GuidedQuant “explicitly models pairwise interactions between weights.”
**Issue**: While Proposition 3.1 links the objective to second-order error estimates, the actual implementation approximates Hessian matrices via grouping (Section 3.3), which dilutes the theoretical benefit. No empirical evidence shows improved modeling of interactions.
- LNQ “outperforms existing non-uniform scalar quantization methods.”
**Issue**: Comparisons to GPTVQ 1D and SqueezeLLM are provided, but SqueezeLLM’s mixed-precision variant (Kim et al., 2024) is excluded.
Methods And Evaluation Criteria: - The grouping heuristic is reminiscent of block-wise quantization (Dettmers et al., 2022) but lacks justification.
- Results are restricted to Llama-2 models. No experiments on other architectures.
Theoretical Claims: - LNQ’s closed-form codebook update (Equation 10) is a theoretically sound improvement over gradient-based fine-tuning.
- The proof (Appendix A) correctly links the objective to the Fisher matrix but assumes gradients are computed precisely. In practice, gradients are noisy due to calibration data (Frantar et al., 2022), which is unaddressed.
Experimental Designs Or Analyses: Table 10 (Appendix D.1) shows minimal gains with larger groups, suggesting the method is not robust to hyperparameters.
Supplementary Material: - I reviewed the code.
- Appendix A. Proof of Proposition 3.1 is sound.
- Appendix D.1. Group size experiments are superficial—no analysis of why \(g=4\) works best for 7B models.
Relation To Broader Scientific Literature: GuidedQuant builds on second-order optimization (Hassibi & Stork, 1992) and layer-wise reconstruction (Frantar et al., 2022).
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Not applicable.
Other Comments Or Suggestions: **Typos**:
- Page 3: \(\mathbf{z}_{j}\in\mathbb{R}^{d_{\text{out}}}\) has a mismatched bracket.
- Table 1: “WAA4KV4” is undefined.
**Repetition**: Section 2 redundantly explains basics of quantization.
**Clarity**: Figure 1’s caption is overly technical.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive review. We address below your questions.
---
**Q1. Lack of empirical evidence showing improved modeling after grouping approximation (Section 3.3).**
We highlight that the GuidedQuant objective, after applying the grouping approximation, consistently demonstrates a clear empirical advantage over both the layer-wise objective and the diagonal Hessian approximation (i.e., the weighted $k$-means objective). This is evidenced by the performance gains shown in Figure 2, Tables 3, 4 and 5 of the paper.
To provide further qualitative evidence, we visualize a submatrix of the Fisher information matrix corresponding to the first two output channels in the linear layers of the first Transformer block of Llama-2-7B in *Figures A1 and A2*. Since each output channel contains $d_{\mathrm{in}}$ weights, the visualized matrix has dimensions of $2 d_{\mathrm{in}} \times 2 d_{\mathrm{in}}$ (here, $d_{\mathrm{in}}=4096$ for all layers except `mlp.up_proj` for which $d_{\mathrm{in}}=11008$).
The visualizations reveal that the original Fisher information matrix exhibits strong off-diagonal values and a prominent block-diagonal structure with blocks of size $d_{\mathrm{in}} \times d_{\mathrm{in}}$, corresponding to interactions within each of the two output channels. GuidedQuant approximately preserves the off-diagonal terms within each output channel, thereby capturing significantly more structural detail than the diagonal approximation (SqueezeLLM), which ignores the cross-weight interactions. We believe this better structural modeling helps explain the performance gains achieved by our method. We will include these results and discussions in the revision.
- *Figure A1: https://drive.google.com/file/d/1ilhJarLN1u1nNBzT2ywDCxg8IRz-T2oz/view?usp=sharing*
- *Figure A2: https://drive.google.com/file/d/19LJkHuKebYyOvPsqsjJQG_5PkJAWr7Bz/view?usp=sharing*
---
**Q2. Comparison with mixed-precision variant (dense-and-sparse) of SqueezeLLM.**
Please refer to **our response to Q2 from Reviewer WqtU**, where we present a comparison between our method and SqueezeLLM under a mixed-precision (dense-and-sparse) setup. The results show that our method consistently outperforms the SqueezeLLM in this setting as well.
---
**Q3. Results on newer models other than Llama-2 (Reviewers NTd9, i4yb, UEhr)**
We have conducted additional experiments on newer models, including Llama-3-8B and Llama-3-70B, comparing our method with SqueezeLLM under a weight-only scalar quantization setting. We present the results in *Table A4*. LNQ with GuidedQuant consistently outperforms the baselines, demonstrating the robustness and effectiveness of our approach. We plan to conduct more comprehensive experiments on these newer models and will include the results in the revision.
- *Table A4: https://drive.google.com/file/d/11bgdJ5eOO5s3LjjcZgVPyAzPsXDr6JJZ/view?usp=sharing*
---
**Q4. The gradients are computed on calibration data, which is unaddressed in Appendix A when connecting the objective to the Fisher matrix.**
In Proposition 3.1, we define the Fisher information matrix $\mathbf{F}$ as the *empirical* Fisher information matrix computed over the calibration data, $\mathbf{F} = \frac{1}{n} \sum_{i=1}^n \nabla \ell_i (\mathbf{w}) \nabla \ell_i (\mathbf{w})^\top$. Accordingly, the proof assumes that gradients are computed using the calibration data and is connecting the objective to this empirical Fisher information matrix. In the revision, we will explicitly refer to $\mathbf{F}$ as the empirical Fisher information matrix on the calibration data and clarify that it is an approximation of the true Fisher information matrix.
---
**Q5. Minimal gains with a bigger number of groups, and lack of explanation on the choice of hyperparameter $g$ (e.g., $g=4$ in 7B models).**
We believe that achieving strong performance even with a small number of groups is a strength of our method, making it particularly effective and robust in resource-constrained settings. Regarding the hyperparameter choice, we selected the number of groups $g$ to be as large as possible within the limits of our computational and memory constraints. We will clarify this in the revision.
---
**Q6. Additional comments and suggestions.**
Thank you for the suggestions. We will revise the writing to improve overall clarity, including correcting typos, reducing redundant explanations, and refining figure captions for clarity.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ response in the rebuttal and my concerns are addressed. Therefore, I incline to raise my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer NTd9,
Thank you for your response and for increasing your score. We truly appreciate the time and effort you have spent providing valuable feedback.
Best regards,
Authors. | Summary: This paper proposes to use layer output combined with gradient information as the objective to minimize layer-wise quantization perturbation, along with an approximate method to solve this resource-intensive problem. By improving GPTVQ and combining it with the new objective proposed in the paper, the experiment performed well on non-uniform quantization and also showed some improvements in uniform quantization.
Claims And Evidence: Yes, it is clear. The method proposed is relatively clear and simple, and the paper uses sufficient empirical study to prove the effectiveness of the proposed method.
Methods And Evaluation Criteria: Not sufficient. As an LLM PTQ paper, the article only involves model perplexity as the only evaluation indicator, and does not test it on any generative tasks or other few-shot benchmarks, which is inconsistent with convention.
Theoretical Claims: Yes, I've checked the theoretical analysis in Section 3.1.
Experimental Designs Or Analyses: Yes, I've checked the main experiment, including the weight-only PTQ and weight-activation PTQ parts. My main concern comes from the fact that the evaluation is limited to model perplexity.
Supplementary Material: Yes, I checked the code about layer-wise non-uniform quantization.
Relation To Broader Scientific Literature: The key contribution of this paper can be viewed as an improvement of previous layer-wise error reconstruction problems. By Introducing gradient information as an importance indicator, the reconstruction process can be more accurate and the empirical study shows better results.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: # Strengths:
1. The paper is well written, and the previous knowledge is fully explained and introduced, making it easy for readers to follow.
2. The results on non-uniform quantization show significant improvement over existing methods
# Weaknesses:
1. In the weight-activation quantization experiment(i.e. table 5), it seems to show that GuidedQuant does not show obvious advantage than GPTQ
2. It is recommended that the authors supplement the test results of generative tasks and other commonly used zero-shot tasks, such as MMLU.
Other Comments Or Suggestions: 1. The third entry in Table8 should be "Llama-2-70B" instead of "Llama-2-7B"
Questions For Authors: 1. Why dense-sparse separated quantization is not enabled for squeezellm ? This setting may significantly improve the results according to the results reported in the original paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review. We address below your questions.
---
**Q1. Evaluations on zero-shot and few-shot downstream benchmarks (Reviewers WqtU, i4yb, UEhr).**
We provide evaluations of our methods (LNQ and LNQ + GuidedQuant) alongside baselines (SqueezeLLM and GPTVQ 1D) under the weight-only scalar quantization settings, using Llama-2-7B and Llama-2-13B models, in *Table A1*. The evaluation includes eight zero-shot tasks: BoolQ [1], PIQA [2], SIQA [3], HellaSwag [4], WinoGrande [5], ARC-easy [6], ARC-challenge [6], and OBQA [7]. For a few-shot benchmark, we include results on the MMLU [8] benchmark in a 5-shot setting.
*Table A1* reports both accuracy and standard error for all methods. We highlight the best-performing results, as well as those whose accuracy falls within the top score $\pm$ standard error, under the same bit width constraint. The results show that LNQ combined with GuidedQuant consistently matches or surpasses baseline performance, with notable improvements in extreme quantization scenarios, such as 2-bit quantization. While we currently report results on a subset of models and settings due to time constraints, we will perform a more comprehensive set of experiments and include the full results in the revision.
- *Table A1: https://drive.google.com/file/d/1XnAs0V8CwTKpoul5I3cIVecE5JL8cgto/view?usp=sharing*
---
**Q2. Comparison with Dense-and-Sparse variant of SqueezeLLM (Reviewers WqtU, NTd9).**
To ensure a fair comparison, all methods in the paper were evaluated under uniform precision. The dense-and-sparse variant of SqueezeLLM, which preserves a small fraction of weights in 16-bit precision to maintain accuracy, is orthogonal to our method and can be combined with it. Accordingly, we report results for SqueezeLLM, LNQ, and LNQ + GuidedQuant methods, with the dense-and-sparse approach applied to all of them, in *Table A2*.
Following the original SqueezeLLM paper, we retain 0.45% of the weights in 16-bit and evaluate with 2-, 3-, and 4-bit quantization on the Llama-2-7B model. The results show that LNQ with GuidedQuant consistently outperforms the baselines in the dense-and-sparse setting as well, demonstrating the superiority and robustness of our method. We will include these results in the revision.
- *Table A2: https://drive.google.com/file/d/1dORQnlmUi1FPQmuSR8n1OebNQ5h0M8Gi/view?usp=sharing*
---
**Q3. In weight-and-activation quantization (Table 5 in the paper), the GuidedQuant does not show obvious advantage over the baseline.**
This is primarily because we present weight-and-activation results using relatively high bit-width for weight quantization (4-bit), where the benefits of our method are less evident. Our method demonstrates more significant improvements in more aggressive quantization settings.
To illustrate this, we conducted additional experiments with lower bit-widths for weights, specifically 2-bit and 3-bit, while keeping activations and KV caches at 4-bit precision (denoted as W2A4KV4 and W3A4KV4, respectively), on Llama-2-7B model. The results, shown in *Table A3* below, demonstrate that GuidedQuant significantly outperforms baseline methods in these more extreme scenarios, highlighting the strength of our approach under stricter bit-width constraints. We will include these new results and the corresponding discussion in the revision.
- *Table A3: https://drive.google.com/file/d/1PaDjWnzsWEYVd5Lwdd_Xoath72Fwfghr/view?usp=sharing*
---
**References**
[1] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. 2019.
[2] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. 2020.
[3] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. 2019.
[4] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? 2019.
[5] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. 2021.
[6] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. 2018.
[7] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. 2018.
[8] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D. and Steinhardt, J., Measuring massive multitask language understanding. 2020. | null | null | null | null | null | null |
CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging | Accept (poster) | Summary: This paper introduces a model merging algorithm, CABS, to address the key issues in sparsified task vectors: high parameter overlap and unbalanced weight distribution. The proposed method consists of two components: Conflict-Aware (CA) Sparsification, which sequentially prunes task vectors to minimize overlap, and Balanced Sparsification (BS), which ensures even distribution of retained parameters. The authors conduct comprehensive experiments, demonstrating that CABS outperforms existing pruning-based merging techniques and surpasses an ideal baseline in some cases.
Claims And Evidence: Yes, most of the claims are well supported by evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are some analyses in Sec 4.3, but they are pretty shallow.
Experimental Designs Or Analyses: I think the experiments are comprehensive and nicely executed in general, and I particularly appreciate the experiments on large-scale language models (up to 7b). Nevertheless,
- The paper does not discuss whether CABS requires validation data for tuning the scaling coefficients, which is a critical factor in model merging methods
- The performance gap between different methods is not very large. Reporting error bars in addition to accuracy would help assess the significance of the improvements
- The paper should include comparisons with recent baselines [1,2] to strengthen its empirical claims
[1] He, Yifei, et al. "Localize-and-stitch: Efficient model merging via sparse task arithmetic." arXiv preprint arXiv:2408.13656 (2024).
[2] Wang, Ke, et al. "Localizing Task Information for Improved Model Merging and Compression." arXiv preprint arXiv:2405.07813 (2024).
Supplementary Material: I checked the part on how to tune the scaling coefficients.
Relation To Broader Scientific Literature: The paper is positioned within the broader literature on model merging. Specifically, it aims to further enhance the technique of sparsified task vectors (which could potentially mitigate the interference between different tasks) using sequential and balanced pruning.
Essential References Not Discussed: Most of the related work is sufficiently covered, except for a few cases (e.g., [2]). I encourage the authors to elaborate further on both [1] and [2].
Other Strengths And Weaknesses: **Strengths**:
- The proposed approach is straightforward yet effective
- The evaluation is thorough, spanning multiple tasks, models and sparsity levels
**Weaknesses**:
- Motivation: My complaint is mainly about the parameter overlap part (I think the unbalanced weight distribution is a good observation).
- The current discussion is more or less similar to that made in [1]; the additional theoretical justifications in Sec 4.3 are shallow
- There are a few points that I believe the authors should consider and discuss: 1) The authors claim that conflicting weight signs between task vectors lead to interference, but this is precisely what the "elect" step in TIES is designed to address. Are you suggesting that this step is ineffective? 2) [2] suggests that parameter overlap (or weight interference in their terminology) is not the primary cause of performance decline in model merging, which raises concerns about whether CABS (and similarly [1]) are targeting the right problem
- Method:
- About "CA": The authors claim that the order of pruning does not significantly impact accuracy, but this is not necessarily true in general. A deeper analysis of the effect of ordering and some recommendations for practitioners would be a huge plus. Additionally, due to the nature of this step, there seems to be a scalability issue with respect to the number of tasks. In fact, CABS is only tested on a maximum of six tasks.
- About "BS": While simple, the motivation behind using n:m pruning is not clearly explained, and alternative structured pruning methods should be discussed
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and constructive suggestions! We've provided additional experimental tables in **[Anonymous Link](https://anonymous.4open.science/r/CABS-027B/rebuttal_tables.pdf).** Below, we address your main concerns:
**Q1: Comparison with recent baselines (Localize-and-Stitch [1], TALL-masks [2]).**
We experimentally compared CABS with TALL-masks and Dataless Localize-and-Stitch, with our results clearly demonstrating the superior performance of CABS.
For TALL-masks, please refer to our detailed response to Reviewer QfMa (Q5) for a comprehensive comparison.
For Dataless Localize-and-Stitch, experimental results are presented in Tables 4, 5, and 6 of the anonymous link. When merging 2, 4, or 6 models, its performance is often close to the base model, indicating limited effectiveness. We found it essentially equivalent to simplified TIES-Merging (fixed λ=1, no 'elect').
This setting works reasonably well in the original setup, where 12 models are merged using 5–10% sparsity, so that λ = 1 happens to act as a good approximation (e.g., 5-10% × 12 ≈ 0.6-1.2). However, when merging a smaller or larger number of models, such a fixed λ becomes suboptimal.
If desired, we are happy to include a λ-tuned version of Dataless L-and-S for fairer comparison.
**Q2:Effectiveness of "elect" step in TIES.**
We appreciate the valuable ideas introduced by TIES-Merging, particularly its insight and solution regarding parameter redundancy and conflict. Inspired by TIES-Merging, CABS offers a more robust alternative.
Specifically, the “elect” step in TIES is designed to resolve conflicting signs between task vectors and can indeed be helpful in certain cases. For example, as shown in Table 17 of our appendix, TIES-Merging is the best-performing baseline after CABS. However, its effectiveness is inconsistent. The TIES paper itself (Appendix B.1, Table 7) shows that re-estimating the sign vector via few-shot training improves performance, indicating that its elect strategy may be suboptimal.
Our experiments show similar trends. As seen in Table 1, the performance gap between Task Arithmetic+Magnitude and TIES-Merging is often small despite the difference being the elect step, and in some cases, TIES-Merging even underperforms.
One possible explanation is the significant imbalance in average magnitudes among task vectors—up to a 10× difference as illustrated in Table 8 of the anonymous link. In such cases, the sign vector may be dominated by the task with the largest magnitude, resulting in biased merged results.
Overall, while TIES-Merging laid important groundwork in addressing parameter conflicts, CABS advances this direction by providing a more robust and effective solution.
**Q3: Are parameter interference the right problem?**
As noted in our response to Reviewer QfMa (Q5), CABS and TALL-masks operate under fundamentally different paradigms.
While TALL-masks separates tasks during inference and does not explicitly address parameter interference, it incorporates TIES-Merging as the first stage in its pipeline. This design choice implicitly acknowledges parameter interference as a key factor.
In such settings—where task identity is unavailable and generalization is required—parameter conflict becomes an inevitable challenge for training-free model merging methods.
**Q4: motivation for choosing n:m pruning.**
We choose n:m pruning for two reasons: (1) Prior methods, like row-wise (topk), already outperform layer-wise pruning, making it natural to explore more structured forms, like block-wise (n:m), to further improve merging performance. (2) n:m pruning operates at the finest granularity (weight level), is simple to apply, and requires no changes to model architecture.
Other pruning techniques are less suitable for training-free merging. Coarse-grained methods (e.g., head pruning) cannot remove redundant weights or resolve conflicts within components. Sparse fine-tuning and similar methods require training, limiting their practicality.
**Q5: Potential scalability concerns for CA.**
Please refer to our detailed response to Reviewer zNot (Q4).
**Q6: Analysis on the impact of pruning order.**
Please refer to our detailed response to Reviewer QfMa (Q6).
Building on our new findings on task vector magnitudes (Table 8 in the anonymous link), we suggest that placing task vectors with smaller magnitudes earlier in the pruning sequence—or assigning them larger λ—may help improve balance. This could be a promising future direction, as exhaustive search is infeasible in LLM merging.
**Q7: necessary of Validation data usage for tuning λ.**
Like TA, TIES, and DARE, we use validation data for tuning λ. The specific values are provided in Appendix B.11.
**Q8: Reporting statistical significance with confidence intervals.**
Please refer to our detailed response to Reviewer QfMa (Q4).
We hope these responses address your concerns. Please feel free to let us know if anything remains unclear.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. In light of the new experiments conducted, I am raising my score to 3. I do not increase further because I generally feel that the work is solid but not particularly exciting.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for taking the time to reassess our work. We are glad we could address your concerns and sincerely appreciate your updated score! | Summary: This paper introduces a task-vector-based model merging method, CABS. The authors attribute the performance degradation of model merging to: (1) high parameter overlap, and (2) unbalanced weight distribution. The proposed CABS mainly contains two modules, CA and BS. Between them, CA aims to eliminate parameter overlap and BS is introduced to guarantee a more even weight distribution. Experiments on both LLMs and small language models (SLMs) are conducted to validate the performance of the proposed method.
Claims And Evidence: The authors attribute the performance degradation of model merging to high parameter overlap, and unbalanced weight distribution. I agree with the viewpoint that parameter overlap degrades the performance, which is intuitive. As to parameter overlap, the authors demonstrate that the pruning based on magnitude results in unbalanced distribution of weights. However, from my point of view, I believe that the success of DARE is because of its rescaling process. I guess that if magnitude-based pruning is properly rescaled, it can achieve the performance close to DARE. As a result, I think that the relationship between weight distribution and the merged model’s performance needs to be further proved.
Methods And Evaluation Criteria: Yes. The proposed method is simple and provides an interesting perspective for model merging. However, some of its claims and experiments are not that convincing to me.
Theoretical Claims: Yes. I checked Eq. 2 to Eq. 9. No obvious errors are found.
Experimental Designs Or Analyses: Yes, I checked the experiments in Section 5 and the additional results in Appendix A. I have several questions:
(1) The selected LLMs have similar performance on the target tasks. For example, the performance difference of WestSeverus and WildMarcoroni is below 1% on most tasks in Table 3. However, from my point of view, model merging targets at replacing multi-task learning by merging expert models on different kinds of tasks. Say, merging LLMs finetuned on code or math tasks. Merging models with very similar performance may reduce the soundness of the method.
(2) For SLMs, the authors selected 4 tasks from GLUE and 2 other tasks in Table 2. However, I checked the huggingface links for roberta models utilized by the authors and find that there are many other weights that can be used, including roberta models finetuned on other tasks from GLUE. Similarly, in Table 11, only two GPT-2 models are utilized for merging but the original setting for this benchmark is merging 7 GPT-2 models finetuned on different tasks. Providing experimental results for merging more models make the proposed method more convincing.
Supplementary Material: Yes. I reviewed all the parts of the supplementary material. The authors provide a detailed description of the proposed method and experimental results.
Relation To Broader Scientific Literature: This paper attributes the performance degradation to high parameter overlap and unbalanced weight distribution. Previous studies usually use pre-processing to reduce the parameter overlap or interference. In this paper, the proposed CABS applies pre-computed masks to avoid the overlap.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
(1) Simple and intuitive idea.
(2) Detailed proofs.
(3) Comprehensive experimental results.
(4) Ablation studies provided.
Weaknesses (both previous-mentioned):
(1) The relationship between weight distribution and the merged model’s performance is not that convincing to me.
(2) Some experimental settings could be broadened.
Other Comments Or Suggestions: (1) In Line 1120, there is a ref error.
(2) In Figure 6, the font sizes are a bit too small to read.
Questions For Authors: Overall, I think the proposed method is meaningful, but it may lack a motivation strong enough and convincing experimental results. This is why I submit a negative overall recommendation. However, if the authors could:
(1) further prove the relationship between weight distribution and performance;
(2) expand the experimental settings.
I would carefully consider raising my rating score.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and constructive suggestions! We've provided additional experimental tables in **[Anonymous Link](https://anonymous.4open.science/r/CABS-027B/rebuttal_tables.pdf).** Below, we address your main concerns:
**Q1: Whether the observed performance gap is due to lack of rescaling in magnitude-based pruning.**
We agree that rescaling is a key factor contributing to DARE’s effectiveness and could enhance the performance of magnitude-based pruning methods. In our experiments, all magnitude-based pruning methods were equipped with rescaling. The λ values used for merging (e.g., λ=1.88 for 4-model TIES-Merging) are detailed in Appendix B.11, confirming that proper rescaling was employed. Additionally, we also conducted rescaling experiments for magnitude-based pruning methods, which are reported in Appendix A.8.
Notably, as illustrated in Table 1, even when rescaling is applied, magnitude-based pruning can sometimes impair the model merging performance. The comparison in Table 1 underscores that factors beyond rescaling - such as weight distribution - are also important for the merging performance, as further discussed in our response to Q2.
**Q2: The impact of unbalanced weight distribution on merged model performance needs clearer evidence.**
We provide supporting evidence in Table 4 in the main text. It compares different sparsification strategies for model merging: Task Arithmetic + Magnitude (layer-wise), + Magnitude (row-wise), and + BS (block-wise). As the pruning becomes more structured and the weight distribution more balanced, performance improves progressively from 80.38 to 80.61 to 81.30.
From a theoretical perspective, weight imbalance can amplify task interference during merging. Since the merged model is typically rescaled by a λ, if weights are highly concentrated in specific regions—as often caused by magnitude-based pruning—this rescaling will disproportionately amplify those regions, worsening cross-task interference.
While weight distribution may not be the sole cause, consistent improvements across balanced pruning provide strong empirical support. We acknowledge that deeper theoretical understanding, especially of activation dynamics, would be valuable. While our current work focuses on developing an effective, training-free merging method, we see this as a promising direction for future research.
**Q3: The LLMs used for merging are too similar in performance; more diverse tasks are expected.**
To address the concern about task diversity, we additionally conducted an experiment merging a Mistral model fine-tuned on instruction-following with another fine-tuned on mathematical tasks. As shown in Table 2 in anonymous link, our method still outperforms TaskArithmetic by +0.79, achieving SOTA performance in this heterogeneous setting and demonstrating robust cross-task merging capability.
We also would like to clarify that, merging models fine-tuned on similar tasks is a standard and widely adopted setup in model merging, especially under the model soup paradigm. Our large-model experiments follow DARE’s protocol, which focuses on improving performance or generalization within similar task domains.
This setup is also practically relevant: the Open LLM Leaderboard—one of the most active community benchmarks—frequently features submissions involving merges of LLMs fine-tuned on similar tasks. Using our method, we constructed four merged models and occupied the top four positions among all models with fewer than 8B parameters at the submission time (see Table 1 in the anonymous link or the official leaderboard).
**Q4: The number of tasks/models in SLM experiments is limited; merging more models is recommended.**
We appreciate the suggestion and have added an experiment merging all 7 GPT-2 models from FusionBench, following its official setup (see Table 7 in the anonymous link).
While CABS still achieves the best performance among all methods, the overall gains are notably smaller than in our 2- or 6-model merging settings. In fact, all methods—including CABS—suffer significant performance drops when merging more models, especially on some tasks.
We investigated this and found that the task vectors have highly imbalanced average weight magnitudes—up to a 10× difference across tasks (see Table 8 in the anonymous link). Since model merging uses a shared scaling factor (λ), tasks with small-magnitude vectors (e.g., RTE, MRPC, CoLA) are overwhelmed by stronger ones (e.g., QQP, MNLI), resulting in near-random performance on weaker tasks.
This suggests that blindly merging too many heterogeneous task vectors with vastly different magnitudes introduces severe imbalance and can degrade model utility. While CABS remains robust, merging fewer well-aligned models appears more practical and effective under current frameworks.
We hope our responses have addressed your concerns. Please let us know if there are any remaining questions or clarifications we can provide. | Summary: The paper presens CABS, a method for pruning and merging different task vectors, seemingly resolving conflicts between different tasks. As the authors argue, conflicts can arise due to parameter overlap and unblanaced weight distribution - CABS is intended to address these issues. To that end, disjoint pruning masks are used, while the latter problem is resolved using n:m pruning. Experiments are conducted to show the effectiveness of the method.
## Update after rebuttal
I thank the authors for their rebuttal. However, since I find some of their arguments not really convincing, I intend to keep my score. I am not convinced that pruning, originally a method to compress models, is here a better idea than otherwise merging the task vectors. Especially the statement "when task vectors are highly similar, their directions are nearly aligned, causing their linear combination to span only a narrow subspace" does not really resonate with me. While this statement as is is untrue (pick any vector, add eps*canonical vectors to it, giving you a full dim subspace and all vectors are arbitrarily close", I am not convinced why this should be an argument in the first place. Overall, I think the work needs revision to resolve also the causation/correlation issue I stated.
Claims And Evidence: - While I like the storyline of the paper, I am not convinced that the experiments show that high parameter overlap and unbalanced weight distribution are the issues in model merging, the experiments presented are not causal, but rather show correlation. For example, the overlap is larger when doing magnitude pruning, instead of random pruning, cf. Figure 2. Why more overlap should lead to worse performance is not clear - I understand the arguments when e.g. one task is positive at a certain weight, the other one negative, but why would pruning one then be any better? I believe the numbers, but I do not think that the experiments show that these are truly the issues.
- When merging two task vectors, why are you pruning both to then build up a linear combination - why doesn't it work to build a new task vectors that is accounts for both tasks, e.g. on a calibration dataset? I find the strategy a bit convoluted.
- When pruning task A, then using the inverse of the mask to restrict the weights of task B, which are then in turn pruned, do I not necessarily prune either A or B too much? In the end, this guarantees that the task vectors are orthogonal, which seems to be much much more than what is needed - if two tasks are fairly similar, why force them to orthogonality?
- From Eq. 7 to Eq. 9 it is derived that the "decoupling" in the norm "ensures that adjust $\lambda_A$ affects only the contribution of $\tau_A$", but this is not clear to me. This statement holds in Equation 7 already, why would the norm be needed here?
Methods And Evaluation Criteria: The experimental design overall seems sound, however I am not entirely sure which sparsities are used, e.g. in Section 5.2.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See above.
Supplementary Material: I briefly checked the appendix.
Relation To Broader Scientific Literature: The authors do a good job of relating their work to the broader literature.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: - The derivation in lines 259-274 is trivial and does add little value, I would have preferred to use this space to explain why "these sparse updates are nearly lossless in retaining task-specific information, as simple rescaling compensates for pruning-induced changes". This statement may hold for very little sparsities, but otherwise this will not be the case.
Other Comments Or Suggestions: - The running title has not been updated and is still the default one.
- I find it a bit odd to cite Liang et al. when introducing magnitude pruning in line 34. Why would you cite a 2021 paper to introduce one of the most basic pruning methods? Later, Zhu & Gupta are cited (line 156), which seems more appropriate, but still not really fitting - they introduced basically a pruning schedule which is based on magnitude pruning.
- There are some typos in the paper, e.g. "Remaining" in line 83 should be lowercase, "compare to" should be "compared to" in the caption of Figure 3.
- Here, sparsity seems to be used in the sense of density. I see this from time to time in other papers, but it is not the correct usage in my opinion. A sparsity of 80% means that 80% of the parameters are zero. A density of 80% would mean that 20% of the parameters are zero.
Questions For Authors: - What is used as the calibration dataset for SparseGPT and Wanda? Did I miss this or is this nowhere stated?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and constructive suggestions! Below, we address your main concerns:
**Q1: Causality vs. Correlation in Experiments**
We acknowledge the reviewer’s distinction between correlation and causation. While strict causal proof is difficult, our experiments go beyond correlation. In Figure 5, by fixing sparsity and method and varying only the overlap rate, we observe performance drop—indicating a causal link.
Table 4 shows that more structured pruning improves balance and performance (80.38 → 80.61 → 81.30). While full causality is not isolated, consistent trends across models highlight the practical value of addressing overlap and imbalance in training-free model merging.
**Q2: why would pruning one then be any better?**
Prior work shows that pruning improves model merging. TIES-Merging demonstrates that conflicting signs between task vectors cause interference, and pruning one side improves performance. DARE further shows that task vectors are highly redundant—keeping only ~5% of weights with proper rescaling still maintains near-lossless performance.
Unlike standard model pruning, task vector pruning is not for compression, but for reducing conflict and removing redundant updates that may interfere with other tasks—making it a strategic, empirically supported choice.
**Q3: Why merge Task Vectors Instead of Learning a Joint Vector?**
Task-vector-based merging aims to reuse independently fine-tuned models in a training-free manner, without access to original task data. In contrast, learning a joint vector requires access to all task data and additional training, which significantly increases computational cost.
CABS builds on TIES-Merging and DARE, proposing structured and conflict-aware pruning to enable efficient, training-free model merging.
**Q4: Does CA over-prune task vectors by enforcing disjoint masks?**
CA does not cause excessive pruning. In many cases, task vectors already reach 90% sparsity without CA. At the same sparsity level, CA improves merging by allocating disjoint weights, reducing interference while retaining essential updates.
**Q5: If two tasks are fairly similar, why force them to orthogonality?**
In our experiments with similar models—such as the model soup setting in Table 3 and Table 17 of the main paper, and Table 1 in **[Anonymous Link](https://anonymous.4open.science/r/CABS-027B/rebuttal_tables.pdf)**—we consistently observe that enforcing orthogonality via CABS still improves merging performance.
One geometric explanation is that, when task vectors are highly similar, their directions are nearly aligned, causing their linear combination to span only a narrow subspace. In contrast, enforcing orthogonality increases the span of the merged space. This broader coverage may facilitate better exploration of the solution space, enabling access to lower-loss regions.
Thus, even for similar tasks, CABS effectively improves expressiveness and reduces redundancy during the merging process.
**Q6: Clarifying Eq. 7–9**
Equation 7 defines a linear combination of two task vectors, but does not reveal how adjusting one scaling factor (e.g., λ_B) may interfere with the other task vector’s contribution in the resulting update.
For example, if τ_A = −τ_B, even small changes in λ_B can diminish the effect of τ_A, indicating strong coupling between the scaling factors. This illustrates that without further analysis, the two task vectors' contributions cannot be assumed independent.
The squared Frobenius norm in Eq. 8 explicitly captures this interaction via the inner product ⟨τ_A, τ_B⟩_F. As shown in Eq. 9, the cross term vanishes only when τ_A and τ_B are orthogonal.
This ensures that each λ independently controls its corresponding task vector’s contribution without unintended interference. Hence, the norm-based formulation is essential to formalize the decoupling effect of orthogonality.
**Q7: Density vs Sparsity**
We consistently use "sparsity" to refer to the percentage of pruned (zero) parameters—e.g., 90% sparsity means 90% of weights are zero. We do not use "density" in the paper, so there should be no ambiguity.
That said, we acknowledge that Table 2 does not explicitly mention the sparsity setting. However, the table references specific results in Tables 1, 8, and 9, where the sparsity level (90%) is clearly stated. We will revise Table 2 to include this information. We hope this clarification resolves the misunderstanding.
**Q8: Why does rescaling work?**
Rescaling is effective because task vectors are highly redundant—shown by DARE and our Figure 6 (Appendix A.8). Pruning up to 90%, with rescaling, yields near-lossless performance. Even at 99%, the drop is minimal, showing strong robustness.
**Q9: The calibration dataset for SparseGPT and Wanda**
We use C4 as the calibration dataset, following the standard setup in SparseGPT.
We hope these responses address your concerns. Please feel free to let us know if anything remains unclear. | Summary: Authors propose a novel methodology, Conflict Aware Balanced Sparsification (CABS), for model merging based on task vectors. Previous work has shown that sparsifying task vectors before merging typically yields better performance for merged model. Authors identify two main issues:
- High Parameter Overlap: Retained weights across task vectors overlap significantly, causing conflicts during merging.
- Unbalanced Weight Distribution: Sparsification concentrates weights in specific regions, amplifying imbalances during merging and degrading performance
Authors propose to resolve this by proposing:
A) Conflict Aware pruning: Prunes task vectors sequentially, masking already retained parameters to eliminate overlap.
B) Balanced Sparsification: Structured n:m pruning of task vectors to enforce uniform pruning of task vectors.
Authors evaluate their approach on large LM (Mistral-7B) and small LMs (RoBERTa and GPT-2). Experiments show CABS generally outperforms existing methods when merging 2, 4, and 6 tasks language modelling tasks.
Claims And Evidence: Authors run ablation studies to verify most of their claims.
Authors claim that they are first to introduce an "ideal" baseline and CABS outperforms this ideal baseline.
- First, the ideal baseline authors proposed is to compare the performance of the merged model to the finetuned expert. This is pretty well known and has been used as a baseline in this [1].
- Second, The claim that CABS outperforms this baseline is misleading because only setting is in Table 3 with sparsity =0.75. CABS has average accuracy of 76.5% and "ideal" model has average accuracy of 76.3%. These results are within the margin of errors and I would encourage the authors to withdraw this claim.
[1] Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jiménez, François Fleuret, and Pascal Frossard. 2024. Localizing task information for improved model merging and compression. In Proceedings of the 41st International Conference on Machine Learning (ICML'24), Vol. 235. JMLR.org, Article 2057, 50268–50287.
Methods And Evaluation Criteria: For small LMs, authors tested on GLUE tasks (CoLA, SST-2, MRPC, RTE, SQuAD, RACE), arbitrarily chosing 2, 4, and 6 tasks to merge.
For large LMs, authors tested on LLM Leaderboard tasks (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K)
These benchmarks are standard and make sense for text data modalities. However authors have not run experiments on other data modalities, such as vision tasks.
Theoretical Claims: There are no significant theoretical claims int he paper. Authors show that CA procedure produces non-overlapping task vectors leading to orthogonality. Thus leading to no interference. This analysis is pretty straightforward.
Experimental Designs Or Analyses: I checked the experimental design for evaluating the CABS and the ablation studies.
1. This work lack of confidence intervals in the evaluation. It appears most of the work is produced on a single seed. It'll be useful to report margin of error which could boost significance of the work.
2. Authors argue that overlapping task vectors typically lead to performance degradation. I can imagine tasks where the data distributions are very similar where, having overlapping task vectors can indeed boost the performance. Thus, this analyses is not very convincing.
Supplementary Material: No.
Relation To Broader Scientific Literature: Prior work has shown that sparsifying task vectors can lead to improved performance in model merging. In this work authors suggest that removing overlaps in the task vectors + using structured n:m sparsity can improve the performance of model merging.
Essential References Not Discussed: Authors should include discussion / comparison with [1]. This is published work that also appears to improve on the baselines considered in this work and seems pretty related.
[1] Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jiménez, François Fleuret, and Pascal Frossard. 2024. Localizing task information for improved model merging and compression. In Proceedings of the 41st International Conference on Machine Learning (ICML'24), Vol. 235. JMLR.org, Article 2057, 50268–50287.
Other Strengths And Weaknesses: Strengths:
- Novel merging approach that improves the performance of model merging in evaluated tasks
- Strong ablation studies including studying the role of sparsity rate, overlap rate, pruning approaches, and general experimental setup.
- Paper is well structured.
Weakness:
- Only text data modality is considered. In this literature, it is common practice to also evaluate results on vision data modality.
- Claim about comparison to "ideal" baseline are not justified.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. For CA approach: does the sequential nature of masking induce an implicit ordering on the "importance" of tasks? Have you tried shuffling this order and seen any difference in the performance?
2. Figure 5 suggests that perhaps small overlap rate (20%) can be beneficial for model performance. Can you include the performance of CABS with 0% overlap rate on that plot? it'll be useful to compare the performance with varying overlap rates.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and constructive suggestions! We've provided additional experimental tables in **[Anonymous Link](https://anonymous.4open.science/r/CABS-027B/rebuttal_tables.pdf).** Below, we address your main concerns:
**Q1: Inclusion of experiments on vision modality.**
We added experiments on vision tasks (ViT-B-32 on DTD and EuroSAT, see Table 2 in the link). CABS surpasses Task Arithmetic by +2.20% average accuracy, confirming its effectiveness on vision tasks.
**Q2: Justification of claims regarding outperforming the "ideal model".**
We agree that this claim should be more cautiously presented. We will revise the paper to clarify the contribution statement and the discussion of Table 3 results, adding appropriate qualifiers to avoid overstatement.
Empirically, the claim is supported by consistent results across multiple settings (Tables 3, 11, 12, 14, and 15), including both Mistral and GPT-2 models and sparsity levels of 0.25 and 0.75 and 0.90. In particular, the improvement in GPT-2 experiments (Table 11) is free from randomness, as both the method (CABS) and the evaluation tasks (classification) are deterministic. As noted in Q4, confidence intervals in the large-model experiments further strengthen the reliability of the observed gains.
**Q3: Clarification about the novelty of the "ideal model".**
We agree that comparing to fine-tuned models is common for SLM, where the "ideal baseline" is equivalent to the ''fine-tuned models''. However, in LLM merging (e.g., multi-task LLMs), this type of upper-bound baseline is not used in prior large-model merging works such as DARE. We acknowledge that this distinction was not clearly explained and will revise the text accordingly.
**Q4: Reporting confidence intervals.**
Our magnitude-based merging method is deterministic for classification tasks, producing consistent results across runs. This aligns with common practice in model merging literature, where confidence intervals are often omitted. Still, for LLM evaluations, we conducted 10 runs:
|Method|Mean Score|95% Confidence Interval|
|-|-|-|
|TIES-DARE|0.7606|(0.7597, 0.7614)|
|CABS|0.7647|(0.7641, 0.7653)|
These results confirm the statistical significance of our improvements.
**Q5: Discussion of related work TALL-masks.**
Compared to TALL-masks, CABS distinguishes itself in both design and applicability:
- TALL-masks assumes task id is available at inference, loading tuned task-specific masks—analogous to utilizing separate LoRA adapters—thus achieving near-lossless performance but requiring task-specific inference and added complexity.
- TALL-masks also involves tuning the mask sparsity factor for each task and scaling coefficients λ. While effective on small models, it has not been evaluated at LLM scale. In contrast, CABS only tunes λ and has demonstrated strong scalability: on the Open LLM Leaderboard, CABS occupied the top four positions among all sub-8B models at submission time (Table 1 in the link).
- TALL-masks produces a base model, a merged task vector, and k task-specific masks, resulting in >2× storage overhead. In contrast, CABS yields a single, compact model for multi-task inference.
Given these differences, direct comparison with TALL-masks is inappropriate. However, its merged variant Consensus TA is a fair baseline. We have added comparisons on merging 2,4,6 models (Table 4,5,6 in the anonymous link, Mask sparsity factor =0.5), with our results clearly demonstrating the superior performance of CABS. A discussion will be added in the Related Work section.
**Q6: Effect of task ordering in CA.**
We tested shuffled task orders explicitly (SLM: Table 1 in the main text; LLM: Appendix A.9). We further added results on merging six models with shuffled orders (Table 5 in the anonymous link). Results show that task ordering has limited impact relative to overall gains, demonstrating the robustness of the CA strategy.
**Q7: Impact of overlap rates.**
As discussed in the paper, the main issue is high overlap rather than all overlap. We agree some overlap can be beneficial—e.g., Figure 5 shows 20% sparsity achieving strong performance. However, determining the optimal overlap level introduces additional complexity. Considering the efficiency and generalizability, we adopt the current CA strategy.
Figure 5 is intended to analyze the effect of varying overlap rates within a fixed method (DARE), and already includes the result of applying CA with DARE. Adding CABS—which uses a different method (BS)—would conflate method differences with overlap effects.
Instead, we have updated Table 4 in the main paper (see Table 9 in the anonymous link) to enable a clearer comparison between BS and DARE under the same sparsity level.
**Q8: Effectiveness on similar tasks.**
We discuss this in our response to Reviewer 8hif (Q5).
We hope our responses have addressed your concerns. Please let us know if there are any remaining questions or clarifications we can provide.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and answering my questions. Overall, the contributions in the paper are interesting and I am still leaning towards an accept. | null | null | null | null | null | null |
EvoMesh: Adaptive Physical Simulation with Hierarchical Graph Evolutions | Accept (poster) | Summary: This paper presents a hierarchical GNN method for learning PDEs. In contrast to many other such methods, it uses differentiable graph construction, so the message passing hierarchy is learned end-to-end.
Claims And Evidence: Yes. The main claim is better accuracy compared to multi-scale GNNs with fixed hierarchy, and the paper does provide evidence for this.
Methods And Evaluation Criteria: Benchmark datasets are fine for the task. Although I probably would have included a dataset with fast-acting/long-range physics, for which single-level GNNs perform poorly and hierarchies are necessary.
I'm not really convinced by the motivation for AMP. Oversmoothing is particularly a problem for long-range information retrieval tasks, and much less of an issues for learned PDE solvers. Those generally provide geometry information in edge features, plus by definition in PDEs information does spread locally. Summation aggregation is also actually much closer to the underlying computations in physics than attention: see force superposition in Lagrangian and differential kernel computation in the Eulerian view.
Theoretical Claims: N/A
Experimental Designs Or Analyses: There's a few important comparisons & baselines missing.
1. I didn't see any test for the efficacy of AMP-- there should be an ablation of AMP against 'isotropic' summarization
2. It would be very nice to analyze/discuss number of levels/downsampling ratio for EvoMesh & the baselines. How different are the hierarchies used by the baselines or the 'static' ablation? Is spatial location important, i.e. how does EvoMesh compare against a hierarchy with same amount of nodes per level, but sampled uniformly across the domain? Etc.
3. There should be a comparison against a single-level GNN (probably MGN, since most of the datasets are taken from that paper). I took a glance at the RMSE number in EvoMesh and the MGN paper, and RMSE reported for MGN seem actually lower for CylinderFlow, Airfoil, Plate? (it's possible there are differences in how those are computed though).
Supplementary Material: Looks good-- particularly fig 10 is nice to get a sense for the generated hierarchies.
Relation To Broader Scientific Literature: AMP looks _very_ similar, if not identical, to Graph Attention Networks (Velickovic et al). So either this should be removed as a contribution, or if the authors believe that it is sufficiently different, GAT needs to be discussed & compared to, and notable differences pointed out.
Essential References Not Discussed: GAT (see above).
There is also a bunch of literature on differentiable graph pooling (e.g. Ying et al's DiffPool), which should probably be discussed in related work.
Other Strengths And Weaknesses: Strengths:
I like the premise of the paper-- there's clear evidence that for GNNs, multi-scale hierarchies are important for many problems in physics. But the optimal way to construct these hierarchies is still TBD, and being able to perform differentiable end-to-end graph construction would be very impactful.
Weaknesses:
1. As mentioned above, part of the contribution may not be novel.
2. In a way, node selection is the easy part. The paper needs to explain how it gets around the issue of differentiable edge graph construction -- fully connected graphs for gl>1 are very expensive, which is often cited as the reason that fully differentiable end-to-end graph construction is not viable.
3. The paper needs a bit more thorough evaluation (see missing experiments above)
Other Comments Or Suggestions: ### After rebuttall ###
The additional experiments (especially vs MGN, GAT) strengthen the paper, and hence I have improved my score. A paper with all the additional experimental results and updated explanations might meet the bar for ICML.
But there's still a bunch of unanswered questions around differentiability: differentiable sampling with GS for graph construction seems to be rarely done, yet somehow this paper seems to make it work without any major new innovation. It was my understanding that with GS sampling gradient quality wouldn't be sufficient, and most papers in this space I'm aware of use soft sampling with all edges active.
Also I'm not sure the role of AMP is properly addressed-- in the rebuttal the authors pose the theory that while it might not necessarily help with physics it helps with node selection. It would be nice to properly investigate this (and the relation to GAT).
So while I wouldn't be opposed to having the paper published, I do feel that with taking a step back, a performing a major revision, this could be a much better paper, which provides much more value to the ML community.
###
Questions For Authors: - From fig 10, it looks like the downsampling ratio is around ~2 per level. Is there anything in the model that encourages a particular ratio, or put pressure to use less points? What's stopping the model from retaining every single point?
- Differentiable node sampling makes sense; however, what about edge construction? Differentiable pooling often uses fully connected graphs, with down-weighting for inactive edges, to make this process differentiable. But it sounds like this paper doesn't use fully connected graphs for G_l>1-- how is differentiability guaranteed?
These points also need to be described more explicitly in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Q1: Differentiability for $G_{l>1}$.
EvoMesh avoids using fully connected graphs for ${G}_{l>1}$. Instead, it ensures differentiability through:
1. Node selection: Differentiable via Gumbel-Softmax (L183-197).
2. Edge construction: Edges are formed based on the selected nodes and the connectivity of the original graph, further enhanced by $K$-hops (L199-219).
3. Edge feature generation: A differentiable process that encodes distance vectors and norms between connected nodes into high-dimensional edge features, serving as inputs for edge and node updates in message passing.
As a result, the prediction error can be backpropagated to both the generated edges and selected node features, allowing the node selection module to be optimized.
> Q2: Novelty beyond AMP.
We appreciate the reviewer’s focus on evaluating the rationale behind AMP. Meanwhile, we'd like to emphasize that the primary novelty of EvoMesh lies in its **learnable dynamic graph structures**, with AMP being one component for enabling this capability. We hope the reviewer will consider the broader innovation of our work as a whole.
* **Core Contribution - Learnable Dynamic Graph hierarchies**: Unlike traditional PDE solvers that rely on fixed/predefined graphs, EvoMesh introduces a differentiable method to learn time-evolving graph structures that adapts to physical system dynamics.
* **Role of AMP**: AMP enables anisotropic intra-level propagation to determine node selection; and its predicted importance weights facilitate inter-level feature propagation.
> Q3: Efficacy of AMP.
The efficacy of AMP is demonstrated in two ways:
- Fig 4 (Inter-level): By comparing the model with M1-M3 baselines, it suggests that AMP's predicted importance weights are crucial for inter-level information propagation.
- Anisotropic vs. Isotropic (Intra-level): We here use isotropic summation for intra-level feature aggregation, while retaining the AMP importance weights for inter-level propagation. The results are shown below:
||RMSE-1($10^{-2}$)||RMSE-All($10^{-2}$)||
|-|-|-|-|-|
||Cylinder|Flag|Cylinder|Flag|
|Isotropic|0.4521|0.3752|40.59|118.9|
|EvoMesh|0.1568|0.3049|6.571|76.16|
> Q4: Number of levels/downsampling ratio.
Number of hierarchy levels: We use the same number of layers as the static bi-stride hierarchy method. Please refer to **Appendix C.4** for the impact of varying hierarchy numbers.
Downsampling ratio: Below, we present the average downsampling ratios for EvoMesh and the static bi-stride hierarchies. We visualize static and EvoMesh hierarchies at https://sites.google.com/view/evomesh/.
|Node retention|Cylinder||Flag||
|-|-|-|-|-|
|Layer|Static|EvoMesh|Static|EvoMesh|
| 1| 0.5029| 0.4510| 0.5009| 0.4409|
| 2| 0.5069| 0.5090| 0.5082| 0.6288|
| 3| 0.5147| 0.5294| 0.5174| 0.5251|
| 4| 0.5356| 0.5031| 0.5654| 0.7323|
| 5| 0.5131| 0.4306| 0.5582| 0.2626|
| 6| 0.5039| 0.6331| 0.5129| 0.7182|
> Q5: vs. Uniform node-sampling.
We present the required comparisons below. The baseline model uses an equal number of uniformly sampled nodes at each level.
||RMSE-1($10^{-2}$)||RMSE-All($10^{-2}$)||
|-|-|-|-|-|
||Cylinder|Flag|Cylinder|Flag|
|Uniform sample|0.3019|0.3999|9.357|145.27|
|EvoMesh|0.1568|0.3049|6.571|76.16|
> Q6: MGN reports lower RMSE for CylinderFlow/Airfoil/Plate.
Most models converge after 1 million steps, while MGN trains for 10 million steps (10 days for CylinderFlow, 20 for Airfoil). To fix this, we use 500 trajectories for the Plate dataset, whereas MGN paper uses 1000. For Airfoil, our sequence length is 100 steps, while MGN uses 600 (detailed in Appendix). We now include MGN results in our response to **Reviewer Mnuf Q4**, showing EvoMesh outperforms MGN.
> Q7: AMP vs. GAT.
Please see our response to **Reviewer Mnuf Q3**.
> Q8: Related work -- DiffPool.
We'll include a discussion of these methods in the revised paper -- Most differentiable graph pooling methods like DiffPool are designed for graph classification, where unpooling is not typically needed. In contrast, EvoMesh focuses on mesh-based physics simulation, requiring dynamic modeling across multiple scales and integration of global and local details. Dynamic hierarchy construction remains largely under-explored in this field.
> Q9: Ratios of retained points.
We provided sampling ratios of EvoMesh in our previous response to Q4, which shows variations across different graph layers. While static methods follow predefined reduction steps, EvoMesh adaptively retains nodes **based on the optimization objective**, which is solely aimed at improving prediction accuracy. We do not impose constraints on sampling ratios.
Is it possible for the model to retain all nodes? Theoretically, yes. However, in practice, the model prioritizes the most relevant nodes for the simulation, those that contribute the most to the final prediction. This behavior emerges naturally during training, suggesting that dense high-level graph structures may not be necessary for this task. | Summary: This paper proposes anisotropic message passing (AMP) with hierarchical structure for mesh-based simulation. Specifically, the AMP enables GNN to predict the edges’ weights before aggregation. The hierarchical graphs are constructed dynamically given the predicted importance. Experiments on five different domains show that the proposed method outperforms baselines.
## Update after rebuttal
The author has addressed my concerns, so I remain positive about this paper and will keep my original score.
Claims And Evidence: The claims are clear.
Methods And Evaluation Criteria: Make sense.
Theoretical Claims: N/A. The claims are generally correct.
Experimental Designs Or Analyses: The experiments are sufficient. Visualized results on all domains are provided either in the main text or appendix. Both the mean and standard deviation are reported in the quantitative results.
Supplementary Material: I review both the appendix and visualized gif.
Relation To Broader Scientific Literature: Please refer to "Essential References Not Discussed".
Essential References Not Discussed: The paper should discuss LayersNet [1], which also deals with learning-based simulation. The relation and similarity should be discussed. For example, LayersNet also applies hierarchical structure for mesh-based simulation; the attention mechanics and the AMP's selection share similarity.
[1] Shao, et al. Towards Multi-Layered 3D Garments Animation, ICCV2023.
Other Strengths And Weaknesses: The experiments are sufficient and convincing. It would be better if the author could provide more visualized results, such as videos with more frames.
Other Comments Or Suggestions: Please refer to "Questions For Authors".
Questions For Authors: Here I summarize and list my questions:
1. Is the proposed method bring extra computational overhead? What is the efficiency of this method?
2. Missing reference in "Essential References Not Discussed".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point-by-point.
> Q1: Missing reference -- LayersNet.
Thank you for bringing LayersNet to our attention. We will include it in our related work and provide detailed comparisons in the revised manuscript. Below is a brief discussion of the key differences:
- *Distinct application domains:* While LayersNet focuses on garment animation influenced by external forces, EvoMesh is a general mesh-based simulation method for both Eulerian (e.g., fluid and structure simulations) and Lagrangian (e.g., cloth simulation) systems.
- *Different hierarchical structures:* LayersNet employs a **static, patch-based two-level hierarchy**, where garments are represented as particle patches to simplify interaction modeling. In contrast, EvoMesh introduces a **learning-based, time-evolving graph structure that dynamically adapts to physical system dynamics**, with the entire structure being learned in a fully differentiable manner.
- *Attention vs. AMP mechanisms:* LayersNet employs specialized attention mechanisms to handle rotational symmetry in garment dynamics. In EvoMesh, **AMP serves as the core mechanism for dynamic graph construction**, where the differentiable node selection is guided by AMP features. Additionally, AMP’s predicted importance weights facilitate inter-level feature propagation. We also discuss the impact of replacing AMP with GATConv, which results in degraded performance, as detailed in our response to **Reviewer Mnuf Q3**.
> Q2: More visualized results.
We provide additional visualizations in https://sites.google.com/view/evomesh/, including prediction showcases of FoldingPaper, constructed comparisons for static-based bi-stride hierarchy and dynamics hierarchies of EvoMesh. We hope this addresses your concern.
> Q3: Is the proposed method bring extra computational overhead?
We have provided a comprehensive efficiency comparison in **Appendix Table 13**, covering training time, inference time, and model size.
Admittedly, the dynamic hierarchy construction in EvoMesh does introduce some additional computational overhead. However, as shown in Table 13, this cost is manageable as **the overall training time remains comparable to that of static hierarchy methods**. This efficiency is primarily attributed to the significant reduction in the number of edges at each hierarchy level, as detailed in our response to **Reviewer TDvH Q7**. | Summary: The paper presents a novel hierarchical graph network architecture in which the hierarchy is determined in a data-driven manner. Additionally, it introduces an anisotropic message-passing step that incorporates an attention mechanism into the aggregation process. With these innovations, the proposed EvoMesh architecture outperforms various hierarchical baselines across multiple challenging tasks by a significant margin.
Claims And Evidence: All claims are backed up by evidence.
Methods And Evaluation Criteria: ## Method
I did not fully understand the details of the DiffSELECT operation. The model needs to sample nodes during training, and through reparameterization, the gradient somehow propagates through this process. However, the exact mathematical formulation of Eq. (6) remains unclear from reading only the current paper. Providing a precise formulation—perhaps in the appendix—would improve the paper, as this is a crucial component enabling the data-driven hierarchical selection.
Similarly, the motivation behind FeatureMixing is not entirely clear. Is it intended to combine features from a coarser level with those from the current level? A more detailed explanation would help clarify this aspect of the architecture.
## Evaluation
The evaluation appears thorough, covering a range of ablations and baselines. However, I missed a complete rollout from the OOD mesh resolutions—or, if a full rollout is too noisy, a multi-step rollout could still be informative. Additionally, a qualitative showcase of the paper-folding task would be valuable. Since, to my understanding, this is a new task and dataset, visualizing the results would help illustrate the model's performance more effectively.
Theoretical Claims: No theoretical claims were made.
Experimental Designs Or Analyses: The selection of only three seeds seems somewhat low. However, the standard deviations suggest that significant variations between different training runs are unlikely.
Supplementary Material: After reviewing the appendix, I noticed that the Gaussian input noise values were not reported. Are these values tuned per task? Additionally, do they vary based on mesh resolution in the OOD task? Including this information would provide further clarity and improve the completeness of the evaluation.
Relation To Broader Scientific Literature: This work builds upon hierarchical graph network simulators, introducing novel aspects in both anisotropic message passing and data-driven hierarchy construction. While Attentive Graph Neural Networks share some similarities with the anisotropic message passing steps, their structure differs significantly, making this a clear contribution.
Essential References Not Discussed: Not the case.
Other Strengths And Weaknesses: The introduction effectively communicates the problem, the proposed method, and its broader context. Additionally, the analysis and evaluation are thorough. I enjoyed reading the paper. However, as mentioned earlier, some details in the method description—particularly regarding the DiffSELECT operation and the FeatureMixing method—are missing. Expanding on these aspects would improve clarity.
Other Comments Or Suggestions: - In Figure 2, the authors speak from “DHMP”, I suspect that this is an older acronym of the EvoMesh method?
- Same goes for Figure 8a.
Questions For Authors: - According to the appendix, the runtime was similar to that of static hierarchy models. Why is this the case? Given that hierarchy construction is typically costly, did the authors employ any optimizations or tricks to improve efficiency? Clarifying this would be helpful.
- Will the authors release their code and datasets? Given the complexity of the data-driven hierarchy implementation, making the code available would help increase adoption of the method. Additionally, I would appreciate the opportunity to compare it more easily in my own work. I consider raising my score to Strong Accept if that is the case.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s valuable comments.
> Q1: Details of DiffSELECT.
We here revise Eq. (6) to clarify the node selection process. Specifically, we employ Gumbel-Softmax independently for each node based on a 2-dimensional logits vector predicted by model $\phi^v$. This enables stochastic sampling of node retention in the downsampled graph while ensuring differentiability for gradient-based optimization.
The formulation is as follows:
$$
z\_i^l=\text{Gumbel-Softmax}(\mathbf{l}\_i^l)=\frac{\exp\left((\log p\_{i,0}^l+g\_{i,0}^l) / \tau \right)}{\sum_{k=0}^1 \exp\left((\log p_{i,k}^l + g_{i,k}^l) / \tau \right)},
$$
where:
- $\mathbf{l}\_i^l=(\log p\_{i,0}^l, \log p\_{i,1}^l)$ represents the logits for node $v_i$ at layer $l$,
- $g_{i,k}^l$ is the Gumbel noise sampled independently for each node and category,
- $\tau$ is the temperature parameter controlling the smoothness of the sampling process.
This process allows for a **differentiable approximation of discrete node selection**. We will include this explanation in the revised manuscript.
> Q2: On FeatureMixing.
FeatureMixing is to integrate features from finer levels with those from coarser levels, to better capture both local and global information in the U-Net architecture. While the EXPAND operation upsamples coarser-level features (e.g., level $L$) to match the resolution of the current level ($L−1$), naive upsampling can introduce noise or lead to uneven upsampling. FeatureMixing refines and aligns coarser-level features before fusion.
The whole upsampling process works as follows:
1. **EXPAND**: Coarser-level features ($\mathbf{F}_L$) are upsampled to the resolution of level $L-1$.
2. **Refinement**: A message-passing step is applied to refine the upsampled features ($\text{EXPAND}(\mathbf{F}_L)$):
$\mathbf{\tilde{F}}_L = \text{MessagePassing}(\text{EXPAND}(\mathbf{F}_L))$
3. **Fusion**: The refined coarser-level features ($\mathbf{\tilde{F}}\_L$) are combined with the current level’s intra-level features ($\mathbf{F}\_{L-1}$):
$\mathbf{F}\_{L-1}^{\text{mixed}} = \mathbf{F}_{L-1} + \mathbf{\tilde{F}}_L$
We will improve the clarity of FeatureMixing in the revision.
> Q3: Multi-step rollouts on OOD Mesh Task.
We have supplemented the rollout predictions with 50 steps and results of additional baselines on the Cylinder and Airfoil datasets in the rebuttal period. Results are presented in the reply to **Reviewer Mnuf Q4**. As shown, EvoMesh outperforms other methods in both the 1-step and 50-step rollout predictions, demonstrating superior generalization.
> Q4: Qualitative showcases of paper-folding.
Visualizations of EvoMesh and MGN rollouts are available in https://sites.google.com/view/evomesh/. EvoMesh better captures fine details and deformation, adapting to intricate folds and complex geometries. We hope this addresses your concern.
> Q5: Gaussian input noise.
The values for Gaussian input noise are reported in Appendix A, Table 7. We adopted the same noise scale used in the MGN and BSMS-GNN papers for each dataset, without task-specific tuning. The MGN paper and its referenced GNS paper[1] offer a detailed discussion on the selection of the Gaussian noise scale, and subsequent work generally adhered to these established values. The same scale is applied in the OOD task, regardless of mesh resolution.
[1] Learning to Simulate Complex Physics with Graph Networks.
> Q6: Typo in Figure 2 and Figure 8a.
"DHMP" indeed refers to EvoMesh, and we'll correct this in the revised manuscript.
> Q7: Why similar runtime to static hierarchy models? Any optimizations to improve efficiency?
EvoMesh maintains a runtime comparable to static hierarchy models because the computational overhead of hierarchy construction is *balanced by a substantial reduction in the number of edges* at each hierarchical level. As shown below, EvoMesh consistently yields fewer edges per layer compared to the static bi-stride hierarchy. Since message passing complexity is primarily driven by the number of edges rather than nodes, this reduction substantially lowers computational costs.
Notably, EvoMesh achieves this without explicit efficiency-driven modifications. Instead, this efficiency emerges naturally during training, as EvoMesh autonomously identifies key nodes, which we observe to be relatively sparse. This partially suggests that the dense high-level graph structures may not be necessary for this task.
|| CylinderFlow (#nodes, #edges)||(#nodes, #edges)||
|-|-|-|-|-|
|Layer|Static |EvoMesh| Static|EvoMesh|
|1|(949, 8128)|(851, 5322)|(791, 7130)|(696, 4347)|
|2|(481, 6333)|(433, 3150)|(402, 5728)|(438, 3975)|
|3|(247, 5308)|(229, 2021)|(208, 5210)|(230, 2681)|
|4|(133, 4645)|(115, 1172)|(118, 5654)|(168, 2178)|
|5|(81, 4285)|(50, 484)|(77, 4923)|(44, 588)|
|6|(41, 1654)|(31, 358)|(40, 1537)|(29, 174)|
> Q8: Code release.
We'll release both the code and datasets upon acceptance. | Summary: The paper presents EvoMesh, a graph neural network for mesh-based physical simulations that dynamically learns evolving graph hierarchies instead of relying on fixed structures. Using anisotropic message passing, it adaptively selects nodes based on physical inputs, improving long-range dependency modeling. Experiments show improvement over fixed-hierarchy GNNs across multiple simulation datasets.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Make sense.
Theoretical Claims: Yes, seems correct
Experimental Designs Or Analyses: Yes, seems reasonable.
Supplementary Material: Yes, reviewed all the supplementary materials.
Relation To Broader Scientific Literature: The paper advances GNN-based physical simulations by replacing fixed graph hierarchies with adaptive, time-evolving structures via differentiable node selection.
Essential References Not Discussed: The related work seems include sufficiency references.
Other Strengths And Weaknesses: Overall, the proposed approach appears reasonable, and the results show improvement over baseline methods. However, a primary concern is that the novelty of the method seems limited, as it closely resembles the BSMS-GNN in overall structure.
Other Comments Or Suggestions: Please find comments in the question section.
Questions For Authors: 1. Could the authors provide more detail about the differences or novel aspects of the proposed approach compared to BSMS-GNN?
2. BSMS-GNN’s main contribution is its computational efficiency, which may come at the expense of prediction accuracy. In contrast, methods like EAGLE (Janny et al.) emphasize multi-scale message-passing algorithms and typically achieve better predictive performance.
The authors might consider comparing their approach to such methods.
3. The anisotropic message-passing layer appears similar to a variant of GAT. Could the authors explain this resemblance in more detail?
4. There are inconsistencies in the choice of methods used for comparison in different experimental setups. For example, why is MGN not included in Table 2 but only in Table 3? Likewise, why are methods like MGN and Lino et al. (2022) excluded from Table 4 comparisons?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments.
> Q1: Novelty compared to BSMS-GNN.
While both EvoMesh and BSMS-GNN adopt U-Net-based hierarchical structures for multi-scale modeling, EvoMesh introduces critical innovations in **end-to-end joint learning of graph hierarchies and physics dynamics that adapt to changing physics states**:
**Core Innovation: Adaptive vs. Static (preprocessed) Hierarchies**
- *EvoMesh* integrates graph structure learning and dynamics learning into a fully differentiable framework, enabling **time-evolving graph hierarchies that adapt dynamically to changing physical conditions**. Specifically, this is implemented with *anisotropic intra-level propagation*, where edge weights are modulated by local physical states to enable directionally sensitive message passing, and *learnable inter-level propagation*, which adaptively learns interactions between hierarchical levels to optimize information flow across scales.
- *BSMS-GNN* separates hierarchy construction (preprocessing) from dynamics learning. It employs a **fixed, pre-defined graph hierarchy** with static connectivity throughout the simulation, limiting its ability to refine its structure in temporally evolving systems.
>Q2: EvoMesh vs. EAGLE (Janny et al.).
In methodology,
- *EAGLE* uses a *two-scale* hierarchical message-passing approach, downscaling mesh resolution via *geometric* clustering of mesh positions. **The fixed-size clustering is precomputed with a modified k-means algorithm, independent of dynamics modeling.**
- *EvoMesh* constructs a *multi-scale* hierarchy that enables the **joint learning of adapative high-level graph structures and physical dynamics** within an end-to-end differentiable framework.
The table below compares the performance of EAGLE and EvoMesh. The results show that EAGLE’s purely geometric two-scale hierarchy results in slightly lower predictive performance, while EvoMesh benefits from its context-aware dynamic hierarchy.
| | RMSE-1 ($\times 10^{-2}$)| | RMSE-All ($\times 10^{-2}$) | |
|-|-|-|-|-|
|| Cylinder| Flag | Cylinder| Flag |
|EAGLE| 0.1733|0.3805| 20.05|127.7|
| EvoMesh | 0.1568| 0.3049 | 6.571| 76.16 |
> Q3: AMP vs. GAT
The key distinction between AMP and GAT is that **AMP enables differentiable dynamic hierarchy construction**, a capability not supported by GAT. While AMP shares similarities with attention-based methods like GAT in computing importance weights, the detailed implementation differences are as follows:
- GAT applies computed weights to aggregate node features through weighted summation;
- Ours: (a) directly applies predicted weights to edge features, (b) leverages weights for inter-level feature propagation.
This dual mechanism enables AMP to dynamically learn graph hierarchies adaptable to evolving physical systems while jointly modeling physics dynamics through gradient-based learning, rather than relying on static hierarchical structures.
To further demonstrate the effectiveness of AMP, we replaced it with GATConv in EvoMesh, using a single attention head. **The results below show that AMP layers used in EvoMesh outperform GATConv.** Full comparisons between AMP and GAT will be included in the revised manuscript.
| | RMSE-1 ($\times 10^{-2}$)| |RMSE-All ($\times 10^{-2}$) ||
|-|-|-|-|-|
||Cylinder| Flag |Cylinder| Flag|
| EvoMesh-GATConv|0.2025|0.3009|10.253| 106.2|
| EvoMesh |0.1568|0.3049|6.571|76.16|
> Q4: There are inconsistencies in the choice of methods used for comparison in different experimental setups.
**Table 2:** Here, we present the results of MGN, showing that EvoMesh achieves superior performance.
|| RMSE-1 ($\times 10^{-2}$)||||RMSE-All ($\times 10^{-2}$) ||||
|-|-|-|-|-|-|:-:|:-:|:-:|
|| Cylinder | Airfoil | Flag | Plate| Cylinder |Airfoil|Flag|Plate|
|MGN| 0.3046|77.38| 0.3459 | 0.0579 |59.78| 2816|115.3|3.982|
|EvoMesh|0.1568| 41.41| 0.3049 | 0.0282 |6.571 |2002|76.16| 1.296 |
**Table 3:** For the Paper Folding dataset, fixed hierarchy-based approaches (e.g., BSMS-GNN and HCMT) are excluded because their hierarchies, derived from the initial graph input, cannot be applied across the entire sequence as the mesh structure evolves.
**Table 4:** We have supplemented this table with additional baseline models in the rebuttal period. Additionally, we present results from 50-step rollouts to offer further insights into the performance of extended simulations. The results demonstrate the generalizability of EvoMesh to OOD mesh structures through dynamic hierarchy construction. Full results will be provided in the revised paper.
||RMSE-1 ($\times 10^{-2}$)|| RMSE-50 ($\times 10^{-2}$) | |
|-|-|-|-|-|
| |Cylinder|Airfoil|Cylinder|Airfoil|
| MGN (**New**) | 1.0596|169.577|7.8332|1829.1|
| Lino et al. (**New**) |25.8930|144.35|65.207|1299.4|
| BSMS-GNN| 0.9177 |202.30 |2.0971|1677.3|
| EvoMesh | **0.4855** | **126.70**|**1.0771**|**812.47**| | null | null | null | null | null | null |
Decoupled SGDA for Games with Intermittent Strategy Communication | Accept (poster) | Summary: This paper presents a novel adaptation of Stochastic Gradient Descent Ascent (SGDA) tailored for multiplayer games where strategy updates occur intermittently. The authors introduce **Decoupled SGDA**, a method that allows players to update strategies locally using outdated opponent strategies, significantly reducing communication overhead.
The paper establishes theoretical guarantees for convergence in strongly-convex-strongly-concave (SCSC) games and weakly coupled games. Moreover, the proposed method outperforms existing federated minimax approaches in noisy, imbalanced settings. Numerical experiments validate its effectiveness in quadratic minimax games, federated learning, and non-convex GAN training. A further extension, **Ghost-SGDA**, is introduced as a heuristic to enhance communication efficiency.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The proofs appear sound, but I did not verify all derivations.
Experimental Designs Or Analyses: Experiments cover key settings and demonstrate benefits.
Supplementary Material: I did not review the entire supplementary material in depth, particularly the theoretical derivations.
Relation To Broader Scientific Literature: The work extends prior research in minimax optimization, federated learning, and game theory, introducing a novel approach for communication-efficient optimization.
Essential References Not Discussed: No
Other Strengths And Weaknesses: #### Strengths
- Well-motivated problem and clear contributions.
- Theoretical guarantees and strong empirical results.
- Addresses real-world constraints in **distributed learning and federated settings**.
Other Comments Or Suggestions: Nothing
Questions For Authors: See weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in reviewing our work and highlighting its strengths. We would be happy to further discuss any concerns and address the reviewer’s questions. If the reviewer finds our clarifications satisfactory, we appreciate if you consider increasing your score. | Summary: The authors introduce a variant of SGDA to compute solutions to decentralized min-max problems with limited communication. Players update locally with outdated strategies and synchronize intermittently, improving efficiency. Theoretical results establish near-optimal communication complexity in strongly convex–strongly concave settings, with extensions to weakly coupled games. Experiments on quadratic minimax problems and GANs to empirically validate the proposed method.
Post-rebuttal
The authors response was helpful, but given that I am not well versed in communication complexity, I am not in a good position to evaluate the contribution of this work. I will defer to the other reviewers and maintain my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I checked the correctness of the convergence guarantees.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, I reviewed the proofs in the appendix.
Relation To Broader Scientific Literature: The authors do a good job of contextualizing their work in the broader literature on GDA methods for SC-SC minimax games.
Essential References Not Discussed: Not as far as I am aware.
Other Strengths And Weaknesses: While the idea presented is an interesting one, and I understand the motivation behind studying a limited communication algorithm for equilibrium computation, the novelty of the work in the paper appears to be limited, since the theoretical analysis is standard and the approach focuses on the restrictive SC-SC setting. The speedups compared to other algorithms mentioned in Table 1 is not really interpretable. A comparison of convergence rates in addition to communication complexities would help provide additional context for the proposed algorithm.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can you provide further interpretation on the conditions for speedup relative to algorithms in the literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in reviewing our work. As the first work to consider solving minimax games in a distributed setting while players are on different machines, we decided to focus on SCSC games in this study. Extending this method to other types of games, such as convex-concave or non-convex-non-concave games, is a promising direction for future work.
However, our work introduces the following novelties: Firstly, it is not clear whether the worst-case analysis of our method could be as good as the baseline GDA. We provide a proof demonstrating that this is indeed the case (see Appendix B.2). We refer to this as the non-weakly coupled regime, in which players have high interaction with each other. On the other hand, we present a novel proof for the weakly coupled regime, where players have low interaction (see Appendix B.1). To the best of our knowledge, this is the first proof that removes the dependency on players' conditioning and instead relies solely on the degree of interaction between players, which is low in the weakly coupled regime.
>Can you provide further interpretation on the conditions for speedup relative to algorithms in the literature?
Other works in the literature typically consider the worst-case scenario, where no communication acceleration can be achieved when player interaction is low. For instance, the recent work [1] considers a scenario where $N$ players collaboratively find the equilibrium while taking local steps and communicating intermittently. This framework is the same as ours (we provide the $N$-player extension in Section C), yet they do not demonstrate any communication acceleration, as their convergence rate always depends on $L_{\max}$, the maximum smoothness parameter among all players (Theorem 3.3). This assumption is overly pessimistic when player interaction is low. However, in our work we demonstrate the level of interaction using the parameter $L_c$ which is zero when there is no interaction between players and is small when interaction is low. This allows us to have a rate depending on $L_c$ (and not $L_{\max}$) in the weakly coupled regime. As far as we know, our work is the only one that explicitly accounts for this fact in the distributed minimax problems.
We hope we could address your concerns about our work and would be happy to clarify if you have further questions. If we managed to address all your concerns, we appreciate if you consider increasing your score.
references:
[1] Yoon, T., Choudhury, S. and Loizou, N., 2025. Multiplayer federated learning: Reaching equilibrium with less communication. arXiv preprint arXiv:2501.08263. | Summary: This paper studies the communication complexity of first-order methods for solving strongly convex games (two-player games in the main body and an extension to $n$-player games in the appendix). Motivated by the fact that the communication complexity of recent (near-)optimal methods (Lin et al., 2020; Kovalev & Gasnikov, 2022) is independent of the coupling in the minimax objective—resulting in no speed-up even in fully decoupled objectives—the authors introduce a measure of coupling, $\kappa$. This measure is defined as the ratio between the Lipschitz continuity of the lagged operators of the corresponding VI and its monotonicity.
The authors propose a new algorithm, a direct extension of first-order methods for minimax optimization, which updates locally using outdated information from other players and only communicates periodically. They analyze the algorithm in a noisy lagged setting. The new algorithm, termed **Decoupled SGDA**, demonstrates some relative speed-up compared to existing methods in weakly coupled settings, in terms of the condition numbers of the players, $\kappa_u$, $\kappa_v$, and the coupling degree $\kappa$. In general settings, the communication complexity of the algorithm is comparable to optimal methods (Zhang et al., 2022; Kovalev & Gasnikov, 2022), though it may be off by a factor that depends on $\kappa_u$, $\kappa_v$, and $\kappa$.
The authors conduct experiments to demonstrate the effectiveness of their algorithm in weakly coupled settings on classical quadratic convex games. Additionally, they illustrate that similar ideas as Decoupled SGDA can be applied to nonconvex optimization problems by showing practical results in a toy GAN setting.
Claims And Evidence: Yes, the theoretical claims and experiments are well supported. Additionally, it is important to note that there is no magic in the theory, and claims such as speed-ups in weakly coupled settings, given the design of the new algorithm, are expected.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, I have not checked the proof in the appendix.
Experimental Designs Or Analyses: Yes, I checked the ones in the main body.
Supplementary Material: No.
Relation To Broader Scientific Literature: This research focuses on the communication complexity of minimax optimization, which may have some impact on training GANs and adversarial machine learning, particularly in scenarios where some form of limitations on synchronization exist.
Essential References Not Discussed: Not that I know of. However, I would like to note that I am not an expert in this line of work.
Other Strengths And Weaknesses: Strengths:
- The observation and formalism on the effect of the coupling level of the objective function on communication complexity are interesting, e.g., the definition of $\kappa$, weakly coupled settings, etc.
- While the proposed algorithm is quite natural (which I do not consider a drawback), the paper theoretically demonstrates considerable speed-ups in certain regimes depending on the condition numbers of the players' objectives.
- Well-designed experiments support the main claims of the paper.
- The paper is well-written and easy to follow.
Weaknesses:
- I highly doubt that the theoretical novelties in this work provide sufficient merit for a top-tier publication, as the analysis appears to be relatively straightforward and closely follows previous work. Additional discussion on these aspects, especially on how the analysis differs from prior works, seems necessary.
Given these points, I am leaning toward rejection, but I am open to reconsideration based on the opinions of the other reviewers and the forthcoming discussion.
Other Comments Or Suggestions: I suggest that the authors discuss the mathematical novelties of their proofs in the main body, as this will significantly strengthen the paper.
In Algorithm 1, it appears that two lines are missing, which update the initial belief of the other player at the start of each round based on the previous completed round. Specifically, after Line 2, the following should be added:
set $v_0^{r} = v_K^{r - 1}$ and $u_0^{r} = u_K^{r - 1}$.
Questions For Authors: How are $\kappa_u$ and $\kappa_v$ related to $\kappa$? Is it possible to provide crude upper and lower bounds on the relationship between these parameters to make the speed-up of Decoupled SGDA in Table 1 more meaningful compared to other algorithms?
Additionally, can you design concrete examples where the speed-up criteria are satisfied for illustration purposes?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in reviewing our work. Although some parts of the analysis follow the previous works, we have novelties in our proof, especially in the weakly coupled regime. Firstly, one novelty of our work is identifying the regime in which we can achieve communication acceleration. We mathematically identified under which condition the coupling between players is low, and it can give us communication acceleration. To the best of our knowledge, previously, there was no proof in the literature in which the low interaction of players has been identified based on players' related parameters. Secondly, in section B.1 in the Appendix, we provide a novel proof for the convergence of our method in the weakly coupled regime in which we remove the dependency on the players' conditioning and our rate depends only on the coupling parameters of the players ($L_c$), which can be very small and even zero. So far, this type of proof has not existed in the literature, and previous works always use some pessimistic assumption for all regimes (even when there is no interaction between players). As an example, the very recent work [1], which considers the same setting as our work in which $N$ players (we discuss the extension to $N$ players in Section C) collaboratively find the equilibrium while taking local steps and synchronize periodically, gives a rate that always depends on the quantity $L_{\max} = \max ( L_1, \dots, L_N )$, where $L_i$ is the smoothness parameter of the $i$th player (Theorem 3.3). This rate is pessimistic in low interaction games. One can verify that when there is no interaction, players can independently optimize their own objective without the need for communication. While this work cannot recover this scenario, our rate does because of introducing the parameters $L_c$ and our novel proof. We also cover the case that the interaction exists but is somewhat small, and we still can get communication acceleration.
>How are $\kappa_u$ and $\kappa_v$ related to $\kappa$? Is it possible to provide crude upper and lower bounds on the relationship between these parameters to make the speed-up of Decoupled SGDA in Table 1 more meaningful compared to other algorithms?
It always holds that $\kappa \ge \max(\kappa_u, \kappa_v,\kappa_c)$ (note that without the loss of generality, $\kappa_c = \kappa_{uv} = \kappa_{vu}$ if we use regular $l_2$ norm or equally if the parameters $\alpha_u = \alpha_v = 1$. See Section "Notation" from the paper for the definition of $\alpha_u, \alpha_v$). If $\kappa_{c}$ is large compared to $\kappa_u,\kappa_v$, meaning that we have a strong interaction between players, our method achieves nearly GDA complexity as $\kappa_c$ dominates the condition numbers of players (worst-case analysis of our method). However, when $\kappa_{c}$ is small, which we refer to as the weakly coupled regime, our method achieves significant acceleration as there is no dependency on the players' conditioning or $\kappa$ and only $\kappa_c$ appears as opposed to all other methods which are still affected by $\kappa_u,\kappa_v$, even if there is no interaction (meaning that $\kappa_{uv} = \kappa_{vu} = \kappa_c = 0$). So, our method performs very well when the interaction is low and players suffer from poor conditioning, but also it does not get much worse than baseline GDA in the case of high interaction between players.
> Can you design concrete examples where the speed-up criteria are satisfied for illustration purposes?
To show this speedup on some examples, one can consider quadratic games with linear coupling in the form of $f(u, v) = \frac{1}{2} u^\top A u - \frac{1}{2} v^\top B v + u^\top C v$ where $A, B$ are positive definite matrices and $C$ is a general possibly rectangular matrix. Let $\mu_u, \mu_v$ be the minimum eigenvalues of $A, B$. The condition for which we get communication acceleration on this class of functions is $\frac{\|\|C\|\|}{\min(\mu_u, \mu_v)} \leq \frac{1}{2}$, where $\|\|. \|\|$ is the $l_2$ norm of the matrix $C$. The norm of matrix $C$ defines the level of interaction, and the larger it is, the more interactive the game is. So, it is expected to get more acceleration as this norm gets smaller, and in the extreme case, when $C = 0$, there is no interaction, and players can independently optimize and find the equilibrium without any communication.
We want to thank the reviewer again for their constructive feedback and we will consider adding some details about the novelty of proof in the main body of the revision. If we managed to address your concerns, we appreciate if you consider increasing your score.
references:
[1] Yoon, T., Choudhury, S. and Loizou, N., 2025. Multiplayer federated learning: Reaching equilibrium with less communication. arXiv preprint arXiv:2501.08263.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate the detailed response from the authors.
However, I would like to note that I am not an expert in communication complexity, and despite the additional explanation provided by the authors, I am not able to verify the significance or extent of the mathematical novelty of the algorithm and analysis based on my understanding. For these reasons, I am not changing my score and will leave the final decision to the post-discussion phase among the reviewers.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s thoughtful engagement with our work and fully understand the challenge of evaluating contributions outside one's specialty. We are grateful for the opportunity to clarify two key points that may help contextualize our work:
- ***Accessible Novelty***: Our work addresses a previously overlooked but practically significant scenario in game-theoretic learning, which fundamentally differs from standard federated settings. This area is gaining attention, as evidenced by a concurrent work [1] that validates the timeliness of this research direction. We wish to emphasize that this concurrent work closely follows techniques from the federated learning literature. Consequently, their convergence rates (as presented in Theorem 3.3) do not demonstrate a significant speedup in the deterministic regime due to the strong constraint on the step sizes discussed below their Theorem 3.3. In contrast, our novel approach, based on the concept of ***weakly coupled games***, allows us to overcome this step size limitation and achieve, for the first time, a significant speedup in the deterministic case (see, for instance, our Corollary 4.4 and Table 1).
- ***Review Process Considerations***: We fully respect your evaluation but wish to emphasize that in ICML’s competitive process, even small score adjustments can significantly influence whether area chairs review the full discussion of a paper’s merits. If you perceive any potential value in our work, despite it being outside your specialty, we would be deeply grateful if you could consider a slightly more favorable evaluation to ensure our work receives full consideration in the discussion phase.
Thank you for your time and expertise in reviewing our work. | Summary: This paper introduces Decoupled SGDA, a novel stochastic gradient-based method designed for multi-player (or two-player) games with intermittent strategy communication. The key idea is that each player updates their parameter (e.g., minimizer or maximizer) locally using potentially outdated (yet periodically synchronized) opponent strategies. The authors analyze the approach under strongly-convex strongly-concave (SCSC) assumptions and show that in a regime they call Weakly Coupled Games, Decoupled SGDA achieves accelerated communication complexity compared to traditional methods.
Claims And Evidence: **Reduced communication in weakly coupled games**
- *Claim*: Decoupled SGDA requires fewer communication rounds than standard GDA/SGDA, especially when the interaction between players is weak.
- *Evidence*: The authors introduce a coupling parameter $\kappa_c$ and prove that if $\kappa_c \le 1/4$, the method’s communication complexity *no longer depends* on the potentially large condition numbers $\kappa_u, \kappa_v$. Experimental plots on quadratic games confirm substantially fewer communication rounds for small $\kappa_c$.
**Near-optimal communication complexity**
- *Claim*: Decoupled SGDA can match or even outperform, in terms of *communication rounds*, the best-known first-order methods (like FOAM) under certain stronger assumptions on the coupling.
- *Evidence*: Corollary 4.5 states that if $\kappa_c \le \tfrac{1}{2}\sqrt{1 - \frac{1}{\sqrt{\kappa_u \kappa_v}}}$, Decoupled SGDA beats FOAM (which has optimal iteration complexity for classical SCSC). The proofs (in the main text and appendices) and carefully derived bounds support this.
Methods And Evaluation Criteria: - Decoupled SGDA performs local gradient updates with respect to *one’s own parameter* only, using *outdated* opponent strategies that are synchronized less frequently. In practice:
1. Each round begins with a synchronization step where $(\mathbf{u}, \mathbf{v})$ are exchanged among players.
2. Each player runs $K$ local steps using only the gradient w.r.t. their own variable and a *fixed* (outdated) opponent parameter from the start of the round.
3. After $K$ local steps, parameters are synchronized again.
- *Theoretical metrics*: The standard measure is the *distance to the equilibrium* $\|\mathbf{x}_t - \mathbf{x}^\star\|$ for strongly-convex strongly-concave games, and *gradient norm* for more general (non-convex) settings.
- *Empirical metrics*:
- Quadratic toy games: distance to equilibrium or gradient norm.
- Non-convex toy game (GAN-like): smallest gradient norm vs. number of communication rounds.
Theoretical Claims: Under strong monotonicity, if $\kappa_c \le 1/4$, Decoupled SGDA achieves a *linear rate* $\exp(- (1-4 \kappa_c) R)$ in terms of *rounds* $R$.
Experimental Designs Or Analyses: Overall, the experimental methodology is sound: they fix the total number of communication rounds and observe the final performance. This directly speaks to the main claim of *communication efficiency*.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The work builds on foundational methods (GDA, EG, OGDA) and references advanced algorithms such as FOAM, APPA, etc. The authors also connect to known lower bounds in strongly-convex strongly-concave optimization, ensuring theoretical completeness.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The requirement of unbiased gradient oracles *for each player’s own parameters* might be challenging in certain practical contexts (though standard in many theoretical studies).
Other Comments Or Suggestions: 1. It might be interesting to see if adapting the number of local steps $K$ dynamically (e.g., using line search or variance monitoring) can further reduce communication, especially during early or late phases of training.
2. If a player partially observes the other’s parameters between rounds, can the approach be adapted to incorporate partial synchronization?
Questions For Authors: 1. **Choice of Local Steps $K$ in Practice**
- *How sensitive is performance to the choice of $K$ in real applications?* For instance, in the experiments, do you see a clear sweet spot for $K$ depending on $\kappa_c$?
- *Would an adaptive schedule for $K$ be simpler or more robust?*
2. **Beyond Strong Monotonicity**
- Have you considered *weaker* assumptions like *Minty* variational inequalities or *quasi-strong convexity*? If not, do you anticipate major challenges?
3. **Ghost-SGDA**
- Could you comment more on the empirical overhead or potential instability of “ghost updates”?
- Are there conditions under which Ghost-SGDA might fail to converge or require additional assumptions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s time and effort in reviewing our work.
> It might be interesting to see if adapting the number of local steps $K$ dynamically can further reduce communication, especially during early or late phases of training.
This is an interesting idea which can be a new line for future works. Basically, in each round, we can take as many local steps as it gives some improvement , as soon as the improvement (gradient norm) is less than some threshold, we can stop taking local steps and do one communication. This could reduce the overhead of taking so many local steps.
> If a player partially observes the other’s parameters between rounds, can the approach be adapted to incorporate partial synchronization?
Thank you for your suggestion. This idea can be very useful in practice. This situation would be like mixing 'Distributed minimax' with 'coordinate descent' for each player. We could imagine each player periodically gets updates on some parts of their opponent's parameters, maybe based on a certain probability $D$. So, instead of assuming the opponent's parameters stay completely fixed between rounds, it could be seen as we are performing coordinate descent steps on specific coordinates (parameter parts) sampled from a distribution $D$, using the partial information received.
>How sensitive is performance to the choice of $K$ in real applications? For instance, in the experiments, do you see a clear sweet spot for $K$ depending on $\kappa_c$?
The sensitivity of performance to the choice of $K$ depends on $\kappa_c$. As seen in Figure 2 of the paper, for smaller values of $\kappa_c$, the algorithm quickly saturates as $K$ increases. In other words, there isn’t a significant difference in performance between larger and smaller $K$ values. However, as $\kappa_c$ increases, using larger $K$ values actually improves performance significantly. So, for larger $\kappa_c$, different choices of $K$ can lead to more noticeable differences in performance.
If we ignore local computation costs (i.e., the local steps performed by the players), there is no theoretical sweet spot for our algorithm (increasing $K$ never leads to worse performance compared to smaller $K$ values). However, when considering local step costs, the optimal $K$ depends on the relative cost of computation versus communication. Roughly speaking, The best K is usually where performance plateaus; increasing it further offers little gain.
>Have you considered weaker assumptions like Minty variational inequalities or quasi-strong convexity? If not, do you anticipate major challenges?
Yes, it is possible to relax the strong convexity assumption. Specifically, we can derive Lemma B.4 and Lemma B.8 under a weaker assumption instead of strong convexity:
**Assumption**: For operators $ F_{\bar{\mathbf{x}}} $, we assume that for all $ \mathbf{x}, \mathbf{x}' \in \mathcal{X} $ and $ \bar{\mathbf{x}} \in \mathcal{X} $ such that $ F_{\bar{\mathbf{x}}} (\mathbf{x}) = \mathbf{0} $, the following inequality holds:
$\langle F_{\bar{\mathbf{x}}}(\mathbf{x}) - F_{\bar{\mathbf{x}}}(\mathbf{x}'), \mathbf{x} - \mathbf{x}' \rangle \geq \mu' ||\mathbf{x} - \mathbf{x}'||^2$
This assumption implies that functions $ f(\cdot,\mathbf{v}) and f(\mathbf{u},\cdot)$ are strong-quasi-convex and strong-quasi-concave, respectively, in the case of two-player games.
We can relax the strong monotonicity assumption of the operator $ F $ and we still can drive the results in Theorem B.12 as follows:
**Assumption:** For operators $F$, we assume that for all $ \mathbf{x} \in \mathcal{X} $ and $ \mathbf{x}^{\star} \in \mathcal{X} $ such that $ F(\mathbf{x}^{\star}) = \mathbf{0} $, the following inequality holds:
$\langle F(\mathbf{x}) - F(\mathbf{x}^{\star}), \mathbf{x} - \mathbf{x}^{\star} \rangle \geq \mu''||\mathbf{x} - \mathbf{x}^{\star}||^2$
Therefore, Assumption 2.2 can be replaced with these two new assumption.
>Could you comment more on the empirical overhead or potential instability of “ghost updates”?
Decoupled SGDA with Ghost Sequence has minor overhead (vector ops) vs. standard SGDA, insignificant for simple games (bilinear, quadratic), negligible for complex tasks like GAN training (vs. gradient costs).
Ghost Sequence is stable and improves performance in simple (quadratic) games. Its effectiveness in complex scenarios (e.g., GAN training) needs more study.
>Are there conditions under which Ghost-SGDA might fail to converge or require additional assumptions?
For payoff functions that have Hessians with large norms, Ghost-SGDA may encounter difficulties. This is because the underlying assumption for the approximation strategy in the Ghost sequences is that the gradient of the other player remains relatively constant between communication rounds.
We want to thank the reviewer again for their constructive feedback. If we managed to address your concerns, we appreciate if you consider increasing your score. | null | null | null | null | null | null |
Concept-Based Unsupervised Domain Adaptation | Accept (poster) | Summary: This paper aims to improve the generalizability and transferability of Concept Bottleneck Models (CBMs) by proposing a novel Concept-based Unsupervised Adaptation (CUDA) framework. The CUDA framework is designed to align concepts across domains in an adversarial manner while introducing a relaxation threshold. The authors evaluated the proposed framework across three sets of domain adaptation datasets.
Claims And Evidence: While the authors have put in good effort in proving the generalization error bound for CBMs with the concept terms and also the optimality of the designed CUDA framework with the relaxed discriminator loss, it can be observed that the proves are discussed within the concept space which is assumed to be a separate space than that of the original feature space. This allows Theorem 3.1 over the target-domain error to be defined directly for the source and target concept spaces. However, the ceoncept embeddings originates from the feature embeddings which is part of the concept embedding encoder. Could the authors explain the relationship between the target domain error that is bound for the CBM (Formula 3) and the original target domain error (changing $\tilde{D}_S$ and $\tilde{D}_T$ in Formula 1 with $D_S$ and $D_T$)? That would help to understand how different the target error will be in the context of feature embedding and concept embedding.
Methods And Evaluation Criteria: The authors evaluate the methds against a number of CBMs and UDA methods on three sets of benchmarks. However, some widely used DA benchmarks, including DomainNet, VisDA-2017, and Office-31 (or Office-Home/Office-Caltech) have not been leveraged. Therefore while the efficacy of CUDA is indeed proven against prior CBMs and some very prior UDA methods (DANN, MCD), its efficacy in more comprehensive benchmarks against more recent UDA methods are not sufficiently proven.
Theoretical Claims: The proofs provided are mostly detailed and correct.
Experimental Designs Or Analyses: One outstanding result that can be observed is the efficacy of CUDA on WB-2, where the second best result is an astonishingly 20+ percent lower than that of CUDA while the majority methods (even methods such as GH++ which performs well on WB-200 and WB-CUB) scores lower than 50, which indicates the performance to be worse than even a wild guess. Could the authors explain the huge gap between the results and why CUDA stood out much more in WB-2 than other WB related datasets?
Supplementary Material: The supplementary material contains the detailed notation table, proofs, experimental details and the pseudocode for the algorithm. The proofs and experimental details have been checked. However, the authors have not provide the actual code therefore the reproducibility of the results cannot be guaranteed.
Relation To Broader Scientific Literature: The paper touches upon domain adaptation in the context of CBM, which relates to the domain adaptation in the context of feature spaces.
Essential References Not Discussed: Not available (to the best of my knowledge)
Other Strengths And Weaknesses: Overall, the paper is well written with comprehensive proofs and sufficient results. The idea of combining DA with CBM is interesting and could potentially be leveraged for improving both the generalizability and interpretability of current methods. However, there are some concerns over the assumptions that leads to the computation of the error upper bound with the concept embedding, and the evaluation benchmarks, highlighted as above.
Other Comments Or Suggestions: Not available, see above for details.
Questions For Authors: Not available, see above for details.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We are glad that you found our framework ``"novel"``, our idea ``"interesting"``, and our proofs ``"correct"``. We address your comments in turn below.
**Q1. ... relationship between the target domain error ... CBM (Formula 3) and the original target domain error in feature space ...?**
To see the relationship between Formula 3 and the original target domain error in feature space, we start with the target error defined in the feature space (denoted as $\\mathcal{Z}$):
**Without concept learning**, we have the standard domain adaptation bound as (similar to Formula 1):
$$
\\epsilon_{T}^{\\boldsymbol{z}}(h) \\leq \\epsilon^{\\boldsymbol{z}}_{S}(h) + \\frac{1}{2} d _{\\mathcal{H} \Delta \\mathcal{H}}(\\mathcal{D}^{\\boldsymbol{z}} _S, \\mathcal{D}^{\\boldsymbol{z}} _T) + \\eta^{\\boldsymbol{z}},
$$
where $\mathcal{D}_S^{\boldsymbol{z}}$ and $\mathcal{D}_T^{\boldsymbol{z}}$ are denoted as the induced marginalized distributions in $\mathcal{Z}$, and all terms are defined in this space.
**With concept learning**, the bound above can be refined to include complex transformation error terms (resembling Formula 3):
$$
\\epsilon\_T^{\\boldsymbol{z}}(h) \\leq \\epsilon^{\\boldsymbol{z}} _S(h) + \\frac{1}{2} d _{\\mathcal{H}\\Delta\\mathcal{H}}(\\mathcal{D}^{\\boldsymbol{z}} _S, \\mathcal{D}^{\\boldsymbol{z}} _T) + ( const \times \\mathbb{E} _S [ || \\hat{\\boldsymbol{c}} - \\boldsymbol{c}|| ] + error _{other}) + \\eta^{\\boldsymbol{z}},
$$
where $error _{other}$ contains additional losses during the transformation from $\\mathcal{Z}$ to $\\mathcal{V}$ (e.g., projection errors). If $error _{other}$ is spread and absorbed into $\\epsilon _S^{\\boldsymbol{z}}(h)$, $d _{\\mathcal{H} \\Delta \\mathcal{H}}( \\mathcal{D} _S^{\\boldsymbol{z}}, \\mathcal{D} _T^{\\boldsymbol{z}})$, and $\\eta^{\\boldsymbol{z}}$, the resulting bound becomes structurally equivalent to Formula 3 (copied below):
$$
\\epsilon _{T}(h) \\leq \\epsilon _{S}^{\\boldsymbol{c}}(h) + \\frac{1}{2} d _{\\mathcal{H} \\Delta \\mathcal{H}}(\\tilde{\\mathcal{D}}^{\\boldsymbol{c}}_S, \\tilde{\\mathcal{D}} _T) + R \\cdot \\mathbb{E} _{S} [ || \\hat{\\boldsymbol{c}} - \\boldsymbol{c} ||] + \\eta^{\\boldsymbol{c}}.
$$
Thus, Formula 3 can be viewed as a refinement of the feature-space bound, making concept prediction error explicit and improving interpretability.
**Q2. ... widely used DA benchmarks ... against more recent UDA methods ...**
Good question.
**Mainstream DA Datasets Are Not Applicable:** Mainstream DA datasets for DA such as Office-31, Office-home, VisDA-2017 are not applicable to our setting because they do not contain concept annotations, which are needed to evaluate the correctness of concept prediction (and interpretation) in all methods.
**Additional Datasets:** As you suggested, we add a new dataset: AwA2-ImageNet, where AwA2 [Liu et al. 2015] is the source domain containing 85 concepts for animals, and ImageNet is the target domain. We match the AwA2 classes with corresponding subsets of ImageNet and filter overlapping data. We further perform style transfer on ImageNet to induce domain shift, making the task more challenging. The table below summarizes the results, which demonstrate our CUDA outperforms existing CBM variants and DA methods even in the case of larger domain gaps and more diverse data.
|Metrics|Concept Accuracy|Concept F1|Class Accuracy|
| :-----: | :--------: | :--------: | :--------: |
|CBM|80.15±0.09|59.25±0.64|33.89±1.12|
|CEM|80.63±0.37|59.17±0.19|33.73±2.81|
|GH++|-|-|54.83±0.23|
|CUDA|86.84±0.30| 65.60±0.26 |55.30±0.11|
**More Recent UDA methods.** We did compare CUDA with GH++, a more recent, state-of-the-art UDA method (PAMI '24) that outperforms DANN and MCD. Our results show CUDA
+ enables concept interpretation while GH++ cannot, and
+ achieves competitive prediction performance.
We also follow your suggestion to include more DA baselines, GVB (CVPR '20) and DWL (CVPR '21); see our **response to Q2&W2 for Reviewer tPTT**.
**Q3. ... why CUDA stood out much more in WB-2 than other WB related datasets?**
**Poor Performance of DA Methods on WB-2.** DA methods, such as GH++, rely heavily on class label information during training, without access to concept annotation. WB-2 has only binary labels, which contain limited information; it therefore leads to worse performance for DA methods.
**Large Improvement from CUDA in WB-2.** In contrast, CUDA utilizes concept learning and relaxed alignment, effectively leveraging the 112 concepts for binary classification in WB-2.
**Smaller Improvement in WB-200.** However, when applied to WB-200 or WB-CUB, it is more challenging for 112 concepts to represent 200 classes, leading to CUDA's smaller relative advantage against baselines.
**Q4. Code reproducibility.**
Thank you for your interest. We have finished cleaning up the source code and will release it for reproducibility. | Summary: This paper introduces a framework to handle domain shifts effectively while maintaining interpretability. It leverages adversarial training and a relaxed alignment mechanism to align concept embeddings across source and target domains, allowing for flexibility in capturing domain-specific variations. It also eliminates the need for labeled concept data in the target domain, enabling it to operate in unsupervised settings. The framework is supported by theoretical guarantees, including a generalization error bound and analysis of relaxed alignment, and achieves state-of-the-art performance in improving both concept accuracy and classification accuracy. Experiments on benchmark datasets demonstrate its significant improvements.
Claims And Evidence: 1. Theoretical analysis and experimental results show that relaxed alignment achieves better concept distribution matching and higher target accuracy.
2. Self-supervised learning and adversarial training are used to infer concepts in the target domain without explicit labels, validated by experiments across multiple datasets.
3. Figure 3 shows that the proposed method achieves better performance with higher intervention ratios compared to baseline methods.
Methods And Evaluation Criteria: 1. The proposed methods are well-tailored to address domain adaptation and concept learning challenges.
2. The use of diverse datasets strengthens the evaluation’s comprehensiveness.
Theoretical Claims: The proofs for Theorem 3.1 and Theorem 4.1 were reviewed. They are rigorous and mathematically sound.
Experimental Designs Or Analyses: 1. The experiments cover a wide range of datasets and scenarios, including background shifts and medical imaging, which enhance the framework’s applicability.
Supplementary Material: The supplementary material includes pseudocode for CUDA, additional proofs, and detailed experimental setups.
Relation To Broader Scientific Literature: The paper builds upon works like CBMs, CEMs and DANN. It extends these by integrating interpretability into domain adaptation.
Essential References Not Discussed: No critical omissions
Other Strengths And Weaknesses: Strengths:
1. The introduction of relaxed alignment for concept-based domain adaptation is novel and impactful.
2. The method addresses a critical gap in interpretable domain adaptation, enhancing both accuracy and interpretability.
3. The paper is well-organized.
Weakness:
1. The paper lacks visual or qualitative examples to illustrate how relaxed alignment impacts interpretability.
Other Comments Or Suggestions: None
Questions For Authors: Can the proposed method scale to larger datasets or more complex domains? Have you tested this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We are glad that you found our method ``"novel and impactful"``, our theoretical guarantees ``"rigorous and mathematically sound"``, and the experiments ``"comprehensive"`` and ``"applicable across a wide range of scenarios"``. Below, we address your comments one by one.
**W1. The paper lacks visual or qualitative examples to illustrate how relaxed alignment impacts interpretability.**
Thanks for mentioning this. Following your suggestion, we provide a more concrete example to illustrate how relaxed alignment impacts interpretability. We will also include this example into our revised version.
We consider the task of making predictions for a target-domain image of a Black-Footed Albatross. The table below summarizes the predictions of different methods for both concept predictions and class label predictions; note that "CUDA w/o Relax" means "CUDA w/o Relaxed Alignment". Each row corresponds to one method, while the columns represent the predictions for specific concepts and class labels.
+ **Concept Predictions:** The first three columns show predictions for specific concepts (e.g., Seabird Bill, Black Nepe, Solid Breast). The values "0" and "1" indicate whether the prediction is incorrect or correct, respectively. The ground-truth distribution (GT) of each concept (in terms of GT positive rates) in the source and target domains is provided in parentheses ("GT Ratio: Source / Target").
+ **Class Label Predictions:** The last two columns show the predicted and ground-truth labels indices for each method.
| Method | Seabird Bill (6.10% / 5.90%) | Black Nepe (48.59% / 50.69%) | Solid Breast (64.72% / 64.55%) | Predicted Label Index | Ground-Truth Label Index |
| :--------------: | :--------------------------: | :--------------------------: | :----------------------------: | :-------------------: | :----------------------: |
| CBM | 0 | 0 | 1 | 24 | 0 |
| CEM | 1 | 0 | 1 | 0 | 0 |
| CUDA (w/o Relax) | 0 | 1 | 1 | 0 | 0 |
| **CUDA** | 1 | 1 | 1 | 0 | 0 |
We can see that:
+ CBM predicts incorrect concepts, resulting in an incorrect class label prediction.
+ CEM and CUDA (w/o Relax.) predict the correct class label but fail to predict some concepts correctly, which harms interpretability.
+ Our full method (CUDA with Relaxed Alignment) predicts both the concepts and the class label correctly in the target domain, demonstrating that it achieves both interpretability and performance simultaneously.
+ Interestingly, the table shows that concepts with clear distribution differences (e.g., "Seabird Bill" and "Black Nepe") lead to incorrect concept predictions in models without the relaxed alignment mechanism (e.g., CUDA w/o Relax. and CBM predict "Seabird Bill" incorrectly). This further supports the importance of our relaxed alignment.
**Q1. Can the proposed method scale to larger datasets or more complex domains? Have you tested this?**
Yes, our method can scale to larger datasets and more complex domains. To demonstrate this, we followed your suggestion to conduct additional experiments on ImageNet. In particular, we include a new dataset: AwA2-ImageNet [4], where AwA2 is the source domain containing 85 concepts for animals, and ImageNet is the target domain. We match the AwA2 classes with corresponding subsets of ImageNet and filter overlapping data. Furthermore, we perform style transfer on the target domain (ImageNet) to induce domain shift, making the task more challenging. The table below summarizes the results, which
+ demonstrate our CUDA can improve performance even in the case of larger domain gaps and more diverse data and
+ highlight the scalability of our approach while maintaining its performance and interpretability in larger and more complex domains.
| Metrics | Concept Accuracy | Concept F1 | Class Accuracy |
| :-----: | :--------: | :--------: | :--------: |
| CBM | 80.15±0.09 | 59.25±0.64 | 33.89±1.12 |
| CEM | 80.63±0.37 | 59.17±0.19 | 33.73±2.81 |
| GH++ | - | - | 54.83±0.23 |
| CUDA | 86.84±0.30 | 65.60±0.26 | 55.30±0.11 |
[4] Deep learning face attributes in the wild, CVPR15. | Summary: This paper aims to tackle the problem of limited generalization of concept bottleneck models in cross-domain scenarios. It utilizes adversarial training to align the cross-domain concept embeddings and introduces a relaxed uniform alignment technique to alleviate the influence of over-restricted domain alignment assumptions. Experiments on diverse datasets validate its effectiveness.
Claims And Evidence: Yes, the author has validated his hypotheses and claims through clear and convincing evidence, including extensive theoretical derivation and proof, as well as experimental results.
Methods And Evaluation Criteria: Yes, the methods CUDA and evaluation criteria for concept and class prediction make sense for the problem.
Theoretical Claims: I check the correctness of the proof of Optimal Discriminator(Lemma 4.1, B.2 in supplementary material), I believe this proof is correct and there are no issues with it.
Experimental Designs Or Analyses: The experimental section shows the comparison with other CBM and DA methods, by which I check the validity of the designs and analyses.
Supplementary Material: I review the supplementary material, including the extra experimental results, limitation analysis and partial theoretical proofs.
Relation To Broader Scientific Literature: The introduced adversarial training and relaxed uniform alignment technique may be extended toward multi-class classification scenarios, but the author leaves it for future research.
Essential References Not Discussed: To my knowledge, the paper includes all the necessary related works essential for its understanding with discussion. But there are several UDA methods are not included in the experiment section for quantitative comparison, such as [1-3] listed below.
Other Strengths And Weaknesses: 1. For the over-constraints for domain distributions alignment, can the author provide some concrete examples for that, source and target concepts indeed have minor distribution differences?
2. In experiments, the author includes several datasets including bird, digital and medical scenarios. But several mainstream datasets for DA are not considered, such as Office-31, Office-home, VisDA-2017, etc.
Other Comments Or Suggestions: The paper is well-written and easy to follow, I have no further comments here.
Questions For Authors: 1. The author is suggested to provide more concrete examples to convince me the existence of concept distribution differences between source and target domains, and such discrepancy is a common problem in diverse DA scenarios, to make the questions this paper claimed for DA task more valuable.
2. The author should include the experimental results on more datasets as mentioned in weakness, and compare with more UDA methods such as [1-3].
[1] Gradually vanishing bridge for adversarial domain adaptation, CVPR20
[2] Dynamic weighted learning for unsupervised domain adaptation, CVPR21
[3] Safe self-refinement for transformer-based domain adaptation, CVPR22
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We are glad that you found our theory ``"extensive"``/``"correct"``, our claim ``"convincing"``, our experimental designs/analyses ``"valid"``, and the paper ``"well-written"``. Below, we address your questions one by one.
**W1 & Q1. Provide more concrete examples to demonstrate concept distribution differences between source and target domains.**
+ One example is the concept "Primary Color: Brown", where the positive rates are 19% and 17% for the source and target domains, respectively. Additional examples include "Black Bill" (48% in source / 51% in target) and "All-purpose Bill" (40% in source / 41% in target).
+ Concept distribution differences are indeed a common problem in domain adaptation (DA) datasets with concept annotations. By directly counting the dataset statistics, we observe such discrepancies frequently arise across diverse DA scenarios.
**W2 & Q2. Include experimental results on more datasets and compare with additional UDA methods (e.g., [1-3]).**
This is a good question.
**Common Datasets for Concept Bottleneck Models:** Since our CUDA belongs to the category of concept bottleneck models (CBMs), we follow the literature (e.g., CBMs [1], CEMs [2], and PCBMs [3]) to use common datasets such as CUB, MNIST, and SkinCON in our experiments, as they provide rich concept annotations essential for evaluating concept-based methods.
**Mainstream Datasets for DA Are Not Applicable:** Note that mainstream datasets for DA such as Office-31, Office-home, VisDA-2017 are not applicable because they do not contain concept annotations, which are needed to evaluate the correctness of concept prediction in all methods.
[1] Koh et al., ICML'20. [2] Zarlenga et al., NIPS'22. [3] Yuksekgonul et al., ICLR'22.
**Additional Datasets:** Inspired by your comments, we add a new dataset: AwA2-ImageNet [4], where AwA2 is the source domain containing 85 concepts for animals and ImageNet is the target domain. We match the AwA2 classes with corresponding subsets of ImageNet and filter overlapping data. Furthermore, we perform style transfer on the target domain (ImageNet) to induce domain shift, making the task more challenging. The table below summarizes the results, which demonstrate our CUDA can improve performance even in the case of larger domain gaps and more diverse data.
| Metrics | Concept Accuracy | Concept F1 | Class Accuracy |
| :-----: | :--------: | :--------: | :--------: |
| CBM | 80.15±0.09 | 59.25±0.64 | 33.89±1.12 |
| CEM | 80.63±0.37 | 59.17±0.19 | 33.73±2.81 |
| GH++ | - | - | 54.83±0.23 |
| CUDA | 86.84±0.30 | 65.60±0.26 | 55.30±0.11 |
[4] Deep learning face attributes in the wild, CVPR'15.
**Comparison with UDA Methods [1-3]:** Following your suggestion, we tested [5] and [6] on our nine datasets, and the results are summarized in the table below. However, [7] relies on a ViT backbone, making it unsuitable for fair comparison with our results, which use ResNet50. We agree that integrating transformer-based methods into our framework is a promising direction for future work. We will include these references in our paper. Thank you for this insightful suggestion!
| Datasets | WB-2 | WB-200 | WB-CUB | AVG | M → M-M | S → M | M → U | AVG | I-II → III-IV | III-IV → V-VI | III-IV → I-II | AVG |
|:------:|:----------:|:----------:|:----------:|:-----:|:----------:|:----------:|:----------:|:-----:|:-------------:|:-------------:|:-------------:|:-----:|
| GVB | 40.97±3.78 | 74.89±0.69 | 73.66±0.35 | 63.17 | 41.28±3.55 | 81.21±0.36 | 94.27±1.64 | 72.25 | 76.35±0.73 | 78.3±1.43 | 74.32±0.1 | 76.32 |
| DWL | 54.86±2.07 | 73.64±0.38 | 72.31±0.52 | 66.94 | 50.69±3.25 | 83.97±2.19 | 96.29±0.13 | 76.98 | 74.98±0.72 | 79.24±0.83 | 73.97±0.97 | 76.06 |
| CUDA | 92.90±0.31 | 75.87±0.31 | 74.66±0.19 | 81.15 | 95.24±0.13 | 82.49±0.27 | 96.01±0.13 | 91.25 | 78.85±0.31 | 80.58±0.72 | 76.53±0.49 | 78.65 |
[5] Gradually vanishing bridge for adversarial domain adaptation, CVPR20.
[6] Dynamic weighted learning for unsupervised domain adaptation, CVPR21.
[7] Safe self-refinement for transformer-based domain adaptation, CVPR22. | null | null | null | null | null | null | null | null |
Latent Preference Coding: Aligning Large Language Models via Discrete Latent Codes | Accept (poster) | Summary: Authors claim that human preferences on LLM responses consist of multiple latent factors. They also claim these factors can vary across tasks. To capture such factors, authors propose a discrete latent variable model for preference learning. They employ standard methods for discrete latent variable modeling, such as Gumbel-softmax and variational inference. This approach is applied to DPO and their variants, IPO and SimPO. Evaluations are conducted on Mistral-7B, Llama3-8B, and Llama3-8B-Instruct base models, and Arc-Challenge, Arc-Easy, GSM-8K, TruthfulQA, and AlpacaEval 2.
## update after rebuttal
My major concerns from the review are largely unaddressed in the rebuttal, as discussed in my reply to the rebuttal. Hence, I will maintain my score.
Claims And Evidence: One of authors' major claim is that factors in preference vary across tasks (line 055), but this is not actually analyzed in depth. Figure 2 (right) is related but does not convincingly tell the method captures. It only suggests different sources of data cluster together, but the clustering looks weak, and the source of data tells very little about the task type; prompts in a single source, for example FLAN v2, cover diverse tasks.
Authors also claim that their method captures multiple factors involved in human preference. However, it is unclear how well the proposed method actually captures these factors. The major evidence used to support this is Table 1. However, improvements are mostly on GSM-8K, especially with Mistral7B model, which performs very weakly on the benchmark. Improvements on other base models and other tasks tend to be minimal, less than 1 percent point in most instances.
It's also worth noting that there are prior papers aiming to capture factors in human preference. In particular, HelpSteer 2 https://arxiv.org/abs/2406.08673 provide annotations on multiple factors, and the dataset has been very actively in preference modeling literature. If latent codes from the paper can be validated with human annotations like HelpSteer2 or models based on such annotations (for example Nemotron-RM https://arxiv.org/abs/2406.11704), it would much strengthen the claim of the paper. Or, authors could've conducted qualitative analysis of latent codes. They could've annotated what each latent code represents, or how different types of prompts activate different latent codes.
Methods And Evaluation Criteria: Regarding method: Authors employ the standard techniques for discrete latent variable models. The method to represent prompts and responses using hidden states is also standard. Hence overall, authors use principled techniques which have good empirical support from the literature.
The set of benchmarks is OK but not ideal. https://arxiv.org/abs/2406.09279 can be a good reference for the selection of benchmarks. GSM8K and TruthfulQA make sense, but usually DPO-based alignment methods show stronger benefit on AlpacaEval, MT-Bench and IFEval benchmarks. Authors do analyze AlpacaEval improvements in Table 4, and it is discussed in 'Experimental Designs Or Analyses' section.
In Table 3, preference accuracy is used to evaluate models, but this is not the standard method of evaluating alignment. DPO and IPO require base model predictions to compute the preference, and these preference predictions are not necessarily well-correlated with the actual generation performance. Previous works (again https://arxiv.org/abs/2406.09279 as a good example) mostly rely on benchmarks which evaluate generation results. For truthfulness, for example, it's better evaluating on XSTest.
Theoretical Claims: I took a brief look at the derivation of variational inference procedure in Appendix A, but it looked standard and intuitively made sense, so I didn't check the correctness line-by-line.
Experimental Designs Or Analyses: Authors do analyze AlpacaEval improvements in Table 4, but here the improvement with LPC is mostly with SimPO, and its raw win rate is very low. Also, these experiments are based on Llama3-8B-Instruct model, and per the public leaderboard, the base model's win rate is 22.6%, which is much higher than numbers reported in Table 4. Hence, all these methods are actually degrading model performance on AlpacaEval 2.0, and the base model performance wasn't reported for this particular experiment.
Figure 2 was discussed in 'Claims and Evidence' section.
Robustness analysis in line 408 is a clever idea of demonstrating how latent factors are captured.
Supplementary Material: I checked Appendix A for the derivation of variational inference objective and additional experimental results (Table 5, 6, 7).
Relation To Broader Scientific Literature: Latent coding is a well-established topic of research in machine learning. Making a solid connection between latent coding and LLM alignment will open up a lot of opportunities for both topics to benefit from methods and data from each other.
Essential References Not Discussed: Efforts to annotate multiple factors of preference such as HelpSteer 1/2 https://arxiv.org/abs/2406.08673 and the effort to automatically learn the weighting (challenge 3 mentioned in line 063) -ArmoRM https://arxiv.org/abs/2406.12845 - should be referenced.
Other Strengths And Weaknesses: Most strengths and weaknesses are already discussed. The paper is clearly written, and easy to follow.
Other Comments Or Suggestions: There is too much introduction and literature review. The proposed method starts being discussed only in page 4.5, which is too late. Authors could've cut down on literature review. In particular, the content in Section 3.1 could focus just on DPO since authors only experiment with DPO variants and don't use online RL methods like PPO.
Questions For Authors: What was the size of codebook used in most experiments? In Section 4.3, experiments on the codebook size was run, but in Section 4.1, 4.2, I don't think this crucial information was provided.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thanks for your review.
### Regarding AlpacaEval results
> ...but usually DPO-based alignment methods show stronger benefit on AlpacaEval, MT-Bench and IFEval benchmarks...
> ...Hence, all these methods are actually degrading model performance on AlpacaEval 2.0, and the base model performance wasn't reported for this particular experiment.
In our experiments, we find that offline alignment algorithms do not always boost performance on these benchmarks. This finding is consistent with https://arxiv.org/abs/2406.08673
### Regarding preference accuracy
> In Table 3, preference accuracy is used to evaluate models, but this is not the standard method of evaluating alignment. DPO and IPO require base model predictions to compute the preference, and these preference predictions are not necessarily well-correlated with the actual generation performance
Thanks for your suggestion. We use preference accuracy as a metric mainly because of its computational efficiency. Though preference accuracy is not a direct measurement of generation quality, it can serve as an indicator of how well the model can distinguish good responses from bad ones. In the main experiments, we report results on downstream benchmarks. We will add more comprehensive results to the revision.
### Regarding HelpSteer2 results
We have conducted experiments on the HelpSteer2 dataset following the same setting as https://arxiv.org/abs/2406.08673. In this setting, the preference dataset is constructed by taking the response with the higher helpfulness score as the chosen response, with the remaining response being the rejected response. Results on Llama3-8B are shown below.
| Model | Arc-Challenge | Arc-Easy | Gsm8K| TruthfulQA |
|-------|----------|----------|----------|----------|
| Base | 50.43 | 80.05 | 49.51 | 43.82 |
| DPO | 52.53 | 82.13 | 53.43 | 46.16 |
| DPO+LPC | 53.10 | 81.85 | 55.01 | 46.93 |
As can be seen, LPC improves performance on most of the benchmarks. We will add more comprehensive results to the revision.
> If latent codes from the paper can be validated with human annotations like HelpSteer2 or models based on such annotations (for example Nemotron-RM https://arxiv.org/abs/2406.11704), it would much strengthen the claim of the paper.
However, we want to clarify that the learned latent codes do not necessarily align well with human-annotated multi-view scores. This is because 1) latent codes are predicted based on the prompt, but human annotations are mostly scoring the response; 2) there is a gap between model-learned latent information and human-annotated explainable factors.
### Regarding qualitative analysis of latent codes
> ...authors could've conducted qualitative analysis of latent codes. They could've annotated what each latent code represents, or how different types of prompts activate different latent codes.
We have already conducted a qualitative analysis of latent codes in Section 4.3, which shows how prompts from different sources activate different latent codes. For example, prompts from the Flan subset are usually more reasoning-intensive, and prompts from the truthful_qa and false_qa subsets are related to factual correctness. We will add more details in the revision.
### Regarding codebook size
> What was the size of codebook used in most experiments? In Section 4.3, experiments on the codebook size was run, but in Section 4.1, 4.2, I don't think this crucial information was provided.
We set the codebook size to 64 in the main experiments. We will add this information to the revision.
---
Rebuttal Comment 1.1:
Comment: >We have already conducted a qualitative analysis of latent codes in Section 4.3, which shows how prompts from different sources activate different latent codes.
In typical analyses of unsupervised clustering (for ex topic models, which also learn "codebook"), each cluster is labeled with key phrases they capture, and they are further summarized by their theme. I already discussed limitations of the t-SNE plot authors are mentioning here:
>One of authors' major claim is that factors in preference vary across tasks (line 055), but this is not actually analyzed in depth. Figure 2 (right) is related but does not convincingly tell the method captures. It only suggests different sources of data cluster together, but the clustering looks weak, and the source of data tells very little about the task type; prompts in a single source, for example FLAN v2, cover diverse tasks.
>In our experiments, we find that offline alignment algorithms do not always boost performance on these benchmarks. This finding is consistent with https://arxiv.org/abs/2406.08673
I understand this, but how can you demonstrate the benefit of your alignment algorithm if it is not making improvements, but just "least degrading"? New HelpSteer experiments seem to show improvement over base model on GSM8K/TruthfulQA, but the boost across benchmarks seems inconsistent/small. As I mentioned in the review, I would like to see results on benchmarks more standard for evaluating alignment: AlpacaEval, MT-Bench, IFEval, etc.
>However, we want to clarify that the learned latent codes do not necessarily align well with human-annotated multi-view scores. This is because 1) latent codes are predicted based on the prompt, but human annotations are mostly scoring the response; 2) there is a gap between model-learned latent information and human-annotated explainable factors.
I understand learned latent codes may not necessarily align well with an existing data, but the responsibility is on authors to demonstrate how well latent codes capture human values, which is author's main claim. Here I wanted to make a constructive criticism to suggest a method that I believe is promising to do this validation, but yes it may not be successful. | Summary: This paper studies alignment and addresses the challenge of inconsistent human preferences, which stem from various underlying factors. The authors propose a latent codebook and integrate a method to learn the latent variable within standard alignment techniques like DPO.
They test their approach on standard benchmarks, claiming improvements over preference learning algorithms like DPO, IPO, and SimPO.
Claims And Evidence: > Claim: The latent variable encodes factors underlying human preferences.
The paper’s main claimed contribution seems plausible but has a key issue: using variational autoencoders (VAEs) for binary (preference) classification risks posterior collapse, where the model assigns preference labels without leveraging the latent space. This problem is especially severe when preferences are unambiguous per sample (i.e., for each input $x, y_1, y_2$, there exists a unambiguous preference label, e.g., $y_1 > y_2$). In such cases, Eq. (3) leads the decoder to ignore $z$ and at the same time achieve zero $\text{KL}$.
This concern is especially relevant given the primary dataset used in the experiment UltraFeedback, where model-generated preferences are unique and consistent per sample.
Methods And Evaluation Criteria: The proposed method makes sense for modeling diverse preferences, and it is important to evaluate this method in terms of how well it adapts to diverse test-time human preferences for the same set of queries. However, there are key limitations:
1. The authors primarily model preferences using UltraFeedback, a model-generated dataset that lacks diverse and conflicting preferences, limiting its ability to learn meaningful latent codes. Although section 4.4 is a good step of understanding conflcting preference labels, the experiments are very limited.
2. The authors primarily test on standard benchmarks, which may not be ideal for this purpose. While the results show some improvement, it's unclear whether the proposed approach is responsible or if other factors are at play. One possibility is that the added complexity of LPC acts as a regularizer rather than contributing in the intended way.
Theoretical Claims: The proof appears correct. Conceptually, the method is a VAE for binary preference classification, where the classifier is modeled as $\log p(\text{preference} \mid x, y_1, y_2)$ and further parameterized using language modeling log-probs.
Experimental Designs Or Analyses: See "Methods And Evaluation Criteria".
Supplementary Material: Yes. Appendix A.
Relation To Broader Scientific Literature: Although the method's contribution is not fully validated, its motivation is significant. Previous work models aggregated human values from complex preferences, while this approach disentangles preferences into a latent space. This method can potentially help us understand human preferences better and enable language model steering at test time via different latent codes.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The idea behind the method is interesting: modeling complex and sometimes conflicting human preferences in an unsupervised way from off-the-shelf preference dataset.
Weaknesses
Some of the weakness has already been outlined primarily in "Claims And Evidence" and "Methods And Evaluation Criteria". Here is a recap:
1. The dataset and benchmark chosen can not validate the claimed benefit of the method
2. Lack of analysis and discussion for the improved score from the standard benchmark.
Other Comments Or Suggestions: Typo 1. Line 051, "safty" -> "safety"
See "Questions For Authors" for suggestions.
Questions For Authors: 1. Could you analyze the improved scores in Table 1? It’s unclear how the method enhances performance when both training and testing datasets are homogeneous—where preference labels are unique and consistent per query-response pair, and no customized steering is needed.
2. Could you provide evidence that the method is not susceptible to posterior collapse as I pointed out in the "
Claims And Evidence" section. Plotting $\text{KL}$ is a good starting point.
3. Besides the T-SNE plot in Fig. 2, could you further interpret the learned latent space? For example, can you probe the latent space to see if switching latent codes $z$ generates responses that align with different human preferences?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thanks for your review.
### Regarding the effectiveness of latent codes
> The authors primarily model preferences using UltraFeedback, a model-generated dataset that lacks diverse and conflicting preferences, limiting its ability to learn meaningful latent codes.
Thanks for your suggestion; however, we do not agree that the UltraFeedback dataset lacks diverse preferences. Prompts in this dataset are collected from various sources, including math, reasoning, factual knowledge, creative writing, and other domains. The diversity of prompts is sufficient to learn meaningful latent codes, as reported in Section 4.3.
> Could you provide evidence that the method is not susceptible to posterior collapse as I pointed out in the " Claims And Evidence" section. Plotting is a good starting point.
> Besides the T-SNE plot in Fig. 2, could you further interpret the learned latent space? For example, can you probe the latent space to see if switching latent codes generates responses that align with different human preferences?
We conduct a latent code switching experiment to show the effectiveness of learned latent codes. Specifically, we collect 1000 original preference pairs and 1000 flipped preference pairs. Then, we substitute the latent codes of ordinary preference pairs with the flipped ones, the results are shown below.
| Preference | Latent code | Preference Accuracy |
|--------|-----------|----------|
| original | original | 66.8 |
| original | flipped | 61.4 |
As can be seen, using the wrong latent codes leads to a significant performance drop, which indicates that the learned latent codes are effective. We will add more careful experiments in the revision.
### Regarding the performance in Table 1
> Could you analyze the improved scores in Table 1? It’s unclear how the method enhances performance when both training and testing datasets are homogeneous—where preference labels are unique and consistent per query-response pair and no customized steering is needed.
The motivation of this work is to improve LLM alignment by modeling complex human preferences behind the overall preference label. In this setting, the complex human preferences are implicitly determined by the query. In contrast, steering LLM to generate responses under different preference directions is to model complex human preferences in an explicit way, which is not the focus of this work. We will add more details in the revision.
---
Rebuttal Comment 1.1:
Comment: > Thanks for your suggestion; however, we do not agree that the UltraFeedback dataset lacks diverse preferences. Prompts in this dataset are collected from various sources, including math, reasoning, factual knowledge, creative writing, and other domains. The diversity of prompts is sufficient to learn meaningful latent codes, as reported in Section 4.3.
The issue is that datasets like UltraFeedback lack diverse preferences for the same queries. When preferences are consistent within queries but may vary across different queries, standard preference alignment algorithms can handle this perfectly well - note that both language models and reward models are query-conditioned, meaning they naturally adapt to different preferences based on the input prompt. **For instance, in safety alignment, models learn to reject unsafe prompts and answer safe ones - this variation across different prompts (and for different values) is easily handled by standard alignment methods.**
Your algorithm resembles a conditional VAE, and its advantage becomes apparent specifically when there are diverse preferences for identical queries. In such cases, we can leverage latent codes to adapt to different preferences even when responding to the same query.
> We conduct a latent code switching experiment to show the effectiveness of learned latent codes. Specifically, we collect 1000 original preference pairs and 1000 flipped preference pairs. Then, we substitute the latent codes of ordinary preference pairs with the flipped ones, the results are shown below.
As the paper states, you "add [FLIP] to the prompts associated with these flipped instances." This approach does not actually increase preference diversity for identical prompts. I recommend evaluating performance without adding the [FLIP] tag to better assess its impact. | Summary: This paper studies the problem of complex and often conflicting human preferences for the alignment algorithm. To address this, the paper introduces Latent Preference Coding (LPC), a framework that models the implicit, multifaceted factors behind human preferences using discrete latent codes. LPC integrates seamlessly with offline alignment algorithms and autonomously infers underlying preference factors and their importance directly from data, eliminating the need for predefined reward functions or manual weight adjustments. Extensive Experiments across multiple benchmarks demonstrate the effectiveness of the proposed algorithm.
Claims And Evidence: The motivation is reasonable and the written is easy to follow.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: I review all parts of the supplementary materials.
Relation To Broader Scientific Literature: This paper studies the problem of using latent code to automatically learn human preference. This idea can be used in complex senarios with complex human preference.
Essential References Not Discussed: There are also other papers about using latent representation to fine-tune LLMs:
1. Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding
2. Training Large Language Models to Reason in a Continuous Latent Space
Other Strengths And Weaknesses: ### Strengths
1. The paper is well written and easy to follow.
2. The experiment of verifying the effectiveness of their model is reasonable, especially complex scenarios for human preference alignment.
### Weakness
1. It seems that the model need to load the encoder transformer and the decoder LLMs during the training process. A critical practical concern is the computational overhead of jointly training an encoder alongside large decoder-only LLMs. The added memory footprint from parallel transformer modules could limit scalability.
2. The posterior network conditions on the preference pair $y_w \succ y_l$ but the paper implements this via simple concatenation of $y_w$ and $y_l$ embeddings. This raises questions about whether richer interaction modeling (e.g., explicit difference encoding, cross-attention) could better capture the relative preference structure. The authors should clarify design choices or explore ablation studies on alternative relational encodings.
3. There are also other works using latent representation to improve the fine-tuning process of LLMs like [1], [2].
[1] Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding
[2] Training Large Language Models to Reason in a Continuous Latent Space
Other Comments Or Suggestions: Please refer to Weakness part.
Questions For Authors: Please refer to the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review.
#### Regarding the training cost issue
> It seems that the model need to load the encoder transformer and the decoder LLMs during the training process...
The additional computation of LPC is negligible compared to the baseline methods. This is because the policy model, prior and posterior networks share the same backbone model, so most of the computation can be reused. We have briefly discussed this in section 3.3. We will add more details in the revision.
#### Regarding modeling choice of posterior network
> ... This raises questions about whether richer interaction modeling (e.g., explicit difference encoding, cross-attention) could better capture the relative preference structure. ...
Thanks for your suggestion. We choose the simple concatenation design because we want to reuse the computation of forwarding $<x,y_w>$ and $<x,y_l>$ in the policy model. In this way, the additional computation of the posterior network is minimal. Using more sophisticated modeling is ok, but it would require forwarding the LLM again, which is inefficient. We will add more details in the revision.
#### Regarding related works
> There are also other works using latent representation to improve the fine-tuning process of LLMs like [1], [2].
The literature you mentioned investigates latent reasoning without explicitly generating CoT tokens, which is different from our work. Although our work also uses latent space, we focus on modeling complex human preferences using latent variables. We will add them to the related work section in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply! I will maintain my score. | Summary: This paper addresses the challenge of single reward model not being able to reflect the full latent factors that determine the user preference. It proposes LPC to model a latent variable model where the latent is configured as a discrete variable fully predictable by the prompt. For any prompt, a prior network infers which latent factor distribution is most relevant, while a posterior network refines this distribution based on preference comparisons. Experiments show that LPC improves performance across various preference-optimization methods (e.g., DPO, IPO, SimPO) and with multiple base language models (e.g., Llama3, Mistral), when being trained on cross-domain preference dataset.
Claims And Evidence: The main claim made by this paper is: Modeling human preference demands considiering latent variables, with this idea, the proposed LPC method improves baseline alignmeny methods.
Evidence: experiments test the effect of LPC when added on to DPO, IPO, SimPO and with multiple base language models (e.g., Llama3, Mistral), according to the reads in Table 2, though the improvement seems to be mostly marginal, especially for context domain like Arc-Easy, but the imptovement is rather consistent.
Methods And Evaluation Criteria: Proposed method makes sense to me as it is a straightforward carry-out of the VAE method, with architechture designs specific to LLMs. Evaluation criteria also makes sense.
Theoretical Claims: No theory provided in this paper.
Experimental Designs Or Analyses: Experimental Designs are reasonable and analysis and results make sense.
Supplementary Material: No.
Relation To Broader Scientific Literature: There are existing literature considering latent variables in the preference model, instead of relying on single reward. The novelty of this paper is it consider a latent variable reflecting the diverse domain context encoded in the prompt.
Essential References Not Discussed: I think the authors should be aware of https://arxiv.org/abs/2402.08925, which considers a mixture model that assigns each piece of data to one among a few distinct groups (also a discrete latent exsit here). The author should discuss whether a clustering algorithm like the EM algorithm proposed there can be applied to modelling the mixture of prompt domains studied in this paper, and include that method as a baseline if comparable.
Other Strengths And Weaknesses: My major concern is in Essential References Not Discussed part.
Other Comments Or Suggestions: Please refer to the Questions For Authors part.
Questions For Authors: The following points may need discussion in the main text:
1. how does the training cost of LPC compared to the baseline methods?
2. how to set up the codebook of latent z, are those embedding prefixed or obtained by training?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review.
> How does the training cost of LPC compared to the baseline methods
The additional training cost of LPC is negligible compared to the baseline methods. This is because the policy model, prior and posterior networks share the same backbone model so most of the computation can be reused. We have briefly discussed this in section 3.3. We will add more details in the revision.
> How to set up the codebook of latent z, are those embedding prefixed or obtained by training?
The embeddings are trained jointly with the rest of the model parameters.
> Essential References Not Discussed
Thanks for your suggestion. We have read the paper you mentioned. The method in that paper is based on PPO. However, our paper focuses on offline alignment algorithm, so it is not comparable. We will add it to the related work section in the revision. | null | null | null | null | null | null |
Approximation to Smooth Functions by Low-Rank Swish Networks | Accept (poster) | Summary: The paper investigates whether deep neural networks that have been compressed using low‐rank factorization can still approximate smooth functions as accurately as uncompressed networks from a universal approximation theory perspective. In low‐rank compression, a weight matrix in a network layer is replaced by the product of two smaller matrices. This paper proposes a specific construction: between each pair of standard nonlinear (Swish) layers, a narrow linear layer without bias is inserted. This “bottleneck” enforces a low-rank structure on the transformation. The authors show, via a constructive proof, that with this constraint the network can approximate any function from the Holder space arbitrarily well. The construction uses classical tools from approximation theory. The domain is partitioned into a grid, and at each grid point the function is locally approximated by its Taylor polynomial. To control the approximation error away from the expansion point, the Taylor polynomials are “localized” using smooth bump functions that nearly vanish outside a small region. Both the Taylor polynomials and bump functions are then implemented using Swish networks.
Claims And Evidence: The theoretical claims are well-supported.
Methods And Evaluation Criteria: For empirical evaluation, the authors conduct grid searches over network depth and width on eight UCI datasets. They compare the performance of a classical fully connected Swish network with a low-rank version, which uses narrow linear layers of one-third the width. The evaluation follows standard criteria: accuracy for classification and RMSE for regression. The use of 10-fold cross-validation and statistical tests adds rigor to their experimental validation. However, the tasks remain somewhat simplistic.
Theoretical Claims: The proofs have been checked loosely and appear to be correct.
Experimental Designs Or Analyses: See above for my concerns regarding the experiments.
Supplementary Material: Yes, the proof has been loosely examined.
Relation To Broader Scientific Literature: It is difficult to connect this paper to the broader scientific literature. I believe that universal approximation theory offers little practical insight and fails to convincingly demonstrate the advantage of the proposed low-rank structure.
Essential References Not Discussed: The paper **"Approximation by Superpositions of a Sigmoidal Function"** should be cited. Published in 1989, it is one of the earliest works establishing universal approximation results for sigmoid networks.
Other Strengths And Weaknesses: See below.
Other Comments Or Suggestions: I am not convinced that universal approximation theory can be used to demonstrate the benefits of low-rank factorization. Moreover, the empirical results appear to be based on relatively simple tasks. At this stage, I find the paper unsuitable for acceptance.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your comments. We will try our best to relieve your concerns.
Q1: The paper "Approximation by Superpositions of a Sigmoidal Function" should be cited.
A1: Thank you for the reminder. We will add this groundbreaking work in our paper.
Q2: I am not convinced that universal approximation theory can be used to demonstrate the benefits of low-rank factorization. Moreover, the empirical results appear to be based on relatively simple tasks.
A2: Properly speaking, the aim of our work is not to demonstrate the benefits of low-rank compression. Low-rank compression, as a class of efficient and hardware-friendly network compression methods, can reduce the computation cost significantly, that is well examined in vast applications. However, as mentioned in our introduction: except low-rank compression, the remaining categories of methods are all underpinned by UATs to some extent. Hence, we aim to explain why low-rank networks can still perform comparably to full-rank ones from the perspective of approximation theory. Through our well-designed constructive approximation, we prove that the width of linear hidden layers needs only be no more than one-third of the width of nonlinear layers, implying that low-rank compression within this threshold does not impair the approximation capability of neural networks. By integrating our approximation results with sieve estimation theory, it is easy to derive the convergence rate of low-rank network estimates and limiting distributions for plug-in estimates based on these low-rank approximations.
Regarding the simplicity of the experiments, our consideration is that the paper has already rigorously proven from the perspective of approximation theory that low-rank compression of weight matrices does not affect the approximation rate. The purpose of the experiments is to demonstrate the reliability of our theoretical proof. Just as one has used the axioms of Euclidean geometry to prove that the base angles of any isosceles triangle are equal, there is no subsequent need to draw numerous different isosceles triangles and measure their base angles for verification. According to academic conventions in neural network approximation theory, the vast majority of papers—including the work “Approximation by Superpositions of a Sigmoidal Function”—do not include any experimental validation.
Thank you once again. If there are still any doubts, we sincerely hope that you will kindly offer your comments. | Summary: This paper discusses the universal approximation property of row-rank MLPs using the Swish activation function.
## update after rebuttal
Although the authors provide some explanation regarding the activation function and the number of parameters, my concerns are not fully addressed. Therefore, I will maintain my score.
Claims And Evidence: The authors provide the number of parameters required for the approximation used in Theorem 4.1, along with a bound. While I have not rigorously checked every detail of the proof, I find no issues with its overall structure.
Methods And Evaluation Criteria: The authors approximated the function using a commonly employed method: partitioning the space and connecting each partition with a bump function.
Theoretical Claims: The main structure of the proof follows a natural progression, and I do not anticipate any major issues.
Experimental Designs Or Analyses: .
Supplementary Material: .
Relation To Broader Scientific Literature: .
Essential References Not Discussed: .
Other Strengths And Weaknesses: The proof technique is highly standard, and it is difficult to consider it a significant mathematical advancement.
The choice of activation function is too restrictive. Why is the activation function in the paper restricted to Swish? As I understand it, the properties of Swish are used in Lemmas 5.4–5.6 to approximate $xy$, $x^2$, and $x$, but these approximations can also be achieved using many other activation functions. Additionally, approximating bump functions should also be feasible with alternative activations.
I do not quite understand the significance of the linear hidden layer having a smaller width than the nonlinear layer. These values merely compare the number of parameters within the same network, whereas a meaningful comparison should be made between different networks.
The authors conducted a quantitative approximation comparison only with Ohn & Kim (2019), but they should also compare their results with more recent studies for a more comprehensive evaluation.
Other Comments Or Suggestions: The authors mention that low-rank approximation is used as one of four methods for compressing neural networks. The paper does cite prior works related to low-rank approximation, but it does not explicitly specify the source of the definition of low-rank MLP when introducing it in Section 3.2. Clearly stating where this definition originates from would enhance clarity and strengthen the paper’s argument.
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments. We will try our best to relieve your concerns.
Q1: The proof technique is highly standard, and it is difficult to consider it a significant mathematical advancement.
A1: While our mathematical proof follows a well-established framework, we conducted a well-designed constructive approximation which guarantees that the width of linear hidden layers is no more than one-third of the width of nonlinear layers. To our knowledge, though many works demonstrated that low-rank compression performs in a variety of applications empirically, we are the first to develop the theoretical foundation for low-rank compression from the perspective of approximation theory by answering the question of whether a neural network processed by low-rank compression can serve as a good approximator for a wide range of functions. We believe that mathematics is a tool for reaching truth and what matters is the conclusion derived from the mathematical tools.
Q2: The choice of activation function is too restrictive.
A2: From the perspective of technical proofs, we approximated the identity function precisely by leveraging the characteristics of swish function (Lemma 5.6). From the perspective of foundational contributions to future research, as highlighted in the Introduction, the infinite differentiability of Swish neural networks creates unique opportunities to extend our approximation theorems to higher-order Sobolev spaces.
Q3: I do not quite understand the significance of the linear hidden layer having a smaller width than the nonlinear layer.
A3: As mentioned in the Introduction, to our knowledge, there is no work providing a theorical explanation to answer why low-rank compression does not hurt performance of neural networks. We are the first one to develop the theoretical foundation for low-rank compression from the perspective of approximation theory by answering the question of whether a neural network processed by low-rank compression can serve as a good approximator for a wide range of functions. Though our compression rate is not very significant, we believe that our publication will inspire more important theories.
Q4: The authors conducted a quantitative approximation comparison only with Ohn & Kim (2019), but they should also compare their results with more recent studies for a more comprehensive evaluation.
A4: While there are many articles discussing the approximation capabilities of neural networks, most focus on ReLU neural networks. Additionally, most studies target function classes of continuous functions, Lipschitz functions, and Hölder continuous functions. Many works also do not provide simultaneously the required depth, width, upper bound of absolute values of parameters, and upper bound of number of nonzero parameters for approximation. Considering all the above points, we selected Ohn & Kim (2019) for comparison.
Q5: it does not explicitly specify the source of the definition of low-rank MLP when introducing it in Section 3.2.
A5: Up to now, there is no mathematically rigorous and universally accepted definition of low-rank networks. After all, even the strict mathematical definition of neural networks themselves lacks a consensus—how can we expect one for low-rank networks? Our Definition 3.1 builds upon extensive prior research on low-rank compression, proposing a definition aligned with the majority of scholars' understanding. This definition captures the essence of low-rank compression: reducing computational cost by decomposing large matrices (or convolutional kernels) into products of smaller matrices (or smaller convolutional kernels). We will supplement additional references to demonstrate that this definition aligns with the network architectures proposed in these studies:
[1]. Denil, Misha, et al. "Predicting parameters in deep learning." Advances in neural information processing systems 26 (2013).
[2]. Idelbayev, Yerlan, and Miguel A. Carreira-Perpinán. "Low-rank compression of neural nets: Learning the rank of each layer." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[3]. Sainath, Tara N., et al. "Low-rank matrix factorization for deep neural network training with high-dimensional output targets." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
Thank you once again. If there are still any doubts, we sincerely hope that you will kindly offer your comments. | Summary: This paper investigates low-rank compression techniques for neural networks by strategically inserting narrow linear layers between adjacent nonlinear layers. The authors theoretically demonstrate that low-rank Swish networks with a fixed depth can approximate any function within a Hölder ball Cβ,R([0,1]^d) to an arbitrarily small error. Notably, the theoretical analysis establishes bounds on the minimal width required for these inserted linear layers, achieving substantial reductions in computational cost without significant accuracy loss.
Claims And Evidence: I can understand the claims in Section3, but not much in Section 4 and 5.
Methods And Evaluation Criteria: The selecting specific datasets and models in the evaluation is unclear. Clarification on the choice and potential applicability to more prevalent and larger-scale models would significantly strengthen the practical impact of the method.
Theoretical Claims: The theory and proofs in section 4 and 5 are hard to follow for me.
Experimental Designs Or Analyses: Please see the evaluation part for my confusion of model and dataset selection.
Supplementary Material: I have checked the supplementary material.
Relation To Broader Scientific Literature: This paper offers valuable theoretical insights into structured low-rank compression strategies.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
Rigorous Mathematical Proofs: The paper provides extensive mathematical proofs, rigorously demonstrating the proposed theorem, significantly contributing to the theoretical understanding of low-rank compression.
Clear Compression Theory: The authors articulate a clear theoretical basis for the achievable compression ratio and computational reductions, providing a solid analytical foundation.
Innovative Use of Hölder Ball: Employing the Hölder ball in the analysis introduces a novel perspective, enriching the theoretical contributions of the paper.
Weaknesses:
Complexity and Readability of Proof: The mathematical proofs, while rigorous, are challenging to follow due to the absence of a clearly structured narrative to guide readers effectively.
Lack of Runtime Performance Analysis: The paper lacks empirical data on real-world runtime reductions. Providing measurements of actual runtime improvements could significantly validate the practical utility of the methods.
Other Comments Or Suggestions: See strengths and weaknesses.
Questions For Authors: See strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments. We will try our best to relieve your concerns.
Q1: Complexity and Readability of Proof
A1: To establish a novel theoretical foundation for low-rank compression, we develop a rigorous mathematical framework comprising multiple interlocking proofs that, while intricate for researchers new to neural network approximation theory, are indispensable for upholding formal rigor. To enhance accessibility, we distill the conceptual underpinnings of our constructive approximation framework in a dedicated subsection at the conclusion of the Introduction (preceding the contributions summary). In Section 5, we adopt a modular approach to present the complete derivation: Theorem 4.1 is decomposed into four self-contained modules, each addressing distinct aspects of the proof architecture while highlighting critical technical innovations and their interrelationships.
Q2: Lack of Runtime Performance Analysis
A2: The purpose of this paper is not to prove how much low-rank compression, as a network compression method, can reduce neural network inference time. Low-rank compression has been proven in many literatures to be an efficient, easy-to-implement, and hardware-friendly network compression method that significantly reduces computational cost. As emphasized in the Introduction, our work aims to develop a theoretical foundation for low-rank compression from the perspective of approximation theory. Our Theorem 4.1 provides an upper bound of approximation error for representing any function in the Hölder ball $\mathcal{C}^{\beta,R}([0,1]^d)$ using low-rank Swish networks. Consequently, our experiments also focus on error analysis. Furthermore, Section 4 presents theoretical calculations confirming that the low-rank architectures in Theorem 4.1 reduce the number of multiplication operations—a more scientifically rigorous metric than runtime, as runtime is highly susceptible to environmental variability.
Thank you once again. If there are still any doubts, we sincerely hope that you will kindly offer your comments. | Summary: This paper studies the question of whether networks with low rank matrices and Swish activations can approximate a class of Holder-continuous and smooth functions. The authors show that the number of parameters and operations can be reduced by 1/3 to still obtain the same approximation rates.
Claims And Evidence: The theoretical claims are supported by proofs. While I have not checked the proofs thoroughly, the logical structure of the proofs seems to be correct. The empirical evaluations in section 6 demonstrate comparable performance between low rank networks and full rank networks, however these are on small scale datasets and with small networks. Nevertheless the literature contains substantial empirical support for the natural emergence of low-rank structures within neural network layers. Since the main contribution of the paper is the theoretical results, I will not dwell on the empirical evaluation.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not check the correctness of the proofs in detail. However the results and proofs in the paper seem similar to other results on approximation with neural networks so I trust them.
Experimental Designs Or Analyses: Some more details about the training (choice of regularization, optimizer, other hyperparameters) could be provided.
Supplementary Material: Not in detail. I only checked Figure 3 for the construction of the network.
Relation To Broader Scientific Literature: The approximation results of the paper are in line with other neural network approximation results like [1] (for Sobolev spaces, not Holder).
[1] Hrushikesh N Mhaskar. Neural networks for optimal approximation of smooth and analytic functions. Neural computation, 8(1):164–177, 1996
Essential References Not Discussed: I am not sure.
Other Strengths And Weaknesses: Weaknesses: The results do not seem to escape the curse of dimensionality. One hope of using low rank layers is to achieve more efficient approximation rates. This does not seem to be the case here.
The improvement in terms of number of parameters/computational steps is only a constant 2/3 factor.
The role played by the Swish activation is unclear - why is that a necessary component of the network architecture?
Overall, while the paper proves that networks with Swish activations and low rank layers can approximate Holder functions at the same rate, the reasons to prefer these Low rank swish networks are not very compelling.
The amount of savings that can typically be achieved in practice is much more than 33%, so it is not clear the current results can explain how to achieve better efficiency.
Other Comments Or Suggestions: None.
Questions For Authors: Is there an advantage to using Low rank swish networks over full rank ReLU networks in terms of approximation? Or is your goal to show that Low rank swish networks achieve comparable rates of approximation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comments. We will try our best to relieve your concerns.
Q1: Problem on the curse of dimensionality.
A1: If you want to approximate any function in a Hölder ball, then theoretically it is impossible to escape the curse of dimensionality[1, 2]. In Remark 4.5 of Section 4, we introduce a novel function class that is prevalent in practical applications, and when approximating functions within this class, one can avoid the curse of dimensionality.
Q2: Problem on the compression rate.
A2: As mentioned in the Introduction, to our knowledge, there is no work providing a theorical explanation to answer why low-rank compression does not hurt performance of neural networks. We are the first one to develop the theoretical foundation for low-rank compression from the perspective of approximation theory by answering the question of whether a neural network processed by low-rank compression can serve as a good approximator for a wide range of functions. Though our compression rate is not very significant, we believe that our publication will inspire more important theories.
Q3: Problem on the choice of activation function
A3: From the perspective of technical proofs, we approximated the identity function precisely by leveraging the characteristics of swish function (Lemma 5.6). From the perspective of foundational contributions to future research, as highlighted in the Introduction, the infinite differentiability of Swish neural networks creates unique opportunities to extend our approximation theorems to higher-order Sobolev spaces.
Thank you once again. If there are still any doubts, we sincerely hope that you will kindly offer your comments.
[1]. Schmidt-Hieber, A. J. Nonparametric Regression using Deep Neural Networks with ReLU Activation Function. The Annals of statistics, 48(4):1875–1897, 2020.
[2]. Bauer, B. and Kohler, M. On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of statistics, 47(4):2261 - 2285, 2019. | null | null | null | null | null | null |
Optimizing Language Models for Inference Time Objectives using Reinforcement Learning | Accept (poster) | Summary: This paper explores the benefits of explicitly training language models using reinforcement learning to perform well under inference time algorithmic techniques like pass@$k$ and majority voting. The authors argue that directly optimizing for inference time objectives can lead to improved performance on those specific metrics.
This paper provides a formulation for $k$-sample objectives and efficient gradient estimation techniques. The authors derive an unbiased gradient estimator using the leave-one-out method, and a biased variant that reduces variance by centering the advantage function.
The paper demonstrates these approaches on mathematical reasoning (MATH dataset) and code generation tasks (CodeContests), showing performance benefits when training directly for inference time objectives compared to training for mean performance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: This paper makes a contribution to language model training by showing a difference between and inference objectives. Most prior work has treated these as separate concerns, but this approach directly incorporates inference techniques into the training process.
For practical applications like code generation, where pass@$k$ metrics are commonly used in evaluation, this approach provides a clear path to improving performance.
Essential References Not Discussed: No
Other Strengths And Weaknesses: ### Strengths
- **Technical Contribution:** The theoretical framework for optimizing $k$-sample objectives extends beyond previous work in multi-sample objectives. The proposed estimators are verified to be KL-efficient than directly optimizing mean performance on benchmarks.
- **Extensive Empirical Validation:** The authors conduct thorough ablation studies across model sizes (3B, 8B, 70B), different values of $k$, and various datasets, strengthening their claims.
### Weaknesses
- **Result Within Expectation:** Since there is gap between mean performance objective and pass@$k$ objective, the results in this paper is normally within expectation. It seems unfair to compare with mean policy gradient, which is not talored for pass@$k$ objective.
- **Computational Efficiency:** The $k$-sample gradient methods require $k$ times more inference computation during training, which might limit practical applicability for very large models, especially because generating a sample is the most time-consuming process.
- **Task-Dependent Performance:** As acknowledged by the authors, the benefits vary significantly based on task difficulty - for easier tasks, the standard policy gradient approach remains competitive (and more efficient in samples).
Other Comments Or Suggestions: ### Comments
In general, I think this is a comprehensive paper, which tells a complete story about the gap between mean performance objective and pass@$k$ objective, showcasing the inference-time RL method better aligns with the latter. However, this result is not surprising, as stated in weaknesses.
### Suggestions
The authors should use the same colors in different figures. Currently Figure 2 and 3 have different colors for the same algorithms, which is highly confusing in the first sight.
Questions For Authors: Regarding the computational efficiency issue mentioned in weaknesses, can the authors provide another ablation study on the **total number of samples**? Namely, allow mean policy gradient to do $k$ times of update than the proposed algorithms, and plot the figures.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> Regarding the computational efficiency issue mentioned in weaknesses, can the authors provide another ablation study on the total number of samples? Namely, allow mean policy gradient to do k
times of update than the proposed algorithms, and plot the figures.
In our manuscript the baseline (mean PG, and mean PPO), already uses the same number of samples as the other variants to enable a fair comparison in terms of compute cost.
We also conducted exp. using k samples for using the PPO loss but remove the value model. For mean objective, this corresponds to the GRPO loss.
For the pass@k objective, note that the proposed biased pass@k objective already subtracts the mean out of k samples, which coincides with the Dr.GRPO loss recently proposed https://arxiv.org/abs/2503.20783 (only subtract the mean but not divide by the std. deviation).
For completeness, we also include:
1. the mean objective using Dr. GRPO loss
2. biased pass@k objective using GRPO loss, i.e., further divide by the std. Deviation
Due to time and resource constraints we are only able to finish the exp. using llama 3.1 8b model on the coding tasks. We will add the others in the revision.
Checkpoints are all trained for 8000 gradient steps using the same set of hyperparameters. We report the number by avg the perf of the last 5 ckpts, (8k, 7.8k, …, 7.2k) steps, to ensure a non cherry-picked results.
pass@1:
| Method | CodeContests/Valid | CodeContests/Test | TACO/easy | TACO/hard |
|----------------------------------|--------------------|-------------------|-----------|-----------|
| **Mean (Dr. GRPO)** | 7.17 | 8.88 | 16.47 | 5.40 |
| **Mean (GRPO)** | 9.21 | 8.39 | 17.89 | 4.74 |
| **Biased pass@k (Dr. GRPO)** | 6.79 | 7.03 | 15.87 | 4.86 |
| **Biased pass@k (GRPO)** | 8.11 | 6.67 | 14.51 | 4.12 |
pass@10:
| Method | CodeContests/Valid | CodeContests/Test | TACO/easy | TACO/hard |
|----------------------------------|--------------------|-------------------|-----------|-----------|
| **Mean (Dr. GRPO)** | 10.61 | 14.89 | 22.91 | 6.89 |
| **Mean (GRPO)** | 13.88 | 14.41 | 24.11 | 7.06 |
| **Biased pass@k (Dr. GRPO)** | 15.71 | 16.44 | 27.01 | 8.55 |
| **Biased pass@k (GRPO)** | 16.83 | 15.45 | 26.08 | 7.37 |
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their updated experiments. My concerns are addressed and I'll raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for engaging with our rebuttal. We sincerely appreciate you adjusting the scores. Please also let us know if you have further questions. | Summary: This paper investigates the impact of reinforcement learning (RL) objectives that optimize multi-sample metrics such as Pass@k and Majority Voting (Maj@k). It contrasts these objectives with standard mean reward RL objectives across mathematical reasoning and code generation tasks. The study highlights the trade-off between optimizing for multi-sample inference objectives and maintaining strong Pass@1 performance.
Claims And Evidence: Improved Pass@k Performance: The paper demonstrates that training with Pass@k policy gradient (PG) and biased Pass@k PG improves Pass@k performance. Supporting evidence is presented in Sections 4 and 5.
Effect of Increasing k During Training: The paper examines the impact of larger k values on performance (Section 4.2).
Methods And Evaluation Criteria: The paper evaluates three RL Objectives: Mean Policy Gradient, Pass@k Policy Gradient, Biased Pass@k Policy Gradient
Performance is primarily assessed through Pass@1 and Pass@4 scores while comparing these methods against KL with a reference policy. Experiments are conducted on three datasets:
MATH (Mathematical Reasoning), HARP (More difficult Mathematical Reasoning), CodeContests (Competitive Programming)
Theoretical Claims: The paper provides analytical insights into optimizing inference-time objectives but does not introduce major theoretical results.
Experimental Designs Or Analyses: - The paper effectively demonstrates that improving Pass@k often comes at the expense of Pass@1 performance, particularly in code generation tasks.
Practical Limitations of Pass@k: Pass@k is not a practical metric in real-world inference scenarios as it requires an oracle verifier. While the authors argue that Pass@k represents an upper bound on inference-time scaling, it is unclear if this upper bound is useful given the lack of a reliable verifier (DeepSeek-AI, 2025).
- The field is shifting towards inference compute scaling within a single trajectory (e.g., longer chains of thought) rather than multi-sample methods. The paper does not address how its proposed techniques integrate with this paradigm.
- The authors compare their metrics against KL divergence from a reference policy but do not justify why this comparison is meaningful. If the primary objective is to improve Pass@1 and Pass@4, it is unclear why lower KL divergence is relevant.
- The primary MATH benchmark results are reported for the training set in the main paper, while test set results are relegated to the appendix. This should be reversed.
Baseline Comparisons: A simple PG with an entropy penalty should be included as a baseline for a fairer comparison.
Supplementary Material: The appendix was only briefly skimmed.
Relation To Broader Scientific Literature: The paper focuses on the generator-verifier framework, where increased inference-time compute is allocated to generating multiple solutions and selecting the best one. However, recent trends have shifted toward single-trajectory inference-time scaling (e.g., longer chain-of-thought reasoning), limiting the practical applicability of methods discussed in the paper.
Essential References Not Discussed: Variational Best-of-N Alignment (https://arxiv.org/abs/2407.06057v1)
Other Strengths And Weaknesses: Strengths:
I think the paper is well-organized and easy to follow. It evaluates reasoning and coding tasks across diverse datasets and honestly the analytical insights provide useful perspectives on the optimization of multi-sample objectives.
Weaknesses:
- I think the gains in Pass@k come at a significant cost to Pass@1, particularly in coding tasks. Importantly, Mean PG outperforms other multi-sample objectives on the MATH test set, even at Pass@4.
- The approach depends on having a strong verifier, raising doubts about practical improvements from these objectives.
- The paper lacks a comparison with standard PG + entropy bonus.
- The majority of the related work section is relegated to the appendix, omitting key citations from the main text. Important prior work, such as Variational Best-of-N Alignment, is not discussed.
Other Comments Or Suggestions: I don't have any other comments.
Questions For Authors: I think I have discussed my questions in strengths and weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> I think the gains in Pass@k come at a significant cost to Pass@1, particularly in coding tasks.
One major takeaway message is that there is a trade-off between pass@k and pass@1 in general. For coding tasks, pass@k is generally a more meaningful objective than pass@1 because we focus more on harder tasks, and obtaining one valid solution to a hard problem is enough [1].
[1] Li et al, Competitive level code generation with AlphaCode, 2022
> Importantly, Mean PG outperforms other multi-sample objectives on the MATH test set, even at Pass@4.
We are being truthful about the train/test generalization gap here - the performance improvements of our algorithm are better assessed at the training set, since the algorithm does not account for train/test generalization. The generalization gap might depend on multiple factors out of our control, such as the choice of the base model and data used for pretraining the models.
It is true that the mean PG outperforms multi-sample objectives with MATH at pass@4, but the outperformance is marginal and limited to pass@4. The overall conclusions about the performance improvement from multi-sample gradient estimate still stand, but we are truthful about the fact that it does not necessarily generalize in all cases.
> The approach depends on having a strong verifier, raising doubts about practical improvements from these objectives.
This is not true. We do not require verifiers at all during training, unlike Chow et al. Can the reviewer clarify on the claim that the approach depends on a strong verifier and where it is used?
For both training and evaluations, we use the standard numpy package to compute the rewards for training. This is common practice in the reasoning LLM literature.
> The paper lacks a comparison with standard PG + entropy bonus.
The entropy bonus in PG algorithms is complementary to the approach we take. In other words, in principle, our method can be combined with an entropy bonus as-is.
As a result, we do not think adding entropy bonus comparison offers new insights into the current algorithmic designs. Entropy bonus might lead to performance improvements as a result of avoiding premature collapse, but it also introduces a tunable hyper-parameter, whereas our approach is hyper-parameter free compared to the baseline.
> The majority of the related work section is relegated to the appendix, omitting key citations from the main text. Important prior work, such as Variational Best-of-N Alignment, is not discussed.
We apologize for the inconvenience of moving most related work discussion to the Appendix due to ICML page limit. When the limit is relieved (camera ready), we can move much of the discussion into the main paper.
Variational BoN: this is a related work and we will discuss it. Variational BoN does not optimize for the original objective as-is, but rather requires a variational approximation. This leads to an algorithm akin to the one introduced in Balashankar et al, and in a pairwise comparison setting. In comparison, we focus on the point-wise reward case and require no approximation to the original objective of interest.
---
Rebuttal Comment 1.1:
Comment: Thank you for your effort and detailed answer. I have read your rebuttal. I think my main issues still stand. Regarding the verifier: A high pass@K can be useful when we can sample multiple times and then choose the correct answer between the K times. If we cannot do that in test time, it is not clear to me how we can use this LLM with high pass@K.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for engaging with our rebuttal, we appreciate your time.
> Regarding the verifier: A high pass@K can be useful when we can sample multiple times and then choose the correct answer between the K times. If we cannot do that in test time, it is not clear to me how we can use this LLM with high pass@K.
We agree with the usefulness of pass@k with a high value of k. There is no indication in the paper that we cannot use a high value of k at test time.
In fact, in Table 1 we show this situation: at training time we might train with a smaller value of k=8 while testing it for k=10,100. The performance does generalize and transfer to higher value of k. Is this what you have in mind?
Please let us know if you have further questions on our paper. If our response addresses your technical points, please consider adjusting our scores too. | Summary: This paper explores the potential benefits of explicitly training LLMs for test-time inference. The authors introduce a new RL objective, which explicitly utilizes multiple samples and optimizes LLMs for inference-time objectives, like better pass@k performance or better majority voting performance. The empirical experiments show improvements on various math reasoning tasks and code generation tasks compared with baseline algorithms.
Claims And Evidence: See “Methods And Evaluation Criter” and “Experimental Designs Or Analyses”.
Methods And Evaluation Criteria: I am not fully convinced by the motivation of this work. Model training doesn't explicitly account for the downstream inference time algorithm. As presented in work [1], there is no single inference-time algorithm that works best for any scenario, and usually, during model training, it is not practical to assume what inference-time algorithm will be used. It's a user decision, determined by computation budget, latency requirements, etc. However, the objective proposed in this work, if I understand correctly, seems to require separate training for different objectives, which doesn't seem to be well-justified.
[1] Gao, Peizhong, et al. "Meta reasoning for large language models." arXiv preprint arXiv:2406.11698 (2024).
Theoretical Claims: The proof in section 3 is correct.
Experimental Designs Or Analyses: As mentioned in the Methods and Evaluation Criteria section, on one hand, I don't think it is a valid assumption to assume which test-time algorithm to use during training. On the other hand, for general RL algorithms, I suspect the performance of algorithms like GRPO, that utilize the same number of samples, will even outperform the objective proposed in this paper. To be clear, GRPO [1] doesn't require any prior assumption on test-time algorithm, but we can still evaluate it on both approaches. As shown in Figure 18 and Figure 19 from [2], the R1-Distill-Qwen-7B model, a single model, outperforms both maj@k and pass@k separately-trained 8B models reported in this paper on MATH. This is not a fair comparison since R1-Distill-Qwen-7B is a stronger base model, but my argument is mainly to question the necessity of separate objectives; a more rigorous comparison is needed. Baselines like GRPO and other single-objective functions with the same sample complexity are necessary for comparison.
[1] Shao, Zhihong, et al. "Deepseekmath: Pushing the limits of mathematical reasoning in open language models." arXiv preprint arXiv:2402.03300 (2024).
[2] Qu, Yuxiao, et al. "Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning." arXiv preprint arXiv:2503.07572 (2025).
Supplementary Material: Yes, I reviewed all the supplementary material.
Relation To Broader Scientific Literature: This work is related to the claim that improving inference time performance can exceed the benefits of additional training time.
Essential References Not Discussed: The key contribution is a multi-sample RL objective that aligns with test-time inference. However, single-sample RL algorithm work is not well-discussed in this paper, such as [1], [2], and [3].
[1] Kumar, Aviral, et al. "Training language models to self-correct via reinforcement learning." arXiv preprint arXiv:2409.12917 (2024).
[2] Kazemnejad, Amirhossein, et al. "Vineppo: Unlocking rl potential for llm reasoning through refined credit assignment." arXiv preprint arXiv:2410.01679 (2024).
[3] Shao, Zhihong, et al. "Deepseekmath: Pushing the limits of mathematical reasoning in open language models." arXiv preprint arXiv:2402.03300 (2024).
Other Strengths And Weaknesses: See “Methods And Evaluation Criter” and “Experimental Designs Or Analyses”.
Other Comments Or Suggestions: See “Methods And Evaluation Criter” and “Experimental Designs Or Analyses”.
**After Reply Rebuttal Comment**
I maintain my concern about the generalizability of the method. As the authors mentioned in "Reply Rebuttal Comment":
> If we train with pass@k and evaluate with maj@k, there is generally no guarantee that we can do better than pass@1.
I think it's a strong assumption that "inference time objective is known to inform the training time procedure." For multiple objectives during inference time, the computational overhead of retraining the model to optimize each target is concerning, and achieving better results by specifically optimizing for each objective seems less appealing. However, the authors have addressed my other concerns; therefore, I have updated my score accordingly.
Questions For Authors: There are no implementation details in the paper. How are the hyperparameters selected for different settings in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> On the other hand, for general RL algorithms, I suspect the performance of algorithms like GRPO, that utilize the same number of samples, will even outperform the objective proposed in this paper.
We strongly disagree with the reviewer.
In our manuscript the baseline (mean PG, and mean PPO), as what the reviewer refers to as “general RL algorithm”, already uses the same number of samples as the other variants to enable a fair comparison. And we already provide empirical evidence in our paper that they do not outperform our proposed variants that optimize for pass@k.
In other words, general RL algorithms such as GRPO only optimize for the mean performance, and would similarly benefit from algorithmic improvements proposed in this work. GRPO is not designed to optimize for pass@k on its own.
> Baselines like GRPO and other single-objective functions with the same sample complexity are necessary for comparison.
Please refer to our response to Reviewer bYLj for exp. on GRPO; we also include Dr. GRPO as a variant. Note that, our method could be integrated with various policy gradient variants, e.g., the PPO or vanilla Policy Gradient in the manuscript. Please refer to our response to Reviewer 61s6 for results of SFT and Rejection Sampling Finetuning (RFT) for additional comparison.
> To be clear, GRPO [1] doesn't require any prior assumption on the test-time algorithm, but we can still evaluate it on both approaches.
GRPO (or any policy gradient method derived from the objective $\max E(f(y)))$ **does** have prior assumptions on the test-time algorithm: the one that uses the exact protocol as the $E(f(y))$, i.e., pass@1. The reviewer refers to [2] to argue that a model trained using $\max E(f(y))$ objective works well on pass@k metrics, i.e., $\max E(f(y_1, … y_k))$ objective. This could be also true on the other way round.
We motivate this from the first principle that such an objective is to maximize the k-sample objective $E(f(y_1, … y_k))$, contrary to the common formulation of maximizing cumulative return $E(f(y))$, where the return is a function for individual sample. Our proposed scheme works for settings that fit into this objective $E(f(y_1, … y_k))$. Specifically, we show that in the case of $f = r \circ \text{maj}$ and $f = \max$, this corresponds to the maj@k and pass@k metrics in math reasoning and code generation and gives empirical evidence.
That said, our method also can be evaluated in multiple sampling budgets in test time (we do not set a constraint that training on k = 8 only works for k = 8 in test time, see our section of generalization to k = 100).
Also, we share exactly the same hyperparameter as GRPO: the number of samples per prompt.
> As shown in Figure 18 and Figure 19 from [2], the R1-Distill-Qwen-7B model, a single model, outperforms both maj@k and pass@k separately-trained 8B models reported in this paper on MATH. This is not a fair comparison since R1-Distill-Qwen-7B is a stronger base model, but my argument is mainly to question the necessity of separate objectives.
The reviewer suggests that the access to a stronger model R1-Distill-Qwen-7B questions the necessity of separate training objectives.
We object to this argument because the R1-Distill-Qwen-7B model is distilled from a capable mode R1, which is specifically trained via RL methods, which is the focus of our paper.
This does not undermine the necessity of separate objectives but rather reinforces the validity and potential benefits of explicitly targeting inference-time objectives during training, which results in a better model in terms of pass@k performance and could bring the pass@k superiority to other models by following the same distillation process. Also, Figure 18 and Figure 19 from [2] is reported on MATH500, while our numbers are on a full MATH test set (a total 2500 problems).
> There are no implementation details in the paper. How are the hyperparameters selected for different settings in the paper?
Please find implementation details in the Appendix A (in particular A. and A.4).
For sampling, we use top-p=1 and temperature $\tau=1$ sampling with standard parameters, as they are compatible with the RL training, though evaluations can be executed with various sampling scheme.
For training, the experiments are rather robust to the choice of learning rate and other training parameters: Therefore, we fixed these hyperparameters among different settings (mean/pass@k objective) within math/code experiments respectively, to ensure a fair comparison.
Other hyper-parameters such as $k$ are ablated in the experiments. We bump the $k$ for code experiments since CodeContests is a challenging benchmark.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses! I have some follow-up questions regarding the responses:
**General RL comparison**
It is good to see the generalization across pass@k with different k values, but my concern about separate training objectives is **whether training the model with GRPO + pass@k can outperform GRPO + pass@1, (correct me if I am wrong, where the former is the method proposed in the paper, and the latter is the classical approach), when the evaluation metric is maj@p**, and vice versa. Any other algorithms are also acceptable since only the objective matters here.
**Evaluation benchmark**
I encourage the authors to evaluate the method on MATH500, which is derived from the original MATH dataset [1] to avoid potential contamination issues. During the development of PRM800K [2], since the initial 7.5K training set was insufficient for training a robust Process Reward Model (PRM) on step-by-step solution data, 4.5K problems from the test set were incorporated into the training set, leaving a remaining subset of 500 problems now referred to as MATH500. Since the release of PRM800K, MATH500, instead of the original MATH test set, has been widely adopted to prevent overlap between training and test sets. Also, datasets like AIME2024 and AIME2025, which are even less likely to be contaminated, would be better options for evaluation.
**Clarification**
Can the author point me to the ablation/evidence show that “For training, the experiments are rather robust to the choice of learning rate and other training parameters”?
**Hyperparameter and base model selection (optional)**
Moreover, the authors might want to tune the hyperparameters following the recipes in [1]; based on some reproduction work, temperature=1 doesn't give the desired model performance. Additionally, studies like [2] reveal that running outcome reward RL on top of Llama produces more noise and less appealing performance. The authors could also consider including results on other base models if time permits.
**Reference**
[1] Hugging Face, . "Open R1: A fully open reproduction of DeepSeek-R1." (2025).
[2] Gandhi, Kanishk, et al. "Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective stars." arXiv preprint arXiv:2503.01307 (2025).
[3] Hendrycks, Dan, et al. "Measuring mathematical problem solving with the math dataset." arXiv preprint arXiv:2103.03874 (2021).
[4] Lightman, Hunter, et al. "Let's verify step by step." The Twelfth International Conference on Learning Representations. 2023.
[5] Guo, Daya, et al. "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning." arXiv preprint arXiv:2501.12948 (2025).
[6] OpenAI. “Learning to reason with LLMs.” url: https://openai.com/index/learning-to-reason-with-llms/ (2024)
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for engaging with our rebuttal. We agree that addressing the concerns raised (eval on MATH500 and AIME24, 25 for example) will strengthen the paper. The issues pointed out are all omissions (of results, baselines, etc.) due to the rapid evolving landscape in the RL on LLM field (given that the manuscript is submitted in January and we are, same as the reviewer, quite aware of the evolved references here happening in Feb and March; techniques like GRPO, the PPO variant by using MC rollouts to estimate the value $V$, gained prominence only recently with the release of Deepseek-R1 in mid-Jan), rather than flaws in the method derived therectically from the optimization objective or experimental design for empirical evidence, and since most of them can be addressed easily we would like to kindly ask the reviewer to consider raising their score if they feel the solutions discussed below are satisfactory.
> General RL comparison
If we train with pass@k and evaluate with maj@k, there is generally no guarantee that we can do better than pass@1.
However, our work targets situations where the inference time objective is known to inform the training time procedure. We need to make assumptions, this is the trade-off. If we have no assumptions on the inference time objectives, there is in principle no better way than training with pass@1.
> Evaluation benchmark
We agree it is a valid point to measure contamination especially when we want to report SoTA numbers that compare against prior results.
However, in our case we believe it is more meaningful to measure the "delta" of performance, since the improvements are compared across different methods.
AIME 24 and AIME 25 are fair benchmarks too, but since we do not claim SoTA on reasoning benchmarks and our contribution rests on measuring the relative performance improvements across different methods, having MATH and HARP dataset should provide valid empirical evidence as well.
> Clarification
We do not have results in the paper showcasing this, though in very early experiments we settle with a learning on which the baseline methods (pass@1) work stably. We hence use the same lr for the pass@k objectives too, and have not changed the lr since.
Does the reviewer believe that we should provide ablation on lr too? Learning rate, among many other hyper-parameters in the system, can in principle all be ablated and realistically it is infeasible to ablate across all combinations. We have ablations for important hyper-parameters to the problem such as the value of k.
> Hyperparameter and base model selection (optional)
We agree that temp=1 is not necessarily the best sampling config for max'ing out evaluation performance - in fact in our experience greedy sampling might add a few points to the evals. That said, we feel that using temp=1 and top-p=1 ensure that training and eval use the same sampling config (temp=1 and top=1 is on-policy sampling for RL). This is meant to reduce confounding factors introduced by the discrepancy between eval and training.
Other base models: we do not mean to pursue SoTA results in this work (as data and the starting model plays a crucial role here), and rather we feel it is more meaningful to measure the "delta" in performance across methods to provide a sound scientific ground for algorithmic advancement. Though the absolute numbers might not transfer across different models (llama vs. deepseek r1), we believe the "delta" should hopefully have a better transferable property. | Summary: This paper investigates the reinforcement learning algorithms of LLMs in the training time for achieving the test-time objectives. Specifically, it focuses on k-sample policy gradient approaches assuming pass@k and majority vote are the test-time strategies of interest. The authors proposed a leave-one-out like advantage function, which is claimed to be an unbiased estimation of the policy gradient. Furthermore, they showed another variant which further reduces the variance through introducing a bias term by adding an average baseline in the advantage function. Empirical results showing that the biased k-sample method outperforms standard policy gradients. The proposed algorithm is validated through different benchmark datasets (e.g., MATH, CodeContests and TACO) using different sizes of models, demonstrating practical applicability of the proposed methods.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: - In the proof of Lemma 1, why is the expectation of two independent variables zero? I didn’t see any assumption made on either variable being zero.
- The proof for Lemma 2 is hard to follow. Not sure which expression the authors refer to. I suggest the authors add a complete proof in the appendix for better understanding of the claims.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: The contribution of this paper has shown that aligning the test-time objective with the training-time effort is promising, which is related to RL training and test-time compute of LLMs.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Other Weaknesses
- The variance reduction is not shown both theoretically and empirically.
- The only baseline the authors considered was mean policy gradient, which is not comprehensive.
- The comparisons/inconsistent results on PPO integration are not explained adequately. For example, why does pass@k policy gradient become much worse than the baseline when PPO is used?
Other Comments Or Suggestions: - I suggest the authors add a complete proof for lemmas in the appendix for better understanding.
Questions For Authors: - What is the performance comparison between the proposed approach (e.g., pass@k policy gradient) and purely SFT trained or pass@1 mean policy gradient method with pass@k test-time strategy? This would be meaningful as it shows how much benefit the proposed method can bring.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> In the proof of Lemma 1, why is the expectation of two independent variables zero? I didn’t see any assumption made on either variable being zero.
This is because the expectation of the score function is zero $E[ \log \pi(y_i|x)]=0$. More precisely,
$E[f_{-i}\log \pi(y_i|x)] = E[f_{-i}] \cdot E[\log \pi(y_i|x)]=0$
Where the equality makes use of the statistical independence between $i$ and other samples $-i$. We will make this more clear in the revision.
> The proof for Lemma 2 is hard to follow. Not sure which expression the authors refer to. I suggest the authors add a complete proof in the appendix for better understanding of the claims.
Thank you for the suggestion, we will add a more detailed explanation in the Appendix.
The goal of Lemma 2 is mainly to derive the analytic form of the biased gradient estimate, which turns out to optimize a biased objective with an interesting analytic form - it is the average of leave-one-out objectives from $k$ samples.
> The variance reduction is not shown both theoretically and empirically.
The understanding that the control variate improves variance, either in the form of the leave-one-out control variate (for the unbiased estimator), or subtracting a mean baseline (for the biased estimator), was based on common consensus in the literature on this topic [1]. In fact, we cannot show theoretically that the variance is reduced in all cases, due to the multiplication with the score functions; but we can show that the advantage estimation has lower variance. We can make the theoretical argument more precise in the revision.
For empirical validation: we never implemented an estimator without control variate for variance reduction, as such estimators are plain REINFORCE gradient estimators and do not work well in practice. We can add such an ablation in the revision in case the reviewer finds it useful.
[1] Kool et al, Buy 4 REINFORCE Samples, Get a Baseline for Free!, 2023
> why does pass@k policy gradient become much worse than the baseline when PPO is used?
We have some speculations regarding this observation: the pass@k policy gradient, compared to the mean policy gradient and biased pass@k policy gradient, has the sparsest gradient, i.e., the weight before the policy gradient is non-zero only for positive samples where it is the only 1 positive sample among k samples, while for the other samples the weight is 0. Biased pass@k policy gradient, by subtracting the mean among k samples, makes the policy gradient no-sparse in such cases. When incorporating PPO, adding a learnt value breaks this sparsity due to the imperfectness of the value model.
> What is the performance comparison between the proposed approach (e.g., pass@k policy gradient) and purely SFT trained or pass@1 mean policy gradient method with pass@k test-time strategy? This would be meaningful as it shows how much benefit the proposed method can bring.
We’ve observed a performance drop when doing SFT on CodeContests training set. Given that Llama 3.1 70B has already been heavily tuned in the post-training, some code
solutions in CodeContests training set are of less quality than the data presented in its
post-training phase. For example, some imports in the Python codes are outdated (e.g., from
fractions import gcd will throw an ImportError since Python 3.9).
Therefore, we included a baseline for Rejection Sampling Finetuning (RFT) instead, where the model generates 200 rollouts for each problem instance in the training dataset and collect the correct solutions, we downsample them to maximum 50 solutions per problem and conduct SFT on this dataset.
We show the comparison below, the best performance is bold and the second best is underscored. With these comparisons, the both pass@k (k=8) variants achieves the best performance in terms of the closest metrics pass@10.
| Method | pass@1 | pass@10 | pass@100 |
|---------------------------------|--------|---------|----------|
| **Llama 3.1 70B Inst.** | 16.1 | 34.2 | 48.2 |
| **SFT** | 10.0 | 26.7 | 39.2 |
| **Rejection Sampling Finetuning**| 18.1 | 37.2 | 52.5 |
| **mean PPO** | 24.9 | 34.6 | 41.1 |
| **pass@k PPO** | 21.2 | 41.4 | 51.0 |
| **biased pass@k PPO** | 22.6 | 45.1 | 56.3 |
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, but my concerns have not been fully addressed. Additionally, I want to know:
- How would the expectation of the score function be zero? It is just log probability of tokens, which seems to me should be positive.
- Why does the new result suggest that the pass@k training does not generalize well to other metrics? For example, the gain of biased pass@k PPO is much smaller on pass@100 compared with Rejection Sampling Finetuning (RSF). Not to mention that pass@k PPO is even worse than RSF.
I found the setting of the paper interesting, but it may need some revisions to improve the presentation and results. In light of this response, I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal, we appreciate your time very much.
> How would the expectation of the score function be zero? It is just log probability of tokens, which seems to me should be positive.
We apologize for this confusion - the score in our case is at times -1 for failure and +1 for success. Hence, a negative score means that the model has not been able to answer correctly >50% of all questions in the eval set or training set. The non-negative score - which is the accuracy of the evaluation - can be obtained by a simple linear transformation. We will make this more clear in the revision.
> Why does the new result suggest that the pass@k training does not generalize well to other metrics? For example, the gain of biased pass@k PPO is much smaller on pass@100 compared with Rejection Sampling Finetuning (RSF). Not to mention that pass@k PPO is even worse than RSF.
We speculate there are some good technical reasons for this, which are insightful for practitioners and researchers in general.
In our Rejection Sampling Finetuning (RSF) setting, the number of generations is 200 per problems for the whole training set (13k problems) so in total 2.6M trajectories are sampled. In contrast, in our PPO experiments (results from Table 1) setup detailed in A.4, a total number of 512000 trajectories (8000 gradient steps * 2 batches / steps / sampler * 32 sampler) are sampled.
We think in some cases, RSF can have an edge over PPO because it leverages offline samples at larger scale and makes use of off-policy data and offline data in a deliberate way, which might have a positive impact. PPO, as well as many other online RL algorithms, is known for shrinking the policy entropy over training due to online training, and as a result limits the model's exploration ability over time. This somehow bounds the performance, at the cost of being more data efficient (512k samples v.s. 2.6M samples). The RFT, a offline setting, enjoys a lot the exploration (able to find the correct solutions for hard problems if it's solved once out of 200 attempts) and benefits from the subsequent filtering we set, while it takes more compute to obtain the total samples.
We will make this discussion more clear in the revision. In the meantime please let us know if you have other technical questions you'd like more clarification on. | Summary: This work presents a training objective that directly optimizes pass@k and maj@k performance for LLMs, and optimizes them using standard RL training algorithms (policy gradients with a baseline, PPO) known in practice. They evaluate performance on a synthetic bandit task, math reasoning and coding tasks, and find that they are able to improve performance over algorithms that only optimize pass@1 performance.
## update after rebuttal
In the LLM literature, it is well-established that optimizing for pass@1 does not necessarily improve pass@k. Thus, it makes sense to optimize it with an RL training algorithm like Reinforce. The main contribution of the paper is to apply Reinforce on the pass@k objective. The results and discussion are interesting. My main concern is with technical novelty since prior works like Chow et. al. do exactly the same thing (choice of optimizing pass@n for a trained vs. ground-truth reward function does not change the algorithm or setup). **But, I am still willing to raise my score to 2.5 and would not be strongly opposed to accepting the paper if other reviewers are willing to champion it, since the empirical insights are useful to the community**.
Claims And Evidence: Yes, the claims made are clear. They propose an objective to optimize pass@k directly and run experiments that implement this objective and RL algorithm.
Methods And Evaluation Criteria: Yes, the math and coding benchmarks used seem standard. The authors only plot performance on the pass@k objective they trained for. It would be interesting to see, if the performance gain on value of k used in training also extends to other values of k. If not, how was this value of k (4 or 8) chosen in practice? This part was a bit unclear.
Theoretical Claims: The only theoretical claims are made with respect to the bias of their proposed gradient estimates, which seem correct given their readily apparent nature.
Experimental Designs Or Analyses: Yes, I read through their experimental setup for math reasoning and coding benchmarks, and it looks reasonable to me.
Supplementary Material: I skimmed through it, to check the pseudocode of the algorithm they implement, some experiment details, and Figure 15, which is also referenced in the main text. Figure 15 presents the maj@k results on MATH dataset.
Relation To Broader Scientific Literature: The key idea in this paper is to optimize for inference time performance, like pass@k or maj@k. From prior work, we know that optimizing for pass@1 may not necessarily lead to better performance on pass@k. So, the idea is to improve performance at a higher compute budget. In general, the broader goal should be to learn policies that push up the pareto frontier of performance against inference time compute. This work does not do that, but instead makes an attempt to push the performance up at higher test compute budget, i.e., optimize for performance at higher pass@n.
Essential References Not Discussed: No, I believe the paper discusses most relevant works that optimize pass@n performance. Though, in the next iteration, it would be useful to include some discussion on training time interventions, that optimize for inference time performance, for example more recent meta-RL training objectives like Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning, Qu. et. al 2025.
Other Strengths And Weaknesses: Strengths:
- The gains on pass@8 performance for coding benchmarks is impressive, compared against a baseline that only optimizes pass@1.
- The proposed approach is fairly straightforward to implement.
Weaknesses
- The paper lacks key comparisons with InfAlign (Balashankar et. al.), and Chow et. al., that both propose RL training objectives that directly optimize pass@n or BoN performance.
- The performance on MATH is underwhelming, especially Maj@k in Figure 15, and even pass@4 in Figure 2, is only better in a narrow KL regime, with the proposed policy gradient.
Other Comments Or Suggestions: - If the authors can run experiments against suggested baselines that would strengthen the paper.
- If the authors can also demonstrate that the approach can extend to broader functions like maj@k or even self-verification@k (see Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification, Zhao et. al.), that would also strengthen this paper.
- In general if the authors can empirically show that the proposed approach can improve performance on math reasoning, either at a broader range of KL divergence, or for a broader range of k (beyond the pass@k trained for), that would be great.
Questions For Authors: - Do the authors think that there is a fundamental tradeoff between optimizing for pass@1 and pass@k performance? Why is it that we cannot learn a single model with good pass@1 performance on easy problems and good pass@k on hard problems? Is this lack of representation capabilities, pre-training biases, or issues with training objectives/algorithms?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> The paper lacks key comparisons with InfAlign (Balashankar et. al.), and Chow et. al., that both propose RL training objectives that directly optimize pass@n or BoN performance.
We have discussed the technical differences between our work and Balashankar et. al, Chow et al, in Section 3.3, with more extended discussion in Appendix C.
As pointed out in the paper, Balashankar et. al and Chow et al consider structurally different problem formulation as we have in this work. Balashankar et. al considers a pairwise preference setting (vs. point-wise reward setting in our case) while Chow et al considers the case with auxiliary models such as verifiers or scorers (vs. no requirement for auxiliary models in our case). As a result, it is challenging to set up an apple-to-apple comparison against such concurrent work since they tackle very different settings. Further, we also feel that it’s fair not to make direct comparison to unpublished concurrent work.
> The performance on MATH is underwhelming, especially Maj@k in Figure 15, and even pass@4 in Figure 2, is only better in a narrow KL regime, with the proposed policy gradient.
We acknowledge the fact that the performance improvement on MATH is not huge, though they are still statistically significant. The upper bound of the performance improvement also depends heavily on the base model we started with (which we do not have full control over). MATH as a benchmark task is also relatively easy since the pass@16 has also reached ~86%, at a competitive level as the frontier models such as O1 and Deepseek R1. This also puts a limit on the amount of improvement.
The narrow KL divergence regime: Fig 2 shows improvement across all KL values we observe during training. For Fig 15, since maj@k signals are much sparser, this makes it more difficult for the model to reach a larger KL divergence deviation during training. However, for the KL divergence regime it reaches, the performance improvements are clear. We have highlighted this point as an algorithmic trade-off in the paper.
Additionally, depending on practical applications, the “narrow KL” regime might suffice since for general post-training, ideally the RLHF stage does not lead to too much deviation from the reference policy. We leave this to the judgement of practitioners.
> Do the authors think that there is a fundamental tradeoff between optimizing for pass@1 and pass@k performance?
In principle, pass@1 and pass@k share common goals but there is a conflict between the two objectives, in that pass@k always encourages higher diversity and coverage over the solution space, while pass@1 encourages more determinism.
> Why is it that we cannot learn a single model with good pass@1 performance on easy problems and good pass@k on hard problems? Is this lack of representation capabilities, pre-training biases, or issues with training objectives/algorithms?
It should be possible in principle: this might require a hybrid objective, and switching between pass@1 vs. pass@k training objectives depending on the problem difficulty. We might even want to devise a novel algorithm that adjusts the objective based on the online sampled pass rate. This definitely is a promising path for future work that integrates the two objectives into a unified algorithm.
The pretraining biases and representation capabilities certainly plays a key role as well, as suggested by recent investigations to reproduce Deepseek R1 (e.g., Qwen models can reproduce R1 behavior more consistently, and model sizes matter too). That said, training algorithms / objectives should make a difference, as we have demonstrated in the paper, especially because RL has become increasingly important in post-training.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, especially for the clarifications on the poor performance for Maj@k, and gains in a narrow KL regime. Those concerns are addressed. But, I still think that the "Chow et al.", setting is very similar to this paper, in that even they optimize for "pass@k on math". I believe both your work and theirs assumes access to ground-truth outcome rewards, so it is definitely a valid baseline. While I agree that this work is not published, it was certainly available on arXiv in 2024, and warrants a comparison, *even a single dataset/experiment comparing the two approaches would be helpful*. I will keep my score for now, and update (if needed) after discussions with other reviewers and AC.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal, we appreciate your time very much.
> But, I still think that the "Chow et al.", setting is very similar to this paper, in that even they optimize for "pass@k on math". I believe both your work and theirs assumes access to ground-truth outcome rewards, so it is definitely a valid baseline.
The key difference is that: Chow et al assumes access to a verifier score $r$, which it has access to at both training and inference time. In our case, we assume access to the ground truth score, but only at training time.
That said, both our method evaluates with pass@k at evaluation time - though Chow et al trains with best-of-k with the verifier $r$ strategy at training time, while we train with pass@k.
> While I agree that this work is not published, it was certainly available on arXiv in 2024, and warrants a comparison, even a single dataset/experiment comparing the two approaches would be helpful.
We are happy to provide a more detailed discussion of the difference + a single experiment comparison to Chow et al, in the revision. We can simulate Chow et al's setting for verifier with a corrupted ground truth score, and see its impact on the evaluation performance.
Please let us know if you have other questions that you'd need clarification on to adjust our scores, thanks. | Summary: This paper explores the potential of explicitly optimizing language models for inference-time performance objectives, particularly pass@k and majority voting, using reinforcement learning (RL). The authors propose a k-sample objective formulation and derive both unbiased and biased gradient estimators, including a leave-one-out control variate for variance reduction. The paper provides theoretical justification for the estimators, including proofs of unbiasedness and characterizations of induced biases. Empirical evaluations on mathematical reasoning (MATH, HARP) and code generation (CodeContests, TACO) benchmarks demonstrate that training-time optimization of inference-time metrics can yield superior performance on those metrics (especially pass@k), compared to standard mean policy gradient approaches.
**update after rebuttal**:
Thanks for the authors' responses. My concern remains that the generalization sometimes degrades when optimizing inference-time metrics. However, the overall idea and method are interesting, and I encourage the authors to explore more on this.
Claims And Evidence: Claims:
Inference-time objectives like pass@k and majority voting can be explicitly optimized during training. Doing so leads to improved performance on those inference-time metrics, especially on more difficult problems. The proposed gradient estimators are theoretically justified and practically effective.
While the training-time improvements are consistent, test-time generalization improvements are more mixed, especially in the majority voting case and for high-capacity models (e.g., LLaMA 70B on MATH). The authors acknowledge this.
Methods And Evaluation Criteria: The methods are clearly described and well-motivated. The paper adapts standard reinforcement learning techniques to k-sample objectives, and proposes novel unbiased and biased gradient estimators for these settings. Evaluation is performed on realistic and challenging benchmarks: MATH and HARP for mathematical reasoning, and CodeContests and TACO for code generation.
Metrics are appropriate for each domain. Evaluation is comprehensive, including training curves, sample-efficiency analysis (via KL-divergence plots), and ablations on model size and number of samples k.
Theoretical Claims: The paper contains several theoretical claims, mainly regarding gradient estimation:
1. The leave-one-out gradient estimator is unbiased.
2. The biased variant optimizes a modified leave-one-out objective.
These derivations follow standard techniques in policy gradient literature. The proofs are concise, correct, and help contextualize the practical impact of bias vs variance.
Experimental Designs Or Analyses: The experimental design is sound and robust. Key strengths:
- Ablations on different model sizes (3B, 8B, 70B).
- Evaluation across multiple benchmarks and multiple difficulty levels.
- Consideration of both training performance and test-time generalization.
- Integration of the proposed gradient estimators with PPO, demonstrating compatibility with standard RL fine-tuning pipelines.
One weakness is the limited analysis of why generalization sometimes degrades when optimizing inference-time metrics. This could benefit from further exploration.
Supplementary Material: Yes, the supplementary material was reviewed. It contains:
- Detailed hyperparameters and experimental setups.
- Additional ablations (e.g., number of samples k, effect of model size).
- Generalization results for pass@k with large k (up to 100).
- Formal derivations for biased majority voting objectives.
- Implementation details for PPO integration.
Relation To Broader Scientific Literature: The paper is well-positioned in the context of:
- RLHF and preference-based fine-tuning.
- Best-of-k and self-consistency generation
- Inference-time optimization
Essential References Not Discussed: The related work section (and appendix) is comprehensive and cites relevant prior work on:
- RL with multi-sample objectives,
- Policy gradient variance reduction,
- Self-consistency and majority voting.
Other Strengths And Weaknesses: Strengths:
- Good theoretical and empirical contributions.
- Clearly written, with well-illustrated examples and intuitions.
Weaknesses:
- Generalization of improvements to evaluation time is not always consistent.
- The method may be less effective for problems where models already achieve high performance (e.g., MATH with LLaMA 70B).
Other Comments Or Suggestions: - Consider emphasizing earlier that these methods do not require auxiliary models (unlike best-of-k with reward models)
- Including a discussion on compute efficiency (vs benefit) can help practitioners decide when to use these techniques.
Questions For Authors: - Can you elaborate on why training-time improvements on pass@k/majority voting objectives do not always translate to better test-time performance? Could this relate to diversity collapse or overfitting to training prompts?
- When should practitioners prefer biased vs unbiased gradient estimators? Do you recommend any heuristics for setting k?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts in providing us with the valuable reviews. We will incorporate your feedback into our later revision. We address your comments below.
> 1. Generalization of improvements to evaluation time is not always consistent.
The generalization of improvements to evaluation is indeed less straightforward, as we have explicitly discussed in the paper. The generalization gap depends in a complex way on the training data and model sizes. The cleanest way to assess our algorithmic progress is via the training performance, though we have also included evaluation for completeness. We still see evaluation improvements overall, though they are not as strong as training rewards improvements.
> 2. The method may be less effective for problems where models already achieve high performance (e.g., MATH with LLaMA 70B).
This is true and we have elaborated on this observation in the paper. The k-sample objectives are generally less effective for easy problems to the model, since the model does not have strong signals to make improvements. However, for more difficult problems (HARP for Llama 70B), we see more significant improvements, which might be a more useful case for many practitioners as well.
> Consider emphasizing earlier that these methods do not require auxiliary models (unlike best-of-k with reward models)
We will make sure to emphasize this point earlier on in this work. We have introduced the inference objectives of interest in Section 2.
> Including a discussion on compute efficiency (vs benefit) can help practitioners decide when to use these techniques.
We will elaborate on such points more extensively in the revision. In general, we find that the newly proposed algorithms automatically improve inference time objectives with a similar computational budget as the baseline algorithms (which maximizes the mean objectives). The algorithmic change is also almost a few-line code change, which is quite convenient for practitioners.
> Can you elaborate on why training-time improvements on pass@k/majority voting objectives do not always translate to better test-time performance? Could this relate to diversity collapse or overfitting to training prompts?
We conjecture that this might be more related to the generalization gap between training and test set, which depends in a rather complex way on factors such as the model size, and training data that went into the model pre-training and SFT (the released Llama 3.1 model might have SFT’ed on similar dataset to our RL prompt set, which might reduce the level of generalization we expect). As a result, maybe training set diversity and performance improvements do not transfer 100% to the evaluation set.
> When should practitioners prefer biased vs unbiased gradient estimators? Do you recommend any heuristics for setting k?
We feel that a good approach is to decide on the estimator adapting to the specific use case. Overall, we find the two estimators to be both quite competitive, and since the unbiased gradient estimator optimizes for the exact objective, it might be the first thing to try.
For setting k: this depends on the application of interest. We can choose a specific k that will be used at inference time. We can also choose a generic value of k to maximize model performance while diversifying the samples as much as possible - in general, a large value of k indicates more diversity. We find that values of {4,8,16} are good starting points, depending on the application. | null | null |
Communication Efficient Federated Learning via Model-Agnostic Projection Adaptation | Reject | Summary: The authors propose a new method called MAPA for parameter-efficient federated fine-tuning. The main advantages of MAPA are that it does not depend on the architecture, unlike other LoRA-based methods, and it reduces the computational and memory costs while providing a better or comparable performance. The authors provide a convergence guarantee of the proposed method under common assumptions with a decaying learning rate. The experimental results show the superiority of the proposed method.
Claims And Evidence: I think the authors may want to clarify their claim about being not architecture-independent. I couldn't fully follow it. First, I thought their method was applicable to all types of neural net layers (CNN, etc.), unlike LoRA, which is only for linear layers. If this is the case (I can see that you use some CNN architectures in the experiment), I wonder how it is happening.
Methods And Evaluation Criteria: Yes, it looks good overall. My questions are in the weaknesses and questions section.
Theoretical Claims: I didn't follow the proofs of Propositions, but they seem to make sense. I skimmed through the proof of the convergence analysis. It seemed reasonable to me.
Experimental Designs Or Analyses: The experiments sound good overall. I have a few questions, which can be found in the questions and weaknesses section.
Supplementary Material: Skimmed theoretical proofs, seems reasonable. Skimmed other parts as well.
Relation To Broader Scientific Literature: The proposed method seems to improve the federated fine-tuning in communication efficiency while providing better or comparable performance. Compared to the previous literature, I have found the explanations and theoretical analysis good. The experimental results are superior to the selected baselines.
Essential References Not Discussed: I think one missing part in the intuition of the integration of the single reshaped matrix update/factorization idea is any other work using this in a centralized setting. If this has not been done before in a centralized setting, then it means that the authors' contribution may be an even wider setting, i.e., a centralized setting. If there exist similar ideas in centralized LoRA literature, mentioning them in the related works would be good. Please correct me if I am mistaken.
Other Strengths And Weaknesses: - Strengths:
1. The paper is written very clearly. I appreciate the authors' presentation.
2. The proposed method's intuition is well-explained.
3. The theoretical guarantee is a plus considering many works in the literature lack it.
4. The experiments show the superiority of the proposed method compared to the baselines in terms of communication efficiency and training quality.
- Weaknesses:
1. I don't fully understand what is wrong with layer-by-layer separation (let's say we select some fixed $k$ for all layers) in terms of architecture dependence. Cannot the proposed method (first reshaping the matrix and separating A and B) be applied layer by layer for any architecture type following a similar method? Do the authors have an ablation experiment where they apply their technique layer by layer to the models and compare with the current version?
2. The model backbone is updated at every round as in Eq. (1). This may create the following problem: What makes LoRA advantageous is that at the end of fine-tuning, we have some small number of parameters that can easily be merged with or separated from the original model. Here, if we update the model backbone and initialize different A matrices at every iteration, we won't be able to have a low rank representation of the fine-tuned part in the end. Yes, we can separate the fine-tuned part ($\sum_t A_t\bar{B}_t$), but it will take a memory size of $d$, unlike the low storage cost of LoRA parameters.
3. There are many newer techniques in Federated LoRA literature. I would expect a comparison with a few more recent and solid FL LoRA baselines.
4. I think the forward pass of the proposed method should be slower than the other LoRA methods. In the proposed one, $\Delta W_i$ s in every layer $i$ is a full dimensional matrix. However, in the LoRA versions, it is split into in order to have $2\times d\times q$ instead of large $d^2$ (Here, $d$ and $q$ represent the full and LoRA dimensions within a layer). Can authors elaborate on it?
Other Comments Or Suggestions: 1. In Fig. 2, why are the methods compared in a centralized setting? To my knowledge, for example, FFA-LoRA ([SLLD'24] in ICLR24) also solves the exact aggregation problem (with updates $A_i$, $B_i$s from clients, how to aggregate them is a question), which is specific to the federated setting.
2. I think there is a typo in the proof where $e_t^i$ is defined. I guess it should be just the negation of the written expression.
Questions For Authors: 1. I have found the propositions and the same reconstruction error with a smaller number of parameters solid. However, I question if it is practical with first-order methods, i.e., SGD, etc. What I mean is that I agree that the reconstruction error given in Defn 3.3 holds for the best A and B parameter set. However, it might be the case that we cannot find A and B with just gradient-based optimization (what actually happens in our setting). In that case, one might claim that having more parameters may find better A and B with a practically lower reconstruction error. Do authors have an explanation for this or an experimental result supporting it?
2. Shouldn't Theorem 4.3 be a high probability bound due to JL lemma? If so, I would suggest stating that the bound holds for a high probability of (the probability at least, e.g., $1-T\epsilon$..).
3. In experiments, how are these models pre-trained? Since we compare fine-tuning techniques, sharing how the models are actually pre-trained is important.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. **Architecture-independency**
You are correct. Our approach flattens the gradients and factorizes them in a matrix form rather than directly factorizing the parameters. This gradient-based factorization is independent of specific architectural details, making it applicable to any model.
## 2. **Centralized setting**
We discussed this issue in **response 5 to reviewer NJUL** and presented the experimental results in **response 1 to reviewer FKCh**.
## 3. **Layer-wise factorization vs. Entire model factorization**
While layer-wise factorization is possible, it introduces following practical issues:
1. **Memory Overhead**: For reducing model communication $k$-fold, global factorization on $W^d$ results in $W^{k * d/k} = A^{k * 1} B^{1 * d/k}$, while layer-wise factorization of an $n$-layer model, given layer-$i$'s parameters as $W_i^{d_i}$ will be $W_i^{k * d_i/k} = A_i^{k * 1} B_i^{1 * d_i/k}$ requires storing $n$ separate $A_i$, increasing memory overhead $n$ times.
2. **Architecture Constraints**: In global model factorization, we can essentially decide on any arbitrary rate of compression. In contrast, layer-wise factorization faces limitations, as choosing $k$ higher than the layer size is impossible.
As shown experimentally in **response 3 of reviewer FKCh**, global factorization enables effective fine-tuning even at **10k-fold** compression for a model with **357M** parameters. Meanwhile, individual compression of a 1024-parameter layer at 10k-fold is not possible.
3. **Suboptimal Performance**: Lastly, global factorization outperforms layer-wise factorization since layer-wise allocates relatively equal expression budgets $d_i/k$ across layers, regardless of their gradient magnitudes. This leads to suboptimal communication budgeting, unlike global factorization, which allocates a higher expressivity budget to high magnitude gradients and higher compression to less informative gradients.
Additional experiments on QNLI and SST2 illustrate this suboptimality:
Model|SST2 Acc|Round@80%|QNLI Acc|Round@80%
-|-|-|-|-
Layered$_{1k}$|95.94|10|88.98|18
MAPA$_{1k}$|96.41|7|92.58|15
Layered$_{10k}$|92.27|12|82.42|29
MAPA$_{10k}$|94.92|9|90.86|19
## 4. **Backbone is updated**
You are correct. Unlike PEFT methods such as LoRA, MAPA does not inherently reduce the number of model parameters, as the full model backbone is updated at each communication round. Instead, MAPA primarily reduces gradient communication overhead in FL, as minimizing communication overhead is typically more critical than reducing parameter storage in this setting.
## 5. **Comparison with more recent baselines**
Many FL LoRA studies address broader challenges than communication. We used FA-LoRA as a central baseline due to its communication focus and conceptual similarity. Per your suggestion, we added LoRA and SA-LoRA to our baselines for LLM fine-tuning comparisons. (see **response 3 to reviewer FKCh**).
## 6. **Forward pass cost**
LoRA adds computational overhead in the forward pass by including additional low-rank adaptation layers. The computation $y = Wx + BAx$ incurs complexity:
- Frozen parameters: $O(d^2)$
- LoRA layers: Two multiplications $Ax$ and $B(Ax)$: $O(2dq)$
Thus, LoRA’s total forward pass complexity is $O(d^2 + 2dq)$.
In contrast, MAPA applies low-rank factorization **only during the backward pass**, leaving the forward pass complexity unchanged at $O(d^2)$.
## 7. **Why is it centralized in Fig. 2?**
In Fig. 2, we used a single-client setup to isolate the intrinsic performance impact of gradient compression, independent of data heterogeneity or client sampling in FL. Thus, any differences reflect compression effectiveness alone.
## 8. **Typo**
Thank you; we fixed it.
## 9. **As B is not the optimal point, can having more parameters in A lead to better convergence?**
In MAPA, updates follow \(W_t = W_{t-1} + AB\), where the \(A\) is **fixed** within each round. Therefore, gradient $B$ computation is just the gradient \(\nabla W\) projection linearly onto the subspace spanned by \(A\), and SGD can reliably solve linear projection to find a near-optimal $B$.
## 10.**Theorem 4.3**
We thank the reviewer for this insightful point. The JL lemma provides a *probabilistic* guarantee of low-distortion embeddings. As such, to be entirely rigorous, our convergence statement can be made a *high-probability* result by union-bounding the (small) failure probability $\delta$ over the $T$ rounds:
- At each round, distortion $\le\epsilon$ holds with probability $\ge 1-\delta$.
- By union bound, for *all* $T$ rounds simultaneously, the probability is $\ge 1 - T\delta$.
- Conditional on that event, the same inequalities apply, and the convergence proof remains identical.
We fully agree with you and will strengthen the theorem statement via the presented arguments.
## 11. **Pre-trained Models**
Original experiments were trained from scratch. For LLM fine-tuning, we used the HuggingFace `FacebookAI/roberta-large` checkpoint. | Summary: The paper proposes Model-Agnostic Projection Adaptation (MAPA), an approach to reduce communication overhead in FL. MAPA improves upon existing low-rank adaptation (LoRA) methods by factorizing the entire model parameter space as a single matrix, as opposed to decomposing layers independently. This model-agnostic approach allows for flexible balancing of communication and accuracy by randomly regenerating the reconstruction matrix (one of the two matrices) in each round.
Claims And Evidence: I have issues agreeing with stated contributions. The authors state as the first contribution the idea of applying LoRA at the model level instead of layer by layer. The original LoRA paper casts the idea in this way where I don't see any difference. It is correct that in the implementation they apply it layer by layer, but these are experimental and implementational details.
The convergence analysis is also unclear if it is new. The convergence of LoRA has already been analyzed. The proposed algorithm introduces one of the two matrices to be random but I don't see why this requires quite different proofs.
The proposed algorithm is efficient for small networks but I have scalability doubts. The experiments have been conducted only on small datasets. I wonder what would happen for fine tuning of an LLM.
Methods And Evaluation Criteria: not quite. They should do it on fine tuning of LLMs. It is easy to create FL settings based on standard fine tuning data.
Theoretical Claims: I'm unsure what is new in convergence results and proofs.
I wonder why don't standard SGD/FL technique apply. One possible argument would be the handling of random reconstruction matrices. But this can be viewed as stochasticity in gradient computation. It seems that the assumption of unbiased gradient would imply convergence. As a result, one needs to only show that the gradient estimators are unbiased. There is no reason to believe they are not and the proof should not be that hard.
Experimental Designs Or Analyses: I have checked the main body. The issue are discussed above.
Supplementary Material: I read appendices A and B.
Relation To Broader Scientific Literature: I'm unclear about contribution statements. The work has major overlap with the LoRA paper (and the offsprings).
Essential References Not Discussed: none
Other Strengths And Weaknesses: Discussed above.
Other Comments Or Suggestions: None
Questions For Authors: 1. Why not using and unbiased gradient approach?
2. Assess performance on fine tuning of LLMs.
3. What is really new in the algorithm?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: ## 1. **Key contributions**
Thank you for raising this important point. To clarify, there are two fundamentally different strategies for leveraging low-rank structures in optimization:
1. **Low-Rank Parameterization**
2. **Low-Rank Gradient Projection**
MAPA explicitly utilizes the latter strategy, whereas LoRA and its variants follow the first approach.
**Why LoRA should be applied layer-wise?** Low-rank parameterization methods, like LoRA, inherently depend on **layer-wise decomposition**, as reparameterization must preserve input/output dimensions for each layer to maintain forward pass compatibility. LoRA decomposes each layer’s weight matrix $W$ individually as $h = (W+BA)x=Wx + BAx$. Treating the entire model parameters as a single matrix violates compatibility due to nonlinear activations and differing layer dimensions. LoRA paper acknowledges this constraint (Hu et al., **LoRA [Page 4, Section 4]**).
Given a model-level LoRA factorization of $W^{I *O}$ into $A^{I * r}$ and $B^{r * O}$, where $I$ and $O$ are input and output dimensions of the model and $r$ is the factorization rank, the forward pass will be reduced to $y = BA(x)$, which does not express any nonlinearity of the network.
In contrast, MAPA employs **gradient factorization** rather than **parameter factorization**. By applying low-rank constraints directly on the gradient instead of the parameters, MAPA reduces the gradient size while fully preserving model capacity. This approach is not constrained by layer-wise decomposition or model architecture since factorization occurs **after** computing gradients via standard forward/backward passes. Further discussion on gradient factorization literature appears in **response 5 to reviewer NJUL**.
## 2. **Theoretical contribution**
Our convergence analysis extends standard federated SGD proofs [3,4] by incorporating a **random projection** (via the JL lemma) that introduces distortion $\epsilon$. This affects both descent direction and update variance. Unlike works that assume a fixed subspace or no gradient projection, we rigorously track how random, time-varying subspaces influence FL convergence. When $\epsilon=0$, MAPA becomes FedAvg. For $\epsilon>0$, we add a factor $(\epsilon + \beta + \epsilon\beta)$ but retain the same $\mathcal{O}(1/\sqrt{T})$ rate.
We will revise the manuscript to emphasize this distinction in our convergence analysis.
## 3. **Fine-tuning of LLMs on larger datasets**
Based on your feedback, we conducted fine-tuning experiments of **RoBERTa-large** on five large datasets of GLUE tasks. We evaluated MAPA, alongside *LoRA*, *FA-LoRA*, and *SA-LoRA* [2].
The **1st Table** below compares the number of trainable parameters and communication load per round for each baseline.
The **2nd Table** summarizes the results of fine-tuning, in which communication efficiency is evaluated by the number of rounds and the total communication needed to **reach 80%** accuracy, and the **3rd Table** presents the results for centralized LLM fine-tuning.
The experiments used base code from [12], following the experimental setup and parameters from [2] for 300 FL rounds,
References are located in **response 4 to reviewer Wq8k**.
---
1st Table:
| Method | # Train Param | # Com. Param / Round |
|-|-|-|
| LoRA | 1.83M | 0.78M |
| FFA-LoRA | 1.44M | 0.39M |
| SA-LoRA | 1.83M | 0.39M |
| MAPA$_{d/1k}$ | 357M | 0.36M |
| MAPA$_{d/10k}$ | 357M | 35.70K |
| MAPA$_{d/100k}$ | 357M | 3.57K |
| MAPA$_{d/1m}$ | 357M | 357 |
---
2nd Table, FL fine-tuning:
Model|SST2 Acc|SST2 Round|SST2 Total|QNLI Acc|QNLI Round|QNLI Total|RTE Acc|RTE Round|RTE Total|MNLIm Acc|MNLIm Round|MNLIm Total|MNLImm Acc|MNLImm Round|MNLImm Total
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
LoRA|84.86|36|28.08M|91.72|85|66.30M|86.62|180|140.40M|87.41|86|67.08M|87.34|82|63.96M
FA-LoRA|94.15|44|17.16M|91.63|76|29.64M|57.28|—|—|85.92|76|29.64M|86.46|213|83.07M
SA-LoRA|95.41|19|7.41M|91.04|55|21.45M|70.01|—|—|89.44|29|11.31M|85.49|126|49.14M
MAPA$_{d/1k}$|96.79|5|1.78M|93.14|11|3.93M|87.91|23|8.21M|88.90|17|6.07M|88.26|22|7.85M
MAPA$_{d/10k}$|96.10|5|178.50K|92.57|8|285.60K|89.57|23|821.10K|88.81|18|642.60K|87.43|25|892.50K
MAPA$_{d/100k}$|95.53|5|17.85K|89.24|7|24.99K|84.38|24|85.68K|85.04|20|71.40K|84.60|29|103.53K
MAPA$_{d/1m}$|90.37|7|2.50K|80.09|34|12.14K|57.04|—|—|72.46|—|—|37.76|—|—
---
3rd Table, centralized:
Model|SST2 Acc|SST2 Round|SST2 Total|QNLI Acc|QNLI Round|QNLI Total|MNLI Acc|MNLI Round|MNLI Total
-|-|-|-|-|-|-|-|-|-
LoRA|95.23|51|39.78M|88.20|111|86.58M|85.23|132|102.96M
FFA-LoRA|87.50|48|18.72M|68.05|—|—|86.48|66|25.74M
SA-LoRA|94.69|110|42.90M|88.20|111|43.29M|86.02|62|24.18M
MAPA$_{d/1k}$|95.47|9|3.21M|92.58|15|5.36M|86.80|37|13.21M
MAPA$_{d/10k}$|94.61|8|0.28M|90.86|19|0.68M|85.00|38|1.36M
MAPA$_{d/100k}$|79.38|—|—|83.83|18|64.26K|75.47|—|—
MAPA$_{d/1m}$|58.52|—|—|56.56|—|—|37.81|—|—
Overall, it can be seen that MAPA has the potential to enhance fine-tuning performance in centralized training too.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the answers. I have no further questions and comments.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for acknowledging our response, and we are pleased that all your concerns have been addressed.
We would greatly appreciate it if you could update your score accordingly.
Best regards, Authors of Paper 2961
-----------------------------
Edit:
Dear Reviewer FKCh,
We thank you so much for taking the time to review our paper and for your helpful suggestions.
The author-reviewer discussion period ends soon. With the time ticking, we are getting very anxious.
We did our best to provide answers to the questions and concerns you raised, including conducting extra experiments.
You indicated you have no further questions and comments. We thank you for your prompt response.
May we respectfully request that you reevaluate your score, unless you have further issues?
We would be grateful.
Best Wishes - authors | Summary: This paper aims to improve communication efficiency in federated learning by proposing a new parameter factorization method. The proposed method is evaluated on seven public datasets and shows improved performance.
Claims And Evidence: The claims are supported by method design and experimental validations.
Methods And Evaluation Criteria: The proposed method and evaluation make sense in general but lack some comparison.
Theoretical Claims: The theoretical claims and proofs look correct.
Experimental Designs Or Analyses: The experimental design and analysis are sound in general.
Supplementary Material: The supplementary material provides more details and looks good.
Relation To Broader Scientific Literature: This paper contributes to the general federated learning community.
Essential References Not Discussed: A work with similiar idea needs to be discussed.
Jeong, Wonyong, and Sung Ju Hwang. "Factorized-fl: Personalized federated learning with parameter factorization & similarity matching." Advances in Neural Information Processing Systems 35 (2022): 35684-35695.
Other Strengths And Weaknesses: **Strength**
- Improving communication is an important topic in federated learning.
- The motivation for improving the LORA-based method is well demonstrated.
- The proposed method shows improvements in both communication and performance.
**Weakness**
- The proposed method approximates the updates of all layers by adjusting matrix B only, which may harm the model’s ability to explore richer subspaces.
- The design of single vector factorization shares a similar idea from [1], which needs to be included in the discussion and experiment comparison.
- It is not clear how good the convergence bound is compared with the FedAvg convergence, and in addition, what is the practical implication of this convergence analysis.
[1] Jeong, Wonyong, and Sung Ju Hwang. "Factorized-fl: Personalized federated learning with parameter factorization & similarity matching." Advances in Neural Information Processing Systems 35 (2022): 35684-35695.
Other Comments Or Suggestions: NA
Questions For Authors: Please see the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. **Only updating B**
Thank you for highlighting this concern. Indeed, relying solely on $B$ limits subspace exploration, as seen in FA-LoRA’s performance decline, SA-LoRA [2], and Figure 7. MAPA addresses this by **randomizing $A$ each round**, promoting diverse subspaces. Figure 7 shows that fixing $A$ at low ranks severely degrades accuracy, whereas randomizing $A$ maintains performance. We further verified this advantage in **response 2, reviewer NJUL**.
## **2. Comparison with [1]**
We appreciate your mentioning Factorized-FL. Below are key distinctions alongside comparative experiments under the same setup:
The key difference is that Factorized-FL applies factorization on layer-wise parameters, whereas MAPA factorizes the gradient of the entire model. In response to similar questions, we previously elaborated on why gradient (**response 1, reviewer FKCh**) and model-level factorization (**response 3, reviewer JjTg**) can outperform layer-wise parameter factorization.
Although both methods use rank-1 factorization, in Factorized-FL, rank-1 is a hyperparameter needing tuning or increasing for larger models to avoid limiting representation capacity. In MAPA, rank-1 is inherent and does not restrict model capacity; instead, the reshaping factor $k$ determines the compression rate. Consequently, Factorized-FL's communication per round is constrained by model dimensions, while MAPA can compress gradients to arbitrary degrees independent of architecture dimensions.
Factorized-FL is similar to a rank-1 LoRA architecture, with a sparse bias matrix replacing LoRA’s frozen fine-tuned parameter, initialized as zero. LoRA imposes strict regularization to preserve pre-trained parameters, whereas Factorized-FL employs softer regularization, allowing updates when necessary.
Factorized-FL emphasizes personalized FL by sharing one vector globally and keeping the other client-specific. To directly compare factorization effectiveness with MAPA, one could share both vectors globally. However, as noted by the authors [1] (**Page 6, Personalized Weight Averaging**), sharing both vectors significantly increases communication load, adversely affecting efficiency.
Below, we highlight this fact by comparing **global model** training on CIFAR-10 and SVHN under IID and non-IID splits. **“Com@X%”** indicates total communication needed to reach X% of FedAvg’s final accuracy:
|Method|CIFAR10 Com@80%|CIFAR10 Com@90%|CIFAR10-N Com@80%|CIFAR10-N Com@90%|SVHN Com@80%|SVHN Com@90%|SVHN-N Com@80%|SVHN-N Com@90%|Com/Round|
|-|-|-|-|-|-|-|-|-|-|
|FedAvg|305.85|407.80|326.24|652.48|183.51|244.68|285.46|509.75|20.39GB|
|Factorized-FL|182.50|292.00|200.75|310.25|127.75|182.50|146.00|219.00|18.25GB|
|MAPA$_{2k}$|0.32|-|0.94|-|0.32|0.79|0.56|-|**0.78MB**|
|MAPA$_{16k}$|**0.08**|**0.18**|**0.23**|**0.45**|**0.08**|**0.18**|**0.12**|**0.27**|6.25MB|
|MAPA$_{40k}$|3.84|8.64|10.88|21.12|3.84|8.64|5.76|13.12|0.32GB|
## 3. **Theorem 4.3**
We apologize for the confusion regarding our convergence result. Our convergence bound matches FedAvg and recovers it as a special case: when reconstruction error is zero ($\epsilon = 0$), MAPA reduces exactly to FedAvg with the tightest convergence bound. For $\epsilon \neq 0$, the bound introduces a modest constant factor $(\epsilon + \beta + \epsilon\beta)$ due to compressed update distortion. Nevertheless, MAPA maintains the same asymptotic rate $\mathcal{O}(1/\sqrt{T})$ as FedAvg under standard assumptions (smoothness, bounded variance) [3,4]. Practically, this means MAPA might require slightly more rounds at higher compression, yet the total communication cost to achieve target accuracy significantly decreases, allowing training with substantially reduced overhead.
## 4. **References**
[1] Jeong, W. and Hwang, S.J. "Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching."
[2] Guo, P. et al. "Selective Aggregation for Low-Rank Adaptation in Federated Learning."
[3] Yu, H. et al. "Parallel Restarted SGD with Faster Convergence and Less Communication."
[4] Kim, D.-Y. et al. "Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning."
[5] Denil, M. et al. "Predicting Parameters in Deep Learning."
[6] Li, C. et al. "Measuring the Intrinsic Dimension of Objective Landscapes."
[7] Gressmann, F. et al. "Improving Neural Network Training in Low Dimensional Random Bases."
[8] Aghajanyan, A. et al. "Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning."
[9] Hameed, M.G.A. et al. "ROSA: Random Subspace Adaptation for Efficient Fine-Tuning."
[10] Zhao, J. et al. "Galore: Memory-Efficient LLM Training by Gradient Low-Rank Projection."
[11] Zhao, H. et al. "SEPARATE: A Simple Low-Rank Projection for Gradient Compression."
[12] Kuang, W. et al. "Federatedscope-LLM: A Comprehensive Package for Fine-Tuning LLMs in Federated Learning."
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications! I don’t have any further questions and will keep my score as it is. | Summary: This paper proposes Model-Agnostic Projection Adaptation (MAPA), which improves LoRA and FA-LoRA in federated learning (FL) by treating the entire model update as a single matrix rather than using layer-wise factorization. This approach enhances computational and communication efficiency while maintaining accuracy. MAPA introduces round-wise randomization of the reconstruction matrix to avoid suboptimal solutions and balance communication and accuracy. Unlike FA-LoRA, which uses a fixed A, MAPA regenerates A each round, enabling better parameter space exploration and preventing suboptimal convergence. Additionally, MAPA reduces memory and computational overhead compared to LoRA, ensuring greater efficiency in FL settings.
Claims And Evidence: The main claim in this paper is:
- The proposed MAPA treats the entire model's weights as a single matrix and uses a unified low-rank space (Delta W and A) for low-rank
adaptation fine-tuning in FL.
The authors also claim that MAPA:
- Reduces communication costs compared to existing methods.
- Improves convergence through randomization of the reconstruction matrix.
The empirical experiments conducted on various benchmark datasets and tasks mostly support these claims.
However, I have two concerns regarding the evidence:
In some experimental settings, MAPA does not consistently outperform certain baselines in terms of convergence accuracy.
The paper lacks ablation studies to fully analyze the impact of the MAPA factorization process.
Methods And Evaluation Criteria: This paper use popular benchmark models and datasets in FL for emperical experiments, which make sense.
My only 2 concerns are:
- Lack of ablation studies: While MAPA introduces randomized reconstruction matrices, the paper does not provide sufficient ablation experiments to isolate the impact of this randomization on convergence. A comparison between fixed vs. randomized reconstruction matrices would help clarify the exact benefits of the approach.
- FA-LoRA comparison: The comparison between FA-LoRA and MAPA should be extended and further elaborated. The authors could provide more context on FA-LoRA and explain why there is a performance gap between the two methods in certain settings.
Theoretical Claims: I checked the convergence proof, it make sense to me.
Experimental Designs Or Analyses: The proposed method and evaluation criteria are mostly appropriate, but the lack of ablation studies and deeper analysis of MAPA’s improvements. Additional experiments on more detailed ablation studies would strengthen the claims.
Supplementary Material: No
Relation To Broader Scientific Literature: Not sure
Essential References Not Discussed: Not Available
Other Strengths And Weaknesses: One notable strength of this paper is that the MAPA framework may not necessarily limited to FL, its model-agnostic factorization approach could be useful in regular centralized ML as well.
Other Comments Or Suggestions: 1. Some experimental comparisons (e.g., with FA-LoRA) could be further elaborated
2. Conduct Ablation study to validate the proposed model agnostic factorization
Questions For Authors: The idea of unified low-rank space adaptation for fine-tuning is quite interesting. It seems like this approach could be useful not only in federated learning (FL) but also in traditional centralized fine-tuning. What makes this method particularly beneficial in FL settings?
Have similar ideas been explored in centralized ML?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## 1. **MAPA does not consistently outperform certain baselines**
Thank you for your careful observation. We want to emphasize that all the results provided in Table 2 and additional experiments during this rebuttal show that MAPA consistently outperforms in communication and performance.
The results shown in Figure 5, in the top row, do not consider communication load. We are just comparing in terms of global rounds. As we take communication load into account, as shown in Table 2, we always perform better than baselines in performance per communication.
## 2. **The paper lacks ablation studies**
Thank you for your constructive feedback. We initially provided our ablation studies on the effect of matrix rank on training (Figure 7) and the importance of fixed vs. fresh matrix A (Figure 6). Considering your comments, we additionally extended our studies on:
### 1. Fixed vs. fresh (randomization) of matrix A
To elaborate on the effectiveness of randomization, additional experiments regarding MNIST and CIFAR10 are presented here, showcasing the accuracy across various ranks from $2^0$ to $2^{13}$, which clearly highlights the advantage of randomization, especially at lower ranks. Moreover, a discussion on the importance of randomization in training is located in **response 1 to reviewer Wq8k**.
Method-Dataset/2^|0|1|2|3|4|5|6|7|8|9|10|11|12|13
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
FrozenA-MNIST|7.93|9.43|9.83|16.86|19.36|42.09|69.94|81.57|92.85|95.17|96.46|96.91|97.84|97.86
**FreshA-MNIST**|72.21|83.0|91.00|93.05|96.14|96.93|97.48|97.56|97.75|97.78|97.83|97.74|97.79|97.76
FrozenA-CIFAR10|12.46|13.69|16.72|19.13|21.64|20.99|27.35|31.07|40.23|47.28|54.0|63.36|67.26|68.77
**FreshA-CIFAR10**|51.53|55.02|57.95|61.37|63.82|65.5|66.5|69.2|68.62|69.02|68.31|68.34|68.71|68.59
### 2. Effect of rank in LLM fine-tuning
We study the effect of MAPA rank on four different orders of magnitude alongside LoRA's baselines in communication-efficient LLM fine-tuning. The results are presented in the 2nd Table of **response 3 for reviewer FKCh**.
### 3. Experiments on layer-wise vs. model-level factorization
Additionally, during the rebuttal, we conducted further experiments on LLM fine-tuning across various MAPA ranks, clarifying the trade-off between communication and performance, and additionally conducted an ablation study on layer-wise vs model-level factorization. (See **response 3 to reviewer JjTg**)
If concerns remain, please specify any additional ablation studies you recommend. We remain committed to conducting further experiments.
## 3. **The comparison between FA-LoRA and MAPA**
Methodologically, MAPA:
1. Factorizes gradients, not parameters (**response 1, reviewer FKCh**).
2. Uses a randomized $A$ instead of a fixed $A$, as shown in our ablations and **response 1 to reviewer Wq8k**.
3. Operates at the model level rather than layer by layer (**response 3, reviewer JjTg**).
We further validated these claims via additional GLUE fine-tuning experiments against FA-LoRA and other baselines (**response 3, reviewer FKCh**).
## 4. **Centralized fine-tuning**
Following your advice, we tested MAPA in a centralized setup and observed substantial gains over other baselines (**3rd table in response 3, reviewer FKCh**).
## 5. **Have similar ideas been explored in centralized ML?**
The literature on low-rank gradient factorization in deep learning can start from:
- [5] shows the inherent low-rank structure of gradients.
- [6] examined intrinsic dimensionality by identifying the lowest-dimensional fixed random subspace enabling model convergence.
Subsequent works [7–11] expanded on these concepts by training NN within randomly generated gradient subspaces.
Although these approaches shows the efficacy of low-rank gradient factorization, they suffer from extensive memory overhead, as they represent the gradient as a single vector \(G^d\), where \(d\) is the number of model parameters, resulting considerable memory usage to construct the random transformation \(A^{d \times m}\).
MAPA significantly differs from prior approaches by reshaping gradients before factorization. This simple yet effective modification achieves roughly \(k\)-fold reduction in computation and \(k^2\)-fold lower memory usage without compromising performance, supported by our theoretical and empirical analyses **(Appendix H and C.5)**. Additional discussion comparing gradient vs parameter factorization appears in **response 1 to Reviewer FKCh**.
References are located in **response 4 to reviewer Wq8k**.
## 6. **What makes this method particularly beneficial in FL?**
A primary challenge in FL is mitigating communication overhead. Our MAPA directly addresses this via low-rank gradient factorization integrated with efficient communication. While highly beneficial in FL, gradient reductions offer limited advantages in centralized settings, where gradient communication isn't required. | null | null | null | null | null | null |
Volume-Aware Distance for Robust Similarity Learning | Accept (poster) | Summary: This paper presents Volume-Aware Distance (VAD), a novel metric for similarity learning that extends traditional point-wise distances to field-to-field distances by introducing volume-aware data representations. The authors propose a measure-head network for volume prediction and a volume expansion regularizer to improve generalization. The method is mathematically well-grounded, providing strong theoretical guarantees, and is empirically validated across multiple domains.
Claims And Evidence: The main contributions of the paper are: A new similarity metric (VAD) that enhances generalization by incorporating volume-awareness. A theoretical framework demonstrating VAD’s tighter generalization bounds. Empirical results showing VAD’s superiority over state-of-the-art similarity learning methods.
Methods And Evaluation Criteria: The proposed VAD framework, including the measure-head network and regularizer, is well-motivated. The evaluation includes multiple supervised and unsupervised tasks, making the results highly convincing.
Theoretical Claims: The authors present rigorous mathematical proofs for generalization error bounds and sample-space coverage, which validate the effectiveness of VAD.
Experimental Designs Or Analyses: The experiments are well-structured, covering both metric learning and contrastive learning. The baselines used for comparison are strong, ensuring credibility. Ablation studies demonstrate the necessity of each component.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: The work is well-grounded in existing literature and makes a meaningful contribution to the field of similarity learning.
Essential References Not Discussed: No critical references appear to be missing.
Other Strengths And Weaknesses: Strengths:
Novel and impactful contribution to similarity learning.
Solid theoretical underpinnings.
Comprehensive empirical evaluation.
Weakness:
The computational complexity of the method could be further analyzed in large-scale settings.
Other Comments Or Suggestions: None
Questions For Authors: See the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive and constructive comments! Our responses are given below.
---
**Comment_1:** The computational complexity of the method could be further analyzed in large-scale settings.
**Response_1:** Thank you for your suggestion! Here we would like to provide the training time comparison of our VADSL and the baseline methods (SimCLR and SwAV) on the large-scale ImageNet-1K dataset. Specifically, we use eight NVIDIA TeslaV100 GPUs to train our models based on SimCLR and SwAV with 100 epochs, respectively. For each case, we set the batch size to 512, 1024, and 1536.
| Method | ImageNet-1K (ba. si. = 512) | ImageNet-1K (ba. si. = 1024) | ImageNet-1K (ba. si. = 1536) |
| :--- | :----: | :----: | :----: |
| SimCLR | 70.1 | 35.2 | 23.1 |
| SwAV | 71.2 | 36.7 | 24.2 |
| VADSL (SimCLR + VAD) | 71.5 | 35.6 | 23.5 |
| VADSL (SwAV + VAD) | 72.1 | 36.9 | 24.6|
The table above reveals that the introduction of the proposed VAD component causes very little increase in time consumption (less than **2%**). This is because the gradient calculation of VAD is independent of the size of the training data, thereby keeping the training time well within practical limits. We will also add the above time comparison in the camera-ready version if this paper is finally accepted. | Summary: The paper introduces Volume-Aware Distance (VAD), a novel metric for robust similarity learning. Unlike conventional point-level similarity measures, VAD models instances as volume-aware data balls, improving generalization by capturing field-to-field geometric relationships. The paper also proposes a measure-head network to learn instance volumes and a volume expansion regularizer to further enhance generalization. Theoretical analysis shows that VAD tightens generalization bounds and preserves topological properties, while extensive experiments on supervised and unsupervised tasks demonstrate clear advantages over state-of-the-art methods.
Claims And Evidence: The key claims in the paper include: VAD generalizes better than conventional similarity learning methods, supported by theoretical proofs. VAD outperforms baseline models across multiple tasks, shown via strong empirical results.
Methods And Evaluation Criteria: The measure-head network and volume expansion regularizer are well-designed and logically justified. The proposed method is evaluated comprehensively across various datasets and baselines, making the results reliable.
Theoretical Claims: I examined the proofs for generalization bounds and distance flexibility. The claims are mathematically sound, and the derivations align with existing principles in similarity learning.
Experimental Designs Or Analyses: Experiments cover multiple learning paradigms (metric learning and contrastive learning). Baseline comparisons are thorough, ensuring fairness. Ablation studies validate the necessity of each proposed component.
Supplementary Material: I reviewed the supplementary material, including additional experiments and theoretical proofs. The additional insights further solidify the validity of the approach.
Relation To Broader Scientific Literature: The paper clearly situates itself within the literature on metric learning, contrastive learning, and regularization methods. The citations are appropriate and comprehensive.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
Well-motivated and novel approach to similarity learning.
Strong theoretical backing with comprehensive analysis.
Extensive empirical validation across diverse datasets.
Weaknesses:
Computational overhead due to volume estimation, though the authors argue it remains manageable.
Other Comments Or Suggestions: None
Questions For Authors: i). How sensitive is VAD to the choice of the measure-head network’s architecture?
ii). Would VAD work in few-shot learning settings where data volume is extremely limited?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive and constructive comments! Our point-by-point responses are provided below.
---
**Comment_1:** Computational overhead due to volume estimation, though the authors argue it remains manageable.
**Response_1:** Thanks for your comments! We agree with the reviewer that the volume estimation does indeed introduce additional computation burden due to the necessary feedforward of the head network. However, we want to clarify that such additional computation cost is actually necessary to achieve better performance and is negligible in real-world use.
Specifically, as shown in Appendix A.3 of our manuscript, the baseline method SimCLR and our method VADSL spend 2.3 hours and 2.4 hours, respectively, to train 100 epochs on CIFAR-10 (batch size = 512). It implies that the calculation of volume estimation only adds less than **5%** of the time consumption. Similarly, on ImageNet-100, the time cost of the baseline method and our method are 10.9 hours and 11.2 hours, respectively, which means that the volume estimation merely requires an additional **3%** time consumption. In summary, we believe that the additional time cost of our method is relatively small and acceptable in practice.
---
**Comment_2:** How sensitive is VAD to the choice of the measure-head network’s architecture?
**Response_2:** Thanks for your comments! Since the measure-head network is implemented by the classical MLP with a single hidden layer, the only human-tuned parameter is the dimensionality of the hidden layer (because the input-feature dimensionality is usually fixed and the output-layer dimensionality is always 1). Therefore, here we want to investigate the robustness of our method by changing the hidden-layer dimensionality.
|Dataset| 32-dim | 64-dim | 128-dim | 256-dim | 512-dim |
| :--- | :----: | :----: | :----: | :----: | :----: |
| CIFAR-10 (400-eps, 512-bs) | 86.3 | 93.5 | 94.9 | 93.5 | 91.2 |
| STL-10 (400-eps, 512-bs) | 79.4 | 85.2 | 85.6 | 85.8 | 81.1 |
In above table, we vary the hidden-layer dimensionality from 32 to 512 to record the final performance of our method on STL-10 and CIFAR-10. We can easily observe that the hidden-layer dimensionality indeed affects the classification accuracy to some extent, because VAD needs a certain amount of hidden nodes to ensure its nonlinearity. However, it is also obvious that the performance of our method is relatively stable around 128 (e.g., from 64 to 256), which means that such a hyper-parameter can be easily tuned in practical use.
Meanwhile, we would like to further investigate the sensitivity of VAD by setting different number of hidden layers (including 1-layer, 2-layers, and 3-layer). For each case, we let the dimensionalities of all hidden layers be the same (i.e., 128-dim). The following table records the corresponding accuracy rates of our method on CIFAR-10 and STL-10.
|Dataset| 1-layer hidden | 2-layers hidden | 3-layers hidden |
| :--- | :----: | :----: | :----: |
| CIFAR-10 (400-eps, 512-bs) | 94.9 | 93.6 | 94.5 |
| STL-10 (400-eps, 512-bs) | 85.6 | 85.6 | 85.5 |
We can clearly observe that the results on both datasets are relatively stable when the layer number changes, and this means that our method is robust to the choice of network architectures.
---
**Comment_3:** Would VAD work in few-shot learning settings where data volume is extremely limited?
**Response_3:** Thanks for your comments! Actually, VAD is able to work well for few-shot recognition tasks. To be specific, for the four datasets (i.e., CAR-196, CUB-200, SOP, and In-Shop) used in the metric learning task, no classes of the test data are involved in the training phase, and only very few reference examples are provided for classification during the test phase. In such a few-shot setting, our VAD effectively improves the NMI and Recall@K scores of the baseline methods, and our final performance can surpass SOTA methods in most cases (Tab. 2). We believe that the effectiveness of our VAD in few-shot learning settings is well validated. | Summary: This paper introduces a novel approach to similarity learning with the Volume-Aware Distance (VAD) metric. Instead of relying on traditional point-level similarity measures, VAD models data instances as volume-aware data spheres, allowing it to capture both observed and unobserved neighbor relationships. To improve generalization, the method utilizes a measure-head network for estimating instance volumes and incorporates a volume expansion regularizer. Theoretical analysis confirms its enhanced error bounds and topological properties, while extensive experiments across multiple domains demonstrate its effectiveness compared to existing methods.
Claims And Evidence: This paper asserts that VAD enhances generalization by encompassing a wider sample space while maintaining essential topological properties, as substantiated by theoretical analysis. Furthermore, VAD surpasses leading methods in both supervised metric learning and unsupervised contrastive learning. These conclusions are reinforced by rigorous mathematical derivations and comprehensive experimental evaluations, highlighting its robustness and broad applicability.
Methods And Evaluation Criteria: The experimental setup is thorough, covering multiple datasets (e.g., CIFAR-10, ImageNet, SOP, CUB-200) and different learning paradigms. The proposed method makes sense in similarity learning, particularly in contrastive learning and metric learning.
Theoretical Claims: The proofs related to generalization error bounds and distance flexibility looks fine. They are mathematically sound, and the theoretical contribution is clear.
Experimental Designs Or Analyses: The experimental design covers both supervised and unsupervised settings. The baselines are well-selected, and results consistently show the advantage of the proposed VAD. The t-SNE visualizations and ablation studies further strengthen the empirical support.
Supplementary Material: The supplementary material provides additional experiments and proofs, which reinforces the claims in the main paper.
Relation To Broader Scientific Literature: The paper effectively connects its contributions to prior work in metric learning, contrastive learning, and regularization techniques. The citations are comprehensive and demonstrate a good understanding of the field.
Essential References Not Discussed: Not obvious
Other Strengths And Weaknesses: Strengths:
+ Novel extension of distance metric learning and comprehensive theoretical guarantees.
+ Comprehensive empirical evaluation. Well-written and clearly structured.
Weaknesses:
Some details are unclear:
a. Can VAD be extended beyond similarity learning, e.g., in clustering?
b. How does the measure-head network perform when used with transformers instead of CNNs?
Other Comments Or Suggestions: N/A
Questions For Authors: a. Can VAD be extended beyond similarity learning, e.g., in clustering?
b. How does the measure-head network perform when used with transformers instead of CNNs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your appreciation of the novelty, theoretical analyses, and experimental results of our paper! Thanks also for your very insightful and constructive suggestions! Our point-by-point responses are as follows.
---
**Comment_1:** Can VAD be extended beyond similarity learning, e.g., in clustering?
**Response_1:** Thanks for your comments! Yes, VAD is also applicable to clustering task. Actually, in our manuscript, we have already evaluated the clustering performance of VAD by showing the NMI scores on CAR-196, CUB-200, SOP, and In-Shop, where our VAD can successfully improve the baseline methods Npair and ProxyAnchor with at least 3 percentages (see Tab. 2). Here we also want to follow the reviewer’s suggestion to further validate the effectiveness of our VAD on the deep clustering model “contrastive clustering (CC)” (Li et al., Contrastive Clustering, AAAI’21). We simply implement our measure-head network by replacing the contrastive head in CC, so that we can calculate the corresponding VAD similarity values. We record the clustering accuracy and NMI scores of compared approaches on CIFAR-10 and CIFAR-100, and the following table reveals that our method still improves the strong baseline upon CC itself.
|Setting| CIFAR-10 (Acc) | CIFAR-10 (NMI) | CIFAR-100 (Acc) | CIFAR-100 (NMI) |
| :--- | :----: | :----: | :----: | :----: |
| CC (baseline) | 79.0 | 70.5 | 42.9 | 43.1 |
| VADSL (ours) | 79.4 | 71.2 | 43.6 | 43.8 |
---
**Comment_2:** How does the measure-head network perform when used with transformers instead of CNNs?
**Response_2:** Thanks for your comments! We agree with the reviewer that transformer has shown very promising reliability in lots of vision tasks. As our Tab. 4 has provided the detailed performance results (i.e., k-NN, Top-1, Top-5), we would like to discuss here again the superiority of our method when it is equipped with the ViT model (ViT.16/48MB).
Specifically, for both ImageNet-100 and ImageNet-1K, we implement our method on the cluster-free method BYOL as well as the cluster-based method SwAV, and the encoder networks of all compared methods are ViT.16. It can be observed that our method consistently improves the two baseline methods on all cases of the two datasets. Furthermore, we also compare our method with SOTA methods including DINO, iBOT, and MTE, and we can find that our VADSL outperforms the SOTA in most cases (see the bottom of Tab. 4). This implies that our method works well with the vision transformer in the feature extraction, and also shows that our VAD is a fairly general technique that can be used in different frameworks. | Summary: The paper introduces Volume-Aware Distance (VAD), a new similarity metric that generalizes conventional point-wise distance by incorporating volume information. The authors propose a measure-head network to learn instance volumes and a volume expansion regularizer to improve generalization. Theoretical analyses prove a tighter generalization bound, and experimental results show superior performance in metric learning and contrastive learning tasks.
Claims And Evidence: The authors claim:VAD provides better generalization (supported by theoretical proofs).VAD outperforms state-of-the-art methods in supervised and unsupervised tasks (validated via extensive experiments). These claims are well-supported by rigorous proofs and strong empirical evidence.
Methods And Evaluation Criteria: The method is well-motivated and logically sound. Evaluation is extensive, with results on various domains and datasets, ensuring robustness.
Theoretical Claims: The proofs for error bounds, flexibility, and sample-space coverage are well-structured and mathematically valid.
Experimental Designs Or Analyses: Experiments are comprehensive, covering both supervised and unsupervised learning. Ablation studies clearly show the importance of each component. Comparison with strong baselines ensures credibility.
Supplementary Material: The appendix provides additional theoretical proofs and more experimental results, further strengthening the contributions.
Relation To Broader Scientific Literature: The paper relates well to existing literature in metric learning and contrastive learning.
Essential References Not Discussed: No major missing references.
Other Strengths And Weaknesses: Strengths: 1) Well-motivated and novel approach. 2) Strong theoretical guarantees.
3) Impressive empirical results.
Weaknesses:
Slightly higher computational cost due to volume estimation.
Other Comments Or Suggestions: NA
Questions For Authors: (1) Can VAD be applied to NLP tasks such as sentence similarity?
(2) How does VAD behave in extremely high-dimensional spaces?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your positive and insightful comments! Our explanation and clarification can be found as follows.
---
**Comment_1:** Slightly higher computational cost due to volume estimation.
**Response_1:** Thanks for your comments! We agree with the reviewer that the volume estimation does indeed introduce additional computation burden due to the necessary feedforward of the head network. However, we want to clarify that such additional computation cost is actually necessary to achieve better performance and is negligible in real-world use.
Specifically, as shown in Appendix A.3 of our manuscript, the baseline method SimCLR and our method VADSL spend 2.3 hours and 2.4 hours, respectively, to train 100 epochs on CIFAR-10 (batch size = 512). It implies that the calculation of volume estimation only adds less than **5%** of the time consumption. Similarly, on ImageNet-100, the time cost of the baseline method and our method are 10.9 hours and 11.2 hours, respectively, which means that the volume estimation merely requires an additional **3%** time consumption. In summary, we believe that the additional time cost of our method is relatively small and acceptable in practice.
---
**Comment_2:** Can VAD be applied to NLP tasks such as sentence similarity?
**Response_2:** Thanks for your comments! Yes, VAD can be applied to NLP tasks. In fact, we have already validated the effectiveness of our method on two sentence embedding related datasets including STS and BookCorpus. As the reviewer considered, for each dataset, we construct the positive (similar) pairs by aligning each sentence with its contextual sentences in the same paragraph, while we build the negative (dissimilar) pairs by combining those sentences from different paragraphs. For both datasets, our method can consistently improve the baseline methods and outperform the SOTA methods in most cases (see Tab. 5 and Fig. A1), so we believe that our method is applicable to the sentence embedding tasks.
---
**Comment_3:** How does VAD behave in extremely high-dimensional spaces?
**Response_3:** Thanks for your comments! Here we follow the reviewer’s suggestion to investigate the reliability of VAD for high-dimensional features. Specifically, we increase the output-layer dimensionality of the feature encoder from 512 to 4096, and record the corresponding accuracy of our method on STL-10 (batch size = 512). From the following table, we can clearly observe that the high-dimensional space indeed weakens the feature discriminability, where the accuracy decreases when the feature dimensionality is higher than 1024. However, in these high-dimensional cases, our VAD can still successfully improve the final performance on STL-10, which demonstrates the usefulness of VAD even in the high-dimensional space.
|Setting| STL_100-eps (w/o VAD) | STL_400-eps (w/o VAD) | STL_100-eps (w/ VAD) | STL_400-eps (w/ VAD) |
| :--- | :----: | :----: | :----: | :----: |
| 512-dimension | 71.1±1.2 | 78.2±3.3 | **77.1±2.9** ✔| **85.6±3.2** ✔ |
| 1024-dimension | 70.4±4.2 | 78.2±2.1 | **77.2±3.6** ✔ | **85.9±1.6** ✔ |
| 2048-dimension | 68.3±2.3 | 76.5±1.3 | **73.3±1.2** ✔ | **82.3±2.3** ✔ |
| 4096-dimension | 68.9±4.1 | 77.2±1.6 | **75.4±3.2** ✔ | **83.2±2.9** ✔ | | null | null | null | null | null | null |
Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search | Accept (poster) | Summary: This paper studies the learning-augmented one-max-search problem. In the classic one-max-search problem, the input of an algorithm is a sequence of prices $(p_i)_{i=1}^{n}\in [1,\theta]^n$. At each $i$, the algorithm must decide whether to accept a price $p_i$ and terminate, or to reject $p_i$ and proceed. In the learning augmented one-max-search problem, the algorithm has access to a prediction $y$ of the maximum price in $p$.
The objective of the algorithm is to achieve the best trade-off between consistency (the competitive ratio of the algo. if the given prediction $y$ is accurate), robustness (the competitive ratio of the algo. if the given prediction $y$ is arbitrarily inaccurate), and smoothness (the competitive ratio of the algo. as a function of the prediction error). This paper designs a pareto-optimal algorithm (w.r.t. the consistency and robustness tradeoff) that is also smooth, and prove that the smoothness of the algorithm matches the lower bound of any Pareto-optimal algorithm.
Claims And Evidence: The claims are supported by the theoretical analysis.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I only checked the first 8 pages of this paper.
Experimental Designs Or Analyses: I checked the experiments in Section 5.
Supplementary Material: I did not review the proofs in the appendix.
Relation To Broader Scientific Literature: The closest related works is Sun et al. (2021), which characterized the Pareto front of consistency and robustness. The idea of designing algorithms that have smooth performance w.r.t. the prediction error is an important question in learning-augmented algorithms.
Essential References Not Discussed: This paper discussed essential related works.
Other Strengths And Weaknesses: Strengths:
- The theoretical results in this paper are neat and solid.
- The paper is well-written and self-contained.
Weakness:
- The organization of this paper could be improved to include some high-level overview of proof techniques. While the difference between the algorithms proposed in this paper and prior algorithms is clear in Figure 1, the technical novelty in the analysis is not addressed.
Other Comments Or Suggestions: I have no other comments.
Questions For Authors: - While Figure 2 clearly shows that the average performance when $\rho=1$ is better than $\rho=0$ (the algorithm discussed in prior work), the confidence fluctuation (shaded area) of the ratio also significantly overlaps. Is it because of the randomness of the environment? In addition, what is the relative performance of $\rho=1$ compared with $\rho=0$? E.g., for the same randomly generated environment, what is the win-rate of $\rho=1$ against $\rho=0$?
- Can you also draw the theoretical bound given by Theorem 3.2 (or Theorem 4.6) in Figure 2? How well is the empirical performance of the algorithm characterized by the theoretical analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission and for the positive feedback. We address their questions and comments below.
### **Weakness**
Due to page limitations, we could not always give detailed proof methodologies in the main paper. However, if the paper is accepted, we will use the additional page to enhance this aspect, as suggested by the reviewer. For instance, please see our response below for some specific paragraphs that we will add in regard to Section 3.
**Technical Novelty.** Our techniques fundamentally differ from those used by Sun et al. While their work aims to determine the Pareto-optimal tradeoffs between consistency and robustness, our work extends their results by leveraging the knowledge of the Pareto-optimal frontier for consistency-robustness, and establishes smoothness guarantees, which is not addressed in the previous work. Specifically, we characterize all threshold algorithms that achieve this frontier and further analyze the optimal smoothness within this class of algorithms using two distinct notions of smoothness: multiplicative and additive.
**High level overview of proof techniques.**
- **Theorem 3.1**: If a threshold function $\Phi$ belongs to $\mathcal{P}_r$, then we use the bounds on $\Phi$ defining $\mathcal{P}_r$ to prove its Pareto-optimal robustness and consistency. Conversely, for a Pareto-optimal $\Phi$, we analyze the worst-case instances $I_n(q)$ (Equation 15), where prices increase uniformly from 1 to $q$ before dropping to 1. By evaluating $\text{ALG}/\text{OPT}$ on these instances for well-chosen $q$, we derive the bounds on $\Phi(y)$ that define $\mathcal{P}_r$.
- **Theorem 3.2**: For any $\rho \in [0,1]$, the threshold of Algorithm $A^\rho_r$ lies in $\mathcal{P}_r$, ensuring $r$-robustness and $1/r\theta$-consistency (Theorem 3.1). To establish smoothness, we determine the best value of $s$ such that $\text{ALG}/\text{OPT} \geq (1/r\theta) \mathcal{E}^s$ across all price sequences and predictions. Since the threshold is piecewise-defined, it suffices to analyse cases based on $y$ and whether the maximum price is above or below the threshold. The final smoothness guarantee is then determined by the worst such case.
- **Theorem 3.3**: We first prove that, on worst-case instances $I_n(q)$, any Pareto-optimal algorithm behaves like a threshold algorithm. Using results from Theorem 3.1, we establish that its threshold function must belong to $\mathcal{P}_r$. Finally, by applying the defining inequalities of $\mathcal{P}_r$ with well-chosen $q$ and $y$, we derive lower bounds on the smoothness $s$.
### **Questions**
- **Confidence fluctuation.** The confidence fluctuation in Figure 2 is indeed a result of the randomness of the environment.
- **Relative performance.** Could the reviewer clarify what they mean by the "win-rate" of $\rho=1$ against $\rho=0$? We interpret this as the ratio $CR(A^1)/CR(A^0)$, indicating how much $A^1$ gains compared to $A^0$. Both the empirical (solid line) and theoretical (dashed line) worst-case ratios are shown in the following figure (https://imgur.com/a/PmnrYOn). The figure shows that the empirical ratio is 1 when the prediction is perfect (i.e. $1-\mathcal{E} = 0$), as expected, since both algorithms have the same consistency. The ratio then increases abruptly to approximately $1.3$, for an arbitrarily small error, which is due to $A^0$'s brittleness. It then increases, up to a value of 1.5 approximately, after which it decreases down to 1, as the prediction becomes highly inaccurate.
This is explained, respectively, by the fact that $A^0$ remains inefficient for relatively small values of error, and by the fact that both algorithms have the same robustness.
- **Theoretical bounds in the figure.** Thank you for the suggestion. We included the theoretical worst-case ratio (dashed lines) for $\rho \in$ {0, 0.5, 1} alongside the empirical ratios from Figure 2. The updated figure is available here (https://imgur.com/a/87xE5Qv). For the three values of $\rho$, the figure shows that the empirical ratio is better than the theoretical one because the latter represents the worst case over all predictions with error at most $\mathcal{E}$, while the empirical ratio is based on random predictions, explaining the observed gap. Note that, for $\rho < 1$, there is a sharp degradation in performance; for example, for $\rho=0$, this degradation occurs at infinitesimally small prediction error. In contrast, for $\rho=1$, the degradation is smooth, which aligns with our objectives and the intuition given in Section 3.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. I will maintain my positive evaluation and recommend incorporating the new figures and discussions in the revision.
> Could the reviewer clarify what they mean by the "win-rate" of $\rho=1$ against $\rho=0$?
I meant $\Pr(\mathbb{I}[CR(A^1) > CR(A^0)])$ where the probability is taken over the randomness of the environment? This is because confidence fluctuation in Figure 2 is significant due to the randomness of the environment, which makes the comparison a bit unclear. Another way to compare the algorithms is to look at their relative performance in the same environment, and then consider the randomness of the environment. The new figure already answered my questions. | Summary: This paper studies one-max search with prediction: Given a sequence of prices $p_1, \ldots, p_n$ in an online fashion and a prediction $y$ of the maximum price $p^*$, pick a price irrevocably to compete with $p^*$. Previous algorithms with prediction can simultaneously achieve _consistency_ (when the prediction is correct, the performance is optimal) and _robustness_ (when the prediction is arbitrary, still achieves the worst-case-optimal competitive ratio). There are also previous algorithms (Sun et al, 2021) achieving the optimal consistency-robustness tradeoff (_pareto-optimal_ in the two extreme cases). However, those algorithms are not _smooth_: when the prediction accuracy degrades smoothly, the performance of the algorithm should change smoothly. The authors provide the first Pareto-optimal and smooth algorithms for the problem of one-max search with predictions. Moreover, this algorithm does not need randomization.
The technique is to fully characterize the class of _threshold algorithms_ that are Pareto-optimal, and then within this class find the best smooth algorithm. The authors also show that the smoothness guarantee of the found algorithm is tight.
The authors then apply the main results to a setting where the prediction and the maximum price are sampled from some joint distribution, and study the impact of the distribution on the algorithmic performance. They also provide experiments to show the advantage of their algorithm over a previous one.
Claims And Evidence: The claims are supported by clear and convincing evidence, both theoretically and experimentally.
Methods And Evaluation Criteria: The theoretical approach to this problem -- fully characterizing the class of Pareto-optimal algorithms and then selecting the best smooth algorithm among them -- is non-trivial and interesting.
Theoretical Claims: I checked the proof of Theorem 3.1 and briefly went over other proofs and didn't find any issues.
Experimental Designs Or Analyses: I didn't find any issues in the experiments.
**(Strength 1)** On the positive side, the experiment with real-world Bitcoin data, showing that the proposed algorithm ($\rho = 1$) performs better than previous algorithm ($\rho = 0$), is interesting and strengthens this paper a lot in my opinion.
Supplementary Material: I checked the proof of Theorem 3.1 and briefly went over other proofs.
Relation To Broader Scientific Literature: **(Strength 2)** Previous algorithms for one-max search with predictions (Sun et al, 2021) achieve the optimal consistency-robustness tradeoff (_pareto-optimal_ in the two extreme cases). However, those algorithms are not _smooth_. Even though one-max search is such a fundamental question, it is still open whether a doubly optimal algorithm (Pareto-optimal and smooth) exists. This work successfully answers this question by providing a doubly optimal algorithm. Moreover, this algorithm is deterministic, which is even better. So, I think this is a significant contribution to the literature.
**(Strength 3)** The technical part of this work fully characterizes the class of Pareto-optimal algorithms for the one-max search problem with predictions. This technical contribution might be of independent interest.
Essential References Not Discussed: I don't know of essential missing references.
Other Strengths And Weaknesses: I have mentioned three strengths above.
A **minor weakness**, in my opinion, is that this work is restricted to the one-max search problem. The authors "believe their methodology can be applied to generalization such as the $k$-search problem" (Conclusion), but didn't give any theoretical evidence.
But since the one-max search problem is fundamental and this work gives an almost complete characterization for the Pareto-optimal and smooth algorithms for this problem, I am overall positive towards this work.
Other Comments Or Suggestions: (1) Page 6, second column of line 320: "this functional of $G$" -> "this functional of $F^*$" ?
(2) The last sentence before Section 4.1 "Finally, we show how to isolate the interaction of $G$ and $F^*$ using analytical tools from optimal transport theory" is confusing. In Section 4.2, you isolate the effect of $G$ in Corollary 4.2 and isolate the effect of $F^*$ in Corollary 4.4, not using optimal transport theory. Then in Section 4.3, you consider the joint effect of $G$ and $F$ -- you use the optimal transport theory here to find the worst coupling between $G$ and $F$ such that the competitive ratio attains the infimum. I don't think you are using the optimal transport theory to "isolate the interaction of $G$ and $F$".
Questions For Authors: (Q1) Does tight lower bound result (Theorem 3.3) hold for all randomized algorithms or just for deterministic algorithms?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission and for their positive feedback. We address their questions below.
**Minor weakness.**
The $k$-search problem generalizes the one-max search problem, requiring a trader to sell $k$ items instead of one, with arbitrary prices in a bounded range $[p_{\min}, p_{\max}]$. Lee et al. studied this problem in a learning-augmented setting with a prediction of the maximum price, proposing a threshold-based policy that achieves a Pareto-optimal tradeoff between consistency and robustness, but lacks smoothness guarantees. Given the Pareto-optimal consistency and robustness values, we can aim to characterize all algorithms achieving them. Building on the intuition we established for one-max search, the smoothness of such algorithms depends on the maximum slope of their thresholds. We can then analyse the algorithm within this family with the smallest maximum slope, which we expect to have good smoothness guarantees.
**Other Comments Or Suggestions**.
(1) We thank the reviewer for pointing out the typo. We have corrected it.
(2) We agree with the reviewer's observation and apologise for the confusing formulation. We meant to highlight the way that Optimal transport was used to characterise the effect of the interaction between $G$ and $F^*$ beyond their own effect. We will clarify this point.
**Questions**.
(Q1) The bound holds for deterministic algorithms. In the presence of randomization, the problem is equivalent to the one-way trading problem, in which fractions of an item can be bought. The interplay between smoothness and Pareto-optimality for one-way trading has been fully studied in "Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms" (Elenter et al., NeurIPS '24), where the authors proved that any algorithm achieving Pareto-optimal consistency and robustness is necessarily brittle (i.e., it cannot have any smoothness guarantees). This gives further evidence that, unlike many decision problems, the deterministic and randomized versions of one-max search are significantly different. | Summary: The authors study the one-max-search problem where the decision maker is presented with a stream of value within a range $[1, \theta]$. The decision maker has to irrevocably select and end the process, or forfeit a value in the stream. The authors study the consistency, and robustness tradeoff of a Pareto optimal algorithm that relies on a prediction of the optimal value. Consistency refers to the approximation ratio of an algorithm when it is supplied with the correct prediction. Robustness refers to the approximation ratio of an algorithm supplied with incorrect prediction. The authors study the tradeoff as a function of the smoothness of the threshold function, where smoothness is defined using a symmetric and scale invariant multiplicative prediction error. They establish consistency of a specific class of threshold function that satisfies a given robustness $r$ and smoothness $\rho$. Furthermore, they show that their guarantees are nearly tight for any Pareto Optimal algorithm. The authors also study the competitive analysis under multiple stochastic frameworks where price and/or prediction exhibit stochasticity. Interestingly, when both exhibit stochasticity they show the authors establish performance guarantees using optimal transport.
Claims And Evidence: The theoretical results stated in the paper is supported by proofs.
Methods And Evaluation Criteria: The methods are standard for a theoretical paper.
Theoretical Claims: I have checked the theoretical claims. I am reasonably certain about the correctness, but may have missed details (e.g. algebraic derivations of inequalities).
Experimental Designs Or Analyses: Numerical evaluations are presented. The results look good to me.
Supplementary Material: I have gone through the proof of the theoretical results in the supplementary material. But I may have missed few proof details as stated earlier.
Relation To Broader Scientific Literature: This paper establishes almost tight relationship between the consistency, robustness, and smoothness of an algorithm. This is a good addition to the understanding of the one-max-search problem.
Essential References Not Discussed: The important references seem present in my understanding.
Other Strengths And Weaknesses: Strengths:
- This paper establishes a near optimal smoothness guarantees for Pareto optimal algorithms, and provides a simple threshold based algorithm that attains the nearly tight bounds.
- This paper studies the performance of the proposed algorithm, and establishes performance guarantees through optimal transport.
Weakness: Please check comments and suggestions.
Other Comments Or Suggestions: Possible Improvements/Discussions:
- The current paper claims to establish a three-way Pareto optimality. However, in my understanding, for proper three-way relation the robustness should depend on the multiplicative error $\mathcal{E}$.
- On a related note to the above, the lower bound established relies on A being Pareto optimal for all $y$. What happens if we restrict the prediction to have bounded error $\mathcal{E}_{\max}$.
- What is the relation of the optimal transport specified from a learning perspective? Can we discover the coupling in a offline and online learning manner?
Questions For Authors: Please check comments and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission, and for their positive feedback. We address their comments below.
- The objective of the robustness term is to give a valid guarantee even under arbitrarily inaccurate predictions, since no assumption is made on the quality of the prediction. This is why the robustness term should not depend on the prediction error.
- It might be possible to improve the bound if the error is known to be at most $\mathcal{E}_{\max}$. However, our main objective is to design an algorithm that is able to leverage a prediction without any additional assumptions on its quality. More generally, in any learning-augmented problem, if the error is assumed to be bounded, then the consistency, robustness, and smoothness guarantees might all be improved. While this question is out of the scope of our work, it constitutes a very interesting research direction.
- Along a single trajectory of prices, it is impossible to learn the coupling as the whole trajectory provides only a single sample of the maximum price and a single prediction, which is statistically meaningless. However, it is possible to learn the coupling if the game is repeated several times using standard density estimation techniques, see e.g. (Silverman, 2018; Chen, 2017). Studying a repeated version of the one-max-search problem (whether on-- or offline) is a significantly methodologically different task. This raises an interesting further direction of research.
**References.**
Silverman, Bernard W. Density estimation for statistics and data analysis. Routledge, 2018.
Chen, Yen-Chi. A tutorial on kernel density estimation and recent advances. Biostatistics \& Epidemiology 1.1 (2017): 161-187.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I agree that the posed questions are possibly out-of-scope. Maybe adding those to future works can improve the paper. I will maintain my positive score. | Summary: This paper studies one-max-search problem in the learning-augmented setting and develops an algorithm that is both Pareto-optimal and smooth to the multiplicative prediction error.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand. The experiments clearly demonstrate the improvements of the proposal algorithm compared to existing methods.
Theoretical Claims: I checked the correctness of the arguments appeared in the main paper.
Experimental Designs Or Analyses: I checked the soundness/validity of experimental designs or analyses.
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper is related to general ML.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper characterizes all Pareto-optimal thresholds in a general class which captures many previous works. This result seems to be fundamental to the one-max-search problem.
Other Comments Or Suggestions: NA
Questions For Authors: In the stochastic setting, can we have high-probability guarantees on ALG?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission, and for their positive feedback.
In proposing a stochastic formulation of the one-max-search problem with predictions, we aimed to introduce the concept of a stochastic notion of accuracy for predictions. We gave bounds in expectation, but, as the reviewer suggests, one can also consider high-probability bounds. However, these bounds would concern the joint tail probabilities of $P^*$ and $Y$, which are difficult objects to conceptualise. In contrast, bounds in expectation naturally give rise to a characterisation of the worst case as an optimal transport problem. | null | null | null | null | null | null |
Low-Dimension-to-High-Dimension Generalization and Its Implications for Length Generalization | Accept (poster) | Summary: This paper conducts a theoretical analysis of the problem of low-dimension-to-high-dimension generalization, with an application to the problem of length generalization. The main theoretical results are based on Boolean function analysis and extend the analysis in Abbe et al. (2023) by considering different functional bases in the Boolean function space and analyzing more neural network models, including random feature models with projection and Position-Only Linear Attention with Advice (PLAA) models with Absolute Position Embedding (APE) and Generalized Absolute Position Embedding (GAPE). The paper shows that gradient descent on these models results in min-degree solutions with regard to different functional bases. On the algorithm side, the paper proposes RPE-Square, an ad-hoc improvement of Relative Position Embedding (RPE), and demonstrates its effectiveness on length generalization problems including unaligned copy and URF addition.
**Post-rebuttal update:** The authors have addressed some of my concerns. At this point, my main criticisms of the paper are (1) the limited implications of the formulation and theoretical analysis (compared with the prior work by Abbe et al. (2023)), and (2) the overall presentation (see my rebuttal comment for details). The authors have responded to both points in the rebuttal, but not to a very satisfactory level. I decided to keep my original score but would let AC and other reviewers decide whether this paper should be considered to reach the acceptance bar of ICML.
Claims And Evidence: Most of the claims made in the paper are supported by theoretical or empirical evidence. However, I did not understand the claims on Chain-of-Thought (CoT) prompting in Sec. 5.1. See "Questions for authors" for more details.
Methods And Evaluation Criteria: The proposed RPE-Square is evaluated on two representative length generalization tasks including unaligned copy and URF addition. I am not an expert in benchmarking length generalization, but given the scope and scale of the considered tasks, they should mainly serve as proof-of-concept demonstrations at this stage. Since the main contents of the paper are of theoretical nature, I think the current evaluation is acceptable.
Theoretical Claims: I reviewed some proofs in the appendix and did not find major issues.
Experimental Designs Or Analyses: Experimental designs are OK, though it would be better if the authors could also compare RPE-square with more position embedding methods (e.g., RoPE) in the experiments.
Supplementary Material: I briefly skimmed the proofs in the supplementary material but did not check the experiment details.
Relation To Broader Scientific Literature: Prior work by Abbe et al. (2023) has shown that several types of NNs converge to min-degree interpolators when trained by gradient descent; this work extends this result by showing that other types of NNs, including simplified attention models with position embeddings, may converge to min-degree interpolations under different bases in the function space.
The proposed RPE-Square is an ad-hoc improvement to RPE in mathematical reasoning problems.
Essential References Not Discussed: To my knowledge, most of the related works are properly cited and discussed.
Other Strengths And Weaknesses: *Strengths:* Theoretical results are clearly presented.
*Weaknesses:*
- The paper is not very easy to read, especially for those who are not familiar with the prior work by Abbe et al. (2023).
- Many definitions and results heavily build on those in Abbe et al. (2023), e.g., degree profile, min-degree interpolators, Theorem 2, and Corollary 1. Overall I think the theoretical contributions of the paper are somewhat incremental.
- Many parts of the paper, in my opinion, are kind of disconnected:
- I did not get how Abstraction 1 is used in the theoretical results/proofs.
- The theory and the algorithm seem not connected; in fact, I think that Theorems 3 and 4 are not relevant to the proposed RPE-Square (which seems to have nothing to do with functional bases induced by the position embedding), nor are they relevant to Example 1 (regarding the mismatch between the problem scale and the input length) which serves as the main motivation of the paper.
- I think that the main theoretical contributions of the paper, i.e., introducing min-degree interpolators under different bases, can only address very limited length generalization problems: for example, the target function in Example 3 only has degree 1. From my angle, introducing additional projections/linear operations cannot solve length generalization problems with higher-degree target functions, e.g., the one considered in Theorem 5.1 of Abbe et al. (2023).
Other Comments Or Suggestions: In general, at this point my overall evaluations of this paper are as follows:
- The theoretical contributions of the paper are rather incremental;
- The proposed method RPE-Square is interesting but is only evaluated in limited settings;
- The theory and algorithm seem disconnected.
I am now between weak accept and weak reject and decided to choose weak accept. Yet, I may also lower my rating if my main concerns are not well addressed in the rebuttal.
Questions For Authors: - I did not understand Sec. 5.1: how did you determine the latent space of each instance? Why should the latent space take the form in the paper?
- It appears to me that the formulation of LDHD generalization (Def. 1) is quite similar to length generalization (the main difference is that LDHD considers different dimensions in train/test data subspaces, while length generalization replaces the notion of "dimension" to "length"). Are there any other applications of this formulation except length generalization?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your effort and insightful comments. We respond to your main concerns below.
All experiments results are in the anonymous link: https://www.dropbox.com/scl/fi/52t23nfzev1lo1sq5dmyj/ICML_2025_5857_Rebuttal.pdf?rlkey=5nde6aampze744klvn0rsb3mp&st=37lg3nzo&dl=0.
1. *The theoretical contributions of the paper are rather incremental.*
Our main contribution is the LDHD generalization formulation (Abstraction 1), which offers a more precise characterization of length generalization by separating it into two challenges: (1) LDHD generalization in the latent space (which is inherent), and (2) data format nuisance. Under the formulation, we can derive the fundamental limitation of length generalization, explain the effectiveness of certain methods, and design novel position embeddings for challenging length generalization problem.
The theoretical results in Section 4 demonstrates how different model designs can introduce different inductive bias to handle LDHD generalization, only related to the LDHD generalization aspect in Abstraction 1.
While our Boolean analysis in Section 4 is technically inspired by the remarkable work of Abbe et al. (2023), our core novelty lies in Abstraction 1 and its implications, not in extending prior Boolean analysis.
2. *The proposed method RPE-Square is interesting but is only evaluated in limited settings.*
We evaluate RPE-Square in three additional tasks: Parity (with CoT), Multiplication (1 * N), and Division (N / 1). RPE-Square achieves length generalization in all the three tasks (Figures 3 - 5 in the link).
3. *The theory and algorithm seem disconnected.*
All components of our paper are built around Abstraction 1. Here’s how they connect:
**Section 3** focuses on the LDHD challenge and shows that length generalization requires prior knowledge. **Section 4** illustrates how inductive bias can be introduced through model design to address the LDHD challenge. **Section 5** consider practical length generalization techniques from the perspective of Abstraction 1. **Section 5.1** explains the effectiveness of CoT as extending the latent space, which may promote LDHD generalization. **Section 5.2** derive the position embedding design principle according to Abstraction 1, highlighting the importance of handling LDHD generalization in the latent space and the data format separately.
RPE-Square is designed closely following the principle implied by Abstraction 1. The inner "relative distance to some special token" is to handle the unaligned format. The outer "relative distance of relative distance to some special token" is to handle the LDHD generalization in the latent space, which is partially explained by the analysis of PLAA-GRPE. Further details are in Appendix D.1.
4. *I did not understand Sec. 5.1: how did you determine the latent space of each instance? Why should the latent space take the form in the paper?*
The latent space is determined according to Abstraction 1. We consider the scale $n$ of each instance and a proper $\Sigma$ such that each instance of scale $n$ corresponds to an element in $\Sigma^n$.
With CoT, we treat each step (which requires a model prediction) as an instance of the same scale.
For example, in the addtion with CoT, the two predition steps, i.e., given $x_0 \dots x_{n-1} + y_0 \dots y_{n-1} =$ to predict $z_0$, and given $x_0 \dots x_{n-1} + y_0 \dots y_{n-1} = z_0$ to predict $z_1$, are treated as separate CoT-step instances of the same scale $n$.
We then decide a proper domain $\bar{\Sigma}$ such that each CoT-step instance of scale corresponds to an element in $\bar{\Sigma}^n$. Furthermore, since CoT typically inserts intermidate results to the original instances, $\bar{\Sigma}$ will take the form $\Sigma\times\Sigma'$. For the addition, $\Sigma'=\lbrace *,0,\dots,9\rbrace$ satisfies the requirement.
5. *...the formulation of LDHD generalization (Def. 1) is quite similar to length generalization... Are there any other applications of this formulation except length generalization?*
Changing "length" to "dimension" is not merely a notion replacement. By the notion "dimension", we highlight the **exponential growth** in sample space as scale increases, and the **complete absence of information** in low-dimension training data about high-dimension behavior. These aspects are not captured by "length", which can be confounded by formatting. Thus, “dimension” more precisely reflects the scaling challenge.
While LDHD generalization is originally to characterize the inherent scaling challenge in length generalization, it could be extended to other applications to character the exponential growth in the sample space size and imperfect information provided by the training data. For example, in graph learning, we may consider the number of nodes as the dimension and use this formulation to describe the generalization from small to large graphs.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I appreciate the following points:
- The clarification on the relation between the proposed method and Abstraction 1.
- The added experiments.
However, I am still not satisfied with the formulation/theory part after reading the rebuttal. As this work is of theoretical nature, I think the following points need to be further discussed:
- "Our main contribution is the LDHD generalization formulation (Abstraction 1)...". Note that Abstraction 1 itself is _not_ directly related to LDHD generalization---what it assumes is a _data generation model_ via a latent variable. Similar data generation models are in fact quite common in the representation learning/latent variable modeling literature (e.g., [1,2]). The main difference here is that you explicitly use the generation function $\phi$ to model "data format nuisances". Please note that I am not nitpicking here by saying that Abstraction 1 is not novel. In fact, I agree that considering data format nuisances separately for length generalization is indeed interesting. However, I do have reservations about **whether it is necessary/significant to formulate a new "LDHD generalization problem" based on Abstraction 1 for length generalization** (see below).
- You mentioned that "LDHD generalization" differs from conventional length generalization in that it characterizes "the exponential growth in sample space" and "the complete absence of information in low-dimensional training data". Yet, I believe that both aspects are already reflected in length generalization.
- For example, Abbe et al. (2023) [3] discussed the _non-uniqueness of solutions_ when training data is low-dimensional, which implies that the training data lacks information to identify the true concept in the whole sample space, and this is also the main intuition of your no-free-lunch theorem (Theorem 1).
- In Section 5 of [3], the authors note that the length generalization problem can be formulated as "increasing the number of bits" in test data, which naturally corresponds to an _exponential_ increase in the sample space.
- As you have mentioned in your response, the actual difference between "dimension" and "length" is the consideration of _data format confounding_. However, this is already reflected by the data generation model in Abstraction 1, and this generation model could also be integrated into the current length generalization setting. Hence, I do not see why it is necessary to formulate a new learning problem with a new name.
To summarize, from my current angle, the main contents of the paper are two-fold:
- Formulating and analyzing a "new" problem of LDHD generalization.
- Pointing out that considering data format nuisances is important in length generalization and proposing a method for it.
It seems that you mix these two points in the overall presentation, which could be somewhat confusing. In fact, I realized that this is also the main reason why I (and perhaps some other reviewers) found the paper not very easy to read.
In my opinion, the second point is interesting and aligns with the proposed algorithm. However, the first point seems to be not valid to me---to my knowledge, the hardness of generalization, the necessity of considering inductive biases, and the actual inductive bias considered (min-degree bias) are all explored in prior work [3], which makes the theoretical part incremental and does not warrant the formulation of a new problem.
In this light, I feel it might be necessary for the authors to reconsider the presentation of the paper, which I think seems to require quite a bit of rewriting. Please feel free to correct me if you find any misunderstanding in my comments.
---
[1] Hyvärinen and Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 1999.
[2] Schölkopf et al. Toward causal representation learning. Proceedings of the IEEE, 2021.
[3] Abbe et al. Generalization on the unseen, logic reasoning and degree curriculum. ICML, 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your further in-depth feedback and constructive suggestions.
We would like to clarify that our main point is the LDHD generalization perspective/formulation for length generalization, rather than LDHD generalization as a standalone problem. Our motivation is to develop a principled framework that captures the core challenges commonly associated with so-called length generalization.
To further elaborate, we distinguish three concepts:
- (ideal) length generalization (LG0): some ideal (but unknown) formulation that perfectly captures all the common challenges of generalizing from small-scale instances to large-scale instances;
- length generalization defined on length (LG1): generalization from short sequences to long sequences;
- length generalization from the perspective of LDHD generalization (LG2): our proposal (Abstract 1) to seek LG0.
In other words, LG0 is the (unknown) "ground truth" notion of length generalization we aim to capture. LG1 and LG2 are two alternative formulations for LG0, through which we can analyze general length generalization problems and design algorithms accordingly.
1. *...whether it is necessary/significant to formulate a new "LDHD generalization problem" based on Abstraction 1 for length generalization*
- *...Abstraction 1 itself is not directly related to LDHD generalization...*
We respectfully disagree. Abstraction 1 is directly related to LDHD generalization. Specifically, Step 1 (lines 87 - 89) implies length generalization from instances of scale n to those of scale m corresponds to LDHD generalization (in the latent space) from dimension n to dimension m ($n < m$). It is this relation that characterizes the two important properties of LG0: "the exponential growth in sample space" and "the absence of information in training data".
- *...Similar data generation models are in fact quite common in the representation learning/latent variable modeling literature (e.g., [1,2])...*
We agree that our abstraction shares technical similarities with prior work in latent variable modeling. However, we believe the significance of any data generation model lies in how it reflects the problem it is intended to characterize. For example, the model in [2] is notable not simply for its technical structure, but for how effectively it captures challenges in causal representation learning. Similarly, while Abstraction 1 resembles other models structurally, its novelty and contribution lie in how well it approximates LG0.
2. *...both aspects are already reflected in length generalization.*
While the previous discussions under the name "length generalization" provide insights and intuitions on which properties LG0 has, to the best of our knowledge, they do not offer a formal, general formulation that explicitly characterizes these properties. For example, while Abbe et al. (2023) [3] mention both increasing "the number of bits" and "the number of -1s", they do not formally explain why or to what extent "both of these variants capture the notion and difficulty of length generalization" in Parity. Also, the failure of length generalization is shown specifically in Parity. In contrast, our proposed formulation (LG2) aims to formalize and unify these insights, serving as an approximation of LG0 that applies beyond individual tasks.
3. *...why it is necessary to formulate a new learning problem with a new name*
We introduce the new name to distinguish LG2 from LG1. The term "length" likely refers to the sequence length and LG1. The term "dimension" emphasizes the two properties that LG0 requires inherently. By explicitly naming our formulation with LDHD generalization, we aim to highlight these conceptual distinctions.
4. *It seems that you mix these two points in the overall presentation, which could be somewhat confusing. In fact, I realized that this is also the main reason why I (and perhaps some other reviewers) found the paper not very easy to read.*
We sincerely appreciate this valuable feedback. We will clarify the two points in the revision. Furthermore, we plan to make the following changes to improve clarity:
(1) We will explicitly distinguish between the different notions of “length generalization” throughout the paper, e.g., using precise terminology such as LG0, LG1, and LG2 to avoid ambiguity.
(2) We will make it clear that the central contribution of the paper is the LG2 formulation. Different sections of the paper address different aspects of this formulation, and we will reorganize the presentation to better reflect this structure.
(3) We will broaden the discussion of related literature, particularly regarding latent variable modeling and previous studies on length generalization. We will clarify the connections to prior work and the novel contributions in our work. | Summary: Summary: The paper examines Low-Dimension-to-High-Dimension (LDHD) generalization and theoretically demonstrated that LDHD
generalization is unattainable without appropriate inductive bias, focusing on Boolean functions and how different architectures and inductive biases influence this generalization. The study introduces RPE-Square to handle data format nuisances and improve LDHD generalization. Key formulas include the computation of latent variables h and labels y through mappings phi, and the modeling of low-dimension-to-high-dimension generalization with the Boolean function framework.
Claims And Evidence: While the paper presents strong theoretical underpinnings and some experimental results, there are areas where the evidence might not be as convincing or where further clarification and experimentation could strengthen the claims:
1. Generalization of Theoretical Concepts to Practical Applications: The paper does an excellent job in theoretical exposition, especially with the introduction of the No-Free-Lunch Theorem for LDHD generalization. However, the transition from theoretical models to practical, real-world applications is not deeply explored. The claims regarding the practical effectiveness of RPE-Square would be more convincing if supplemented by more diverse and extensive empirical evidence across various domains and tasks.
2. Robustness of RPE-Square: The paper claims that RPE-Square enhances the model's ability to handle data format nuisances effectively. While initial experimental results are provided, the robustness of RPE-Square across a broader range of scenarios and its comparison to other state-of-the-art position embeddings are not thoroughly examined. Claims about its superiority could be better supported by more comprehensive comparative studies that include a variety of datasets and model configurations.
3. Scalability and Performance Across Different Scales: The paper discusses the potential of RPE-Square in managing the challenges associated with LDHD generalization. However, the scalability of this approach—how it performs as the dimensionality of data dramatically increases—is not fully addressed.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are appropriate for addressing the challenges LDHD.
Proposed Methods:
- RPE-Square: Enhances traditional Relative Position Embedding by better handling data format nuisances and the shift from low to high-dimensional spaces. This method theoretically supports improved out-of-distribution generalization in deep learning models.
- Chain-of-Thought (CoT): Applies CoT to enhance the understanding of sequences in transformer models, suitable for complex reasoning tasks where sequence understanding is crucial.
Evaluation Criteria:
- Task-Specific Benchmarks: Uses arithmetic reasoning and symbolic logic tasks to evaluate performance, which are appropriate for testing model capabilities in handling length generalization and increased problem complexity.
- Comparison with Baselines: Measures improvements over traditional embeddings and other state-of-the-art methods, providing a clear benchmark for assessing the efficacy of RPE-Square.
Areas for Improvement:
1. Wider Range of Datasets: Including datasets with inherently high-dimensional structures could better test the robustness of the proposed methods.
2. More Diverse Tasks: Expanding evaluations to include varied tasks like image processing could demonstrate the versatility of RPE-Square.
3. Quantitative Metrics: More rigorous metrics that measure performance in high-dimensional settings and assess computational efficiency would provide a comprehensive evaluation.
Theoretical Claims: The mathematically oriented proofs, such as those for the No-Free-Lunch Theorem, are rigorous and seem to be correct according to the details provided. However, there's a chance that I might have missed an error. Additionally, the proofs do not thoroughly address the computational complexity of implementing RPE-Square, which is vital for practical applications, particularly in very high-dimensional spaces.
Experimental Designs Or Analyses: In reviewing the paper, I examined the soundness and validity of the experimental designs and analyses, particularly those related to the implementation and evaluation of RPE-Square and its impact on LDHD generalization.
- Data Variability: The experiments would benefit from a more detailed examination of the test data's variability and distribution, particularly in how they mirror real-world scenarios essential for LDHD generalization.
- Computational Load: The paper does not adequately address the computational demands of implementing RPE-Square, crucial for practical applications, especially in resource-constrained environments.
Overall, while the experimental designs and analytical methods employed in the paper are sound for the most part, enhancing the range of benchmarks, incorporating a broader set of evaluation metrics, and providing more details on computational demands and data scales could significantly strengthen the validity of the findings.
Supplementary Material: No, I only briefly reviewed Part A: "Background on Boolean Analysis" and Part C, which includes the proofs of Theorems 1-4, to ensure I didn't overlook any crucial details.
Relation To Broader Scientific Literature: The key contributions of the paper, particularly the development of RPE-Square and its application in LDHD generalization, build on existing scientific literature by enhancing position embedding techniques previously established for transformer models. This development provides a nuanced approach to handling high-dimensional data challenges, extending the utility of relative position embeddings for complex reasoning tasks. The paper's theoretical additions, like the No-Free-Lunch Theorem for LDHD generalization, contribute a rigorous framework to the discussion of dimensionality in machine learning, bridging theoretical concepts with practical machine learning applications. Overall, these contributions effectively integrate and advance the current understanding of embedding designs and generalization in machine learning.
Essential References Not Discussed: Not essential but might beneficial:
1. Bellman, R. (1961). Adaptive Control Processes: A Guided Tour. Princeton University Press.
This reference is foundational in the discussion of the curse of dimensionality, providing the initial formal description of the problem which is crucial for understanding the theoretical underpinnings of dimensionality challenges.
2. Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2319-2323.
This paper introduces Isomap, a manifold learning technique that provides insights into how high-dimensional data can be effectively reduced, maintaining the intrinsic geometry of the data, which is highly relevant for LDHD generalization.
3. Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning (pp. 1096-1103).
Autoencoders, particularly denoising autoencoders, are useful for dimensionality reduction and feature extraction in noisy datasets, relevant for LDHD generalization where maintaining data integrity across dimensions is critical.
Other Strengths And Weaknesses: Strengths:
It becomes increasingly interesting if you can get through the first two pages!
1. Novel Conceptual Framework: The paper effectively frames LDHD generalization within the broader context of Out-of-Distribution (OOD) generalization, offering a novel perspective on handling high-dimensional data spaces derived from low-dimensional training data.
2. Nice Theoretical Support: The theoretical underpinnings, including the No-Free-Lunch Theorem for LDHD generalization, are robust, providing a strong foundation for the arguments regarding the necessity of inductive biases.
3. Practical Implications: The introduction of RPE-Square as a novel embedding to address both LDHD generalization and data format nuisances is both innovative and practically relevant, offering direct applicability to transformer models.
4. Empirical Validation: The use of Chain-of-Thought (CoT) to demonstrate the practical effectiveness of the proposed methods in restructuring latent space for improved generalization is well-executed, with clear experimental setups and results.
Weaknesses:
As noted earlier, the main areas for improvement include:
1. Enhanced Mathematical Clarity: While foundational proofs like those for the No-Free-Lunch Theorem are well-established, additional mathematical detail could strengthen the theoretical support for RPE-Square.
2. More Empirical Evidence: Broader empirical testing would help validate the practical effectiveness of RPE-Square across various applications.
3. Computational Complexity: Further discussion on the computational demands of implementing RPE-Square is needed to assess its practicality in high-dimensional settings.
Other Comments Or Suggestions: Commnets:
- I recommend including "Relative Position Embedding (RPE)" in the abstract for clearer context.
- Figure 1c was unclear until I read lines 117-123 on page 3. Adding a brief explanation beneath the figure could assist readers who seek immediate clarity.
- The first two pages would benefit from a more detailed discussion of goals and applications.
Questions For Authors: Questions:
1. Has RPE-Square been evaluated in diverse or challenging environments? What were the challenges? What were the outcomes?
2. Could you provide an insight on the computational overhead of RPE-Square, especially in large-scale or real-time applications?
3. Are there specific conditions or assumptions in Theorem 3 that might limit its general applicability? If so, what are they?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful for your careful evaluation and positive assessment of our work. All experiments results are in the anonymous link: https://www.dropbox.com/scl/fi/52t23nfzev1lo1sq5dmyj/ICML_2025_5857_Rebuttal.pdf?rlkey=5nde6aampze744klvn0rsb3mp&st=37lg3nzo&dl=0.
1. Has RPE-Square been evaluated in diverse or challenging environments? What were the challenges? What were the outcomes?
We evaluate RPE-Square in three additional tasks: Parity (with CoT), Multiplication (1 * N), and Division (N / 1). RPE-Square achieves length generalization in all the three tasks (Figures 3 - 5 in the link). These results are consistent with our insight of the inductive bias of RPE-Square and further justify the proposed position embedding design principle.
2. Could you provide an insight on the computational overhead of RPE-Square, especially in large-scale or real-time applications?
Suppose the sequence length is N and the hidden dimension is d. The computational overhead of an attention with RPE-Square is $O(N^4 + N^3 d)=O(N^2 (N^2 + N^d))$ (with the implementation that saves the token attention scores for all query j and key i and reuse them when computing $\text{RPE-Square}_{i,j}$). This overhead can lead to inefficiency for large-scale or real-time applications.
We note that RPE-Square is primarily intended to illustrate the design principle for position embeddings. Our focus is on the challenge of length generalization instead of long-sequence efficiency. We consider it a promising direction to develop more efficient variants of RPE-Square in future work.
3. Are there specific conditions or assumptions in Theorem 3 that might limit its general applicability? If so, what are they?
Theorem 3 is derived with a simplified model (PLAA-APE). The conclusion in Theorem 3 may not hold precisely for more complex models. Also, the target functions considered in Theorem 3 are restricted to Boolean functions and the notation "min-degree profiler w.r.t. linearly independent set" may not be applied to non-Boolean functions directly. Additionally, the result include a limit ($\hat{P}$) but we cannot use the exact limit in practice.
Despite the above theoretical limitations, we think Theorem 3 can provide sufficient insight on the inductive bias of APE. The simplified model captures the essential effect of APE on the inductive bias. The analysis in the Boolean functions can be naturally extended to functions defined on finite domains. The limit can be well approximately by choosing a sufficient small initialization.
We also appreciate your suggestions on writing to improve clarity and readability. We will revise our manuscript accordingly. | Summary: This paper introduces the concept of Low-Dimension-to-High-Dimension (LDHD) Generalization to address challenges in out-of-distribution (OOD) generalization, particularly in reasoning tasks where models are trained on low-dimensional subspaces and tested on higher-dimensional spaces. The authors propose that LDHD generalization is fundamental to understanding length generalization, where models extend learned patterns to longer sequences.
Claims And Evidence: The paper presents several key claims:
1. LDHD Generalization Necessitates Inductive Biases: The authors assert that without appropriate inductive biases, models cannot generalize from low-dimensional training data to high-dimensional testing scenarios. This claim is supported by the No-Free-Lunch Theorem for LDHD generalization, which mathematically establishes the necessity of inductive biases.
2. CoT Enhances Length Generalization: The paper suggests that Chain-of-Thought prompting improves length generalization by restructuring the latent space, facilitating better LDHD generalization. While the theoretical analysis supports this claim, direct empirical comparisons of models trained with and without CoT on reasoning tasks are lacking.
3. RPE-Square Improves Length Generalization: The authors propose that RPE-Square, their novel position embedding method, addresses both LDHD generalization and data format inconsistencies, leading to improved length generalization. Experimental results on addition and unaligned copy tasks demonstrate its effectiveness. However, comparisons with alternative position embeddings designed for length generalization, such as ALiBi, RoPE, NoPE, Abacus, as well as position coupling are not included.
Overall, the claims are theoretically substantiated, but additional empirical validation on diverse tasks would strengthen the conclusions.
Methods And Evaluation Criteria: Theoretical analysis is sound and clearly presented. The evaluation tasks (Unaligned Copy, URF Addition) are appropriate for illustrating their claims. However, broader validation on more varied or real-world data could strengthen applicability claims.
Theoretical Claims: Yes, I checked Theorem 1 (No Free Lunch Theorem) and the proof in the supplementary material seems correct.
Experimental Designs Or Analyses: The experiments are well-structured but exhibit certain limitations:
- Lack of Generalization Tests on Longer Sequences: It remains unclear whether models trained with RPE-Square can extrapolate to significantly longer sequences than those encountered during training.
- Absence of Robustness Analysis: The performance of RPE-Square across various tasks beyond addition and unaligned copy is not explored.
- Experiments comparing with and without CoT seems to be missing.
Supplementary Material: I checked supplementary material for Experimental details and proof for theorem 1.
I only skimmed through other parts.
Relation To Broader Scientific Literature: The paper appropriately references key works on:
- Length Generalization in Transformers: Citing studies by Anil et al. (2022) and Jelassi et al. (2023).
- Inductive Bias in Neural Networks: Expands on works by Gunasekar et al. (2017), Abbe et al. (2023), providing a clear link to length generalization.
Essential References Not Discussed: I am not aware of essential references not discussed.
Other Strengths And Weaknesses: ### Strengths:
- Novel Formalization: The introduction of LDHD generalization offers a valuable perspective on length generalization challenges.
- Practical Contribution: The development of RPE-Square presents a tangible improvement for length generalization tasks.
### Weakness:
- I find it hard to make connections between sections and get a clear intuition behind the approach.
- Limited diversity of tasks considered: The paper only considers two simple tasks of unaligned copy and URF addition of small scale.
- Lacks comparison with diverse Positional Embeddings, developed to tackle length generalization, leaving the relative effectiveness of RPE-Square unclear
- Computational overhead of RPE-Square not discussed.
Other Comments Or Suggestions: - typo: line 168 overparatermization -> overparameterization
- typo: line 202 equation X_m instead of X^m in the subscript of E
- discuss briefly computational complexity of RPE-Square in practice.
- the definition of "concept c" in Abstraction 1 seems missing.
Questions For Authors: - Generalization of theoretical results: Do you have empirical evidence confirming that actual Transformers trained in realistic scenarios indeed favor minimum-degree interpolants?
- Computational Complexity: What is the computational and memory overhead of RPE-Square compared to standard positional encoding, especially for longer sequences?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback. We'd like to response to your concerns as below.
All experiments results are in the anonymous link: https://www.dropbox.com/scl/fi/52t23nfzev1lo1sq5dmyj/ICML_2025_5857_Rebuttal.pdf?rlkey=5nde6aampze744klvn0rsb3mp&st=37lg3nzo&dl=0.
1. *...comparisons with alternative position embeddings designed for length generalization...*
We conduct additional experiments to compare the mentioned position embeddings (Since Abacus and position coupling are similar apart from some subtle implementation details in our tasks, we only implement Abacus currently.) The results (Figures 1 - 2 in the link) show that RPE-Square achieves the best overall length generalization in the tasks.
2. *Lack of Generalization Tests on Longer Sequences*
We evaluate the models with RPE-Square on significantly longer sequences in three tasks (Parity (with CoT), Multiplication (1 * N), and Division (N * 1)) The results (Figure 7 in the link) show that the models can extrapolate to significantly longer sequences to some extend and the performance may decreases as the sequence length increases.
3. *Absence of Robustness Analysis: The performance of RPE-Square across various tasks beyond addition and unaligned copy is not explored.
We apply RPE-Square to three new tasks: Parity (With CoT), Mulplication (1 * N), and Division (N / 1). The results (Figures 3 - 5 in the link) show that Transformers equipped with RPE-Square can achieve length generalization in all the three tasks, showing the robustness of RPE-Square across different tasks.
4. *Experiments comparing with and without CoT seems to be missing.*
We compare the length generalization of Parity with CoT and without CoT. We train Transformers with RPE-Square. We only achieve length generalization for Parity with CoT (Figure 8 in the link). This is because as CoT enables LDHD generalization via relative distances in the latent space. There are also many empirical results showing CoT can enhance length generalization in previous works, e.g., [1] [2].
[1] Zhou, H., et al., 2023. What algorithms can transformers learn? a study in length generalization. arXiv preprint arXiv:2310.16028.
[2] Feng, G., et al., 2023. Towards revealing the mystery behind chain of thought: a theoretical perspective. NeurIPS.
5. *I find it hard to make connections between sections and get a clear intuition behind the approach.*
(a) Connections between sections.
**Section 1** introduces the problem of length generalization and proposes LDHD generalization to address the mismatch between input length and problem scale. **Based on the LDHD formulation**, **Section 3** presents the No-Free-Lunch Theorem for LDHD generalization, motivating the need for inductive bias. **Section 4** consider how different inductive bias can be incorporated by choosing different models, a technical problem to address the LDHD generalization challenge in Section 3. In **Section 5**, we further discuss the implications of the LDHD generalization formulation for practical length generalization. We explain why CoT can promote length generalization. We also propose the position embeddding design principle for length generalization: consider the inherent LDHD generalization and the data format nuisance separately.
(b) Intuition behind the approach.
- The intuition behind LDHD generalization formulation includes: (1) The sample space grows exponentially as the scale increases and the small-scale instances do not tell how large-scale instances can be solved without external information; (2) In language modeling, the sequence length can be affected by the data format and may not faithfully reflect the problem scale.
- The intuition behind RPE-Square: the unaligned formats can be handled by the relative distances to some special tokens. Detailed illustrations are in Appendix D.1.
6. *the definition of "concept c" in Abstraction 1 seems missing.*
A "concept" refers to a target mapping, as in [1]. We'll clarify this in the revision.
[1] Mohri, Mehryar, 2018. Foundations of machine learning, Chapter 2, The PAC Learning Framework, Section 2.1, The PAC learning model.
5. *Generalization of theoretical results*
We provide empirical support (Figure 9) by training GPT2 with APE and RPE on different functions. Probing reveals the learned models roughly align with minimum-degree interpolators w.r.t. their respective bases.
6. *... the computational and memory overhead ...*
Suppose the sequence length is N and the hidden dimension is d.
RPE-Square: $O(N^4 + N^3 d)$ compute and $O(N^2 + N d)$ memory.
Standard PE: $O(N^2 d)$ compute and $O(N^2 + N d)$ memory.
While RPE-Square is less efficient than the standard position embedding, it mainly serves as a proof-of-concept for separating LDHD and format handling. It is focused on the length generalization challenge instead of the challenge of long sequence. Designing efficient variants is a promising direction for future work. | Summary: *Despite my best efforts, I found this paper very hard to parse. I am discounting my confidence to reflect the same.*
The paper studies Low-Dimension to High-Dimension generalization (LDHD) problem, a special case of OOD generalization. At its core, the paper argues the impossibility of generalizing to high-dimensions without imposing inductive biases or prior knowledge. They then consider implications of their argument on length generalization for unaligned copy and addition tasks, and propose RPE-Square (relative position embedding) that leverages some domain knowledge such as the distance to the special tokens (such as =, +, [BoS], [EoS]). The paper then demonstrated RPE-Square's generalization to high-dimensions.
The main contributions of the paper are the following. (a) a formalization to disentangle the data format and dimension of the problem, (b) through analysis on binary functions and various positional encodings (relative, absolute and general), the contribution to the attention scores from positional embedding is min-degree interpolator wrt to a linearly independent set, (c) a new positional embedding scheme: RPE-Square.
Claims And Evidence: The paper's analysis (see (b) from my paper's contribution list above) makes the following choices: (1) isolate and analyse only the positional embedding contribution to the attention score, (b) assume the value head is an identity, (c) restricts only to binary functions.
However, the paper did not justify how their choices do not compromise the generality of their analysis.
Their proposed positional embedding scheme: RPE-Square relies on domain knowledge of the task in hand. It is expected that the paper at least present how their approach extends to other tasks, and what kind of domain knowledge can be exploited for different problems.
Regarding OOD and LDHD. I believe the paper's analysis do not explain why we observe (surprising) LDHD generalization for several other problems without any special domain knowledge. For example, consider the multilingual abilities of LLMs despite <10% representation of non-english data in pretraining. Given the typically low language representation, it is possible that many (entity, language) combinations are not present in the training data, yet the models demonstrated generalization to such unseen combinations (see [1] for an example). We may see this as LDHD generalization where (entity, language) defines a dimension.
[1] Chen, Yang, Yitao Liang, and Zhouchen Lin. "Low-Dimension-to-High-Dimension Generalization And Its Implications for Length Generalization." arXiv preprint arXiv:2410.08898 (2024).
Methods And Evaluation Criteria: Yes, the paper pertains LDHD generalization and length generalization is a good task.
The only evaluation the paper conducted is with addition and copy tasks. The choice of tasks make sense.
Theoretical Claims: I did not check the correctness of any theorem, and only glanced at the statements.
Experimental Designs Or Analyses: Yes, their only experiment looks sound.
Supplementary Material: No.
Relation To Broader Scientific Literature: The findings need not surprise a significant class of researchers working on causality or symbolic methods. But the folks who bet on learning purely from signal, large scale training and exploration would point the paper to many phenomena that the paper do not explain like the way I did in the last point of the Claims section.
Essential References Not Discussed: NA, please see my response to the next question.
Other Strengths And Weaknesses: **Strengths**
LDHD generalization is a compelling problem, and studying the SoTA models for LDHD is even more.
The analysis, although on a special class of functions and assumptions, present some interesting insights into generalization.
**Weakness**
The writing of the paper needs much improvement. They should aim to reduce the number of notations introduced, explain in text every equation and symbol before presenting the math expression. Present proof sketches in the main paper. Make a more coherent and less dense read by taking care of the reasoning-related sections and Free-lunch thm section.
Other Comments Or Suggestions: L92, second column, formant-> format.
L221, first column. I believe it is $$n=\arg\min_k \\{h\in \Sigma^k\\\}$$
Questions For Authors: Please address my questions listed under "Claims and Evidence".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. Below, we respond to your concerns.
All experiments results are in the anonymous link: https://www.dropbox.com/scl/fi/52t23nfzev1lo1sq5dmyj/ICML_2025_5857_Rebuttal.pdf?rlkey=5nde6aampze744klvn0rsb3mp&st=37lg3nzo&dl=0.
1. *... However, the paper did not justify how their choices do not compromise the generality of their analysis.*
(a) We isolate the positional contribution in attention to highlight how position embeddings affect inductive bias and length generalization. This helps us better understand their standalone impact, and the empirical results support this analysis.
(b) We assume an identity value head because it is not central to the question of length generalization. Intuitively, we expect that the interpolator learns the correct value head to perform in-distribution generalization. So when focusing on length generalization, it is reasonable to assume a correct value head and analyze how the model handles larger-scale instances. This simplification helps clarify the role of attention and position embeddings.
(c) While our analysis focuses on binary functions, the core insights naturally extend to tasks over finite alphabets, because we can consider the binary representations. For example, for an alphabet of size $N$, each symbol can be encoded using $log N$ bits, and the analysis for binary functions can be applied similarly. Though not identical, the notion of the "minimum-degree interpolator over a linearly independent set" continues to hold in spirit under binary encoding schemes. Thus, the results we derive for binary settings are relevant to broader cases like language modeling.
Thank you for pointing out this oversight and we will include the justification in our manuscript.
2. *... It is expected that the paper at least present how their approach extends to other tasks, and what kind of domain knowledge can be exploited for different problems.*
We conducted additional experiments to address this concern:
(a) We apply RPE-Square to three new tasks: Parity (With CoT), Mulplication (1 * N), and Division (N / 1), and compared its performance against more baselines (RPE, RoPE, NoPE, ALiBi, Abacus). The results (Figures 3 - 5 in the link) show that RPE-Square can also achieve length generalization in these tasks, demonstrating its applicability beyond the initial two tasks.
(b) We also consider a task called AdditionMod10, which requires different domain knowledge from what RPE-Square captures. AdditionMod10 computes the sum of the addends modulo 10. The result depends only on the first digits of the addends (due to the modulo operation), so positional embedding needs to capture the absolute value of the relative distances to some special tokens ([BOS] and "+"). Following the same design principle, we consider a new position embedding called RPE-Absolute to encode "the absolute value of the relative distances to some special tokens". The experiment result (Figure 6 in the link) shows that RPE-Absolute achieves length generalization in AdditionMod10. This suggests that our position embedding design approach is adaptable to different domain knowledge.
3. *Regarding OOD and LDHD. I believe the paper's analysis do not explain why we observe (surprising) LDHD generalization for several other problems without any special domain knowledge...*
(a) Domain knowledge can be implicitly encoded via architecture design, training algorithms, data distributions, preprocessing strategies, etc. Even if domain knowledge is not injected explicitly, these prior choices reflect inductive biases tailored to the task.
(b) "...despite <10% representation of non-english data in pretraining"
It is important to distinguish low representation from zero representation. Prior work [1] shows that adding even a very small amount of long sequences to the training set can dramatically improve length generalization, which is called priming. From our perspective, this is because the presence of long sequences, even in small quantities, prevents the model from learning an overly simple interpolator that fails to extrapolate. For example, in digit-wise addition, if the training set only includes 3-digit numbers, the model might learn to ignore digits beyond the third. However, including a few 5-digit samples forces the model to also consider later digits, improving its extrapolation ability.
(c) While this work is mainly focused on length generalization, it is an interesting future direction to study how the compositional generalization can be understood from the perspective of LDHD generalization.
[1] Jelassi, S. et al., 2023. Length generalization in arithmetic transformers. arXiv preprint arXiv:2306.15400.
4. *The writing of the paper needs much improvement...*
We will carefully proofread our submission in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. My concerns are well-addressed. I suggest the authors mention the point about generality and differences in zero vs low representations in the revised paper. I am updating my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reconsidering the score and for the helpful suggestions. As recommended, we will discuss the point regarding generality and the distinctions between zero and low representations in the revised manuscript. | null | null | null | null | null | null |
Drug-TTA: Test-Time Adaptation for Drug Virtual Screening via Multi-task Meta-Auxiliary Learning | Accept (poster) | Summary: The paper introduces Drug-TTA, a novel framework for drug virtual screening that incorporates test-time adaptation through multi-task meta-auxiliary learning. The authors build upon a contrastive learning paradigm by integrating a series of self-supervised auxiliary tasks-three fine-grained tasks (masked atom type prediction, corrupted coordinate reconstruction, and distance prediction), and two coarse-grained tasks (KL divergence minimization and SimCLR contrastive learning)-to dynamically adjust the model during inference. Central to their approach is the multi-scale feature-based Auxiliary Loss Balance Module (ALBM), which adaptively computes loss weights for each auxiliary task based on features extracted from different layers of the encoder. Moreover, a meta-learning strategy is employed to harmonize the auxiliary tasks with the primary virtual screening objective, ensuring that the test-time adaptations improve the main task without causing overfitting. Extensive experiments across five zero-shot virtual screening benchmarks demonstrate that Drug-TTA significantly outperforms existing methods, particularly showing notable improvements in AUROC, BEDROC, and enrichment factor metrics.
Claims And Evidence: The claims are supported by experimental results showing improvements in AUROC, BEDROC, and enrichment factors across multiple benchmarks. However, the paper lacks efficiency analysis, which is crucial for assessing practicality, particularly in large-scale applications. Additionally, the generalization of the approach across diverse datasets and real-world scenarios is not sufficiently validated, weakening the overall claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are appropriate for the problem at hand.
The evaluation criteria, including performance metrics such as AUROC, BEDROC, and enrichment factor, are standard and relevant for assessing the effectiveness of virtual screening methods.
Theoretical Claims: The paper does not include any formal proofs or rigorous theoretical claims that would require verification.
Experimental Designs Or Analyses: The experimental designs and analyses appear sound, with a clear comparison of Drug-TTA against baseline methods on multiple benchmark datasets. The metrics used, including AUROC, BEDROC, and enrichment factor, are standard and relevant for evaluating virtual screening methods.
However, there is no detailed analysis of the method's efficiency or scalability, which leaves questions about its practical deployment in large-scale settings unanswered.
Supplementary Material: I reviewed the supplementary materials and paid attention to the detailed description of the auxiliary task losses, ablation experiments, hyperparameter settings, and evaluation metrics. These contents are helpful to understand the methods and experimental results in the paper.
Relation To Broader Scientific Literature: The Drug-TTA approach proposed in this paper echoes existing scientific literature in several aspects. First, the DrugCLIP framework significantly improves the performance of virtual screening by redefining virtual screening as a dense search task and using contrastive learning to align protein pockets and molecular representations.(https://arxiv.org/abs/2310.06367?utm_source=chatgpt.com)
In addition, the Point-TTA method adopts a test-time adaptation strategy in the point cloud registration task and designs three self-supervised auxiliary tasks to adapt to the new data distribution in the testing phase.(https://arxiv.org/abs/2308.16481?utm_source=chatgpt.com)
These studies provide theoretical foundations and practical experience for the Drug-TTA method, demonstrating the effectiveness of adaptive and contrastive learning strategies when applying tests in different tasks.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
S1. The paper presents an innovative integration of test-time adaptation with multi-task meta-auxiliary learning, creatively combining contrastive learning with several self-supervised tasks. The proposed method addresses the domain shift between training and test data, which is a significant issue in virtual screening.
S2. It demonstrates significant performance improvements on multiple virtual screening benchmarks.
S3.The framework builds on recent advances (e.g., DrugCLIP, TTA) and addresses the challenging zero-shot setting in drug screening.
Weaknesses:
W1. The losses L_NX and L_NP appear in the framework diagram and loss function but are not explicitly introduced in the methodology. This omission makes it difficult for readers without prior knowledge to fully understand their role in the model.
W2. The ablation study table, table 6, does not clearly define what each experiment represents. For example, “w/o-L_kl” should be explicitly noted as “Drug-TTA without the KL loss” to improve clarity.
W3. The paper does not include any analysis of computational efficiency, scalability, or model complexity, which are crucial for assessing real-world applicability.
W4. The authors do not provide code or implementation details, making it difficult to verify whether the method can be reproduced in practice.
Other Comments Or Suggestions: None
Questions For Authors: Please address the concerns raised in W1–W4.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your thoughtful comments. We sincerely appreciate your recognition of the novelty and significant performance of our method. Below, we provide detailed responses to your questions and concerns.
## W1: Losses $L_{NX}$ and $L_{NP}$ explanation
We sincerely apologize for the insufficient explanation of $L_{NX}$ and $L_{NP}$. Below, we provide a more detailed clarification of these loss terms.
Our encoder follows the overall framework of Uni-Mol, and the design of $L_{NX}$ and $L_{NP}$ is aligned with its pretraining tasks. Specifically, $L_{NX}$ is introduced to normalize masked atom features. Given a molecular representation $X \in \mathbb{R}^{B \times T \times d}$, where $B$ is the batch size, $T$ is the number of atoms, and $d$ is the feature dimension, the norm constraint is formulated as:
$
\mathcal{L}_{NX} = \max(0, ||x||_2 - \sqrt{d} - \tau),
$
where $||x||_2$ ensures numerical stability, and $\tau$ is a tolerance margin. This constraint prevents masked atom features from deviating excessively from the expected norm, thereby stabilizing feature learning and preserving consistency across different masked regions.
Similarly, $L_{NP}$ is applied to pairwise relational representations, ensuring that the learned attention patterns do not fluctuate excessively when certain atoms are masked. This is crucial because molecular graphs inherently depend on structural constraints, and unnormalized pairwise features may lead to unstable or biased attention distributions.
By enforcing these norm constraints, our approach ensures that both atomic-level and relational representations remain well-calibrated, leading to more robust and generalizable molecular embeddings. We will revise the manuscript to incorporate this explanation and appreciate your insightful comments.
## W2: Table 6 revision
Thank you very much for carefully reviewing our Appendix. We sincerely apologize for any lack of clarity in our original statement. Our revised version is as follows:
- Drug-TTA without coarse-grained tasks
- Drug-TTA without fine-grained tasks
- Drug-TTA without the KL loss
- Drug-TTA without the SimCLR loss
- Drug-TTA without the masked atom type construction loss and atom feature normalization loss
- Drug-TTA without the corrupted coordinate reconstruction loss, distance reconstruction loss, and pairwise normalization loss
- Drug-TTA without the corrupted coordinate reconstruction loss, distance reconstruction loss, atom feature normalization loss and pairwise feature normalization loss
- Drug-TTA without the atom feature normalization loss and pairwise feature normalization loss
- Drug-TTA without the masked atom type construction loss, atom feature normalization loss and pairwise feature normalization loss
## W3: Concern regarding computational efficiency, scalability, or model complexity
Thank you for raising this question. We apologize for not providing a detailed explanation in the paper. To address this issue, we conduct additional experiments comparing the memory cost and inference time of Drug-TTA and DrugCLIP under the same conditions (i.e., on an RTX 3090 GPU with a batch size of 64).
- **Memory Cost**: Compared to DrugCLIP, Drug-TTA increases memory usage from 1313 MiB to 1805 MiB for the molecule branch per batch and from 1232 MiB to 1627 MiB for the pocket branch (with a single target). Despite this increase, our model can still maintain a batch size of 64, ensuring training efficiency remains unaffected. Moreover, given that a standard 24 GB (24,576 MiB) GPU is widely accessible to researchers, this additional memory consumption remains well within practical limits and does not bring computational bottleneck.
- **Inference Time**: Based on the average inference time under our experimental setup on the DUD-E benchmark, Drug-TTA requires 2.1 days for virtual screening at a practical scale (100 million molecules for a single target), compared to 0.8 days for DrugCLIP. While TTA does introduce additional computation time due to model adaptation during inference, this overhead is insignificant in the context of the overall drug discovery timeline, which spans years (including both in silico and wet-lab experiments). More importantly, the substantial performance improvement brought by Drug-TTA significantly reduces the trial-and-error burden in wet-lab experiments, making the additional computational time worthwhile.
In practice, given the relatively low hardware cost of our approach, inference time can be further reduced with minimal investment in computational resources if needed. We will incorporate this discussion on computational cost and inference time in the final version of the paper. Once again, we sincerely appreciate your interest in the practical applicability of our method.
## W4: Code release
We assure you that upon acceptance, we will release our original code and model weights to facilitate reproducibility. | Summary: The authors pinpoint two issues in ML-based structure-based drug discovery: a lack of negative pairs for ML-based docking methods and overly simplistic negative pairs in contrastive learning approaches (e.g., DrugClip), leading to a domain shift during inference when most screened molecules are inactive.
To mitigate this, they propose a test-time adaptation pipeline centered on DrugClip that updates the encoders using multiple auxiliary tasks, where the weight of each task is adjusted per input, and using meta-learning to prevent overfitting to these auxiliary tasks.
The authors show that their methods, Drug-TTA, significantly improved over their baseline method DrugCLIP and is the new SOTA.
Claims And Evidence: Evidence for the claims is clear and convincing.
Methods And Evaluation Criteria: The proposed method was evaluated on established virtual screening benchmarks using standard metrics, and compared against both well-established and previous state-of-the-art approaches.
However, the evaluation does not include a comparison to the recent SPRINT [McNutt'24] method, which is, to my knowledge, the current SOTA for LIT-PCBA.
[McNutt'24] SPRINT Enables Interpretable and Ultra-Fast Virtual Screening against Thousands of Proteomes, http://arxiv.org/abs/2411.15418
Theoretical Claims: Application paper, doesn't contain proof.
In general, didn't check the correctness of the formulas.
Experimental Designs Or Analyses: - In general, the experimental design is sound using and their analysis seems sound.
- Furthermore, the proposed methods is quite complex with many components and the authors have done a systematic ablation study on each of the components.
- However, not a single value in the paper comes with an error bar giving limited significance to the results.
I'm very aware this is systematic problem with the community, and these error bars are never reported on these VS benchmarks, but you can do better.
I'm not asking you to produce triplicates for all of your experiments, but I would like to see replicates and the analysis for at least one of the datasets (e.g., CASF-2016).
Supplementary Material: No
Relation To Broader Scientific Literature: Building on prior work in virtual screening (VS), test-time adaptation (TTA), and meta-learning, the proposed approach, Drug-TTA, enhances generalizability by: (1) introducing TTA to drug virtual screening, and (2) achieving state-of-the-art performance on several VS benchmarks.
Essential References Not Discussed: No
Other Strengths And Weaknesses: - S1: Paper is generally well-written.
- S3: Figure 1 very nice and gives an excellent overview of the method!
- W1: With virtual screening libraries like GDB-17 and Enamine Real now containing billions of compounds, the efficiency of VS methods is more critical than ever.
While Drug-TTA shows promise, it introduces additional complexity over DrugClip by incorporating extra parameters and performing parameter updates during inference.
A direct comparison of parameter counts and inference times between DrugClip and Drug-TTA would be invaluable for assessing the practical efficiency of the proposed method.
Other Comments Or Suggestions: - C1: In figure 3, finding the pocket representation and the contrast between the actives and inactives is not very clear.
Maybe increase the size of pocket representation dot and change the color scheme.
- C2: In the introduction maybe clarify that your are talking about *structure-based* virtual screening.
- C3: Values in the Fig 4. heatmap are way too small.
Also is 4 decimals really required in a illustration?
Questions For Authors: - Q1: As I understand the text and equation in 3.2, the concatenated features directly are passed to the seven different MLPs, but in Figure 1 it seems that the concatenated features are first passed in to a common MLP before having a separate MLP for each task.
Can you clarify on this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your thoughtful and thorough feedback. We truly appreciate your recognition that the evidence for the claims is clear and convincing, as well as your positive comments on our writing and figures. Below, we will address your questions and concerns in detail.
## Methods and Evaluation Criteria: additional comparison method
Thank you for the suggestion. We will discuss this method and cite the new arXiv paper in the final version. While our method does not surpass SPRINT [1] in AUROC on LIT-PCBA, it achieves improvements in BEDROC and EF. We will incorporate this comparison accordingly.
Performance Comparison:
|Method| AUROC(%)| BEDROC(%)|EF 0.50% |EF 1%|EF 5%|
|-|-|-|-|-|-|
|DrugCLIP|57.17|6.23| 8.56 | 5.51| 2.27|
|Drug-TTA|71.24| **45.08**| **74.39**|**42.74**|**10.61**|
|SPRINT[1]| **73.4**|12.3|15.90|10.78|5.92|
[1] [McNutt'24] SPRINT Enables Interpretable and Ultra-Fast Virtual Screening against Thousands of Proteomes, http://arxiv.org/abs/2411.15418
## Experimental Designs or Analyses: replicates on all datasets
Thank you very much for raising this question. We conduct additional testing of our method on five benchmarks, and the results are averaged over three repetitions. Below are the mean and standard deviation for each benchmark:
|Benchmark| AUROC(%)| BEDROC(%)| EF 0.50%| EF 1%| EF 5%|
|-|-|-|-|-|-|
|DUD-E|93.15±0.01|82.72±0.27|57.47±0.09|53.90±0.13|16.85±0.06|
|LIT-PCBA|71.19±0.19|44.87±0.06|74.26±0.24|42.44±0.37|10.43±0.03|
|AD|92.68±0.08|86.80±0.10|32.56±0.03| 30.56±0.06|15.31±0.03|
|DEKOIS2.0|83.43±0.24|73.30±0.31|26.63±0.28|25.43±0.21|12.94±0.11|
|CASF-2016|91.82±0.08|86.01±0.25|42.84±0.12|41.49±0.19|16.05±0.02|
## W1: Concern regarding inference time and computational burden
Thank you for raising this question. We apologize for not providing a detailed explanation in the paper. To address this issue, we conduct additional experiments comparing the memory cost and inference time of Drug-TTA and DrugCLIP under the same conditions (i.e., on an RTX 3090 GPU with a batch size of 64).
- **Memory Cost**: Compared to DrugCLIP, Drug-TTA increases memory usage from 1313 MiB to 1805 MiB for the molecule branch per batch and from 1232 MiB to 1627 MiB for the pocket branch (with a single target). Despite this increase, our model can still maintain a batch size of 64, ensuring training efficiency remains unaffected. Moreover, given that a standard 24 GB (24,576 MiB) GPU is widely accessible to researchers, this additional memory consumption remains well within practical limits and does not bring computational bottleneck.
- **Inference Time**: Based on the average inference time under our experimental setup on the DUD-E benchmark, Drug-TTA requires 2.1 days for virtual screening at a practical scale (100 million molecules for a single target), compared to 0.8 day for DrugCLIP. While TTA does introduce additional computation time due to model adaptation during inference, this overhead is insignificant in the context of the overall drug discovery timeline, which spans years (including both in silico and wet-lab experiments). More importantly, the substantial performance improvement brought by Drug-TTA significantly reduces the trial-and-error burden in wet-lab experiments, making the additional computational time worthwhile.
In practice, given the relatively low hardware cost of our approach, inference time can be further reduced with minimal investment in computational resources if needed. We will incorporate this discussion on computational cost and inference time in the final version of the paper. Once again, we sincerely appreciate your interest in the practical applicability of our method.
## C1: Redrawing Figure 3
Thank you for your suggestion regarding our visualizations. We redraw Figure 3 with an increased size of the pocket representation, improved color schemes, and clearer markers for better readability. Since images cannot be uploaded here, please refer to the updated figure at https://pasteboard.co/Up1mGfvCKsYN.bmp.
## C2: Clarification on structure-based virtual screening
Thank you for your suggestion. We will clarify that we are talking about *structure-based* virtual screening in the Introduction.
## C3: Value size and decimal explanation
We appreciate your feedback. We redraw Figure 4 and enlarge the values in the middle heatmap, as shown in https://pasteboard.co/chAPEluE4BNN.bmp. The four-decimal precision is chosen to better show the differences in sample weights.
## Q1: Clarification of framework diagram
Thank you for your thorough review. We sincerely apologize for any misunderstanding caused by the design of our main framework diagram. We will clarify this in the paper. The common MLP is just employed to project the concatenated three-layer features (512 × 3 = 1536) into 128 dimensions. The subsequent seven MLPs are specifically designed to accommodate seven auxiliary losses. We will update our description in the Method. | Summary: The paper introduces Drug-TTA, a novel test-time adaptation (TTA) framework for drug virtual screening that dynamically adjusts a pre-trained model to each test instance. Unlike traditional zero-shot screening methods, Drug-TTA utilizes self-supervised auxiliary tasks to adapt its protein and molecule encoders at inference time, enhancing generalization to unseen data. A key innovation is the multi-scale feature-based Auxiliary Loss Balance Module (ALBM), which optimally balances auxiliary task losses. Additionally, meta-learning ensures that adaptation enhances rather than hinders the primary task. Drug-TTA significantly outperforms state-of-the-art (SOTA) methods across five benchmarks (DUD-E, LIT-PCBA, AD, DEKOIS 2.0, CASF-2016), with an average AUROC improvement of 9.86%, demonstrating its effectiveness in zero-shot-like virtual screening while not being strictly zero-shot due to test-time updates.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Not available.
Experimental Designs Or Analyses: Yes, but there seems to be a difference in evaluation settings with comparison baselines. The baseline models are zero-shot settings, but strictly speaking, Drug-TTA does not seem to be a zero-shot setting (due to Algorithm 2).
Supplementary Material: Yes
Relation To Broader Scientific Literature: In the case of molecules, it is difficult to find good negative samples, and we want to address this through test-time adaptation. This seems to have high utility not only in virtual screening but also in other molecular property prediction fields.
Essential References Not Discussed: Related works are properly discussed.
Other Strengths And Weaknesses: [Strengths]
- Incorporating five self-supervised auxiliary tasks (both fine-grained and coarse-grained) allows the model to learn useful representations without labeled binding affinity data, making it adaptable and efficient.
- Drug-TTA outperforms existing methods across five major virtual screening benchmarks, achieving an average AUROC improvement of 9.86% over DrugCLIP, demonstrating its superior accuracy in ranking active molecules.
[Weaknesses]
- While it operates in a zero-shot setting, Drug-TTA modifies model parameters during test-time inference, which contradicts the strict definition of zero-shot learning, making it less interpretable in direct zero-shot comparisons.
- Test-time adaptation requires multiple forward and backward passes per test instance, increasing inference time and computational burden compared to traditional zero-shot methods like DrugCLIP.
- The paper reports a significant improvement in performance on the LIT-PCBA benchmark, particularly in EF (Enrichment Factor) metrics, where Drug-TTA achieves an order-of-magnitude improvement over DrugCLIP. However, the paper does not provide a thorough discussion on why Drug-TTA performs exceptionally well on LIT-PCBA compared to other benchmarks.
Other Comments Or Suggestions: The table captions are poor and need to be improved.
Questions For Authors: Please see the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your thoughtful feedback. We greatly appreciate your recognition of our innovation and the practical significance of our work. Below, we provide detailed responses to your concerns.
## W1: Concern regarding "Zero-Shot" strict definition
We sincerely appreciate your comment. You are correct that our method involves updating model parameters during testing, which does not align with the definition of zero-shot learning. Nevertheless, as you commented, our method does work in a zero-shot-like setting. Because of the very similar working condition, some recent TTA [1-3] works also define methods like ours, where no labeled samples are used during testing, as zero-shot. We will clarify the difference in revision.
We acknowledge that our comparison methods do not adjust model parameters at test time, which may raise concerns about fairness. However, our input conditions remain identical across all methods, as we do not use any labeled test samples. Moreover, since our work is the first to introduce TTA in this task, there are no directly comparable baselines. Given these constraints, we compare against the most relevant existing approaches. We appreciate your insightful remark and will refine the description of zero-shot learning in the final version of our paper.
### References:
[1] Liberatori B, Conti A, Rota P, et al. Test-time zero-shot temporal action localization. CVPR 2024.
[2] Zhao S, Wang X, Zhu L, et al. Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ICLR 2024.
[3] Aleem S, Wang F, Maniparambil M, et al. Test-time adaptation with SALIP: A cascade of SAM and CLIP for zero-shot medical image segmentation. CVPR 2024.
## W2: Concern regarding inference time and computational burden
We appreciate your concern regarding inference time and computational burden of our method. Due to response length constraints, we kindly refer you to our response to Reviewer doCT(W1), where we provide a detailed analysis.
## W3: Performance analysis on LIT-PCBA
Thank you for carefully reviewing our experimental results and for your valuable suggestions. We supplement our analysis with additional experimental and visualization results as follows.
LIT-PCBA is a highly challenging task due to its extreme class imbalance, with only 0.74% of the screened molecules being active. This makes all methods have much lower performance on this dataset than on the other benchmarks and leaves much room for improvement. Drug-TTA, with its ability to adapt to each sample, is particularly well-suited to handling such challenging scenarios.
Specifically, in the figure of https://pasteboard.co/Go21VXS5RV5H.bmp, we visualize the feature distributions for the MTORC1 target on the LIT-PCBA benchmark using both Drug-TTA and DrugCLIP. The visualization clearly shows that positive molecules are positioned closer to the pocket features in Drug-TTA, resulting in higher ranking scores for these molecules. Consequently, this enhances the early retrieval capability, which explains the substantial improvements in BEDROC and EF—metrics that are highly sensitive to early ranking performance.
Furthermore, we summarize the performance improvements observed for several targets:
| Target | Active Molecule Proportion | AUROC (Drug-TTA) | BEDROC (Drug-TTA) | AUROC (DrugCLIP) | BEDROC (DrugCLIP) |
|-|-|-|-|-|-|
| **MTORC1** | 0.29% | 98.81% | 48.38% | 67.38% | 1.74% |
| **PKM2** | 0.22% | 93.71% | 59.31% | 74.28% | 0.90% |
| **FEN1** | 0.10% | 98.35% | 58.36% | 87.51% | 1.79% |
From the table, it is evident that Drug-TTA is more robust to extreme class imbalance. This adaptation significantly enhances early retrieval performance.
We must **clarify that** our performance on LIT-PCBA only surpasses the performance on DUD-E in one metric, EF (0.50%). This is because EF focuses solely on the top-ranked molecules, and given the low proportion of active molecules in the dataset, EF can be more easily improved as active molecules become more concentrated at the top. However, regarding AUROC and BEDROC, which are influenced by the overall ranking, the performance on LIT-PCBA is lower compared to other benchmarks.
## Other Comments or Suggestions: Table captions need improvement
Thank you for your comment on the table caption. Since we are unable to modify the original manuscript at this stage, we will make the revisions in the final version of the paper. Specifically, we plan to add the following clarifications: In Table 123: "AUROC, BEDROC, and EF are reported (higher values indicate better performance). Bold values represent the best performance, and green indicates improvements of Drug-TTA over DrugCLIP." In Table 4: "*"w/o"* indicates the removal of the respective component compared to Drug-TTA." | Summary: This paper introduces Drug-TTA, a novel test-time adaptation (TTA) approach for drug virtual screening that leverages multi-task meta-auxiliary learning to adapt the model to each test instance. Drug-TTA incorporates a large number of self-supervised auxiliary tasks into both training and inference processes and proposes an Auxiliary Loss Balance Module (ALBM) based on multi-scale features to dynamically adjust the weights of auxiliary tasks during inference. Extensive experiments demonstrate its effectiveness.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: This paper does not include much theoretical part.
Experimental Designs Or Analyses: Yes
Supplementary Material: The supplementary material is the same as appendix.
Relation To Broader Scientific Literature: Refer to the strengths and weakness.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. The proposed method significantly outperforms the baseline DrugCLIP on multiple datasets.
2. The experimental section is comprehensive.
3. The writing is clear, particularly with the well-illustrated framework diagram.
Weakness:
1. Parameter fine-tuning is required during the TTA (Test-Time Adaptation) phase, and new targets all need to be re-fine-tuned, which may incur additional computational overhead.
2. I'm a bit puzzled by the analysis of Figure 5. Could the authors provide a more detailed analysis and explanation?
3. It seems that there are numerous hyperparameters to design and many tricks involved. Although the authors conducted ablation studies on almost every module, I found that the performance drops significantly when one setting is missing, which is somewhat puzzling. On one hand, this indicates that the workload is substantial. However, on the other hand, it may increase the difficulty of reproducing the results.
4. As is well known, the Lit-PCBA dataset is relatively challenging. Why does the proposed method achieve such significant performance improvements on the Lit-PCBA dataset compared to other baselines, and even surpass the performance on simpler tasks like DUD-E? I believe more analysis is needed to explain why the method works so well in this context.
Other Comments Or Suggestions: NA
Questions For Authors: I have a few detailed questions:
1. How did you consider adjusting the normalization layers in the encoder using auxiliary tasks during test time? What would happen if all parameters were fine-tuned?
2. How long does it take to fine-tune during the test phase, and is it necessary for the loss of the auxiliary task to converge?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your thoughtful feedback. We greatly appreciate your recognition of the novelty and effectiveness of our work, as well as your positive remarks on our writing and presentation. In response to your comments, we conduct additional experiments and provide further visualization analysis. Below are our point-by-point responses to your concerns.
## W1&Q2 (part 1): Concern regarding additional computational overhead and inference time
We appreciate your concern regarding computational overhead and inference time of Drug-TTA. Due to response length constraints, we kindly refer you to our response to Reviewer doCT(W1), where we provide a detailed analysis.
## W2: More detailed analysis and explanation of Figure 5
Thank you for carefully reviewing our appendix. As shown in Figure 2(b), our ALBM concatenates features from the molecule encoder's top, middle, and last layers and uses the concatenated features to calculate weights for each loss to balance the auxiliary losses. Figure 5 illustrates the feature space distribution of molecules at different encoder layers (top, middle, and last) during virtual screening across different targets. Our visualization highlights the diversity in molecular feature representations at different layers, demonstrating the necessity of fusing features from the three layers in ALBM. This observation is also consistent with the results of our ablation study in Table 4, which demonstrates that fusing features from all three layers is essential for optimal performance.
## W3: More detailed analysis of ablation study results
Thank you for acknowledging the substantial workload of our ablation studies. We understand your concern regarding the significant performance drop when removing certain components. This is because the key components of Drug-TTA are highly interdependent, with each component playing a crucial role. Below, we provide a detailed explanation:
- **Auxiliary Branch Selection**: Performing TTA on only one branch (either molecule or pocket) leads to a mismatch in feature adaptation, as the unadapted branch struggles to align within the shared feature space. Thus, TTA is necessary for both the molecule and pocket branches.
- **Multi-Scale Feature Layer Selection**: Our framework is designed to comprehensively capture molecular features across multiple layers. While removing any single layer leads to a performance drop, the model still outperforms DrugCLIP, demonstrating the robustness of our multi-scale feature design.
- **Self-Supervised Task Selection**: Self-supervised tasks are crucial for learning both fine-grained and coarse-grained molecular representations, where coarse-grained features aggregate fine-grained representations. Removing the fine-grained feature learning task results in suboptimal performance improvement.
- **Weight Regularization & Meta-Learning**: Weight regularization prevents model collapse during training. Meta-learning is the core mechanism that prevents overfitting to auxiliary tasks. Removing these components leads to a significant performance drop, highlighting their necessity.
Overall, most ablation settings still outperform DrugCLIP, validating the effectiveness of our design. We assure you that upon acceptance, we will release our original code and model weights to facilitate reproducibility.
## W4: Performance analysis on LIT-PCBA
We appreciate your concern regarding the performance analysis on LIT-PCBA. Due to response length constraints, we kindly refer you to our response to Reviewer neHb(W3), where we provide a detailed analysis.
## Q1: How did you consider adjusting the normalization layers in the encoder using auxiliary tasks during test time? What would happen if all parameters were fine-tuned?
Thank you for your question. We conduct an additional experiment with full fine-tuning, and the results on the DUD-E dataset are shown below. Fine-tuning all parameters with limited data leads to instability due to the large parameter space and distribution shift, causing gradient explosion or model collapse. This ultimately degrades performance, as reflected in our results. Therefore, we consider adjusting the normalization layers, as in other TTA methods.
|Method|AUROC(%)|BEDROC(%)|EF 0.50%|EF 1%|EF 5%|
|-|-|-|-|-|-|
|**DrugCLIP**|80.93|50.52|38.07|31.89|10.66|
|**Drug-TTA**|93.16|82.82|57.50|54.04|16.88|
|**All parameters fine-tuned**|64.34|11.32|7.20|5.97|3.81|
## Q2(Part 2): Is it necessary for the loss of auxiliary task to converge?
Thank you for your question. Our method does not require the auxiliary task loss to converge, as the model only needs to perform a single adjustment during testing. This aligns with the common practice in TTA, where most methods apply only one or a few updates rather than waiting for full convergence. This design ensures efficiency and generalizability, preventing excessive computation and overfitting to individual test samples. | null | null | null | null | null | null |
Don't Restart, Just Reuse: Reoptimizing MILPs with Dynamic Parameters | Accept (poster) | Summary: The paper presents a new heuristic to find primal solutions of mixed-integer linear problems (MILP). The approach is based on predicting a distribution for the value of binary variables and a multi-armed bandit approach for iterative variable fixing. The paper also considers new features for the graph-embedding of MILPs based on values at leaf nodes of historical optimal solutions. Experiments on instances of the 2023 MIP Competition measure the time to find a feasible solution as well as the relative gap for the method and several baselines.
Claims And Evidence: The paper claims to provide significant improvement in times to find feasible solutions. I have major concerns regarding the experiment setting and overall presentation of the results, which follows from an ambiguous framing of the paper’s goal and results.
First, two of the four baselines are general MILP solvers (SCIP and Re_Tuning). These methods are not only aiming to find feasible solutions, but *proving their global optimality*. It is well-known that the latter is a much harder task than the former. The comparison with these two baselines is biased since it is assumed that the optimal value OPT is known in advance, but SCIP and Re_Tuning are spending the majority of their computational budget to find OPT and prove that it is global lower bound. The authors’ should focus primarily on baselines that are also primal heuristics.
Second, the title of the paper is “Don’t restart, Just reuse”, but they do not include the very simple baseline that “just reuses” past feasible solutions. They consider warm start approaches, but only based on previous *optimal* solutions. They should consider adding the baseline that stores all feasible solutions of past problems, sort them by objective value for the new problem, and verifies whether they are feasible. Note that such an approach is often done by default by commercial solver (e.g., Gurobi calls this the “solution pool”).
Third, the paper lacks an ablation study that clearly shows the value of the different components of the final algorithm. The small experiment in Appendix C.5 focuses only on variable prediction. The ablation study should allow to critically assess the value for finding feasible solutions of (a) the new features considered for the historical problem embeddings, (b) using a GNN for initial variable prediction, and (c) the final Thomson sampling algorithm.
Because of the above concerns, I find that the experiments do not support the paper’s claims.
EDIT (Post-Rebuttal): Since the above concerns have been addressed in the rebuttal, I have raised my score from 2 to 4.
Methods And Evaluation Criteria: See above.
Theoretical Claims: The paper is mostly experimental, which is fine given the topic. Still, there is a theoretical claim in Appendix B regarding the relaxation mechanism: “theoretically, with enough iterations, we can ensure that the variables causing conflicts with the constraints are filtered out.” This is an interesting property that should be stated and proven formally.
Experimental Designs Or Analyses: See “Claims and Evidence”.
Supplementary Material: I reviewed the Supplementary Material in detail: the implementation of the algorithm and baselines, as well as additional experiment results.
Relation To Broader Scientific Literature: The current scope of the paper is ambiguous and broader than what is actually done. The paper presents itself as contributing to general MILP reoptimization. It counts “solved” instances whereas it is only about finding feasible solutions, and includes general MILP solvers as baselines. Yet, the core focus of the paper is presenting a new primal heuristic for reoptimization. The focus is only on feasibility. The authors should better frame their contribution in the title / introduction, and includes relevant benchmarks: primal heuristics, rather than general solvers.
Essential References Not Discussed: The choice of baseline method is insufficient. The experiments should include general primal heuristics such as (among others) the ML-guided LNS of Huang et al. (2024), which is cited in the paper.
Other Strengths And Weaknesses: While the above review is rather critical, I find the overall approach interesting and look forward to seeing it presented in a more comprehensive fashion. In particular, I find the relaxation mechanism as well as the update algorithm for Thomson sampling interesting and valuable.
Other Comments Or Suggestions: Please adapt the terminology used when presenting the results. The general understanding is that an MILP is “solved” when a feasible solution is found *and* it is proven to be globally optimal. The current experiments measure the time to find a feasible solution.
Section 4 is quite dense in terms of detail, but does not provide a high-level overview of the algorithm. It could be useful to add a pseudo-code algorithm to show the key steps of the algorithm. It is also important to specify that the algorithm is only run once, and not integrated in a branch-and-bound process as is typical of a MILP solver. A major difference is that a general MILP solver will run many different primal heuristics at different times in the solving process.
Questions For Authors: - How did you setup SCIP? Which variation of the baselines (SCIP vs. SCIP-WS) has access to historical optimal or feasible solutions? Did you enable the reoptimization module?
- Why do you specify that the ML-based methods are end-to-end? In what ways are they end-to-end and why is it an advantage? Is your method end-to-end?
- What is the loss function used to train the GNN? Do you propagate a gradient through the sampling step? Why/ why not?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and suggestions.
## Response to Weakness 1
**Baselines**: We will clarify in the introduction that our method primarily focuses on primal heuristics.
However, we believe that comparing our approach with general MILP solvers like SCIP and Re_Tuning is still essential. These solvers are commonly used benchmarks within the field. For instance, heuristic methods by Han et al. and Huang et al. also use SCIP as one of their baseline comparisons.
## Response to Weakness 2 and Question 1
**Warm start baseline**: The baseline Re_Tuning includes a similar approach. It specifically addresses cases with minor problem changes by incorporating heuristic design methods in various SCIP solving modules. In the presolving module, it stores all past feasible solutions and constructs partial solutions based on variable-value pairs that are consistent across a high percentage of previous solutions. In SCIP, a heuristic called completesol captures these hints to find feasible solutions by solving a sub-MIP.
**SCIP-WS**: In Appendix C.7, we report the effectiveness of "SCIP-WS", which solely utilizes the solution hints from Re_Tuning to guide the completesol heuristic, without employing other heuristic methods.
**SCIP settings**: The SCIP setup uses all default settings without disabling any features, including the default preprocessing and reoptimization modules.
## Response to Weakness 3
**Ablation experiment**: We conduct an ablation experiment using Re-GNN for initial solution prediction in LNS and traditional GNN with the Thompson Sampling (TS) algorithm. We report the average gap_rel, where "inf" means no feasible solution was found. Re-GNN improves performance over traditional GNN in the LNS framework. The method combining traditional GNN with TS also quickly finds feasible solutions. The combined approach, VP-OR, achieves the best results across all datasets.
| | bnd_1|bnd_2|bnd_3|mat_1|obj_1|obj_2|rhs_1|rhs_2|rhs_3|
|---|--|--|---|---|----|---|--|--|--|
| GNN+LNS(PS) |0.81|inf|inf|0.23|0.00|0.51|193.04|0.00|0.00|
| Re-GNN+LNS |0.16|inf|inf|0.23|0.00|0.21|1.04|0.00|0.00|
| GNN+TS |0.04|0.17| 0.12|0.17|0.00|0.00 |0.00|0.00|0.00|
| Re-GNN+TS(VP-OR) |0.02|0.11| 0.06|0.16|0.00|0.06|0.00|0.00|0.00|
## Response to Weakness 4
We expand our experiments to include large-scale MILP experiments and add a comparison with the ML-guided LNS method of **Huang et al.** (ConPas).
**Large-scale MILP experiments**: We generate large-scale datasets IS and CA using the code from Gasse et al., consistent with those used by Han et al. The Gurobi and SCIP solvers could not reach an optimal solution within 3600 seconds for these instances. We run Gurobi for 3600 seconds to record the incumbent solution. For evaluation, we impose a 30-second time limit and update the incumbent solution if a better one is found. VP-OR achieves more significant acceleration than PS and ConPas during the early stages of solving on both Gurobi and SCIP.
CA:
| | gap_abs| gap_rel | wins|
|---|---|---|---|
|SCIP|24068.27|0.19|0/100|
|Gurobi|3754.16|0.03|0/100|
|PS+SCIP|20182.91|0.19|0/100|
|PS+Gurobi|3754.16|0.03|0/100|
|ConPaS+SCIP|8506.52|0.07|0/100|
|ConPaS+Gurobi|2526.28|0.02|0/100|
|VP-OR+SCIP(Ours)|0.00|0.00|12/100|
|VP-OR+Gurobi(Ours)|**0.00**|**0.00**|**88/100**|
IS:
| | gap_abs| gap_rel | wins|
|---|---|--|--|
|SCIP|104.92|0.05|0/100|
|Gurobi|133.07|0.06|0/100|
|PS+SCIP|104.57|0.05|0/100|
|PS+Gurobi|117.62|0.06|0/100|
|ConPaS+SCIP|30.25|0.01|0/100|
|ConPaS+Gurobi|24.66|0.01|0/100|
|VP-OR+SCIP(Ours)|9.57|0.00|33/100|
|VP-OR+Gurobi(Ours)|**0.97**|**0.00**|**67/100**|
## Response to Question 2
The term **"end-to-end"** is based on Han et al.'s classification of ML efforts for optimization. End-to-end learning refers to using machine learning to predict solutions based directly on the input problems and their final results. This approach leverages the capabilities of machine learning to facilitate efficient problem-solving by directly linking the problem's structure and solution values. Our method can also be considered end-to-end.
## Response to Question 3
**The loss function**: We use a categorical cross-entropy loss function to train the GNN. For binary variables, the model predicts probabilities that are directly compared with the actual binary values to compute the optimal solution. For integer and continuous variables, they are first represented as 8-bit binary numbers using the method described in Section 4.1.2, which involves using the binary representation of the logarithmic value. These binary representations are then used as labels and compared with the predicted probabilities.
**The sampling step**: We do not propagate a gradient through the sampling step. We prioritize speed in each iteration and online training and inference during the sampling process would negatively affect solving efficiency.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answers and the new results provided. The clarifications and additional experiments address my concerns regarding (a) the choice of baselines, (b) use of past feasible solutions, and (3) clear ablation study. For these reasons, I raise my score from 2 to 4.
Still, I encourage the authors to rework the framing throughout the paper to emphasize that the focus is on finding primal solutions rather than proving global optimality. (For instance, avoid saying that an instance is solved if it is only about finding a feasible solution).
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comments and time. We appreciate your insightful feedback regarding the clarity of orientations in our work. In our revised manuscript, we explicitly emphasize that our work focuses on **"finding primal solutions"** throughout the paper. We carefully revise all instances of "solved" to avoid ambiguity.
We believe that these modifications, along with the new results, strengthen the overall quality of the manuscript. | Summary: This paper studies the opportunities of leveraging an existing solution to efficiently adapt to slight modifications in MILP constraints or objectives, thereby accelerating MILP reoptimization. To this end, it introduces Variable Prediction Online Refinement (VP-OR), a novel two-stage reoptimization framework. First, VP-OR learns to predict confidence bounds for each MILP variable, leveraging historical information and a GNN to induce a high-quality solution space. It then incorporates Thompson Sampling (using a Beta distribution) to dynamically adjust the confidence of selecting variables to be fixed during online refinement (optimization). Results show that VP-OR can yield much better performance when the time limits for reoptimization are small (i.e., restricted in 10s).
Claims And Evidence: Most of the claims are well supported with evidence. However, a few comments are:
1. In the introduction, the authors state that reusing branching strategies and adjusting parameters can save time. However, it also reads that these approaches do not reduce the overall size or complexity of the problem. Does this create a contradiction?
2. The claim, "However, predicting each binary bit’s actual value can lead to inaccuracies, causing overall prediction errors." (around line 131), should be better justified with supporting evidence or examples.
Also, the paper seems insufficiently polished, with lots of unclear statements and uncovered details regarding the methodology. Please refer to the following points for specific areas requiring clarification.
Methods And Evaluation Criteria: 3. The paper lacks clarity in presentation, which may hinder understanding of the soundness of the approach. Section 4.1 is difficult to follow, as it intermixes various aspects such as the training stage, inference stage, training data processing, and predicted output processing within the same discussion flow. It is suggested to separate these components into distinct paragraphs to improve readability.
4. The section 4 is also missing key details, including: Definition of network input/output, Network architexture, Training Method and framework, Loss function, Data generation, Inference algorithm. Without these, the methodology remains unclear and difficult to follow.
5. Particularly, the necessity of the binary representation as well as the logarithmic transformation is unclear. Further discussion, along with numerical evidence, would be helpful to justify these choices.
6. The method appears to be trained on a small set of instances. I have concerns on how well it generalizes to other MILP instances. This is important as we need to offset the training cost so that once the model is trained, it can address many online optimization instances. Also, would the current model show signs of overfitting on the small datasets? Will it leads to poor long-term performance or unexpected failures?
7. It remains unclear to what extent Thompson Sampling helps in mitigating infeasibility and early convergence to local optima. Further analysis and empirical validation would help in this regard.
Theoretical Claims: Not involved.
Experimental Designs Or Analyses: 8. First, the experiments do not appear extensive enough to demonstrate the robustness of the proposed methods. The results show mixed performance. I encourage the authors to evaluate the approach on more datasets and provide a clearer summary of its key benefits.
9. In Table 1, it would be better to also show: 1) To what extent are the confidence bounds accurate? 2) How does the accuracy differ between continuous and integer variables?
10. In Table 4 and appendix, the presentation of results involves many empty rows marked with '-' . This is not effective in terms of comparision with baseline. It would be clearer to report all results using convergence figures, similar to Figure 2, as the main comparisons.
Supplementary Material: I checked Appendix A and C.
Relation To Broader Scientific Literature: It is related to the optimization community.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Overall, the paper has several strengths. It addresses an important real-world problem relevant to industrial applications, and the proposed idea appears novel and conceptually sound. However, more effort is needed to refine the presentation, expand the experiments, and better summarize the key insights and takeaways beyond purely performance comparison.
Other Comments Or Suggestions: Please refer to the above. Correct me if I was wrong.
There are also lots of minor issues or typos: For example, Figure 2 is not referenced anywhere in the paper, making it unclear how it contributes to the discussion. In Table 4, there is an inconsistency where, in some cases, the absolute gap is larger while the relative gap is smaller.
Questions For Authors: * What is the rationale behind using binary representation and logarithmic transformation? Are there any experimental results supporting their effectiveness?
* Do lower and upper bounds vary across different types of variables? Are there observable patterns in how tight these bounds become after training and across differnt optimization stages? Do these patterns differ between continuous and integer variables?
* Are all eight GPUs used for both training and inference?
* Are there any repair mechanisms for infeasible solutions? Can Thompson Sampling adaptively learn to resolve infeasibility over time?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and suggestions.
## Response to Comment 1
There is no contradiction in the statement. Reusing branching strategies and adjusting solver parameters improve efficiency by saving time on generating the branching strategy and by enabling the solver to find better solutions faster. They focus on improving the way the problem is solved, not changing its size.
## Response to Comment 2
**The claim of the prediction accuracy:** Table 10 in the appendix provides evidence for our statement. We will move this table to the main text in the final version. It shows the number of incorrectly predicted variables for traditional Graph Neural Networks (GNN) and our approach (Re_GNN). In datasets bnd_2 and bnd_3, the table indicates that more than one-third of the continuous variables were predicted incorrectly.
| | bnd_1 | bnd_2 | bnd_3 |
|--|---|---|---|
| Total binary var. |1457.0 | 1457.0|1457.0|
| binary var. (GNN) |163.0 |45.1| 42.2 |
| binary var. (Re_GNN)|**8.2**|**6.7**|**4.5**|
| Total integer var. | 124.0 | 0.0 | 0.0|
| integer var. (GNN) | 33.4 |0.0 | 0.0|
| integer var. (Re_GNN)|**17.4**|0.0 | 0.0 |
| Total continuous var. |0.0 | 301.0 | 301.0 |
| continuous var. (GNN) |0.0 |140.2|121.0 |
| continuous var. (Re_GNN)| **0.0** |**0.0**|**2.0**|
## Response to Comment 3, 4
We will restructure the paper to separate the components into distinct paragraphs. Our approach builds upon the framework by Han et al., with modifications mainly in feature extraction and prediction algorithms for integer and continuous variables. The GNN architecture and training method remain the same as theirs. For data generation, we utilize the datasets provided by the existing reoptimization competition.
We use a categorical cross-entropy loss function to train the GNN. For binary variables, predictions are compared directly with actual binary values for computing the optimal solution. Integer and continuous variables are represented as 8-bit binary numbers following the method in Section 4.1.2, which uses the binary representation of the logarithmic value. These binary representations serve as labels for comparison with predicted probabilities.
The use of binary representation and logarithmic transformation addresses practical computational constraints. In our datasets, maximum integer values exceed 100,000, which would require at least 18 bits for binary representation. Using a direct approach without these transformations during testing resulted in out-of-memory errors due to large output dimensions. The logarithmic transformation allows us to represent integers as 8-bit binary numbers, distinguishing potential value ranges without predicting exact values.
## Response to Comment 6, 8
**Large-scale experiments**: Please refer to our Response to Reviewer MNmC's Question 1 for details.
## Response to Comment 7
**Ablation experiment**: Please refer to our Response to Reviewer MNmC's Question 2 for details.
## Response to Comment 9, Question 2
1) We evaluate the accuracy of predicted bounds for integer and continuous variables as follows: if the optimal value of a variable lies within the predicted lower and upper bounds, the predicted bounds are considered accurate; otherwise, they are not.
2) Continuous and integer variables are treated similarly, with the key difference being that continuous variables are rounded during preprocessing.
To evaluate how tight these bounds become after training and across different optimization stages, we use I_o and C_o to denote the average difference between original upper and lower bounds for integer and continuous variables, respectively. I_p and C_p represent the average predicted bounds gap for integers and continuous variables. In cases where there are no integer or continuous variables in a dataset, we use 'NA' as a placeholder.
| | bnd_1 | bnd_2 | bnd_3| mat_1|obj_2|rhs_1|rhs_2|rhs_3|
|-----|--|---|---|---|---|---|---|----|
| I_o |72146.14|NA|NA|NA|2.0|NA|NA|NA|NA|
| I_p |32750.57|NA|NA|NA|1.34|NA|NA|NA|
| C_o |NA|25.91| 26.15|1.0|1e+19|1e+20 |2e+20|2e+20|
| C_p |NA|14.22| 14.28|1.0|19968.14|1.0|14.22|13.67|
## Response to Comment 10
The '-' symbol does not indicate empty rows. Instead, it means that no feasible solution was found during testing, resulting in a gap of infinity. To prevent confusion, we will adjust the '-' to '>10.0' in the final version.
## Response to Question 3
Each instance uses only one GPU for both training and inference.
## Response to Question 4
Our relaxation mechanism addresses infeasible instances by dividing the fixed variables into ten groups and subsequently solving each without these variable sets. When a feasible solution cannot be found, we repeatedly apply the relaxation mechanism, building upon previous relaxations. Each iteration of this mechanism reduces the number of fixed variables. With enough iterations, the variables causing conflicts with the constraints are filtered out.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. Upon careful check, most of my concerns have been addressed, and the efforts of the authors for adding experiments should be appreciated. I am raising my score to a 3. However, I think the paper needs major revisions in the presentation of the methodology.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our revisions and for raising your score. We sincerely appreciate your guidance in improving the manuscript’s clarity.
As suggested, we have completely restructured Section 4 to address presentation issues:
1. **Improved structure**: Section 4.1 now distinctly separates the training stage, inference stage, data processing, and output processing into dedicated subsections for better readability.
2. **Added Details**:
* Network input/output definitions (including feature representation and prediction targets).
* Network architecture (layer configurations, activation functions).
* Training framework (optimizer, hyperparameters, regularization strategies).
* Loss function formulation (mathematical details and rationale).
Thank you again for your invaluable feedback. It has greatly strengthened the paper. | Summary: This paper introduces a novel ML-guided framework for predicting solutions for reoptimization problems. First, they adapt on current GNN methods by adding information about the leaf node to predict solutions specifically for reoptimization problems. Second, they introduce online-learning methods to refine the solutions they found. The paper has shown superior performance of their approach over default solver and several baselines.
Claims And Evidence: The claims in the paper are partly supported by experiments. The experiments in the appendix have shown the methods of Re-GNN features are definitely better than original features when applied to reoptimization problems. But I didn’t find any ablation studies for the iterative online refinement methods. All the experiment results show the whole two-step process without knowing how online refinement has contributed to performance improvement.
Methods And Evaluation Criteria: The proposed methods are valid and interesting. However, the benchmark datasets they used is not enough to show their approaches work. Generally, when you use ML-guided heuristic methods, you would test the performance on much harder instances (e.g. at least 1 hour to solve) than the instances evaluated in this paper.
Theoretical Claims: No major proof and theoretical claims in the paper.
Experimental Designs Or Analyses: The baseline the paper compared is weak, so additional experiments are needed to ensure the performance improvement of the author's claimed methods. First, ND and PaS work the best when solving problem instances that are very hard. Using them on smaller instances and showing gaps at a runtime cutoff of only 10 seconds may not be appropriate. Second, the instances used for training ND and PaS are much smaller than the original paper, 40 instances is not enough to get competitive results for ND and PaS, as their original paper uses 300 instances for training. Thirdly, the author did not compare their methods with state-of-art solver Gurobi. I didn’t see why the methods couldn’t be applied to the Gurobi solver, since one of the baseline PaS is running experiments against both SCIP and Gurobi.
Supplementary Material: I didn't carefully check the appendix.
Relation To Broader Scientific Literature: Speeding up MILP solving is very useful for solving a lot of real-world problems. The paper has improved upon previous reoptimization approaches for MILPs.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can you show some experiment results on a larger set of harder instances (e.g., 200 train, 100 test) that would require a runtime of more than 1 hour? And can you show the performance comparison both on SCIP and Gurobi?
2. Can you provide some insights about how much performance you gain by online refinement? How is the performance w/o online refinement?
3. The methods of Re-GNN are described very vaguely. Can you explain more carefully how you use the leaf node feature as training data?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and suggestions.
## Response to the Question 1
**Large-scale MILP experiments**: We expand our experiments to include large-scale MILP experiments with more instances (200 for training and 100 for testing) and add a comparison with the ML-guided LNS method of Huang et al.[2] (**ConPas**). The results are reported for both Gurobi and SCIP. We generate large-scale datasets IS and CA using the code from Gasse et al.[3], consistent with those used by Han et al.[1] The Gurobi and SCIP solvers could not reach an optimal solution within 3600 seconds for these instances. We run Gurobi for 3600 seconds to record the incumbent solution. For evaluation, we impose a 30-second time limit and update the incumbent solution if a better one is found. VP-OR achieves more significant acceleration than PS[1] and ConPas[2] during the early stages of solving on both Gurobi and SCIP.
CA:
| | gap_abs| gap_rel | wins|
|-------|-------|-------|----|
|SCIP|24068.27|0.19|0/100|
|Gurobi|3754.16|0.03|0/100|
|PS+SCIP|20182.91|0.19|0/100|
|PS+Gurobi|3754.16|0.03|0/100|
|ConPaS+SCIP|8506.52|0.07|0/100|
|ConPaS+Gurobi|2526.28|0.02|0/100|
|VP-OR+SCIP(Ours)|0.00|0.00|12/100|
|VP-OR+Gurobi(Ours)|**0.00**|**0.00**|**88/100**|
IS:
| | gap_abs| gap_rel | wins|
|-------|-------|-------|----|
|SCIP|104.92|0.05|0/100|
|Gurobi|133.07|0.06|0/100|
|PS+SCIP|104.57|0.05|0/100|
|PS+Gurobi|117.62|0.06|0/100|
|ConPaS+SCIP|30.25|0.01|0/100|
|ConPaS+Gurobi|24.66|0.01|0/100|
|VP-OR+SCIP(Ours)|9.57|0.00|33/100|
|VP-OR+Gurobi(Ours)|**0.97**|**0.00**|**67/100**|
## Response to the Question 2
**Ablation experiment**: We conduct an ablation experiment using Re-GNN for initial solution prediction in LNS and traditional GNN with the Thompson Sampling (TS) algorithm. We report the average gap_rel, where "inf" means no feasible solution was found. Re-GNN improves performance over traditional GNN in the LNS framework. The method combining traditional GNN with TS also quickly finds feasible solutions. The combined approach, VP-OR, achieves the best results across all datasets.
| | bnd_1 |bnd_2|bnd_3|mat_1|obj_1|obj_2|rhs_1|rhs_2|rhs_3|
|-------|-------|-------|---|----|----|----|----|----|-----|
| GNN+LNS(PS) |0.81|inf|inf|0.23|0.00|0.51|193.04|0.00|0.00|
| Re-GNN+LNS |0.16|inf|inf|0.23|0.00|0.21|1.04|0.00|0.00|
| GNN+TS |0.04|0.17| 0.12|0.17|0.00|0.00 |0.00|0.00|0.00|
| Re-GNN+TS(VP-OR) |0.02|0.11| 0.06|0.16|0.00|0.06|0.00|0.00|0.00|
## Response to the Question 3
Re-GNN includes traditional features like variable coefficients in the objective function, variable types, and initial bounds. Additionally, it captures features from leaf nodes where the best solution was found during solving. For each leaf node, it records the local bounds of each variable, the LP relaxation solution, and whether each variable is basic in this solution. These data come naturally from the solving process in SCIP and Gurobi without extra computation. All the features are listed in Table 5.
**Reference:**
[1] Han Q, Yang L, Chen Q, et al. A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming[C]//The Eleventh International Conference on Learning Representations.
[2] Huang T, Ferber A M, Zharmagambetov A, et al. Contrastive predict-and-search for mixed integer linear programs[C]//Forty-first International Conference on Machine Learning. 2024.
[3] Gasse M, Chételat D, Ferroni N, et al. Exact combinatorial optimization with graph convolutional neural networks[J]. Advances in neural information processing systems, 2019, 32. | Summary: “Don’t Restart, Just Reuse” addresses the problem of MILP reoptimization – repeatedly solving similar MILP instances that change over time (e.g. objective coefficients, constraints, or bounds). The paper proposes a novel two-stage framework called VP-OR (Variable Prediction and Online Refinement) for fast reoptimization of MILPs with dynamic parameters. In the first stage, a graph neural network (GNN) model predicts a high-confidence partial solution space for the new MILP by estimating marginal probabilities for binary variables and feasible value ranges for integer/continuous variables. In the second stage, an iterative refinement procedure uses a Thompson Sampling bandit strategy to gradually fix a subset of variables to their predicted values or ranges and solve the reduced MILP, updating which variables to fix in subsequent iterations based on solution improvements. Extensive experiments on 9 dynamic MILP datasets (derived from MIP 2023 workshop Competition benchmarks) demonstrate that VP-OR finds feasible solutions for every test instance within 10 seconds and achieves lower primal gaps than baseline methods, converging to better solutions more rapidly.
Claims And Evidence: The paper’s key claims are well-supported by thorough evidence. For example, (a) VP-OR yields superior solution quality under tight time limits – The authors back this by reporting, for each dataset, the number of instances where a feasible solution is found within 10 seconds (Table 3). VP-OR finds feasible solutions in all test cases within 10s, whereas other methods often fail on harder instances. (b) VP-OR is significantly faster than traditional large neighborhood search (LNS) – This claim is quantified in Table 2: fixing a substantial portion of variables (with VP-OR’s strategy) leads to solving times an order of magnitude shorter than LNS-based methods.
Overall, the evidence presented is convincing and aligns with the claims. I did not identify any problematic claims – the authors are careful to couch statements in empirical results.
Methods And Evaluation Criteria: The proposed VP-OR method is well-motivated and appropriate for the reoptimization problem domain. The combination of a GNN for variable prediction and Thompson Sampling for iterative variable fixing is a sound design: the GNN exploits structural similarities between consecutive MILP instances, and the refinement stage balances exploration/exploitation when fixing variables. This approach directly addresses the limitations of prior reoptimization techniques, which either reused old solutions that might be invalid or only tweaked solver parameters without reducing problem complexity. The evaluation methodology is rigorous and well-aligned with the problem’s goals. The authors focus on time-constrained performance (10-second limit), reflecting real-world needs for quick reoptimization in dynamic settings. The experiments are conducted on nine diverse MILP reoptimization datasets from a well-known competition, ensuring the evaluation is realistic and broad. The authors split data into training and test sets properly (training on 20 instance pairs and testing on 5 pairs per dataset), which is appropriate to evaluate generalization to new instances. Overall, the methods and evaluation criteria are sound and well-chosen for the problem. I have no major concerns here; the paper’s approach is both novel and evaluated fairly.
Theoretical Claims: The paper does not heavily focus on new theory – it is primarily an algorithmic and empirical contribution. There are no formal theorems or proofs presented that require verification (I did not encounter any stated lemma or theorem in the main content). The use of Thompson Sampling is supported by standard bandit theory references, but the paper itself treats it as a heuristic for exploration; no formal convergence proofs are provided. The authors’ theoretical claims are mostly implicit in the design – for example, the variable fixing strategy that leads to accelerated solving is reasonable. In summary, while there are no new theoretical guarantees to check, the paper’s technical content is consistent with known theory, and I found no issues in the mathematical formulation or algorithmic claims.
Experimental Designs Or Analyses: The experimental setup is thorough and well thought out, lending credibility to the results. The authors clearly define three experiments targeting different aspects: (1) feasibility within 10s, (2) solution quality (gap) in 10s, and (3) convergence over 100s. This separation makes it easy to understand how each method performs in terms of both speed and quality. The use of 9 different datasets (with varied parameter changes: objective, bounds, matrix coefficients, RHS) demonstrates that the approach is not tuned to one specific scenario but is broadly applicable. The experiments are reproducible and sufficiently detailed. The analysis acknowledges limitations too: e.g., VP-OR can get stuck in a suboptimal solution in longer runs (though it finds good solutions faster). Overall, the design and analysis of the experiments are valid and convincing.
Supplementary Material: Yes, I have reviewed the supplementary material, focusing on the parts most pertinent to the main claims. The appendix is quite extensive (labeled Appendix A through C). Appendix A provides details on the features used by the GNN. Appendix B presents pseudocode and an additional explanation of the Thompson Sampling-based online refinement procedure. Appendix C contains additional experiments and results. Notably, Appendix C.3 reports results with different time limits and fixed parameter ratios (Table 7, etc.), which I looked at to see how sensitive the performance is to these settings. In summary, the supplementary material adds a lot of useful details.
Relation To Broader Scientific Literature: This work is well-situated at the intersection of operations research reoptimization techniques and machine learning for combinatorial optimization. Traditional MILP reoptimization methods (e.g. warm-start approaches) focusing on reusing information like optimal solutions or branch-and-bound trees from previous instances struggle when the new instance differed more substantially, as the old optimal solution might not be feasible and reusing search trees may not reduce problem complexity. The re-tuning method (Patel 2024) extended this by preserving a series of past solutions and adaptively tuning solver parameters. In parallel, recent years have seen ML-driven heuristics for MILP: e.g. Neural Diving (Nair et al., 2020) which learns a distribution over binary variables and fixes some to reduce the problem, and Predict-and-Search (Han et al., 2023) which uses a GNN to predict variables then applies large neighborhood search. The contribution of this paper is to merge the strengths of both lines of work: it uses instance-specific historical data (like classical reoptimization) and employs learning to predict variable fixings (like ND/PS), achieving a new level of performance.
Essential References Not Discussed: The authors have done a good job covering the essential prior work, and I did not identify any critical omissions in the references.
Other Strengths And Weaknesses: Strengths: This paper’s primary strength lies in its original combination of ideas that yields practical impact. The VP-OR framework is a novel synthesis of ML prediction and online optimization that, to my knowledge, has not been applied to MILP reoptimization before. The results show substantial improvements in time-constrained scenarios – for instance, being the only method to consistently solve all instances in under 10 seconds is a compelling result for real-time applications.
Weaknesses: One weakness is the lack of theoretical guarantees. As a heuristic/learning method, VP-OR does not guarantee optimality or even improvement in every iteration. It is possible that it could fix a wrong set of variables. While this is acceptable for a heuristic approach, some theoretical analysis (e.g. bounds on solution quality in terms of prediction accuracy) could strengthen the work. Another potential weakness is scalability/generalization: the method was tested on instances solvable within 10 minutes by SCIP (up to 600s baseline solve time); it’s not proven how it scales to truly large-scale MILPs beyond those benchmarks. These weaknesses are relatively minor in comparison to the contributions.
Other Comments Or Suggestions: The authors might consider a few minor improvements.
In Table 4, it might help to clarify how “gap abs” and “gap rel” are defined, especially since some methods show a 0.00 relative gap but still have a non-zero absolute gap.
I assume that the 10-second evaluation for VP-OR includes the model inference and refinement iterations. It would be good if the final version states that explicitly.
Future work: I echo the authors’ note on extending to larger instances.
Overall, my comments are minor; the paper is in good shape.
Questions For Authors: Generalization: How well would the trained GNN model generalize to instances with new characteristics outside the training distribution?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and suggestions.
## Response to the Question 1
**Generalization**: In our tests, we indeed observed instances where the GNN model made errors in predicting variables. To address this challenge, approaches like those by Han et al.[1] and Huang et al.[2] leverage an LNS framework and our VP-OR uses sampling methods. These strategies introduce a level of tolerance for prediction errors, enhancing the model's applicability and extending its utility to different problem instances.
**Large-scale MILP experiments**: We expand our experiments to include large-scale MILP experiments with more instances (200 for training and 100 for testing) and add a comparison with the ML-guided LNS method of Huang et al.[2] (**ConPas**). The results are reported for both Gurobi and SCIP. We generate large-scale datasets IS and CA using the code from Gasse et al.[3], consistent with those used by Han et al.[1] The Gurobi and SCIP solvers could not reach an optimal solution within 3600 seconds for these instances. We run Gurobi for 3600 seconds to record the incumbent solution. For evaluation, we impose a 30-second time limit and update the incumbent solution if a better one is found. VP-OR achieves more significant acceleration than PS[1] and ConPas[2] during the early stages of solving on both Gurobi and SCIP.
CA:
| | gap_abs| gap_rel | wins|
|-------|-------|-------|----|
|SCIP|24068.27|0.19|0/100|
|Gurobi|3754.16|0.03|0/100|
|PS+SCIP|20182.91|0.19|0/100|
|PS+Gurobi|3754.16|0.03|0/100|
|ConPaS+SCIP|8506.52|0.07|0/100|
|ConPaS+Gurobi|2526.28|0.02|0/100|
|VP-OR+SCIP(Ours)|0.00|0.00|12/100|
|VP-OR+Gurobi(Ours)|**0.00**|**0.00**|**88/100**|
IS:
| | gap_abs| gap_rel | wins|
|-------|-------|-------|----|
|SCIP|104.92|0.05|0/100|
|Gurobi|133.07|0.06|0/100|
|PS+SCIP|104.57|0.05|0/100|
|PS+Gurobi|117.62|0.06|0/100|
|ConPaS+SCIP|30.25|0.01|0/100|
|ConPaS+Gurobi|24.66|0.01|0/100|
|VP-OR+SCIP(Ours)|9.57|0.00|33/100|
|VP-OR+Gurobi(Ours)|**0.97**|**0.00**|**67/100**|
## Response to the Comment 1
**Definition of "gap_abs" and "gap_rel"**: For the evaluation, we first solve the problem without a time limit and record the optimal solution’s objective value as OPT. Then, we apply a time limit of 10 seconds for each method. The best objective value obtained within the time limit is denoted as OBJ. We define the absolute and relative primal gaps as: gap\_abs=|OBJ-OPT| and gap\_rel= |OBJ-OPT| / ( |OPT|+$10^{-10}$ ), respectively.
The reason some methods show a 0.00 relative gap but still have a **non-zero absolute gap** is due to the rounding approach used in calculating "gap_rel". We retain two decimal places for "gap_abs" and "gap_rel", so when the relative error is sufficiently small (less than 0.004), the error rounds to 0.00, although a minor absolute error may still exist.
## Response to the Comment 2
Yes, the **10-second evaluation** does include both the model inference and refinement iterations. We will explicitly state this in the final version to ensure clarity.
[1] Han Q, Yang L, Chen Q, et al. A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming[C]//The Eleventh International Conference on Learning Representations.
[2] Huang T, Ferber A M, Zharmagambetov A, et al. Contrastive predict-and-search for mixed integer linear programs[C]//Forty-first International Conference on Machine Learning. 2024.
[3] Gasse M, Chételat D, Ferroni N, et al. Exact combinatorial optimization with graph convolutional neural networks[J]. Advances in neural information processing systems, 2019, 32. | null | null | null | null | null | null |
Physics Aware Neural Networks for Unsupervised Binding Energy Prediction | Accept (poster) | Summary: This paper proposes an unsupervised learning approach, called CEBind, to binding energy prediction. CEBind includes rigid dynamics and the training loss function is motivated by energy conservation loss. Besides, it outperforms previous works.
## update after rebuttal
The authors have provided satisfactory responses to my queries. I regard this as a high-quality manuscript.
Claims And Evidence: The claims in the paper are well-supported by theoretical justifications, empirical results, and comparisons with prior methods.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem.
Theoretical Claims: The theoretical claims in the paper appear well-founded and mathematically consistent, with no major errors identified.
Experimental Designs Or Analyses: The experimental design and analysis in the paper are well-structured and valid for the problem at hand. The paper effectively justifies its claims with rigorous testing and fair comparisons.
Supplementary Material: Yes, I reviewed the supplementary material. The supplementary material is well-structured and supports the main claims without any inconsistencies.
Relation To Broader Scientific Literature: The paper advances prior work in unsupervised protein-ligand binding energy prediction by incorporating physics-based modeling, specifically the conservation of energy, and rigid dynamics.
- Unsupervised Protein-Ligand Binding Prediction: CEBind improves DSMBind by using rigid dynamics and energy conservation, making it more memory-efficient and physically interpretable.
- Physics-Based Binding Affinity Prediction: CEBind introduces a new energy-based framework that explicitly follows the conservation of energy, making it a hybrid between physics-based and data-driven methods.
- Generalization to Biomolecules: CEBind's general encoder approach aligns with AlphaFold3's vision of a universal model for biomolecules.
Essential References Not Discussed: The paper cites most of the essential related works, but a few additional references could further strengthen its context and comparison to prior work.
- Baek et al., Accurate Prediction of Protein Structures and Interactions Using a Three-Track Neural Network (Science, 2021) (RoseTTAFold). This work is an early step toward multi-scale molecular interaction modeling, relevant to the generalization claims in CEBind. Citing this could position CEBind in the broader trend toward universal molecular modeling.
- Hsu et al., ESM-2: Evolutionary Scale Modeling of Protein Structures (2022). The ESM models capture protein sequence and structure relationships, which may improve antibody-antigen binding prediction. If CEBind's ligand encoder were combined with ESM-2 representations, it might further enhance performance.
Other Strengths And Weaknesses: This paper is strong in originality, significance, and clarity, with only minor areas for improvement.
### Strengths
- Well-written and structured, with clear explanations of theoretical foundations, methodology, and experiments.
- The physics-aware approach (CEBind) is innovative, explicitly incorporating conservation of energy and rigid dynamics into an unsupervised binding energy prediction framework.
- This paper bridges the gap between physics-based methods and data-driven unsupervised learning.
- Efficient and scalable, requiring less GPU memory, making it suitable for large-scale molecular modeling tasks.
### Weakness
- The similarity between training set and test set should be considered and explained, which may lead to data leakage.
- The time $\Delta t$ is randomly selected. How the selection of $\Delta t$ impacts the performance of CEBind should be explained.
Other Comments Or Suggestions: - Did the dataset splitting take into account protein sequence similarity during the experiment? If not, there may be an information leakage problem.
- How does the selection of $\Delta t$ impact performance?
Questions For Authors: Predicting protein-ligand affinity is crucial for drug design. The proposed method is interesting and efficient. My only concern is the data splitting strategy. If the training and test datasets are highly similar, there is a potential risk of data leakage. Clarification on how the data was partitioned would strengthen the validity of the reported results.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and kind support! All your concerns have been carefully addressed as below. The manuscript will be carefully revised accordingly. We sincerely hope our responses fully address your questions.
> **W1:** The similarity between training set and test set should be considered and explained, which may lead to data leakage.
**A1:** Thanks for your insightful concern. We removed overlapping instances with the validation, test, and training sets. The experiments are fair and the same for all models. We also conducted additional experiments of removing the training samples with a similarity of 40% with validation and test samples. The results are shown below. The average performance (0.513 VS 0.511) of CEBind is not affected by the similarity between the training and test datasets, which proves that information leakage did not occur.
| split | protein-small molecule | antibody-antigen | Average |
|----------|----------|----------|----------|
| original | 0.652 $\pm$ 0.005 | 0.374 $\pm$ 0.009 | 0.513 |
| remove similar | 0.645 $\pm$ 0.003 | 0.377 $\pm$ 0.012 | 0.511|
> **W2:** The time $\Delta t$ is randomly selected. How the selection of $\Delta t$ impacts the performance of CEBind should be explained.
**A2:** $\Delta t$ is randomly selected from a uniform distribution $U(0.1,1)$. For your concern, we also conducted experiments to explore the impact of $\Delta t$, and the results shown below demonstrate that the randomly selected $\Delta t$ makes CEBind better.
| $\Delta t$ | protein-small molecule | antibody-antigen|
|----------|----------|----------|
| 0.1 | 0.649 $\pm$ 0.006 | 0.362 $\pm$ 0.019|
0.3 | 0.649 $\pm$ 0.002 | 0.352 $\pm$ 0.003|
0.5 | 0.647 $\pm$ 0.001 | 0.355 $\pm$ 0.007|
0.7 | 0.650 $\pm$ 0.004 | 0.359 $\pm$ 0.007|
0.9 | 0.650 $\pm$ 0.001 | 0.365 $\pm$ 0.017|
Random | 0.652 $\pm$ 0.005 | 0.374 $\pm$ 0.009| | Summary: The paper builds on the paradigm of unsupervised learning of binding affinity introduced in DSMBind but introduces a new method for force-matching that doesn't require computing the gradient of the energy function and that operates on the level of per-atom forces. The energy learned from the force-matching objective is then used as a ranking function and compared to ground-truth experimental binding data on small-molecule and antibody datasets where some gains are observed over DSMBind.
Claims And Evidence: 1. CEBind doesn't require differentiating through the entire energy network.
2. There is a marginal empirical improvement over DSMBind.
3. I don't quite see how CEBind is physics-based other than using the same equations that are used in mechanics. There are plenty of aspects of physics that are involved in the binding process that this paper doesn't even discuss such as quantum effects or even the simplest electrostatic interactions.
Methods And Evaluation Criteria: This paper inherits a major problem from Jin et al, 2024 in that the method can't directly discriminate non-binders because their structure is not defined in the same way it's defined for molecules that successfully co-crystalize. I view this as a major limitation of the paper both from the methods and evaluation points of view.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: 1. Lack of negative data in the evaluation (see the comment above).
2. It would be valuable from the practical point of view to measure performance of the method (or maybe consider training) on the task of predicting _relative_ binding affinity [1]
[1] Protein-Ligand Binding Free Energy Calculations with FEP. Wang et al, 2019.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The paper continues the line of research deep learning based unsupervised binding affinity prediction initiated by DSMBind (Jin et al, 2024).
Essential References Not Discussed: The paper doesn't seem to reference the entire field of molecular dynamics which is a very much physics-based family of methods that is often also unsupervised and used to predict binding affinity.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: No questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your comments! All your concerns have been carefully addressed as below. The manuscript will be carefully revised accordingly. We sincerely hope our responses fully address your questions.
> **W1:** There is a marginal empirical improvement over DSMBind.
**A1:** Thanks for your concern. The advantages of our CEBind over DSMBind are twofold. **(a) Performance**: our CEBind outperforms DSMBind consistently as shown in Table 5. **(b) Efficiency**: Training CEBind requires less memory and is faster, as we described in Section 5.5.1.
To demonstrate the effectiveness of CEBind, we also provide more experimental results in **A1** and **A2** of response to reviewer 1arn and **A2** to reviewer fEra.
> **W2:** How is CEBind physics-based other than using the same equations that are used in mechanics?
**A2:** Thanks for your comment. In this work, "physics aware" refers to the law of conservation of energy and the rigid dynamics.
- **(1) For the law of conservation of energy**, we have constructed a new training method using the conservation of energy. As shown in the ablation study in Table 3, this approach is effective.
- **(2) For rigid dynamics**, we match the derivative of the energy with respect to the coordinates (i.e., the force) with the sampled perturbation force. Instead of matching the sampled distance perturbation with the force in CEBind. As a result, our CEBind is more stable and efficient as described in Section 5.1.1.
> **W3:** This method can't directly discriminate non-binders because their structure is not defined in the same way it's defined for molecules that successfully co-crystalize. Lack of negative data in the evaluation (see the comment above).
**A3:** Thanks for your concern. We respectfully disagree. Our method operates on the quaternary structure of protein-ligand complexes without relying on specific crystallization-derived assumptions. Importantly, the physical principles leveraged by our model, such as energy conservation and force response under rigid dynamics, remain valid regardless of binding success. In fact, for non-binders, the model can still compute an energy landscape, and the predicted energy shift under force perturbation often reflects weaker or unstable interactions. This gives us a continuous signal that allows non-binders to be distinguished from strong binders, even in the absence of traditional co-crystal structures.
As for the training data, our method follows the commonly used evaluation pipeline for unsupervised protein-ligand binding prediction, where performance is assessed based on the correlation between predicted binding energy and ground truth affinity. This pipeline has been widely accepted in the literature and allows for a fair and consistent evaluation of binding affinity prediction [1,2,3].
[1] NeurIPS'24, Unsupervised protein-ligand binding energy prediction via neural euler’s rotation equation.
[2] ICLR'24, Proteinligand binding representation learning from fine-grained interactions.
[3] JCIM'23, Planet: a multi-objective graph neural network model for protein–ligand binding affinity prediction.
> **W4:** The paper doesn't seem to reference the entire field of molecular dynamics which is a very much physics-based family of methods that is often also unsupervised and used to predict binding affinity.
**A4:** Thank you for your comment. While molecular dynamics (MD) plays a crucial role in binding affinity prediction, our CEBind method differs from traditional MD.
**During training**, CEBind leverages rigid dynamics and energy conservation principles to predict binding energies through perturbations. **For inference, CEBind requires only the structure of the protein-ligand complex without perturbing them**, thus avoiding the extensive computational and memory demands of full molecular dynamics simulations. Compared to MD’s long simulation times and complex force fields, CEBind is more efficient and does not require lengthy molecular simulations or large labeled datasets.
Although we did not reference MD literature in our paper, we recognize its importance in binding affinity prediction and plan to include the relevant references in the revised version, including those you mentioned in your review. | Summary: The paper proposes CEBind, an unsupervised deep learning model for predicting protein-ligand binding energy based on the conservation of energy principle. It aims to address the challenge of limited labeled data for binding energy prediction, particularly for complex biomolecules like antibodies. Instead of requiring labeled affinity data, CEBind leverages rigid body dynamics and energy conservation laws to estimate binding energy changes.
## Update After Rebuttal
After carefully reading the rebuttal and all the discussion, I now believe the manuscript demonstrates both novelty and effectiveness. In my view, the authors have addressed the concerns of all reviewers. Therefore, I recommend acceptance and have raised my score to 5.
Claims And Evidence: The paper presents a well-motivated and innovative approach for unsupervised binding energy prediction. The claims made by the authors are well-supported, backed by solid theoretical foundations and comprehensive empirical results.
Methods And Evaluation Criteria: The method is sound and the evaluation metrics are well-chosen which align well with the problem of unsupervised protein-ligand binding energy prediction.
Theoretical Claims: - All major theoretical claims are valid, correctly derived, and align with established physics principles. Energy conservation proof is correct. Rigid body dynamics derivations are well-grounded. Score matching loss is based on proper statistical modeling.
Experimental Designs Or Analyses: - The experimental design is well-desined and statistically sound.
The datasets are well-chosen and representative of real-world binding problems.
The evaluation metrics are appropriate and unbiased.
Comparisons to baselines are fair and prove CEBind’s superiority.
Ablation studies confirm the model’s design choices.
Supplementary Material: - The supplementary material provides theoretical derivations, detailed experimental setups, additional analyses, and a discussion of limitations. Overall, these materials are well-structured, transparent, and strengthen the credibility of the paper.
Relation To Broader Scientific Literature: - This mthod builds upon three major research directions in computational biology and machine learning: unsupervised learning for protein-ligand interactions, physics-aware modeling, and neural energy-based models for binding prediction.
Essential References Not Discussed: - CEBind builds upon energy-based learning, particularly methods involving score matching and diffusion models:
- Song et al., "Score-Based Generative Modeling through Stochastic Differential Equations," ICLR 2021. They introduced the score-based diffusion models and the force-matching loss in CEBind is a form of score-matching.
Other Strengths And Weaknesses: ### Strengths
- The paper is clear, well-structured, and provides all necessary details for reproducibility.
- The combination of machine learning and physics makes CEBind more interpretable and generalizable, setting it apart from fully data-driven models.
- This work has the potential to accelerate molecular docking simulations and enhance drug discovery pipelines, making it scientifically and practically significant.
### Weakness
- Evaluating CEBind on real docking simulations would strengthen its practical applicability.
Other Comments Or Suggestions: - Equation (3) in Section 3.3: A missing subscript in $W_{\text{outer}}$ should be corrected for consistency.
- Table 2 caption: “Cross-validation results for different datasets”. It should be clarifyed that these are cross-dataset generalization results, as some readers might confuse this with standard k-fold cross-validation.
- Appendix A.2 notation consistency: The force notation $F_{\text{inner}}$ and $F_{\text{outer}}$ should be consistent across equations for clarity.
Questions For Authors: - How was the PDBBind dataset split? Was the data leakage considered in the evaluation set?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and kind support! All your concerns have been carefully addressed as below. The manuscript will be carefully revised accordingly. We sincerely hope our responses fully address your questions.
> **W1:** Evaluating CEBind on real docking simulations would strengthen its practical applicability.
**A1:** Thanks for your suggestion. We totally agree that the practical applicability of CEBind could be strengthened by evaluating on real docking simulations. But the evaluation of our CEBind in this paper follows the common protein-ligand binding prediction evaluation pipelines [1,2,3]
[1] NeurIPS'24, Unsupervised protein-ligand binding energy prediction via neural Euler’s rotation equation.
[2] ICLR'24, Proteinligand binding representation learning from fine-grained interactions.
[3] JCIM'23, Planet: a multi-objective graph neural network model for protein–ligand binding affinity prediction.
> **Suggestions:** Typo and caption correction.
**A2:** Thanks for your kind suggestions. We will fix the typos and change the caption of Table 2 to cross-dataset.
> **Questions:** How was the PDBBind dataset split? Was the data leakage considered in the evaluation set?
Thanks for your insightful concern. We removed overlapping instances with the validation, test, and training sets. The experiments are fair and the same for all models. We also conducted additional experiments of removing the training samples with a similarity of 40% with validation and test samples. The results are shown below. The average performance (0.513 VS 0.511) of CEBind is not affected by the similarity between the training and test datasets, which proves that information leakage did not occur.
| split | protein-small molecule | antibody-antigen | Average |
|----------|----------|----------|----------|
| original | 0.652 $\pm$ 0.005 | 0.374 $\pm$ 0.009 | 0.513 |
| remove similar | 0.645 $\pm$ 0.003 | 0.377 $\pm$ 0.012 | 0.511|
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and additional experiments. After considering the feedback provided to the other reviewers, I find the work to be effective and well-founded. I would be happy to raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your kind support! CEBind is a significant and successful approach for unsupervised protein-ligand binding energy prediction, and it outperforms previous methods on a widely-used evaluation metric in machine learning. CEBind exhibits robust generalization across both protein-protein and protein-small molecule binding energy prediction.
There may have been a misunderstanding from Reviewer fEra, who interpreted our method as being applicable only to the same complex with different conformers. Additionally, Reviewer 2j23 may have interpreted our approach as a molecular dynamics-based method, while in fact, CEBind differs fundamentally in design and computational requirements. | Summary: This paper proposes CEBind, an unsupervised method for predicting protein-ligand binding energy via the conservation of energy. Specifically, this method random samples forces on atoms to move the molecules and predict the energy of both unperturbed and perturbed complex. And the energy gap between two complexes should be the work of outer forces. Experiments are performed on unsuperivised benchmarks and show better performance.
## update after rebuttal
After reviewing the rebuttal and discussion, I still doubt the novelty and effectiveness about this work. The energy conversation loss is the main novelty compared to DSMBind and it brings marginal improvement against DSMBind. But given that this method is indeed SOTA, so I increase my score to 2.
Claims And Evidence: Please refer to the weakness part
Methods And Evaluation Criteria: Please refer to the weakness part
Theoretical Claims: N/A
Experimental Designs Or Analyses: Please refer to the weakness part
Supplementary Material: No
Relation To Broader Scientific Literature: The proposed method may be useful for unsupervised binding energy prediction as high quality labeled data is very limited in this area.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
- This paper propose an unsupervised method via energy conservation law to predict the protein-ligand binding energy.
**Weaknesses:**
- I doubt the novelty of this method. The proposed method is quite similar to the DSMBind, which uses denoising score matching to perform unsupervised binding energy prediction. DSMBind uses SE(3) noise to perturb complex structures and use predicted energy gradient to calculate rotation and translation, whereas the proposed method just does the similar thing in the other way around, i.e. sample force first and calculate rotation and translation, and then predict energy of perturbed structures and calculate gradient force. We can also see this similarity from Table 3, where after removing energy conservation loss the performance is basically the same as DSMBind.
- The improvement of this method upon DSMBind is too marginal (table 1), which is expected as the methodology is very similar to DSMBind. So the experimental results cannot convince me this method is more effective than DSMBind. Also, spearman correlation also needs to be reported in Table 1.
- This kind of denoising method can only differentiate energy difference for the same complex with different conformers, but it’s hard to capture the energy difference between different complexes.
Other Comments Or Suggestions: Please refer to the weakness part.
Questions For Authors: Please refer to the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments! All your concerns have been carefully addressed as below. The manuscript will be carefully revised accordingly. We sincerely hope our responses fully address your questions.
> **W1**: The novelty of CEBind. Difference between our CEBind and DSMBind.
**A1**: Thanks for your comments. The novelty of our CEBind is threefold: **(1)**
*We effectively integrated the law of conservation of energy and rigid dynamics into our CEBind*, as demonstrated in Table 3, which the reviewer mentioned. **(2)** The score matching in our CEBind is *more efficient and follows the physics law*. **(a) More efficient:** Our CEBind requires less memory and takes less time to train, as we described in Section 5.5.1. **(b) Physics aware:** We match the derivative of the energy with respect to the coordinates (i.e., the force) with the sampled perturbation force. Instead of matching the sampled distance perturbation with the force in CEBind. As a result, our CEBind is more stable and efficient. **(3)** *Stable and better performance:* our CEBind consistently outperforms DSMBind through our energy conservation loss, as well as the more physically meaningful score matching loss, as shown in Table 5 in Appendix. **(4)** The detailed comparison between CEBind and DSMBind can be found in Appendix C.2.
> **W2**: (1) The improvement of CEBind over DSMBind is too marginal (Table 1). (2) The Spearman correlation also needs to be reported in Table 1.
**A2**: (1) Thanks for your concern. The advantages of our CEBind over DSMBind are twofold. **(a) Performance**: our CEBind outperforms DSMBind consistently as shown in Table 5. **(b) Efficiency**: Training CEBind requires less memory and is faster as we described in Section 5.5.1.
(2) We also provide the spearman correlation as follows, which also demonstrate that CEBind outperforms previous works.
| Model | Protein-small molecule | Antibody-antigen |
|----------|-----------------------|-------------|
| DSMBind | 0.640 $\pm$ 0.003 | 0.358 $\pm$ 0.011 |
| **CEBind** | 0.652 $\pm$ 0.004 | 0.387 $\pm$ 0.009 |
> **W3:** This kind of denoising method can only differentiate energy difference for the same complex with different conformers, but it’s hard to capture the energy difference between different complexes.
**A3**: Thanks for your comments. We respectfully disagree. Our CEBind can indeed be applied to capture the energy difference between different complexes. All the results in this paper are the correlation between the ground truth and the prediction of our model for different complexes. **The perturbation is only performed in the training**, while in inference, our model only needs the structure of the complexes to output pseudo-energy values for each complex without any perturbations or different conformations.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the clarifications. However, I am still not convinced that CEBind has any essential difference from DSMBind. I can see the energy conservation loss as an improvement against DSMBind which brings a marginal performance improvement over DSMBind, but given that the pearson correlation for the DSMBind was already low so a 0.01 improvement seems not very meaningful.
Similar to DSMBind, CEBind uses denoising to capture the energy difference between the low-energy and perturbed high-energy conformer of a certain complex. So, to capture the energy difference between different complexes, the assumption is that the dataset follows the Boltzmann distribution, but this might not hold in practice and in the dataset.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback.
> Difference between the two methods.
We appreciate your concern regarding the essential differences between CEBind and DSMBind. While both models share a denoising score matching framework, CEBind introduces two key distinctions:
(1) The energy conservation loss is not merely an auxiliary term but **stems directly from the law of physics**, enabling a principled alignment between predicted energy changes and physical work. This results in **more stable** training and **better interpretability**, as also supported by the ablation study (Table 3).
(2) Unlike DSMBind, which perturbs the entire complex and requires estimating global rigid motion gradients, CEBind samples forces, instead of coordinate perturbations in CEBind, at the atom level and models local perturbations via rigid dynamics, leading to improved **efficiency** and **granularity**, as shown in the comparison of training time and GPU usage (Section 5.5.1). Besides, we construct training targets through energy conservation and force matching, which is completely different from DSMBind, where the matching is distance perturbation score matching.
> Improvement over previous works.
Regarding the Pearson correlation improvement: While the absolute gain (~0.01) may appear small, CEBind **consistently outperforms** DSMBind across two different datasets (Table 5). In unsupervised binding energy prediction, even marginal improvements under strict benchmark settings are considered significant due to the difficulty of the task.
Despite the modest absolute difference, the combination of `higher robustness`, `physical interpretability`, and `lower computational cost` (Section 5.5.1) suggests that our approach advances the field in a practically meaningful way.
We also performed a **statistical significance test** to compare CEBind and DSMBind. On the protein–small molecule dataset, CEBind’s improvements are `statistically significant` (p = 0.0039). (p < 0.05)
> Boltzmann distribution assumption
On the Boltzmann distribution assumption, we respectfully clarify that our model does not rely on the data exactly following the Boltzmann distribution. Rather, we leverage a local energy approximation based on small perturbations (Appendix A.3), where a linear force-displacement relation is used to estimate work. This allows CEBind to remain effective even if the global energy distribution deviates from Boltzmann behavior, which we also validate empirically.
We hope these clarifications help illustrate the theoretical and practical differences between the two models. Please let us know if you have further questions! | null | null | null | null | null | null |
Do Vision-Language Models Really Understand Visual Language? | Accept (poster) | Summary: This paper argues that LVLM's good performance in diagram reasoning mainly comes from pre-existing background knowledge rather than a genuine understanding of the diagrams' relational structure.
The authors develop a test suite specifically to assess diagram comprehension:
- Their evaluations focus on two main tasks: fast recognition and slow, multi-step reasoning.
- Their evaluations are not only performed on synthetic diagrams but also real diagrams.
- Their evaluations are based on open-source models as well as large accessible models.
After showing their experiments and analysis, the authors conclude that the apparent models can perfectly recognize and reason about entities in the diagrams, while they struggle with recognizing the relations.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors show details about how they set up each experiment.
Methods And Evaluation Criteria: The proposed evaluation criteria make sense.
Theoretical Claims: Their theoretical claim (LVLMs reply on knowledge shortcuts when answering complex diagram reasoning questions) is supported by their detailed experiment designs and analyses.
Experimental Designs Or Analyses: The main contribution in this paper is their evaluation suite, from data creation to experiment implementations.
Both synthetic diagrams and real diagrams make sense. However, please answer my questions about the synthetic diagrams in the following parts.
The experimental design begins with simpler tasks—such as identifying entities and relationships in synthetic diagrams—and gradually progresses to more complex tasks involving real diagrams. Additionally, the authors conduct both quantitative and qualitative analyses on knowledge shortcut experiments, demonstrating that LVLMs may not truly understand diagrams but instead rely on ingrained shortcuts to generate answers.
Supplementary Material: The supplementary materials are reviewed. The authors show test suite details, related work as well as some limitations in their work.
Relation To Broader Scientific Literature: The key contributions of the paper shows the concerns about the LVLMs in complex diagram reasoning tasks, which is related to the findings of current LVLMs abilities in reasoning both in 2D and 3D images.
Essential References Not Discussed: The essential references are discussed by the authors in their supplementary materials.
Other Strengths And Weaknesses: Paper Strengths:
- The paper is well-structured, and the topic is interesting.
- The authors show the detailed implementation for each experiment.
- The authors argue their claims logically, from synthetic diagrams experiments to real diagrams experiments.
Major Weaknesses (please also refer to the **Questions For Authors**):
- Limited analysis in failure cases.
- Missing some details key settings.
Other Comments Or Suggestions: Here are some minor suggestions:
- I highly suggest the authors could highlight some abbreviations, such as NR and NC.
- L181, seems like you only have one **Intuition**, why do you label it as **Intuition** 1.
Questions For Authors: I have some questions regarding this paper:
- Can the authors explain why using text labels performs better on QA in Table 2?
- Can the authors provide performance on QA for relations using visual icons, like Table 3?
- Did the Table 4 test on text labels, visual icons, or both?
- For the synthetic diagrams, do they only have text entities or visual entities? Can the authors provide some results on synthetic diagrams with both text entities and visual entities?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our claims are clear and convincing. We will address the proposed concerns and answer the questions below. Additionally, we are pleased to inform you that we have extended our evaluation model set to include 9 models, as suggested by Reviewer 8S38 and Reviewer 3mRA, with consistent conclusions, further enhancing the reliability of our work.
---
>**Point 1**: *Limited analysis in failure cases.*
>**Response**: We agree with the reviewer that a more detailed failure case analysis would provide valuable insights into why the models fail. We would like to kindly remind the reviewer that Section 5 already includes a qualitative analysis. Additionally, we have provided examples, along with our prompts and the models' responses, for each experiment in the Appendix (Figures 11–34). While these figures do not provide a thorough analysis for each case, they offer readers a clearer understanding of why the models fail by showcasing their responses. We will also emphasize this point in the main paper.
---
>**Question 1**: *Can the authors explain why using text labels performs better on QA in Table 2?*
>**Response**: We compare the performance of entity recognition on synthetic diagrams, distinguishing between text entities and visual entities. The results show that models can perfectly identify text entities. We believe this is due to the models being specifically trained on numerous OCR datasets (as mentioned in their technical reports). As a result, their ability to recognize and extract text is significantly stronger compared to their ability to recognize visual elements.
---
>**Question 3**: *Did the Table 4 test on text labels, visual icons, or both?*
>**Response**: Based on the results in Table 2, we observe that the model performs well in recognizing text entities. To ensure that our subsequent experiments—such as exploring the models' abilities in more complex tasks like counting and relation recognition—are not influenced by entity recognition errors, we default to representing entities in text for the remainder of our experiments.
---
>**Question 2 and 4**: *Can the authors provide performance on QA for relations using visual icons, like Table 3? For the synthetic diagrams, do they only have text entities or visual entities? Can the authors provide some results on synthetic diagrams with both text entities and visual entities?*
>**Response**: We appreciate the thoughtful questions and thank the reviewer for carefully reading our paper. The underlying logic is that since models can accurately recognize text entities, any failure to recognize relations likely stems from a lack of relation recognition ability, rather than being bottlenecked by entity recognition. We understand the reviewer’s concern regarding entity representation. To address this, we report the performance of entity/relation recognition on synthetic diagrams with entity representations of text, visual, and both. Additionally, to better align with the results, we reformat the multi-choice QA into binary classification (i.e., with only one negative option).
>| GPT-4V | Precision (Visual) | Recall (Visual) | F1 (Visual) | Accuracy (Visual) | Precision (Text) | Recall (Text) | F1 (Text) | Accuracy (Text) | Precision (Both) | Recall (Both) | F1 (Both) | Accuracy (Both)
>|-------------|-----------|-----------|----------|-----------|----------|----------|---------|----------|----------|----------|---------|----------|
>| **Entity recognition** | 97.37 | 84.52 | 89.54 | 91.12 | 99.86 | 99.75 | 99.78 | 99.79 | 99.86 | 99.75 | 99.78 | 99.79 |
>| **Relation recognition**| 55.98 | 48.82 | 51.58 | 55.77 | 54.82 | 40.33 | 45.28 | 55.83 | 55.05 | 47.00 | 49.99 | 56.42 |
>The results reveal two key observations. First, representing entities with both text and visual information yields similar performance to text-only entities, and the model still struggles with relation recognition regardless of entity representation. Second, the recall for visual entity recognition is relatively lower, indicating that the model cannot perfectly identify the correct visual entities, rather than misclassifying wrong entities as correct ones. We hope these updated results help address the reviewer’s concern regarding entity representation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer en7J:
Thank you for taking the time to review our paper and for your initial comments.
We carefully addressed all the concerns you raised in our rebuttal, including providing **new experimental results and clarifications**. While we appreciate your brief follow-up, we were hoping for a bit more engagement—especially to confirm whether our detailed responses resolved your questions. If any concerns remain, we would be more than happy to further elaborate or clarify.
Thank you again for your time and consideration.
Best regards,
The authors | Summary: The paper introduces a new benchmark dataset for vision-language models. The benchmark is based on diagram understanding. Unlike most existing diagram understanding datasets, the dataset introduced in this work is specifically designed to study the understanding of relationships between depicted entities by using graphs whose vertices are entities (eg., objects) connected with lines (edges in the graph). The paper studies the performance of existing vision-language models on various question types and subsets (eg. real vs. synthetic images / entities vs. relations / edges random vs. semantic / etc.). The paper uses the results as evidence to argue that the existing models do not truly understand diagrams but that they use short-cuts to answer questions.
Claims And Evidence: The claims are supported by clear evidence and are convincing to me.
Methods And Evaluation Criteria: The paper introduces a benchmark dataset itself. The paper uses the performance of models on this data to derive the main hypothesis in this paper, which is that existing vision-language models use shortcuts rather than truly understanding visual information in diagrams.
Theoretical Claims: N/A
This is an empirical study.
Experimental Designs Or Analyses: The experiments are fairly comprehensive and seem sound overall. The evidence brought forth in support of the paper's hypothesis is derived from multiple different performance evaluations on variations of the images and questions.
The paper uses a mix of graph images generated from synthetic and real imagery to support the hypothesis that real imagery is more amenable to learning shortcuts by leveraging background knowledge as opposed to the information derived from the image itself.
Supplementary Material: I reviewed the supplementary material but did not read all the presented points, variations and examples in detail.
Relation To Broader Scientific Literature: All the relevant similar existing work is discussed in a Related Works section located in the Appendix. This is a bit strange and unconventional, but probably Ok.
Essential References Not Discussed: I am not an expert on recent diagram understanding benchmarks, but the presented related works section seems reasonably comprehensive.
Other Strengths And Weaknesses: The paper is very well written, and very well-structured, and exceptionally easy to follow as a result.
Not a lot of models are evaluated (Llama, Llava, Qwen2 and three closed cloud models). As this is a benchmark paper, I would say this is the bare minimum. Considering the huge number of vision-language models that have been released in recent times, it would be great if the sample of tested models was larger, and if the results held up, that would increase the confidence in the hypothesis brought forth in this work.
The use of word2vec to generate synthetic relations to test for the use of background knowledge seems like a very nice and clever idea.
The results on knowledge inclusion are convincing enough in my opinion, but not as clear-cut as one would hope after reading the earlier parts of the paper.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive feedback. We have carefully designed our experiments to ensure they are comprehensive, and our conclusions are as convincing as possible. We will address the proposed concern about evaluation model set below.
>**Concern Point**: *Not a lot of models are evaluated...*
>**Response**: We are grateful for this constructive suggestion. As mentioned in our response to Reviewer 8S38, we have added three new models—Molmo-7B, Qwen2-VL-2B, and Qwen2-VL-72B—to all of our experiments. Our model set now includes three groups with 9 models: large models (GPT-4V, GPT-4o, and Gemini), small models (LLaVA-7B, Molmo-7B, LLaMA-11B), and models with three scales (Qwen2-VL with 2B, 7B, and 72B versions). The performance trends and our conclusions remain consistent, and we believe this expanded model set enhances the reliability of our conclusions. | Summary: This paper investigates the diagram comprehension capabilities of Large Vision-Language Models (LVLMs) by developing a comprehensive test suite across synthetic and real-world diagrams. The evaluation reveals that while LVLMs can accurately recognize entities, they struggle to understand relationships within diagrams. Further analysis suggests that their apparent reasoning ability is largely due to leveraging background knowledge as shortcuts, rather than genuine diagram understanding.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
1. LVLMs can accurately recognize entities, they struggle to understand relationships within diagrams: supported by Tables 2-3.
2. Reasoning ability is largely due to leveraging background knowledge as shortcuts:supported by Tables 4-7.
Methods And Evaluation Criteria: This paper investigates the diagram comprehension capabilities of Large Vision-Language Models (LVLMs) by developing a comprehensive test suite across synthetic and real-world diagrams. The evaluation reveals that while LVLMs can accurately recognize entities, their ability to understand relationships is notably limited, often relying on background knowledge as shortcuts rather than genuinely perceiving diagram structures. However, the paper does not clearly specify the scale of evaluated models, raising the question of whether larger models (e.g., 72B) might exhibit improved relational reasoning and diagram perception or if these limitations persist—or even worsen—with increased model size. Addressing this would clarify whether scaling alone mitigates these issues or if fundamental architectural improvements are needed for genuine diagram understanding.
Theoretical Claims: There is no theoretical claims.
Experimental Designs Or Analyses: The experimental design is comprehensive and easy to follow, effectively supporting the observations and final conclusions. However, the relationships tested in both synthetic and real-world images are primarily spatial and follow arrow-based dependencies. There are more complex relationships that could be explored, such as logical, hierarchical, or causal relationships, to provide a more holistic evaluation of LVLMs’ diagram comprehension capabilities.
Supplementary Material: Yes. I reviewed all sections in the supplementary material.
Relation To Broader Scientific Literature: Refer to Section of Essential References Not Discussed.
Essential References Not Discussed: Exploring the visual limitations of LVLMs is not a new topic, as many previous studies have addressed this issue. However, the authors fail to provide a direct comparison with these works. For example:
[1] Hulingxiao He et. al., ANALYZING AND BOOSTING THE POWER OF FINEGRAINED VISUAL RECOGNITION FOR MULTI-MODAL LARGE LANGUAGE MODELS, ICLR 2025.
[2] Shengbang Tong et. al., Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs, NeurIPS 2024.
[3] Shengbang Tong et. al., Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs, CVPR 2024. [4] Xingyu Fu et. al., BLINK: Multimodal Large Language Models Can See but Not Perceive, ECCV 2024.
Other Strengths And Weaknesses: This paper contributes to diagram understanding of LVLMs, like visual rasoning and shortcut learning. The findings suggest that existing LVLMs lack true diagram understanding, opening avenues for research in improving relational reasoning in LVLMs through better graph-based representations. However, additional experiments are needed to make these conclusions more convincing: Evaluating larger models to determine whether scaling mitigates these limitations; Expanding benchmarks to test more complex, non-spatial relationships beyond arrows and spatial positioning.
Other Comments Or Suggestions: Refer to Section of Other Strengths And Weaknesses.
Questions For Authors: I have no further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our experimental design is comprehensive and convincing. Our responses to the proposed concerns are provided below.
---
>**Point 1**: *... raising the question of whether larger models (e.g., 72B) might exhibit improved relational reasoning and diagram perception....*
>**Response**: We appreciate the constructive suggestions provided by the reviewer. We are pleased to report that we have now completed experiments on three additional models: Molmo-7B, Qwen2-VL-2B, and Qwen2-VL-72B. The results for these models have been incorporated into the paper (Tables 2, 3, 4, 5, 6, 7, 9, and 11). Overall, the trends and conclusions remain consistent (Molmo performs similarly to LLaMA, and Qwen-2B and Qwen-72B perform similarly to Qwen-7B). We will briefly present some of their scores here (Tables 2, 3, and 4).
>**Table 2 (entity understanding on synthetic diagrams)**
>| Acc (%) | Text Entity (recognition) | Text Entity (counting) | Visual Entity (recognition) | Visual Entity (counting) |
>|-------------|--------------------------------|--------------------------------|--------------------------------|--------------------------------|
>| LLaVA | 38.9| 26.6| 46.4 | 30.8|
>| Molmo | 93.4| 78.8| 64.1 | 54.0 |
>| LLaMA | 91.3| 90.9| 72.7| 70.1|
>| Qwen-2B | 82.3 | 73.2 | 63.1| 53.5 |
>| Qwen-7B | 97.6 | 73.0 | 94.5 | 73.0 |
>| Qwen-72B | 99.1 | 97.9 | 90.6 | 86.4|
>| GPT-4V | 97.8| 99.6 | 85.7 | 93.7 |
>| GPT-4o | 99.2 | 100.0 | 92.6| 94.9 |
>| Gemini | 88.1| 95.8 | 87.7| 86.5|
>**Table 3 (relation understanding on synthetic diagrams)**
>| Acc (%) | Implicit Relation (recognition) | Implicit Relation (counting) | Explicit Relation (recognition) | Explicit Relation (counting) |
>|-------------|--------------------------------|--------------------------------|--------------------------------|--------------------------------|
>| LLaVA | 30.2 | 27.5 | 35.1 | 28.3 |
>| Molmo | 71.7 | 31.2 | 59.1 | 50.4 |
>| LLaMA | 75.4 | 32.0 | 55.2 | 46.1 |
>| Qwen-2B | 63.3 | 29.8 | 44.0 | 33.1 |
>| Qwen-7B | 74.4 | 59.0 | 59.8 | 51.5 |
>| Qwen-72B | 77.9 | 67.1 | 70.3 | 63.8 |
>| GPT-4V | 72.3 | 34.4 | 61.6 | 59.5 |
>| GPT-4o | 77.3 | 55.3 | 76.6 | 70.2 |
>| Gemini | 60.9 | 31.8 | 68.5 | 70.2 |
>**Table 4 (entity and relation understanding on real diagrams)**
>| Model | Entity (recognition) | Entity (counting) | Relation (recognition) | Relation (counting) |
>|-----------|-------------|-------------|--------------|--------------|
>| LLaVA | 56.5 | 37.3 | 45.1 | 30.2 |
>| Molmo | 82.7 | 54.9 | 59.8 | 51.2 |
>| LLaMA | 87.3 | 59.7 | 73.7 | 51.2 |
>| Qwen-2B | 66.6 | 40.7 | 45.7 | 39.3 |
>| Qwen-7B | 90.0 | 56.1 | 71.4 | 58.0 |
>| Qwen-72B | 93.7 | 77.8 | 79.4 | 69.8 |
>| GPT-4V | 88.9 | 78.8 | 78.7 | 59.9 |
>| GPT-4o | 93.1 | 82.3 | 84.1 | 72.9 |
>| Gemini | 85.0 | 68.4 | 80.5 | 57.7 |
---
>**Point 2**: *There are more complex relationships that could be explored, such as logical, hierarchical, or causal relationships, …*
>**Response**: While our synthetic diagrams may not include complex relationships, our real-world diagram dataset covers a wide range of relationship types (more details can be found in Figure 4 and Table 8 in Appendix D). Unfortunately, for rarer cases, we are constrained by the limitations of existing datasets, but we have included as much diversity in the relationships as possible.
---
>**Point 3**: *However, the authors fail to provide a direct comparison with these works. For example:...*
>**Response**: We sincerely appreciate the reviewer for pointing out these relevant works. We will include them in our related work section and provide a discussion on them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. My concern regarding the large-scale model performance has been addressed. However, could you further elaborate on the novelty of your work. As I previously noted, visual limitations in vision-language models (VLMs) has been explored in earlier studies. Could you provide a brief discussion comparing your approach with the related works I cited [1–3] and clarify what sets your method apart?
---
Reply to Comment 1.1.1:
Comment: We’re glad to hear that the new results have addressed the reviewer’s concern. Regarding the comparison with the cited works, our approach differs in several important ways:
---
>[1] focuses on evaluating the recognition capabilities of small open-source models (up to 10B), particularly in fine-grained visual recognition tasks.
>[2] emphasizes visual grounding and the role of vision encoders, introducing a new benchmark and improving model design based on those insights.
>[3] uncovers fundamental visual shortcomings in CLIP-based VLMs and proposes a solution by incorporating features from self-supervised vision models.
>In contrast, our work centers on a different research question: we investigate why VLMs often fail at basic visual abilities—such as recognition and counting—despite appearing capable of reasoning over complex diagrams. Rather than focusing on fixing these issues, we design targeted experiments and curate datasets to explain these counter-intuitive behaviors.
>In short, while the cited works aim to identify and improve limitations in VLMs, our goal is to understand why those limitations exist through carefully controlled evaluations. We believe this focus offers a complementary and novel perspective. | Summary: ## update after rebuttal
The paper presents a comprehensive and detailed study on if LVLMs are able to understand visual diagrams. The authors develop a test suite of synthetic as well as real diagrams and test multiple LVLMs on it. The authors find that the models are able to identify identities well. However, they have limited abilities in recognizing relations, especially synthetic ones and the ones without background knowledge as shortcuts. This suggests that existing LVLMs have limited abilities on diagram reasoning.
Claims And Evidence: Yes the paper has clearly presents all the claims, together with clear evidence.
Methods And Evaluation Criteria: Yes, the evaluation makes sense for diagram understanding.
Theoretical Claims: No theory claims.
Experimental Designs Or Analyses: Yes, I checked each one and the paper is sound.
Supplementary Material: I checked, and it is the codes and benchmark data.
Relation To Broader Scientific Literature: There are many prior works that relate to diagram understanding in prior works on LVLM. For example, popular benchmarks like MMMU and ChartQA contain diagram understanding problems. The paper focuses on entity & relation and reveals that existing models still have a long way to go on recognizing and reasoning on relations.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is overall well-written and clearly structured. The conclusions are sound and interesting and reveal a particular weakness of existing LVLMs. The experiment design is interesting and informational.
One weakness is that the scope of the problem covered is limited, with only entities connected by relations. It would be nicer if the author can discuss about more general and complex diagrams, which would make bigger impact to the community.
Other Comments Or Suggestions: page 4 line 172 on the right missing a space between "detailsof"
Questions For Authors: An interesting phenomenon I found in the numbers is that Llava performs very poorly on the benchmark compared with other models. However, I noticed that its benchmark score is similar to Llama and Qwen's. Any thoughts on this?
Also, can the models do much better on "relation" with some SFT with your synthetic data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for acknowledging the clarity and robustness of our work. Below, we provide our responses to the concerns and questions raised. Additionally, in line with the suggestions from Reviewer 8S38 and Reviewer 3mRA, we have expanded our evaluation model set to include 9 models, and the results remain consistent, further reinforcing the validity of our findings.
---
>**Point 1**: *the scope of the problem covered is limited, … if the author can discuss about more general and complex diagrams, …*
>**Response**: We agree with the reviewer that expanding the range of diagram types discussed could be valuable for the community. In our work, we design two evaluation settings: one with synthetic data and another with real-world data. For the synthetic evaluation, we focus on a typical diagram type (entity-relation) to ensure clarity and intuitiveness. For the real-world evaluation, we gather data from six domains, incorporating various diagram types from existing works (Figure 4 and Table 8 in Appendix D provide additional details). While there is potential to include more general or complex diagram types, our work is constrained by the availability of existing diagram datasets. Nonetheless, we have aimed to cover as diverse a range of diagram types as possible.
---
>**Question 1**: *... Llava performs very poorly on the benchmark compared with other models. However, I noticed that its benchmark score is similar to Llama and Qwen's. Any thoughts on this?*
>**Response**: We appreciate the reviewer for carefully reviewing our paper. Indeed, the performance of LLaVA is significantly lower than that of the other models. We believe this can be attributed to differences in the training patterns. Unlike the other models, LLaVA is primarily trained with real-world images, while the other models incorporate augmented synthetic data and reinforcement learning techniques, such as DPO, which likely enhance their generalizability. We hypothesize that these additional training phases contribute to the stronger performance observed in these models on our benchmark.
---
>**Question 2**: *... can the models do much better on "relation" with some SFT with your synthetic data?*
>**Response**: Although SFT with our data could likely improve the evaluation accuracy on our benchmark, we do not believe it would lead to true diagram understanding ability or generalizability in practice. In our work, our primary focus is to highlight the weaknesses of existing Vision-Language Models (VLMs). Addressing how to improve these models is a broader topic, and we encourage future work to explore potential solutions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! i will maintain my overall positive scores. | null | null | null | null | null | null |
Target Concrete Score Matching: A Holistic Framework for Discrete Diffusion | Accept (poster) | Summary: The paper introduces a novel training objective for discrete diffusion models, dubbed Target Concrete Score Matching (TCSM), which is based on the concrete score (Meng et al., 2022).
Specifically, two different objectives are proposed: One is based on some divergence between the target and predicted concrete score, whereas the other is based on some divergence between the target and predicted distribution (conditioned on a noisy version of the data). It is shown not only that the two objectives have the same global minimum, but also that if the global minimum is reached, then the target and model distribution are the same almost everywhere.
By choosing the forward KL-divergence as a specific instance of TCSM and under a factorized parameterization (where every token is modeled independently), the objective is claimed to reduce to the simple cross-entropy reconstruction objective, where the clean data is predicted from its noisy version.
To demonstrate the flexibility of TCSM, the proposed training objective is applied to pre-training a language model with or without some reference model, and also adapted for post-training tasks in the form of (offline) reward optimization, (offline) preference tuning, as well as knowledge distillation from some teacher language model.
The reported empirical results are on-par with or slightly ahead of prior work.
## Update after rebuttal
Thank you to the authors for earnestly and extensively engaging with my questions and feedback. There were a lot of initial concerns, some of which were addressed by the authors' rebuttal and some of which turned into different concerns, some of which in turn have been addressed by the authors' followup response. In the following, I will summarize which of my concerns have and have not been addressed after the rebuttal:
1. **Faster convergence of TCSM:** While this point was initially unclear, it seems that TCSM converges faster than baselines _initially_, but performs worse than the baselines (on masked diffusion) after full convergence. In light of this, claiming "faster convergence" is, IMO, unjustified as _convergence_ usually refers to the point in training where the model no longer improves, i.e. full convergence. In the absence of further evidence to the contrary, I believe that optimizing the diffusion ELBO directly is still superior for pre-training.
2. **TCSM encompasses several existing methods:** This concern has been partially addressed. I now see how TCSM can be used to train various discrete diffusion models, including MDMs. However, claiming that, e.g., MDM is an instance of TCSM still requires a mathematical proof.
3. **Likelihood computation:** TCSM relies on existing theory for likelihood bounds and does not provide a likelihood bound in and of itself. This confusion has been addressed.
3. **Soundness of theoretical claims:** Most of my concerns have been adequately addressed. The intractability of $p\_{1|t}$ remains a concern. This has now been reduced to the intractability of $p\_t(x\_t)$, which is still an issue if not conditioned on $x\_1$. During training, we neither sample from nor approximate $p\_t(x\_t)$ but instead sample from $p\_1(x\_1)$ and then $p\_t(x\_t | x\_1)$. This results in the same joint distribution, but would imply that the theory does not describe what we actually do in practice.
4. **Data prediction:** The authors point towards DFM for this terminology, but the DFM targets are probability _flows_ rather than probability _distributions_. In the DFM sense, TCSM is neither noise nor data prediction, since we are dealing with a distribution over denoised tokens rather than a probability flow. I maintain that the usage of the term in this paper is unnecessary and confusing.
Overall, given that many of the concerns on theoretical soundness have been addressed, I will raise my score from 1 (reject) to 2 (weak reject). Due to the initial extent of issues and given that TCSM for pre-training (and especially the claim of faster convergence) stands on weak evidence, I believe that major revisions in the form of improved clarity and framing are necessary. Unfortunately, this makes it hard for me to recommend an accepting decision, despite the novel and significant contributions.
Claims And Evidence: Section 4.2 claims that the proposed method helps mitigate “slow convergence, high variance gradients, and reduced sample efficiency compared to autoregressive models” by using a parametric model (a pre-trained language model, AR or non-AR). While using a parametric model indeed improves convergence compared to the TCSM baseline, no evidence on gradient variance is provided. Furthermore, the setup requires having access to a capable pre-trained model, which itself requires compute resources to train, thus offsetting any efficiency gains. I therefore cannot confidently conclude any of the above claims from the provided evidence.
Section 4.1 (L268 ff.) claims that TCSM “encompasses several existing discrete diffusion methods, including MD4 (Shi et al., 2024), MDLM (Sahoo et al., 2024), and DFM (Gat et al., 2024)”, with different source distributions and divergence measure supposedly giving rise to various objectives from the literature (as per Table 2). However, this is stated without proof, and the claim is not at all obvious. For starters: Masked diffusion models (MD4 and MDLM) have an ELBO that takes the form of a weighted reconstruction loss, where the weights $\frac{\alpha_t’}{1 - \alpha_t}$ are determined by the noise schedule $\alpha_t$. While the proposed TCSM $l_\mathrm{distrib}$ loss indeed seems to reduce to a reconstruction loss in certain cases, it is entirely unclear where the scaling factor would come from. Adding to the confusion is the fact that the equivalence is claimed for the $\mathcal{L}_\mathrm{distrib}$ loss, which is not actually an instance of TCSM (Eq. 3), but only has the same global minimum (as per Propositions 2).
The paper also claims an improvement in PPL compared to other discrete diffusion models (MDLM and SEDD), but the cited numbers are different from what is reported in the original paper. This calls into question the soundness of either the methodology or the reported numbers.
Methods And Evaluation Criteria: The paper relies heavily on likelihood as an evaluation criterion (BPC, PPL), and while the chosen datasets are consistent with prior work, there is no explanation on how one actually calculates the likelihood under the proposed model. While it is somewhat plausible that one _can_ calculate likelihood from the concrete score, it is not obvious (to me) how to do this, and the paper is lacking detail in this regard. If the intended claim is that the proposed $\mathcal{L}\_\mathrm{score}$ and
$\mathcal{L}_\mathrm{distrib}$ losses are likelihood bounds, this would have to be proven mathematically.
Theoretical Claims: Proposition 4.1 refers to the appendix for proof, but I cannot find this proof anywhere. Therefore the claim is stated without proof.
For Eq. (9), it is claimed that the derivation is straightforward starting at Eq. (5), but I do not see how this simplification occurs. Specifically, while there are indeed some constant terms (which we cannot simply “drop” and claim _equality_), the expectation/sum over $p_{1|t}$ that is present in the KL-divergence but not Eq. (9) cannot (trivially) be ignored. Hence, and as far as I can tell, while Eq. (9) can still serve as a training objective, the claimed equality does not hold.
The simplified objectives arising from a factorized parameterization (Eq. 10 and L263) are also stated without proof, and it is non-obvious how they follow from the stated assumption.
The proof of Proposition 1 (App. B.1) assumes “mild regularity conditions”, which is a term that I am not familiar with, and I am unable to find any further elaboration in the paper. Clarifying these conditions would improve the mathematical rigor. Other than this, the proof seems correct.
Proof of Proposition 2:
- RHS of Eq. (19) is repeated twice.
- RHS of Eq. (20) is identical to Eq. (19). It seems to me like at least one of them should be different.
- Consequently, I have a hard time coming to the claimed conclusion.
Proof of Proposition 3 seems fine after skimming.
Experimental Designs Or Analyses: The paper relies on likelihood as a measure of model performance, which is fine, but the lack of qualitative examples somewhat casts doubt on the sample quality of the proposed model. I suggest providing at least some samples from each experiment (unconditional generation, IMDB sentiment tuning, toxicity tuning) in the appendix.
Supplementary Material: I skimmed the appendix a couple of times, but did not read it in detail.
Relation To Broader Scientific Literature: The paper combines ideas from “Target Score Matching” (Bortoli et al., 2024) and “Concrete Score Matching” (Meng et al., 2022). While I am not deeply familiar with these two papers, it seems like a more detailed discussion on how the presented method is similar/different from and builds upon these two prior works would be appropriate, especially considering that the papers in question are only referenced in passing and are not mentioned in Section 2 (Preliminaries).
Essential References Not Discussed: It seems to me like the comparison to [1] and/or [2] would be appropriate, and arguably more pressing than comparing to discrete flow matching.
- [1] Lou et al., 2023. https://arxiv.org/abs/2310.16834
- [2] Sun et al., 2022. https://arxiv.org/abs/2211.16750
Other Strengths And Weaknesses: The paper provides an interesting and new perspective on discrete diffusion models and, if the claimed performance improvements are real, improves the state-of-the-art. This is the paper’s strength, which should be focused in future revisions.
However, the mathematical rigor and the soundness of claimed results leaves a lot to be desired and makes it hard for me to recommend an accepting decision. I am happy to reevaluate my conclusion if these concerns can be addressed, but the rebuttal period constitutes a tight deadline given the extent of the issues. It may be deemed the better, less stressful option to take the necessary time to improve the writeup and promptly resubmit it at a future date.
Other Comments Or Suggestions: On a more subjective note, I have found that the clarity of the writing has a lot of room for improvement. Things feel, at times, overly and unnecessarily general, in a way that obfuscates the contributions of the paper (e.g. do we really need to sample $t$ over an arbitrary distribution if this distribution in practice will always be $U(0, 1)$?). The theoretical results could also be put into perspective a bit better, leading the reader to the desired conclusions. For example: It is not clear to me why Proposition 3 gives new insight, considering that we have already proved equivalence of $l_\mathrm{score}$ and $l_\mathrm{distrib}$ in Proposition 2. Further, while Proposition 2 assumes some specific divergence measures, as far as I can tell, these are not the ones that are later used in experiments. Another example: Table 2 is not only stated without proof, but also never elaborated on, and it is left to the reader to “figure out” what it means and entails.
To summarize: I would suggest to streamline the theory by clearly stating all the conclusions we would like to take away, to lead the reader towards the desired conclusions as clearly and directly as possible, and to remove anything that does not contribute towards this goal.
Questions For Authors: - How does one compute likelihood under the proposed model? Are objectives (4) and (5) supposed to be likelihood bounds?
- How does one generate samples with the proposed model?
- How is Eq. (10) derived?
- What is the proposal distribution $h(x_1 | x_t)$ during pretraining? This seems like an essential component of the training objective that is (as far as I can tell) never specified until Section 5.
- How are baseline numbers for SEDD and MDLM obtained? The reported numbers (Figure 1) are different from what is claimed in each respective original paper: Sahoo et al. report a PPL of 23.21, not >30, for MDLM and 24.10, not >34, for SEDD. What is the reason for this discrepancy?
- Table 6: What data is used to compute PPL? Also, the table is not referenced in the main text.
Where is the “data-prediction objective” (Eq. 12) from? I cannot find it in either of the two references (Campbell et al., 2024; Gat et al., 2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough and insightful review. We address each point in detail below, quoting relevant comments, and will incorporate all suggestions into revision.
Due to the paper's density, we aimed to balance presenting our method and providing essential background information. We omitted some standard practices for brevity, the concerns arising from these omissions can be clarified below with minor revisions, as they do not reflect flaws in our method or experiments.
# Q5
>Inconsistent Fig 1 baseline numbers
>cited numbers are different from original paper. calls into question soundness of methodology or reported numbers.
The discrepancy comes from different experimental setups. Fig 1 shows models trained on 26B OpenWebText tokens, making scores incomparable to original papers. We mentioned this in Sec 4.2 (L301) and Fig 1's caption, but agree it needs more emphasis. See response to Q2 of vK3t for more details.
# Q1
>How to compute likelihood?
We use same ELBO methods as prior works:
- Masked diffusion: MD4/MDLM approach (Eq.4 in [1])
- Uniform diffusion: DFM method (Eq.7.32 in [3])
>Are Eq(4,5) likelihood bounds?
>Writing is overly general (is sampling t from arbitrary $\omega(t)$ necessary?)
>Masked diffusion models have ELBO as weighted loss based on noise schedule, the source of the scaling factor in TCSM isn't clear.
$\omega(t)$ provides necessary weighting for valid ELBO and scaling factors, following diffusion literature (Eq.7 in [5], [4 Sec2.1]). For masked diffusion, ELBO weighting is $\lambda(t) = \frac{d\alpha_t/dt}{1-\alpha_t}$, so $\omega(t) \propto \lambda(t)$. We sample $t$ uniformly and apply $\lambda(t)$, equivalent to $t \sim \omega(t)$.
Eq (4, 5) are likelihood bounds only when $\omega(t)$ matches the ELBO weighting scheme.
# Q2
Refer to Q1 of vK3t.
# Q3
Eq.10 comes from Eq.9 using conditional independence
$p^{\theta}_{1|t}(\mathbf{x}_1 | \mathbf{x}_t) = \prod\_{i=1}^{L} p^{\theta}\_{1|t} (x^i_1 | \mathbf{x}_t)$.
Thus $p^{\theta}\_{1|t}(\mathbf{x}\_1 | \mathbf{x}\_1^{\neq i}, \mathbf{x}\_t)= p^{\theta}\_{1|t}(\mathbf{x}\_1 | \mathbf{x}\_t)$, yielding Eq.10.
# Q4
In pre-training, $h_{1|t} = p_{1|t}$ sampling from true data distribution, noted in Prop. 4.1 (L195) and L239.
# Q6
Table shows OWT val PPL from OWT-trained models. We'll add main text references.
>Where is data-prediction (Eq.12) from?
"Data-pred" follows DDPM conventions, used alongside "noise-prediction" or "v-prediction". It refers to predicting clean data $\mathbf{x}_1$ from noisy $\mathbf{x}_t$, versus predicting noise. See Eq. 109 in [4] for more.
# Q7
In Sec 4.2, we discuss gradient variance to motivate using parametric models for faster convergence. Prior works (e.g., Fig 2 in [6]) shows high gradient variance can slow convergence in diffusion models. While we don't directly measure variance reduction, our experiments demonstrate TCSM's faster convergence, addressing this core issue. We'll clarify in the revision that sample efficiency is our direct objective and demonstrated benefit.
Using existing LLMs (e.g. LLaMa) as parametric models avoids pretraining costs. Though we trained from scratch on OWT for fair comparisons, results show potential for leveraging pretrained LLMs.
# Q8
Simplification comes from using $p_{1|t}$ as $h_{1|t}$ and forward KL. We'll add complete notation: $l^i\_{\text{distrib}} = - \mathbb{E}\_{p_{1|t}(x\_1^i | \mathbf{x}\_t)} \log p\_{1|t}^{\theta}(x^i\_1| \mathbf{x}\_t) + C$.
# Q9
>RHS of Eq.19 repeated twice
RHS terms differ by $\mathbf{x}_1^{\neq i}$ positions.
>RHS of Eq.20 is identical to Eq.19
The proof shows equivalence between two concrete score views: Eq.19's LHS gives conditional score $p(x_1^i | \mathbf{x}_1^{\neq i}, \mathbf{x}_t)$, while Eq.20's LHS gives $i$-th component of joint score $p(\mathbf{x}_1 | \mathbf{x}_t)$. They're equal as normalization constants cancel in ratio-based concrete score. Though the RHS expressions appear identical, the key difference lies in their LHS interpretations
# Q11
See response to Q5 of vK3t.
# Q12
>Prop 3 insight, Prop 4.1 Proof
Prop 3 enables practical TCSM objective estimation from data. Since $\mathbf{c}\_{p\_{1|t}}$ is unknown, it reformulates the objective using KL and IS divergences, leading to the objective in Prop 4.1 used in experiments. It thus connects theory to practical estimation.
We'll include the proof of Prop 4.1 in appendix, which straightforwardly writes out the explicit forms of the KL and IS divergences.
# Q13
>Discussed divergence not used in exp
Gen. KL used in "TCSM Absorb $L_{score}$" (Table 4). Forward KL used in $\ell_{\text{distrib}}$ in "TCSM Absorb $L_{distrib}$" (Tables 3-4). Post-training uses f-divergence, giving objective (Eq. 11), tested in Tab 5-6 and App E.2.
# Q14
>Table 2 not elaborated
Please refer to Q4 of vK3t.
# Q15
>Samples
Will be added in the revision.
# Ref
[1] 2406.04329
[2] 2406.07524
[3] 2412.06264
[4] 2303.00848
[5] 2011.13456
[6] 2503.09573
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for responding to many of my concerns. In the following I will outline which concerns have and have not been addressed.
- Q5: Based on the caption of Fig. 1, it seems like the validation set, not the training set, consists of 26B tokens. Indeed, the main text clarifies this, but training for 26B OWT tokens is surprisingly little to validate the convergence properties of a pre-training method, considering the literature commonly trains for 5-10x longer.
- Q1, Q2: As I understand from the authors' response, the TCSM model based on a masking prior uses both the masked diffusion model (MDM) ELBO and sampling algorithm and only during training relies on the TCSM framework as a surrogate objective. If that is indeed the case, it needs to be clearly stated that TCSM serves only as a surrogate objective for existing diffusion approaches. However, it would also entail a weakening of the claim that "TCSM provides a unifying perspective that encompasses several existing discrete diffusion methods" (L268), since it does not unify existing approaches under one theory but only provides a surrogate objective that is versatile enough to be applied to different discrete diffusion settings.
- Q3: Assuming that $p^\theta\_{1|t}(\mathbf{x}\_1|\mathbf{x}\_1^{\neq i},\mathbf{x}\_t)=\prod\_j p^\theta\_{1|t}(x^j\_1|\mathbf{x}\_1^{\neq i},\mathbf{x}_t)$, we have $p^\theta\_{1|t}(x^j\_1|\mathbf{x}\_1^{\neq i},\mathbf{x}_t)=\delta\_{x^j,x\_1^j}$ if $i\neq j$. For $i=j$, $p^\theta\_{1|t}(x^j\_1|\mathbf{x}\_1^{\neq i},\mathbf{x}\_t)$ does not simplify without any further assumptions. It is unclear what the model probability is when we are given all but one token.
- Q4: Strictly speaking, the true denoising distribution $p_{1|t}$ (L127) is intractable, and if we can sample from it, we have already solved the problem of unconditionally denoising. Presumably, during training, we use the conditional denoising distribution $p_{1|t}(x | x_t, x_1)$, which is trivially equal to $\delta_{x, x_1}$. If this is indeed the intended claim, the dependence of $h$ on $x_1$ needs to be clearly stated.
- Q6: Unlike for continuous (Gaussian) diffusion, it does not make sense to distinguish between _data_ and _noise_ prediction for discrete diffusion, since there is no notion of "distance" between two (noisy) samples. As such, it is the first time I have heard the term "data-prediction" being used in the context of discrete diffusion, and I recommend removing the corresponding claim in L87, Col. 2 to avoid confusion.
- Q7: Even if high gradient variance causes slow convergence, which is likely true, the faster convergence observed in TCSM cannot be seen as evidence for lower gradient variance. Changing the writing appropriately should address this concern. In terms of using a teacher model, this cannot be seen as way to speed up pre-training, since pre-training assumes training a model from scratch. If the authors indeed intend to claim that using a parametric teacher model _speeds up_ pre-training, the training cost of the teacher model has to be taken into account. Instead, it is probably best framed as a form of distillation of a (non-diffusion) teacher model into a diffusion student model, which is also a valuable contribution.
- Q8: The updated equality is better. However, strictly speaking, the dependence on $\mathbf{x}_1^{\neq i}$ cannot trivially be dropped at this point.
- Q9: Thank you for the clarification, the proof seems sound now. Specifying the "mild regularity conditions" should further improve its rigor.
- Q12: I am willing to take the authors' word regarding the missing proof of Prop. 4.1. Regarding Prop. 3, given that $p_{1|t}$ is also unknown, its purpose remains unclear.
To summarize, while some of my concerns on soundness have been addressed, some remain. Most prominently, the short training horizon of 26B tokens feels insufficient to make strong claims about convergence properties compared to baselines that are usually trained for much longer. Further, the intractability of $p\_{1|t}(\mathbf{x}\_1|\mathbf{x}\_t)$ is a major theoretical issue if this distribution is to be used as the proposal distribution during pre-training. Many remaining theoretical soundness concerns can be remedied by clarifying the role of TCSM as a surrogate objective on top of existing discrete diffusion models. This would remove the need for, and in fact make unnecessary, the theoretical justifications, since empirical validation is all the justification needed to motivate a surrogate objective. Releasing reproducible training code would also bolster the paper in this regard. Besides soundness, framing and writing remain significant weaknesses that can be improved in future versions.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's follow-up.
To summarize:
- Q[9, 11, 12, 13, 14, 15] are resolved. We'll incorporate suggestions in revision.
- Remaining concerns addressed below.
# Q4 Role of $p_{1|t}$
We appreciate comments on $h_{1|t} = p_{1|t}$ and clarify.
>true denoising distribution $p_{1|t}$ is intractable, and if we can sample from it, we have already solved the problem of unconditionally denoising.
This interpretation is incorrect. As explicitly defined in Sec 2, Line 107, $p_{1|t}$ is $p_{1|t}(x_1 | x_t) = \frac{p_1(x_1)p_{t|1}(x_t | x_1)}{p_t(x_t)}$, where $p_t(x_t) = \mathbb{E}_{p_1(x_1)} p_{t|1}(x_t | x_1)$
Though $p_{1|t}$ is unknown, we can sample from it using the training dataset and probability path $p_{t|1}$. TCSM training involves sampling from $t \sim \omega(t), x_t \sim p_t(x_t), x_1 \sim h_{1|t}(x_1|x_t)$, specifically when $h_{1|t}=p_{1|t}$, the process is as follows:
1. $t \sim \omega(t)$
2. Sample $x_1$ from dataset
3. Sample $x_t$ from $p_{t|1}(x_t | x_1)$
4. Train with TCSM
>during training we use $p_{1|t}(x|x_t, x_1)$, which is $\delta_{x, x_1}$
>dependence of $h$ on $x_1$ needs to be stated
While $h_{1|t}$ is $p_{1|t}$ during pre-training, its definition doesn't depend on $x_1$. For example we use $h_{1|t}=p_{1|t}^{pre}$ in post-training, differing from original data distribution (see Alg 1, App E.2).
In our understanding of the notation $p_{1|t}(x|x_t, x_1)$, it seems you are borrowing the notation of the posterior $p(x_{t-1}|x_t, x_1)$ in denoising diffusion models which shares similar form. If that's the case, we would like to clarify that:
- $h_{1|t}$ does not play the same role as $p(x_{t-1}|x_t, x_1)$, which is only relevant in **denoising score matching**, leading to conditional denoising score matching $\nabla_{x_t} \log p_{t|1}(x_t|x_1)$. Our TCSM, however, is based on **target score matching**, which is based on the score $\nabla_{x_1} \log p_{1}(x_1)$. We have discussion in Sec 4 Lines 209-219, and Table 1.
>intractability of $p_{1|t}(x_1|x_t)$ is a major theoretical issue
We hope our explanation clarifies that $h_{1|t}=p_{1|t}$ does not pose a theoretical or practical issue.
# Q5
>26B tokens is surprisingly little to validate the convergence properties
- As mentioned in our previous response (Q2 vK3t), the experiment in Fig 1 is designed to evaluate **sample efficiency** during pre-training, not full convergence. Training for 26B tokens demonstrates these efficiency gains. The comparison in Fig 1 is fair as all methods used the same 26B token budget, similar to the 33B tokens in the MDLM paper.
Validation loss curves in Fig 4 further support our approach's improved training efficiency.
- For evaluations results **after full-convergence**, we have results presented in Table 3 and Table 6, where we compare against baseline models all after full convergence.
# Q3
We'd like to respectfully clarify that the assumption you brought up is not the one we make, either in our paper or rebuttal.
To elucidate the derivation of Eq 10, we restate it for clarity. The derivation is based on the conditional independence assumption used in prior works on denoising models, that the model can be factorized as $p^{\theta}\_{1|t}(x_1 | x_t) = \prod\_{i=1}^{L} p^{\theta}\_{1|t}(x^i\_1 | x\_t)$. This indicates that $x_1$, when conditioned on the noisy state $x_t$, is independent of tokens $x_1^{\neq i}$. Therefore, we have $p^{\theta}\_{1|t}(x\_1 | x\_1^{\neq i}, x\_t) = p^{\theta}\_{1|t}(x\_1 | x\_t)$. By substituting this simplification into Eq 9, we directly obtain Eq 10
# Q6
>it does not make sense to distinguish between data and noise prediction for discrete diffusion
>first time hearing "data-prediction" in discrete diffusion, suggest removal to avoid confusion
The distinction between data and noise prediction is well established in the foundational work we built on.
Discrete Flow Matching (DFM) (2407.15595) explicitly distinguishes these notions (see abstract and Table 1). The equivalence between noise and data prediction only holds for masked diffusion. Since both DFM and our TCSM are designed for general priors, "data-prediction" is necessary to describe one of the valid prediction targets, aligning with the DFM framework.
# Q1 Q2
We are glad concerns about likelihood and sampling are resolved. We clarify why TCSM is beyond a surrogate. By choosing different source distributions (masking, uniform, etc.) and divergence measures (KL, Gen KL, f-divergence), our framework can derive objective functions for various discrete diffusion models. There is no existing framework that TCSM can be considered a surrogate of. See Q4 in vK3t for details.
# Q7
Thanks for the suggestion. We will revise to align more with distillation.
We hope this clarifies. If so, we kindly ask the reviewer to update review reflecting most concerns are addressed, and specify if remaining issues are about flaws in our own methods or just background/notation when discussing with AC and reviewers. | Summary: This work presents a new paradigm for modeling discrete data, titled Target Concrete Score Matching (TCSM). Unlike recent works that match a denoising concrete score, starting from an objective inspired by discrete flow matching, the authors propose to model the concrete score of ‘clean’ (i.e., target) data. The framework of TCSM is claimed to admit other recent discrete diffusion / flow matching papers as special cases. After deriving tractable Monte Carlo formulations of the TCSM objective, the authors conduct experiments demonstrating the competitive language modeling capabilities of TCSM-trained models, as well as their amenability to post-training adaptations.
## update after rebuttal
During the rebuttal periods the authors addressed my big concern on whether the results represented "fair comparisons". I therefore increased my score to 4.
I believe the authors should commit to adding the missing proof for Prop 4.1 and make the edits from the rebuttal period which are important for clarity and reproducibiliyt
Claims And Evidence: The claims are well supported by the experiments, however, as noted below I have some questions about the details of the experiments and believe more information should be provided in the manuscript.
Methods And Evaluation Criteria: Yes, benchmarks and tasks are well suited to the tasks at hand.
Theoretical Claims: Proofs were checked. However, there is a proof missing in the manuscript. Currently the referenced proof for Proposition 4.1 is not found in the appendix. This should be remedied during the upcoming discussion period.
Additionally, I believe that the claim that TCSM unifies previous discrete diffusion and flow matching works under a single framework (i.e., Table 2) deserves more explicit derivation that demonstrates equivalence (could be added to the Appendix).
Experimental Designs Or Analyses: My biggest questions / concerns are regarding the main NLP experiments. Specifically, I do not believe that the current manuscript contains enough information for reproducibility:
- The authors should provide details on how perplexity (or its lower bounds) were computed for TCSM methods.
- More details for Figure 1 and Table 4 should be provided, i.e, model sizes and hyperparameters for training.
- More specifically, for Section 4.2, it is unclear whether the TCSM models used in Figure 1 are randomly initialized or come from pre-trained BERT/AR/Hollow transformer models. If the latter, the authors should more explicitly state the details for the pre-trained model, and moreover should clarify why the comparisons to the baselines are still “fair” if pre-trained models are used (i.e., TCSM would be receiving more training / FLOPs budget in that case).
- All of the fine-tuning / post-training experiments should have more explicit details about the specific model used for initialization and the hyperparameter setup of the fine-tuning
Supplementary Material: Yes, I reviewed the Appendix extensively, with a focus on sections A-D.
Relation To Broader Scientific Literature: This work stands in contrast to the successful denoising score matching work of SEDD and is a generalization of other recent discrete diffusion and flow matching papers.
Essential References Not Discussed: Throughout the paper, I would recommend adding Ou et al., 2024 [1] to the reference lists that include MDLM and MD4, as Ou et al was concurrent with these works and derives a similar formulation.
The authors should cite works such as [2], [3], [4] for discussing the use of the taylor approx. in the AR parameterization/distillation
Additionally, references for the text8 [5] and OWT [6] datasets should be added.
Finally, it would be helpful to add references to the first sentence in Section 4.2.
---
[1] Ou, Jingyang, et al. "Your absorbing discrete diffusion secretly models the conditional distributions of clean data." arXiv preprint arXiv:2406.03736 (2024).
[2] Vignac, Clement, et al. "Digress: Discrete denoising diffusion for graph generation." arXiv preprint arXiv:2209.14734 (2022).
[3] Nisonoff, Hunter, et al. "Unlocking guidance for discrete state-space diffusion and flow models." arXiv preprint arXiv:2406.01572 (2024).
[4] Schiff, Yair, et al. "Simple Guidance Mechanisms for Discrete Diffusion Models." arXiv preprint arXiv:2412.10193 (2024).
[5] Matt Mahoney. Text8 dataset, 2011. URL http://mattmahoney.net/dc/textdata.
[6] Aaron Gokaslan, Vanya Cohen, Ellie Pavlick, and Stefanie Tellex. Openwebtext corpus. http:
//Skylion007.github.io/OpenWebTextCorpus, 2019.
Other Strengths And Weaknesses: **Strengths**
Overall, I think this is innovative work and find the new directions of post-training and various parameterizations compelling. Additionally, if the concerns / confusions I have about the experimental details are resolved, this work would represent a very big step forward towards closing the gap to the dominant AR approach for language modeling. My current score below reflects this and I would be more than happy to increase it if the concerns raised above and detailed below are addressed.
**Weaknesses**
1. The authors should provide more explicit derivations of equivalence between TCSM and the works listed in Table 2.
2. The authors should clarify whether the comparison between TCSM and other models in Figure 1 and Tables 3 and 4 is “fair”. Was TCSM initialized from a pre-trained model here? If so, which one and additionally is the training budget comparable to baselines after accounting for the train of the initialization model.
3. The presentation of an “experiments” section following Section 4.1 seems out of place, since my understanding is that the actual models used require the parameterization introduced in Section 4.2.
4. An AR baseline should be added to Figure 1.
5. Are the gains in Table 6 just a product of additional training? Or are the baselines indeed comparable to fine-tuned TCSM?
6. The presentation of distillation in Section 5.4 as a post-training technique is somewhat confusing, considering this is the implementation of the AR parameterization of $p_1$ in Section 4.2, if I understand correctly. It seems the same methodology is being “pitched” for both pre and post training.
7. More experimental details are required to ensure reproducible results. Specifically, for all experiments the specific probability path schedule (I am assuming linear is used) should be mentioned as well as other missing model size, training length, and hyperparameter details (many / all of these can go in the appendix).
8. All the post-/fine-tuning experimental details should provide more information about the pre-trained model being used.
Other Comments Or Suggestions: **Suggestions**
1. It would be useful to have algorithms for training and generation with TCSM.
2. Throughout the text, I think you can consider dropping the time subscripts when using the variable $y$. To me, this represents any alternative from the vocab and so the subscript is unnecessary.
3. It would be helpful to make explicit what the “mild regularity assumptions” on the divergence are.
4. Citations should be added to Tables 1 and 2.
5. In Def 3.1, perhaps the notation for $\mathcal{N}$ on Line 105-RHS should be changed to: $\mathcal{N}(\mathbf{x}) = \{\mathbf{x\}_{n\_1}, \ldots, \mathbf{x}\_{n\_{K\_{\mathbf{x}}}} \}$.
6. On Line 198 LHS, one can surmise what is meant by $[c]_{\mathbf{y}_1}$ but this notation is not strictly well-defined in the text.
7. Consider adding an explicit definition for the cross entropy notation $\mathbb{H}$ on line 211 RHS.
8. I know it is technically “concurrent” work according to ICLR guidelines, but it might be nice to add UDLM (Schiff et al., 2024) to Tables 2 and 4.
9. The best values per column should be bolded in Table 3.
**Typos / Formatting issues**
1. Figure titles, legends, axes labels, and tickmarks are quite hard to read. Should be made bigger / more legible. Additionally in Figures 2 and 3, the dotted lines, the shading, and the meaning of the legend text (e.g. TCSM xx) are not explained anywhere.
2. There are several places where $p_{1|t}^{\theta}$ is replaced with $p_{t|1}^{\theta}$, e.g., Line 78-RHS, Line84-RHS, Line89-RHS, Line 676…
4. Several places have minor formatting issues after equations where the commas appear on a new line, e.g., after Equations (1) and (3)
5. References in the main text to Eq (12) should instead refer to Eq (1).
6. Line 134 LHS should say “ensures” instead of “ensuring”.
7. On Lines 115-116 RHS I understand what is meant by describing the matrix representation of the concrete score, but I think this part of the sentence is a bit confusing “by replicating the original sequence $\mathbf{x}$ $L$ times.” It’s not the values of $\mathbf{x}$ that are actually used, it’s just the right shape? Is that what is trying to be conveyed here?
8. Line 116 RHS is missing "$\mid \mathbf{x}_t$" in the vector expression for each column $i$ of the concrete score.
9. In the first two rows of Table 2, should “KL” instead be “GKL” or does the caption need to be updated?
10. The numbering for Tables 3 and 4 should be switched.
11. The actual parameterization of TCSM in Tables 3 and 4 should be indicated (i.e. BERT, vs. AR, vs Hollow).
Questions For Authors: 1. How would one use the parameterizations in Section 4.2 for generation? For example, the pre-trained BERT model seems to require ground truth data to compute $p_{1|t}(x_1^i\mid \mathbf{x}_1^{\neq i})$?
2. Are the parameterizations in Section 4.2 indeed “Pre-trained” (as indicated in the gray paragraph header in Line 256 RHS) or are they initialized from scratch?
3. How expensive are the parameterizations in Section 4.2? Seems like it requires $L$ forward passes for each sentence when using BERT / Hollow models?
4. Is it obvious that the divergence from Eq (3) can be decomposed into the component parts, as in Equations (4) and (5)? Aren’t there some assumptions being made about the divergence operator here that render these equivalent? If so, these should be explicitly stated.
5. Why is it important to define $h(\mathbf{x}_1 \mid \mathbf{x}_t)$? It seems that in all places, this is assumed to just be equal to $p(\mathbf{x}_1 \mid \mathbf{x}_t)$.
6. In Definition 3.1, why is it possible to have a “multi”set? Is this ever used?
7. In Proposition 4.1, is it correct to use the term “pseudo-likelihood" for $\ell^{i}_{pseudo}$ since it also has the added $1 / (V\cdot p)$ term?
8. For Figure 1, why do the authors only report PPL after 26B tokens. I believe this is only ~10% of the training of the baseline models? Additionally, were SEDD and MDLM re-trained for this figure? If so, that should be clarified.
9. In the Density Ration Parameterization of Section 5.1, wouldn’t this cause $\theta$ to not be updated since the $p_{ref}$ in the parameterization of $r_{1|t}$ would cancel out with that in the first entry of the Bregman Divergence in Eq. 11?
10. What does “Reimpl.” mean in the TCSM BCE line of Table 6? What is being reimplemented? Additionally, where is the MDLM number taken from? This is different than the one reported in Table 2 of Sahoo et al. 2024.
11. Can the authors elaborate on what is meant by the differing methods of compute PPL noted in the Appendix (Liines 875-879)? Some notation/formulas explaining the differences would be useful here as well as an explicit mention of which baseline uses which methodology.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer's thorough, high-quality, and insightful feedback. We'll address all suggestions and correct typos in our revision.
We are particularly encouraged by the reviewer's positive assessment of our work as "innovative", "compelling", and "representing a very big step forward." We also appreciate their willingness to increase the score once the concerns on experimental details are addressed. To directly address these concerns, we have organized our response to focus first on the experimental details.
Below we respond to each point (Q=Question, W=Weakness)
# Experimental Details Concerns
## [W3, Q1, Q2, Q3]
* Sec 4.1 introduces TCSM using **only data samples** from training dataset, same as prior works (SEDD, MD4)
* Sec 4.2 presents an alternative method using auxiliary parametric models (AR, BERT, Hollow) to estimate Target Concrete Score (TCS) via $p_1(x_1^i|\mathbf{x}_1^{\neq i})$
They describe orthogonal techniques within TCSM.
## W3
Experiments following Sec 4.1 use only **data-only** methodology, without auxiliary models from Sec 4.2.
## Q1
Parametric models in Sec 4.2 are used **only during training** to estimate TCS. They're **not needed for inference/generation**. After training, standard sampling techniques are used. (Please refer to Q1 of vK3t for more details).
## Q2
Our procedure:
1. Initialize parametric model from scratch and train on same dataset
2. Use this trained model (frozen) to provide TCS estimates
"Pre-trained" means auxiliary model is trained before being used in diffusion model training.
## Q3
Yes, it requires additional forward passes during training. However:
1. The overhead applies only during training, not inference, preserving generation speed
2. We observed just a ~20% increase in training time
3. As Fig 4 shows, these models significantly accelerate convergence, requiring fewer total training steps (and FLOPs) to reach target performance compared to data-only baselines
## Q5
* Pre-training: $h_{1|t}$ is true data posterior $p_{1|t}(x_1 | x_t)$
* Post-training with reward: $h_{1|t}$ is pre-trained denoising model $p_{1|t}^{pre}$
## Q8
Yes, we re-trained both baselines with same training budget as TCSM.
Please refer to Q2 of vK3t for more details.
## Q10
"Reimpl." refers to our re-implementation of EDLM NCE. As TCSM with BCE in Eq.11 is equivalent to EDLM's NCE loss and their code isn't public, we implemented it ourselves (detailed in App E.3, L1105-L1110).
The MDLM number in Tab 6 comes from Table 2 in the EDLM paper. Thanks for noting this discrepancy. We'll update with the original number from Sahoo et al. 2024.
## Q11
Like SEDD Sec 5.2.3, we divide text into fixed-length, non-overlapping segments. AR models use sliding window approach.
## W2
Comparisons are fair. All TCSM models were trained from scratch on identical datasets and evaluated under the same conditions using the same validation set. TCSM was not initialized from any pre-trained model trained on any extra data.
Baseline models were trained to convergence across multiple epochs, TCSM's gains come from our density ratio parameterization (see W5).
## W4
We implemented AR baseline with 26B tokens, achieving 22.51 PPL. We'll add to Fig 1.
## W5
Tab 6 gains come from our density ratio model $r_{1|t}^{\theta}$, not extra training. This model captures token dependencies other approaches miss with conditional independence assumption.
## W6
Please refer to Q3 in vK3t.
## W7, W8
All experiments use linear probability paths. We built on MD4's codebase, maintaining identical architectures, sizes, training steps, and optimizers for fair comparison. Code will be released for reproducibility.
# Other Concerns
## Q4
Eq.4 and 5 are indeed specific instances of the general Eq.3, simplified by using the 1-Hamming neighborhood.
This adds no additional assumptions on the divergence operator, as the goal of matching model and TCS can use any divergence measure. The 1-Hamming structure only defines what scores we match, not how we measure their differences through the divergence.
## Q6, Formatting issues 6
'Multiset' in Def 3.1 refers to how we construct the concrete score matrix (L114, RHS) by duplicating the original sequence. This enables transition from score space to normalized probability distribution space (Sec 3, L185-L219, 'Target Concrete Score' paragraph). Our approach parallels concepts in prior work (Proposition 1, Eq. 9 in MD4).
## Q7
We agree "pseudo-likelihood" in Prop 4.1 isn't used in exact sense. This term was chosen because minimizing $l_{pseudo}$ directly corresponds to maximizing standard pseudo-likelihood. We'll revise.
## Q9
As in Table 5, when implementing Eq. 11, we treat samples from $p^{ref}$ as "negative samples" for training model $\theta$. We don't backpropagate through sampling from $p^{ref}$.
## W1
Please refer to Q4 of vK3t.
## Extra References
Thank you for pointing out the extra references. We will add all suggested references.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I think I originally misunderstood the role of pre-trained models in Section 4.2 parameterizations. Can the authors clarify whether the reported validation ELBO numbers in Table 4 or Figure 1 rely on these external pre-trained models during `eval` or is the validation loop conducted using only TCSM without the pre-trained models?
Overall, I think several of the reviewers have noted issues with clarity. I believe that if these are addressed the paper would be much stronger. For me, in particular the two main opportunity areas for improvement are:
- Algorithms / explanation for training, generation, and ELBO calculation would be very useful.
- I had some difficulty understanding how the experiments were conducted: which models / parameterizations were used for each experiment.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for their time and insightful comments, which help us improve the work.
# Q1
>Can the authors clarify whether the reported validation ELBO numbers in Table 4 or Figure 1 rely on these external pre-trained models during eval?
>Is the validation loop conducted using only TCSM without the pre-trained models?
The validation ELBO results in Table 4 and Figure 1 **do not** use external pre-trained models during evaluation. These models are only used during *training* to help estimate the target concrete score (Section 4.2). Validation relies solely on TCSM without external models.
# Q2
> Algorithms / explanation for training, generation, and ELBO calculation would be very useful.
We agree. Below, we outline the algorithms for pre-training (using the $\ell_{score}$ loss from Eq. 10 with a masked prior as an illustrative example) and generation.
#### Algorithm for Training
#### Training Algorithm
**Input:** Dataset $\mathcal{D} = \{\mathbf{x}_1\}$, Denoising model $p_{1|t}^{\theta}$, Noise schedule $\{\alpha_t\}_{t}$
1. Sample $t \sim \omega(t) \propto \frac{d\alpha_t/dt}{1-\alpha_t}$
2. Sample $\mathbf{x}_1$ from dataset $\mathcal{D}$
3. Sample $\mathbf{x}_t$ from $p_{t|1}(\mathbf{x}_t | \mathbf{x}_1)$
4. Compute loss using $\ell_{score}$ (Eq. 10)
5. Update parameters $\theta$ via gradient descent
#### Generation Algorithm
1. **Initialize:** Set time steps $\{t(i)\}_{i=0}^{T} \leftarrow \text{discretize}([0, 1])$, initialize $\mathbf{x}_{t(T)}$ as $N$ mask tokens $\mathbf{m}$.
2. **for** $i = T, T-1, \ldots, 1$ **do**
- Set $t \leftarrow t(i)$ and $s \leftarrow t(i-1)$
- Predict distribution: $p_{1|t}^{\theta}(\mathbf{x}_1 | \mathbf{x}_t) = \text{Cat}(\mu(\mathbf{x}_t, t))$
- Update elements: $\text{for } n \in [N]$, $\text{ if } x_t^n = \mathbf{m}$, sample $x_s^n \sim \text{Cat}\left(\frac{\alpha_s - \alpha_t}{1 - \alpha_t} \mu^n(\mathbf{x}_t, t) + \frac{1 - \alpha_s}{1 - \alpha_t} \mathbf{e}_m\right)$; otherwise, keep $x_s^n \leftarrow x_t^n$.
3. **Output:** **return** final sequence $\mathbf{x}_0$.
For ELBO calculation, please see our response to Q1 wm3g.
Post-training procedures are detailed in:
- Algorithm 1 (Appendix E.2)
- Algorithms 2 and 3 (Appendix F.1) for reward-based fine-tuning
- Algorithm 4 (Appendix G.1) for preference-based fine-tuning
# Q3
> which models / parameterizations were used for each experiment.
We provide a detailed table below showing models and parameterizations for all results in the paper.
| Model | Experiment | Prior | Denoising Model Parameterization | Proposal distribution $h$ | Training objective |
|-------|------------|-------|----------------------------------|---------------------------|-------------------|
| TCSM Uniform $L_{score}$ (Sec. 4.2) | Table 4 | Uniform | Factorized $p_{1\|t}^{\theta}(\mathbf{x}_1 \| \mathbf{x}_t) = \prod_{i=1}^L p_{1\|t}^{\theta}(x_{1}^i \| \mathbf{x}_{t})$ | $p_{1\|t}$ | Eq. 10 |
| TCSM Uniform $L_{distrib}$ (Sec. 4.2) | Table 4 | Uniform | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM Absorb $L_{score}$ (Sec. 4.2) | Table 4 | Mask | Factorized | $p_{1\|t}$ | Eq. 10 |
| TCSM Absorb $L_{distrib}$ (Sec. 4.2) | Table 4 | Mask | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM Absorb $L_{distrib}$ (Sec. 5.1) | Table 4 | Mask | Density ratio parameterization $p_{1\|t}^{\theta}(\mathbf{x}_1 \| \mathbf{x}_t) \propto p_{1\|t}^{\text{ref}}(\mathbf{x}_1 \| \mathbf{x}_t) r_{1\|t}^{\theta}(\mathbf{x}_1 \| \mathbf{x}_t)$ | $p_{1\|t}^{pre}$ | Eq. 9 (pre-training), Table 5 Gen KL (post-training) |
| TCSM $L_{score}$ (Sec. 4.2) | Table 3 | Uniform | Factorized | $p_{1\|t}$ | Eq. 10 |
| TCSM $L_{distrib}$ (Sec. 4.2) | Table 3 | Uniform | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM $L_{distrib}$ (Sec. 4.2) | Table 3 | Mask | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM $L_{distrib}$ (Sec. 5.1) | Table 3 | Mask | Density ratio parameterization $p_{1\|t}^{\theta}(\mathbf{x}_1 \| \mathbf{x}_t) \propto p_{1\|t}^{\text{ref}}(\mathbf{x}_1 \| \mathbf{x}_t) r_{1\|t}^{\theta}(\mathbf{x}_1 \| \mathbf{x}_t)$ | $p_{1\|t}^{pre}$ | Eq. 9 (pre-training), Table 5 Gen KL (post-training) |
| TCSM-Bert | Figure 1 | Mask | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM-AR | Figure 1 | Mask | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM-Hollow | Figure 1 | Mask | Factorized | $p_{1\|t}$ | Eq. 9 |
| TCSM BCE (Reimpl.) | Table 6 | Mask | Density ratio (same as above) | $p_{1\|t}^{pre}$ | Table 5 BCE (ii) |
| TCSM LSIF | Table 6 | Mask | Density ratio (same as above) | $p_{1\|t}^{pre}$ | Table 5 LSIF (ii) |
| TCSM Gen KL | Table 6 | Mask | Density ratio (same as above) | $p_{1\|t}^{pre}$ | Table 5 Gen KL (ii) |
| TCSM | Figure 5 | Uniform | $p_{1\|t}^\theta$ | $p_{1\|t}^{pre}$ | Sec. 5.2 L 373 |
We hope these clarifications and added details effectively address the reviewer's concerns. We believe these clarifications further strengthen our paper, and we would be grateful if the reviewer would take these points into consideration in their updated review. | Summary: Recent works have proposed various diffusion modeling frameworks for discrete data, this paper proposes target concrete score matching, a framework that unifies various discrete diffusion approaches, such as discrete flow matching, masked diffusion language modeling, etc. The unified framework allows for using various model parameterizations and inference/noising processes with either uniform or absorbing terminal distributions in a single framework. The paper also demonstrates post-training methods such as sampling reward tilted models using RL or preference fine-tuning, and distillation from auto-regressive models can be encompassed in the TCSM framework.
Claims And Evidence: See below.
Methods And Evaluation Criteria: The paper enables studying various existing approaches, with different inference processes, model parameterization and objectives in a single framework.
Theoretical Claims: The theoretical claims are justified and detailed and self-contained proofs are provided in the appendix.
Experimental Designs Or Analyses: The experiments contain the relevant diffusion baselines, as well as experimenting with different objectives and choices in the TCSM framework. However, the paper could benefit from a discussion of the experimental results. For instance, for several baselines considered the inference process, objectives and model parameterization are similar (for instance, in table 3), therefore a discussion explaining the results or guidance on which component would benefit the paper.
Supplementary Material: Proofs were provided in the appendix which I checked.
Relation To Broader Scientific Literature: The authors consider several discrete space diffusion models and propose a unified framework for studying and making modeling choices.
Essential References Not Discussed: Sampling from discrete diffusion models is an active area of research as well as relevant for any unifying framework. Recent works have proposed various forms of sampling schemes, such as predictor correctors (zhao et al 2024), or gradient-based guidance (shi et al 2024).
Zhao, Yixiu, et al. "Informed correctors for discrete diffusion models." arXiv preprint arXiv:2407.21243 (2024).
Shi, Jiaxin, et al. "Simplified and generalized masked diffusion for discrete data." Advances in neural information processing systems 37 (2024): 103131-103167.
Other Strengths And Weaknesses: Strengths:
1. a single framework to study multiple inference processes, model parameterizations, divergences is appealing since discrete diffusion models have shown promising performance. TCSM allows for making modeling choices without having to manually derive an objective or parameterization for each different choice.
Weaknesses:
1. A discussion on sampling methods from masked diffusion models, including correctors (Zhao et al 2024), re-masking, etc is missing.
Zhao, Yixiu, et al. "Informed correctors for discrete diffusion models." arXiv preprint arXiv:2407.21243 (2024).
Shi, Jiaxin, et al. "Simplified and generalized masked diffusion for discrete data." Advances in neural information processing systems 37 (2024): 103131-103167.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their positive review and valuable feedback. We greatly appreciate the opportunity to address the questions and provide additional clarifications about our work. In the following sections, we carefully respond to each point raised by the reviewer, quoting the relevant comments to ensure clear and direct responses. We have also included additional explanations that we believe will help further clarify our approach and results. Please don't hesitate to let us know if you have any follow-up questions or need further details on any aspect of our work.
# Experimental Details Clarifications
> the paper could benefit from a discussion of the experimental results.
We agree that more discussion of the experimental results would be helpful. We layout following additional details on the implementation details and experimental results.
## Q1 Sampling
> A discussion on sampling methods from masked diffusion models is missing
We agree that including explicit introduction of sampling methods used in this work would be helpful.
TCSM's flexibility allows it to work with any diffusion model parameterization, compatible with standard sampling techniques like ancestral sampling and reverse CTMC simulation via Euler method.
In our experiments:
* Masked diffusion models (Sec 5.2, Fig 3): Used ancestral sampling
* Uniform diffusion models (Fig 5): Used reverse CTMC simulation with Euler method
These are standard approaches briefly noted in App A. We'll provide more details in the revision.
## Q2 Fig 1 evaluation
Fig 1 shows how parametric models (Sec 4.2) enhance sample efficiency during training. We aimed to test if this approach accelerates convergence vs data-only baselines. To clearly show performance differences, we deliberately restricted all models to 26 billion tokens, creating a controlled comparison within this limited training budget.
In contrast, the results reported by Sahoo et al. (23.21 for MDLM, 24.10 for SEDD) baseline were obtained using full training data. Consequently, the PPL scores in our Fig 1 are not directly comparable to those specific results from the original papers.
For fair comparisons under identical conditions (with full training convergence), see Tab 3 and 6.
# Further Clarifications
## Q3 AR to Diffusion Distillation
We placed distillation in post-training to align with LLM community where distillation is typically a post-training technique.
Distillation section highlights a key TCSM advantage: it's the only discrete diffusion framework enabling effective AR-to-diffusion distillation—valuable given today's many pre-trained LLMs. We developed novel techniques for this process (Top-K, first-order approximation) that enhance scalability beyond Sec 4.2.
## Q4 Table 2 details
While the key connections summarized in Table 2 are currently present in the manuscript, they are distributed across different sections. For instance, we establish that our simplified $\ell_{distrib}$ objective (Eq. 9) corresponds to the standard cross-entropy loss employed by MD4/MDLM/DFM. Furthermore, the relationship with EDLM (2410.21357) is detailed in App E.3 (L. 1122), linking their objective (Eq. 10) to the BCE-based Bregman divergence shown in Table 2 (row BCE (ii)).
To improve readability, we will add a consolidated paragraph in the revised manuscript. This new paragraph will explicitly outline these connections and clearly explain the significance and utility of Table 2.
## Q5 SEDD, SDDM comparison
SEDD was included as a baseline in all language modeling experiments, with results shown in Tables 3, 4 and Fig 1.
For SDDM, we followed prior discrete diffusion studies by excluding it, as its focus is image generation. Comparable language modeling results and official code were unavailable, preventing direct comparison.
We selected DFM for Sec 2 due to its general framework covering various discrete diffusion approaches, including SEDD (score parameterization) and MD4/MDLM (mask source distribution). This makes DFM ideal for introducing core concepts before our specific contributions.
## Extra references
We will make sure to include all the references pointed out by the reviewer.
---
Rebuttal Comment 1.1:
Comment: After going through the author's rebuttal and response to other reviewers, I maintain my score. | null | null | null | null | null | null | null | null |
MARGE: Improving Math Reasoning with Guided Exploration | Accept (poster) | Summary: This paper introduces MARGE, a framework for guided exploration in LLM mathematical reasoning for self-training pipeline.
It uses solution-guided exploration with MCTS and RL to find high-quality data, resulting in better exploration and credit assignment.
Specifically,
for each question it has a response, use rule based \n to split the response into states then from each state to finish the response and collect correct/incorrect labels, and then estimate state values based on labels, then train the policy using RL and update response.
In experiment using different LLMs and on different math dataset, they show it improves accuracy compare to baselines.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I checked the Table 1 main result.
The accuracy of Qwen2.5-7B-Instruct experiment on MATH is 75.3 and the MARGE improves to 76.74.
I think the improvement is not that significant. However, the results on MATH500 with pass@64 does show significant improvements over the baseline (85 > 79).
My main concern is the lack of experiments using Qwen2.5 Math based model - would this lead to better accuracy or not?
In addition, the Table1 does not show any reference results from previous literature (7B model such as PRIME-EURUS, ACEMATH, rStar), it only compares the results under its own setting, which makes it hard for reader to understand how this method compares to the current literature and state of the art results at 7B level.
============ Post rebuttal ============
Author has added additional Qwen-math results and add references to other models. They have addressed my concerns and thanks for the effort.
Supplementary Material: NA
Relation To Broader Scientific Literature: This contributes to math LLM RL using MCTS to provide state level supervision.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength: novel framework
Weaknesses: weak result
Other Comments Or Suggestions: the definition of state for implementation seems quite simple: based on "step i" or based on token count...What would be a better way to separate the reasoning process into different steps if compute is not a constraint?
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review! We are happy to know you found MARGE to be novel and contribute to math LLM RL. Here, we appreciate the chance to address your questions.
# W1: lack of experiments using Qwen2.5 Math based model:
Thank you for your advice! To demonstrate MARGE's effectiveness on state-of-the-art math models, we conducted new experiments using **Qwen2.5-Math-7B-Instruct**. As models like this often undergo extensive and successful RL post-training, achieving further gains can be challenging[1], making this a good test case to showcase MARGE's effects. Due to its high reasoning ability, we randomly sample ~14k queries from the BigMath[2] dataset as training queries. Results (average of 3 runs) are below. We didn't test pass@64 on MATH due to its large size.
**Table 1**:Performance Comparison on Math Benchmarks Pass@1 / Pass@64 Accuracy (%)
| Pass1/64 Acc (%) | MATH@1 | MATH500 | OlympiadBench | CollegeMath |
| ------------------------ | ------ | ------------- | ------------- | ------------- |
| Qwen2.5-Math-7B-Instruct | 83.48 | 83.33 / 86.40 | 40.80 / 48.64 | 46.95 / 48.62 |
| PPO | 83.37 | 83.26 / 86.12 | 40.75 / 47.14 | 47.05 / 48.63 |
| **MARGE (Ours)** | 84.46 | 85.04 / 89.92 | 41.58 / 49.49 | 47.40 / 49.02 |
| ACEMath | 83.13 | 83.42 / 85.72 | 42.76 / 50.32 | 48.68 / 50.45 |
| PRIME-EURUS | 80.08 | 80.70 / 88.58 | 40.99 / 58.96 | 48.22 / 51.97 |
These results show MARGE effectively improves performance even on strong, instruction-tuned models. Notably, MARGE also yields larger relative gains on pass@k compared to pass@1, underscoring its benefit in enhancing **exploration diversity** and finding a wider range of correct solutions.
# W2: no reference results included
Thank you for pointing this out. Our initial tables focused on **controlled** comparisons within our experimental setup for clarity. We agree context is valuable and have now included reference results from recent literature (AceMath, PRIME-EURUS) in Table 1 above for comparison. We additionally test their pass@64 with open-source models. We will add them together in our revised paper.
What's more, we also want to emphasize MARGE's specific contribution: **improving exploration efficiency** to enable **scaled self-training**, reflected in both pass@1 and pass@k gains. This contribution is complementary to concurrent methods like PRIME-EURUS (implicit rewards modeling), and AceMath (curated data/process supervision). rStar proposes a system for SLMs to reason and resembles the idea of test-time scaling, and we cited it in related works. MARGE enhances the underlying exploration process in RL and can potentially be combined with these approaches, not as counterparts, for future enhancement in LLM reasoning.
# Question 1: Other ways to separate the reasoning process
We select these ways due to both their simplicity and their effectiveness. We also tested using "\n\n", which indicates new paragraphs in latex syntax, to separate the reasoning process. This yields more intermediate states than `Step i` or token counts but does not result in better final performance. Therefore, we believe when computation is not a constraint, simply more states aren't always better.
Alternative segmentation strategies are indeed interesting future directions for exploration and process-based rewards research. Approaches could include:
- Rule-based segmentation using logical structure (e.g., identifying equations, logical connectors), but it should also match the model's output style.
- Using an auxiliary LLM to identify meaningful intermediate reasoning steps.
Overall, exploring better segmentation is an interesting problem for not only our work but also the development of process rewards and PRMs, and thus can be further researched in the future.
We hope these clarifications and additional results address your concerns! We believe MARGE contributes to enhancing LLM reasoning through improved exploration.
[1] Gao, Jiaxuan, et al. "On designing effective rl reward at training time for llm reasoning." arXiv preprint arXiv:2410.15115 (2024).
[2] Albalak, Alon, et al. "Big-Math: A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models." arXiv preprint arXiv:2502.17387 (2025). | Summary: The paper presents MARGE, a method that improves the self-training of Large Language Models (LLMs) in math reasoning. MARGE relies on “guided exploration,” reusing partial solutions (“hits”)—correct or incorrect—to fix shared prefixes while varying subsequent steps. This stabilizes the generation of positive and negative examples in multi-step math tasks. Experiments show significant gains over baselines (SFT, PPO, DPO) on benchmarks like MATH, GSM8K, and OlympiadBench, with improved pass@1 and pass@k accuracy. ## update after rebuttal
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The paper relates to the training of reasoning capability of LLMs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
- Directly addresses spurious correlations in self-generated data.
- Demonstrates clear empirical gains over various baselines.
- Provides extensive ablations and theoretical insights on why guided exploration yields richer training data.
Weaknesses:
- The method of choosing hit selection is still ambiguous in cases where no correct or wrong answers are generated. Due to its high frequency, it is not trivial to skip this point, especially for cases with no correct generated answers where exploration should be promoted.
- The statement “This solution is based on the simple intuition of increasing the likelihood of finding the right answer to a difficult question and possible failure cases to an easy one” does not hold all the time. If we start from a wrong intermediate point, no matter how the latter process is, we still get a wrong answer, right? It is a bit confusing why you chose the failed case as the hit. If the failure point lies in the first step, does it mean that all generated roll-outs are wrong?
Other Comments Or Suggestions: No
Questions For Authors: - Do you assume that if the final answer is correct, then the whole process is correct? If yes, is it in conflict with Proposition C.1 at the point “The reward function that gives 1 if and only if \( S_1 \oplus \cdots \oplus S_n \) is a correct solution to \( q \).”
- How do you handle queries where no suitable correct or incorrect “hit” is found?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable review! We are glad to learn that you find MARGE to be effective, contains empirical gains, and has extensive ablations and theories. We appreciate the opportunity to clarify the points raised.
# W1: choosing hit selection in cases where no correct or wrong answers are generated
Thank you for your question regarding the hit selection method.
1. Cases with No Correct Answers Generated
We agree with your insightful point that promoting exploration, especially when no correct answers are initially found, is crucial. We believe this question contains two key points: at least get one answer correct, and decrease the difficulty to find them.
The first key point remains a fundamental challenge, dependent on both exploration strategies and the backbone model's capabilities. Our current approach, consistent with recent reasoning works[2,3], is to increase the number of i.i.d. samples generated per query to improve the probability of finding a correct trajectory. Empirical results on Qwen2 demonstrate:
- At 32 samples: ~300/8888 questions lack correct answers
- At 256 samples: only 20 questions remain without valid answers
We remove remaining unsolved cases from training, leaving them to more capable models.
For the second point, as shown in Fig. 5, once a valid answer is sampled, MARGE greatly improves the exploration efficiency and surpasses baselines.
2. Cases with No Incorrect Answers
- This indicates problems are "too easy" for the current model
- We exclude them from training as they provide limited learning value, and focus more on challenging cases.
# W2: failed cases as the guidance hits
We appreciate your concern about our rationale for using incorrect responses as guidance hits for easier questions. Here we answer your questions and clarify our motivation:
*If we start from a wrong intermediate point, no matter how the latter process is, we still get a wrong answer*
We respectfully disagree based on our findings and concurrent works[1,2]. LLMs possess the potential to recover from wrong intermediate points and reach a correct final answer. Exploring data to reinforce such ability is a key goal of MARGE. We quantify this effect as follows.
As it is hard to decide the wrong states, we suppose states, where the estimated state value drops the most, are the false ones. We count the ratio of these states that could be recovered from, and the accuracies when completing them. We also count them when failures are at the first step.
| | recoverable wrong step % | Acc % of recovery | recoverable wrong first step % | Acc % of recovery (first step) |
| -- | -- | -- | -- | -- |
| Qwen2 | 73 | 44 | 83 | 54 |
| Llama3.1 | 74 | 36 | 88 | 46 |
| MetaMath | 51 | 21 | 73 | 33|
| Qwen2.5 (different queries) | 65| 35 | 80 | 62 |
*If the failure point lies in the first step, does it mean that all generated roll-outs are wrong?*
No, as the table above demonstrates (columns 3 and 4), models can often recover even when the error occurs at the very first step.
While wrong intermediate states do decrease the expected accuracy (as implied by the red line in Fig. 2), the possibility of recovery exists and is valuable. By selecting failed cases as negative guidance hits, we increase the portion of wrong rollouts for easy problems. These rollouts provide valuable learning targets for models' robustness:
- Showcase common mistakes of models,
- Demonstrate ways to avoid or even correct errors.
We validate the effectiveness of selecting wrong hits as guidance for exploration (Fig. 5) and final results (Tab. 3). Therefore, we believe this design choice is well-motivated and experimentally supported.
# Q1: Assumption about the whole process is correct
No, we don't assume this. Here, we want to express that the reward (in Prop C.1) is 1 iff the final answer given by $(S_1\oplus\cdots\oplus S_n)$ is correct. Trajectories with corrected intermediate errors are valid positive examples and help improve the model's reasoning abilities.
# Q2: Handling queries without suitable hits found
We progressively sample more responses for queries with no suitable hits found, as discussed detailedly in W1. Currently, we sample up to 256 responses. We discard the queries if they still get no suitable hits, the same as concurrent works[2,3]. Once a suitable hit is found, MARGE greatly improves exploration.
We believe MARGE contributes to enhancing LLM reasoning through improved exploration. We hope this rebuttal clarifies our method and addresses your concerns that led to your decision not to recommend acceptance of our work. We look forward to your feedback!
[1]Jaech, Aaron, et al. "Openai o1 system card." arXiv preprint arXiv:2412.16720 (2024).
[2]Guo, Daya, et al. "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning." arXiv preprint arXiv:2501.12948 (2025).
[3]Cui, Ganqu, et al. "Process reinforcement through implicit rewards." arXiv preprint arXiv:2502.01456 (2025). | Summary: The paper introduces MARGE (Math Reasoning with Guided Exploration), a framework to enhance mathematical reasoning in Large Language Models (LLMs). It addresses two fundamental challenges in LLM reasoning: the scarcity of high-quality training data and the difficulty of exploring reasoning paths effectively. Unlike traditional self-training approaches, which often suffer from spurious correlations in self-generated datasets, MARGE introduces guided exploration to improve data diversity and reasoning accuracy. The core idea is hit-guided exploration, where intermediate reasoning states from self-generated solutions are systematically explored, leading to improved credit assignment and scalability in training. The paper provides extensive experimental results showing that MARGE improves both single-shot accuracy (pass@1) and exploration diversity (pass@k) across multiple benchmarks and backbone models.
The highlight to me is that the method can keep the reward on policy compared to other pretrained PRM, yet the problem is the on-line computation cost.
Claims And Evidence: Some of the statements are supported but may not be enough.
Based on the numbers presented in the tables, the gain of MARGE is not convincingly significant. And the computation cost was not fully discussed in the paper.
Methods And Evaluation Criteria: The method and evaluation benchmarks, metrics are reasonable.
The benchmarks are mostly math but pretty comprehensive.
Theoretical Claims: The Proof MERGA in the Appendix C to approve the method unbiased is correct.
Experimental Designs Or Analyses: The design of experiments are valid.
Supplementary Material: The algorithm, proof, case study, implement details and prompts.
Relation To Broader Scientific Literature: The paper builds on prior work in LLM self-training and reinforcement learning for reasoning.
Essential References Not Discussed: Multi-Step Problem Solving Through A Verifier: An Empirical Analysis on Model-Induced Process Supervision, EMNLP 2024
Improve mathematical reasoning in language models by automated process supervision, 2024
both papers has similar ideas as Math-Shepherd of using monte carlo estimation (hit-guide) to rate the partial solutions for math problems. The cost of running the experiments are discussed in those papers.
Other Strengths And Weaknesses: Strength:
1. The method keeps the reward on-policy which is critical for process supervision.
Weakness:
1. This method will increase the computation, yet not fully discussed in the paper.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your helpful review! We are glad to learn you find MARGE to be novel, keep reward on-policy, and be reasonably evaluated. Here, we appreciate the chance to address your questions.
# Claims and Evidence: not convincingly significant
We conducted experiments with models of different abilities to demonstrate the effectiveness of our method. MARGE not only increases the pass@1 accuracy of models. More interestingly, it is capable of significantly increasing pass@k accuracy and improving the model's diversity, which is a trend not seen in most post-training methods.
We conduct new experiments using **Qwen2.5-Math-7B-Instruct**. As models like this often undergo extensive and successful RL post-training, achieving further gains can be challenging[1], making this a good test case to showcase MARGE's effects. Due to its high reasoning ability, we randomly sample ~14k queries from the BigMath[2] dataset as training queries. Results (average of 3 runs) are below. We didn't test pass@64 on MATH due to its large size.
**Table 1**:Performance Comparison on Math Benchmarks Pass@1 / Pass@64 Accuracy (%)
| Pass1/64 Acc (%) | MATH@1 | MATH500 | OlympiadBench | CollegeMath |
| ------------------------ | ------ | ------------- | ------------- | ------------- |
| Qwen2.5-Math-7B-Instruct | 83.48 | 83.33 / 86.40 | 40.80 / 48.64 | 46.95 / 48.62 |
| PPO | 83.37 | 83.26 / 86.12 | 40.75 / 47.14 | 47.05 / 48.63 |
| **MARGE (Ours)** | 84.46 | 85.04 / 89.92 | 41.58 / 49.49 | 47.40 / 49.02 |
These results show MARGE effectively improves performance even on strong, instruction-tuned models. Besides the improvement on pass@1, MARGE also yields large improvements on pass@k, indicating its benefit in enhancing **exploration diversity** and finding a wider range of correct solutions.
# Missing references:
Thank you for your kind reminder of these works! We will include these two works in the first part of our related works, where we discuss process supervision methods in LLM reasoning.
# W1: computation cost
Thank you for your advice! Adding a discussion on computation cost is important for the integrity and rigor of our work, and we will include this part in the revised version.
Compared to vanilla ways (DPO, SFT, PPO, ...), our method increases the number of prompts to generate when other parameters are controlled. We argue that MARGE only changes the coefficient of the time complexity but **not its asymptotic behavior**, thus is acceptable in practice. This coefficient is determined by the number of intermediate states for each query. In our experiment results, it is ~3.3 on Qwen2, Llama3.1, and MetaMath where there are about 5 states per query, and ~4.9 on Qwen2.5 with about 8 states per query.
Possible ways to reduce the computation cost of our method also exist, like removing unnecessary states from the Monte Carlo estimation. We believe this can be an interesting topic for future works.
Here, in Tab. 2, we present the results of MARGE and some baselines on Qwen2 when MARGE uses **less** computation, such that the training GPU time is roughly the same. In such cases, our method still exhibits advantages over baselines. We compare the results when baselines utilize more computation in Tab. 2 of our paper.
**Table2:**
| Acc % | MATH | GSM8k | CollegeMATH | OlympiadBench |
| --------- | ----- | ----- | ----------- | ------------- |
| PPO | 58.7 | 88.47 | 35.72 | 21.82 |
| REINFORCE | 59.81 | 88.32 | 35.58 | 24.49 |
| MARGE | 60.67 | 88.10 | 35.81 | 25.28 |
Here, we want to emphasize that, while our method utilizes more generation computation, it is our **goal and contribution** to **scale up** the computation to make the most use of the current query set. High-quality problems are getting harder to acquire. Therefore, we develop MARGE, with stronger exploration ability, to find more high-quality training samples.
As we demonstrate in Tab. 2 of our paper, adding more computation for baselines results in overfitting and degradation in performance. As recent progress in LLM inference, we believe adding up computation to automatically improve models is becoming the more feasible and promising way, highlighting the contribution of MARGE.
We believe MARGE contributes to enhancing LLM reasoning through improved exploration. We hope these clarifications and additional results address your concerns!
[1] Gao, Jiaxuan, et al. "On designing effective rl reward at training time for llm reasoning." arXiv preprint arXiv:2410.15115 (2024).
[2] Albalak, Alon, et al. "Big-Math: A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models." arXiv preprint arXiv:2502.17387 (2025). | Summary: The paper introduces a hit-guided exploration method to enhance LLMs’ mathematical reasoning by systematically exploring intermediate reasoning states. Using Monte Carlo simulations for better credit assignment, MARGE improves accuracy and reasoning diversity across multiple benchmarks without needing extra value models, making self-training more effective.
Claims And Evidence: Yes. Experiments support most claims in the paper.
Methods And Evaluation Criteria: Yes. Datasets are proper to the task.
Theoretical Claims: It is in appendix C.
Experimental Designs Or Analyses: Yes
The main result is comprehensive and reasonable.
Ablation study looks reasonable.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper differs from most previous RL methods.
The hit-guided exploration strategy is well-motivated, ensuring better coverage of reasoning steps.
The use of Monte Carlo simulation is widely used in many methods and is computationally efficient.
Iterative guidance updates ensure on-policy data generation, improving model alignment.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper compares against the most popular RL methods on widely used models.
2. The ablation studies confirm that hit selection strategy and guidance updates significantly impact performance.
Weaknesses:
1. The paper lacks failure analyses.
2. The paper claims MARGE enables scaling self-generated responses more effectively but does not show an explicit scaling trend.
Other Comments Or Suggestions: 1. The caption of Fig. 2 needs more explanation.
Questions For Authors: 1. MARGE avoids additional value models, but how does its computational cost compare to DPO, PPO, or GRPO?
2. How many Monte Carlo samples (n) are used for value estimation (Eq. 2)? How does it relate to training efficiency?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews! We are more than happy to learn that you find our method to be well-motivated, contain reasonable and comprehensive results. Here we appreciate the chance to address your concerns.
# W1: lacks failure analyses
Thank you for your advice to improve our work! We also believe adding failure analyses will further enhance our work. Here, we plan to add failure analyses on:
1. special queries that our trained model failed to answer;
2. special cases where our model failed to find more preference pairs.
However, due to the limitation on the rebuttal length, we are unable to provide you full examples here. We will add this part as a chapter in the appendix in the updated version.
# W2: does not show an explicit scaling trend
Thank you for your advice! An explicit trend can better showcase MARGE's improvement and we will update it in the revised paper. Based on our data points on Qwen2, we find the logarithm function $y=c_1+c_2\ln(x)$ best fits the scaling trend between MATH500 accuracy $y$ and number of training samples $x$. We find the following coefficients for different methods:
- MARGE: $c_1=53.05, c_2=2.287$;
- GRPO: $c_1=52.92, c_2=1.302$;
- RFT: $c_1=54.89, c_2=0.99$.
The metrics above clearly showcase the effectiveness of MARGE in scaling training data. We plot the data points as well as fitted scaling lines in the figure [(link)](https://anonymous.4open.science/r/MARGE-ACE8/explicit_scaling_trend.png). We only include three lines for clarity of the picture, and we will include the results of other algorithms in the paper.
# Comment
Thank you for your suggestion! More explanation here will make our paper clearer. We will update the caption of Fig. 2 to the following:
> Average accuracies when starting from different intermediate states of correct solutions (blue) and incorrect ones (red) with Qwen2-7B-Instruct. A larger state index indicates being closer to the end. On average, completing from a correct (incorrect) state increases the portion of correct (incorrect) answers, which boosts the exploration of more training data.
# Q1: computation cost
Thank you for your advice! Adding a discussion on computation cost is important for the integrity and rigor of our work, and we will include this part in the revised version.
Compared to vanilla ways (DPO, SFT, PPO, ...), our method increases the number of prompts to generate when other parameters are controlled. We argue that MARGE only changes the coefficient of the time complexity but **not its asymptotic behavior**, thus is acceptable in practice. This coefficient is determined by the number of intermediate states for each query. In our experiment results, it is ~3.3 on Qwen2, Llama3.1, and MetaMath where there are about 5 states per query, and ~4.9 on Qwen2.5 with about 8 states per query.
Possible ways to reduce the computation cost of our method also exist, like removing unnecessary states from the Monte Carlo estimation. We believe this can be an interesting topic for future works.
In Tab. 2, we present the results of MARGE and some baselines on Qwen2 when MARGE uses **less** computation, such that the training GPU time is **roughly the same**. In such cases, our method still exhibits advantages over baselines.
**Table2:**
| Acc % | MATH | GSM8k | CollegeMATH | OlympiadBench |
| --------- | ----- | ----- | ----------- | ------------- |
| PPO | 58.7 | 88.47 | 35.72 | 21.82 |
| REINFORCE | 59.81 | 88.32 | 35.58 | 24.49 |
| MARGE | 60.67 | 88.10 | 35.81 | 25.28 |
Here, we want to emphasize that, while our method utilizes more generation computation, it is our **goal and contribution** to **scale up** the computation to make the most use of the current query set. High-quality problems are getting harder to acquire. Therefore, we develop MARGE, with stronger exploration ability, to find more high-quality training samples.
As we demonstrate in Tab. 2 of our paper, adding more computation for baselines results in overfitting and degradation in performance. As recent progress in LLM inference, we believe adding up computation to automatically improve models is becoming the more feasible and promising way, highlighting the contribution of MARGE.
# Q2
Here we use n=8 samples for value estimation. Higher Monte Carlo samples (n) provides more accurate value estimation and may yield better results. But at the same time, it leads to a linear increase in the number of generated tokens, decreasing training efficiency. We choose n=8 to balance this two effects.
We hope these clarifications and additional illustrations address your concerns! | null | null | null | null | null | null |
Multi-Marginal Stochastic Flow Matching for High-Dimensional Snapshot Data at Irregular Time Points | Accept (poster) | Summary: The authors proposed a multimarginal extension of flow matching that are simulation free and can work with high dimension data.
Claims And Evidence: The theoretical claims are supported by theorem and proofs. I am a bit skeptical on the empirical performance without uncertainty.
Methods And Evaluation Criteria: They make more sense if with some uncertainty.
Theoretical Claims: I did not check in depth, but did not find obvious mistakes.
Experimental Designs Or Analyses: I checked both simulated and real data. I think the experimental design make sense.
Supplementary Material: I went over, though not in depth, the proofs and in depth of additional experiments.
Relation To Broader Scientific Literature: Being able to perform scalable trajectory inference is useful in biology, ecology and environmental science. And the paper is targeting at an important problem.
Essential References Not Discussed: I do not have one particular -- trajectory inference literature is too large.
Other Strengths And Weaknesses: - The triplet idea is very interesting
- Scales well in dimensions.
Other Comments Or Suggestions: None
Questions For Authors: 1) I want to know more is the assumptions on the marginals -- are they discrete/empirical or continuous/population?
2) related to 1), Tong et al. 2023a used this mixture of Brownian bridge results of SB problem that would produce trajectories pass through one particles at each time point, this seems not inherited in the proposed method. Is this the case and does it imply that the proposed method operates on the continuous marginals not empirical version of it?
Tong, A., Malkin, N., FATRAS, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., and Bengio, Y. Simulationfree schrodinger bridges via score and flow matching. In ¨ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023a.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Q1)**
We assume that the data space of the marginals is some continuous metric space. In particular, we assume that the data lies in $R^d$ space with the standard Euclidean distance (or distance squared) $d(x, y) = \| x - y \|^2$ for all our experiments. Moreover, we also assume that all the true marginals $\rho_i$ are continuous and that our data is just an empirical realization of the $\rho_i$ marginals.
**Q2)**
The numerical computation is done on samples (e.g. computing OT plans with respect to samples from the marginal distributions) because we do not assume access to the true closed form functionals defining the various marginal distributions. The underpinning theory, however, is based on learning a parameterized model which can generate a time-varying probability path $p_t$ that is time-differentiable and is constrained to match the true marginals (up to the fidelity given by the data) at the respective times.
In terms of trajectory inference, the SDE solvers operate on individual particles. Applying this to a point cloud outputs the trajectory of the empirical distribution, for example seen in Figure 1.
[1] Tong, A., Malkin, N., FATRAS, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., and Bengio, Y. Simulationfree schrodinger bridges via score and flow matching. In ¨ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023a. | Summary: The paper presents Multi Marginal Stochastic Flow Matching Model(abbreviated as MMSFM), which is an extension of simulation-free score and flow matching method. The method enables the alignment of high-dimensional snapshots obtained from non-equidistant time points without reducing dimensionality. MMSFM uses third-degree polynomial to represent the non-equidistant time interval which connects optimally coupled points. Next, MMSFM introduce score matching and flow matching to reconstruct the Fokker-Planck equation while modeling stochastic process which bridges the distributions. As the non-equidistant snapshots are multi-marginal data which consists of overlapping mini-flows, MMSFM proposes a rolling window frame algorithm to deal with overlapping triplets, enhancing robustness and stability of the model. The model has been applied on synthetic datasets , single-cell dataset and other high- dimension real-world datasets, such as COLO58(melanoma single-cell) and CITEseq(Cellular Indexing of Transcriptomes and Epitopes by Sequencing)
Claims And Evidence: Yes. The paper gives out several theorems to prove the feasibility of score matching and overlapping mini-flow matching. The monotonic cubic Hermite splines introduced to couple different distributions promise the monotonicity of each piece of polynomial and allow a smaller and more flexible window while guaranteeing the computational efficiency and robustness. The experiments on both synthetic datasets and real-world datasets provides a solid foundation for the method proposed in the paper.
Methods And Evaluation Criteria: Yes. The algorithm MMSFM combines rolling window, optimal transportation and score matching to cope with high-dimension non-equidistant time interval snapshots. The author choose COLO58 and CITESeq as experiment datasets, which is sampled as non-equidistant time interval distribution. The model outperforms traditional algorithms such as MIOFlow.
Theoretical Claims: Yes. The paper gives one theorem and one corollary. The theorem calculates the gradient of the loss for a single interval with overlapping mini-flows and the corollary claims the condition in theorem is a special case of conditional flow matching. The proof of the theorem and corollary is correct and is well shown in the supplementary material.
Experimental Designs Or Analyses: Yes. The paper conducts the algorithm proposed on three synthetic datasets, one single-cell dataset from COLO858 melanoma cells, and two RNA gene expression datasets. Data is collected at several non-equidistant timepoints and the results are shown in tables and illustrations in the section 3. They both proves the soundness and validity of the experimental designs.
Supplementary Material: Yes. I review the proof of the theorem and corollary, the complement settings and implements of experiment, and the visualization of experiment results.
Relation To Broader Scientific Literature: The paper combines score and flow matching network with optimal transport and Brownian bridge to solve the multi-marginal problem which only non-equidistant time interval snapshots are provided. The newly-proposed method is one of the contribution of this work . The introduction of rolling window algorithm further improves the performance of the model and the results of experiments on both synthetic datasets and real-world datasets outperforms the traditional ones, which is another contribution of the paper.
Essential References Not Discussed: No as far as I know. The paper mentions most of the related work which helps the reader understand the model.
Other Strengths And Weaknesses: Strength:
1. The paper innovatively provides a novel model to solve the non-equidistant time interval snapshots inferring problem. The model based on score and flow matching outperforms traditional methods and increases the interpretability of the model.
2. The model combines rolling windows with spline measurements to address the issue of non-uniform time points which is inspired by optimal transport and Brownian bridge and shows great potential in dealing with other similar problems.
3.The experiments cover both synthetic and real-world data, verifying the capability of high-dimensional modeling, which is better than MIOFlow. Also, the visualization of the results is illustrated clearly in both section3 and supplemental materials.
Weakness:
1.The paper lacks in analysis of the bifurcation stream problem in Dyngen datasets, it should be added or be discussed in the future works.
2.The paper forgets to discuss the computational complexity of the rolling window size k in a broader view, which is an inevitable part of the theoretical analysis. The paper shows a M-dependent analysis of the computational complexity, but more analysis can be conducted in the study.
Other Comments Or Suggestions: Some figure (such as figure 2) should be added into main body of the paper instead of showing in the supplementary material. In section 2.3, the author mentions ‘See Figure 2 for a visual representation of the variation of paths in an interval.’ While the figure 2 is only shown in the supplemental materials, which makes it difficult to read the paper.
Questions For Authors: 1.The distant between different time intervals in the experiments shows a monotonically increasing property, what will happen if the gap between time intervals is more irregular?
2.What will happen if less snapshots is given? Is the model robust enough to solve a longer time experiment?
3. Can you explain the bifurcation of flow in Dyngen datasets and gives a analytical explanation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Q1)**
This is a very insightful question, as you are correct to hint at the sensitivity of splines to the time-labels in the data.
For the S-shaped and $\alpha$-shaped synthetic datasets, we evaluate on 3 different timepoint distributions to see the effect of varying the gaps.
* Uniform gaps $T_1 = (0, 0.17, 0.33, 0.5, 0.67, 0.83, 1)$, or alternatively $(i/6)\_{i=0}^6$.
* One set of arbitrary gaps $T_2 = (0, 0.08, 0.38, 0.42, 0.54, 0.85, 1)$. The smallest gap here is 0.04 from the interval $(0.38 \to 0.42)$. The largest gap is approximately 0.3 for which there are two intervals: $(0.08 \to 0.38)$ and $(0.54 \to 0.85)$.
* A second set of arbitrary gaps $T_3 = (0, 0.2, 0.27, 0.3, 0.88, 0.98, 1)$. This is an extremely uneven and difficult timepoint set as it contains a disproportionately large interval of 0.58 in $(0.3 \to 0.88)$. Also included are two very small intervals of 0.02-3 in $(0.27 \to 0.3)$ and $(0.98 \to 1)$.
We can see in Table 1 that for $T_1$ and $T_2$, the Triplet model tends to outperform the Pairwise model. Interestingly, the Pairwise model did outperform the Triplet model on the $\alpha$-shaped $T_1$ case, however we believe that given the Pairwise model's weakness on $\alpha$-shaped $T_2$ compared to the Triplet model, the experiments validate the ability for splines to be flexible enough for irregular timepoints.
Still, it is probably a good idea to be cautious. We can see snapshots of the probability path given in the $T_3$ case by plotting how the point cloud moves in time. The relevant figures are in Appendix E, Figures (9, 10). In the S-shaped case (Figure 9) we see that the Triplet model learns the flow but includes shearing effects. Moreover, in the $\alpha$-shaped case (Figure 10) neither model was able to successfully learn the flow. We believe exploring regulation effects to prevent this behavior can lead to fruitful future work.
The COLO858 dataset contains 8 timepoints of $T = (0, 0.5, 2, 6, 15, 24, 72, 120)$. We normalize this to $T = (0, 0.004, 0.017, 0.05, 0.125, 0.2, 0.6, 1)$. We can see that the size of the gaps is monotonically increasing. Table 1 again shows the Triplet model to outperform the Pairwise model in this setting with irregular timepoints.
The Multiome and CITEseq datasets both contain 4 timepoints of $T = (2, 3, 4, 7)$ which we normalize to $T = (0, 0.2, 0.4, 1)$. Likewise, we see that the gap size is monotonically increasing. Table 2 shows that Triplet model marginally outperforms the Pairwise model.
We see that the time intervals are monotonically increasing for COLO858, Multiome, and CITEseq because typically cell dynamics are most active early into the perturbation and gradually slow down over time. The higher time-fidelity in the early time points reflects this.
Our experimental setup descriptions can be found in section 3.1.
**Q2**
Flow matching methods including ours are ultimately attempts at learning the dynamics driving a system subject to constraints that the marginals at given times must match certain distributions. Moreover, these underlying dynamics are not necessarily unique, implying that there are infinitely many dynamics which solve the system subject to the marginal constraints. It is this unidentifiability which leads prior work to assume linear interpolations and our work to assume spline interpolations.
We are also beholden to problems relating to signal sampling which are beyond the scope of this work. Consider, for example, that we have data generated from a true underlying process of $f(t) = \sin(t)$. If we happen to take measurements at $t = 0, \pi, 2\pi, \dots, n\pi$ then our signal is a constant 0. Based on the data $\{ (k\pi, 0) \}_{k=0}^n$, the simplest reconstruction without any additional assumptions on the data or underlying process is $\tilde{f}(t) = 0$ but in fact $\tilde{f} \neq f$!
With this in mind, decreasing the number of snapshots is equivalent to decreasing the time-fidelity of our data. As such, this would increase the "simplicity" of the dynamics learned by our model. On the other hand, the total length of the process is not as important given that we normalize time to run for $t \in [0, 1]$.
**Q3**
Yes, the Dyngen dataset introduces a bifurcation as can be seen in Appendix E, Figure 7. We repurpose the synthetic dataset from [1], which the authors generated using the Dyngen simulator [2]. We have used the same data files as [1] which defines a realistic dynamic cellular process.
**References**
[1] Huguet, G., Magruder, D. S., Tong, A., Fasina, O., Kuchroo, M., Wolf, G., and Krishnaswamy, S. Manifold interpolating optimal-transport flows for trajectory inference. Advances in neural information processing systems, 35: 29705–29718, 2022.
[2] Cannoodt, R., Saelens, W., Deconinck, L., and Saeys, Y. Spearheading future omics analyses using dyngen, a multi-modal simulator of single cells. Nature Communications, 12(1):3942, 2021. | Summary: This paper proposes an extension of flow-matching for multi-marginals - i.e. when multiple snapshots are observed, typically over time. The method sample conditioning points from all snapshots using an approximation of the multi-marginal optimal transport map and then fits a spline to these points that is used as the conditional drift. The authors then evaluated their method on synthetic and single-cell data.
Claims And Evidence: The authors show in their experiments (both synthetic and single-cell) that the multi-marginal extension is more effective than pairwise flow matching. This demonstrates the added value of their approach.
Methods And Evaluation Criteria: The evaluation criteria are clear and make sense (reconstruction of held-out snapshots).
Theoretical Claims: I did not check all theoretical claims but I did not spot any major issue.
Experimental Designs Or Analyses: The experimental designs are sound, the authors learnt their method on given snapshots and evaluated on a left out time point, which is typically done in the literature.
Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: The paper extends the flow matching framework, and embeds itself in that literature.
Essential References Not Discussed: The authors missed an important, albeit very recent, reference that is seemingly very related to their approach [1]. Given the papers are concomitant, I don't expect the authors to compare against it but I encourage them to position their paper with respect to it.
The paper is also related to [2]. The authors could compare their method against it.
[1] Rohbeck, Martin, et al. "Modeling Complex System Dynamics with Flow Matching Across Time and Conditions." The Thirteenth International Conference on Learning Representations, 2025.
[2] Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, and Austin Stromme. Fast and smooth interpolation on Wasserstein space. In International Conference on Artificial Intelligence and Statistics, pp. 3061–3069. PMLR, 2021.
Other Strengths And Weaknesses: This paper addresses an important use case of flow matching, where multiple snapshots are observed over time. The authors proposed a sound and effective way to leverage temporal dependencies and showed favorable experimental results.
Weaknesses:
- The authors claim to propose a promising method for single-cell perturbation but I did not see any mechanism to incorporate perturbations nor any experimental result involving perturbations.
- The method heavily relies on the assumption that the temporal dynamics follow a spline. It's not clear whether this assumption is justified in practice.
Other Comments Or Suggestions: Cfr above.
Questions For Authors: 1. The authors claim to propose a promising method for single-cell perturbation but I did not see any mechanism to incorporate perturbations nor any experimental result involving perturbations. Could you please clarify the claim ?
2. The method heavily relies on the assumption that the temporal dynamics follow a spline. It's not clear whether this assumption is justified in practice. Can the authors comment on that ?
3. Could the authors include [2] as a baseline ?
[2] Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, and Austin Stromme. Fast and smooth interpolation on Wasserstein space. In International Conference on Artificial Intelligence and Statistics, pp. 3061–3069. PMLR, 2021.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback.
**Q1)**
Thank you for the clarifying question. By perturbations we mean a system which is not currently at a steady state. For example, a cell system can be perturbed by some drug stimuli. Further, we focus on cases where the perturbation is fixed. Learning a model to handle user-defined, arbitrary perturbation cases (e.g. arbitrary drug types and dosages) is an important avenue for future work.
**Q2)**
The original flow matching method assumes a linear interpolation function to fill in the gaps for particles at times not given in the training data. This is based on the fact that we effectively do not have access to the true temporal dynamics---if we did, there would be no reason to use a flow matching framework because we could then directly simulate the system for a desired initial condition. As such, we opt for an Occam's razor approach about using simpler dynamics. In our case, this amounts to cubic splines which offer tractable solutions and avoid Runge's phenomenon. As you correctly point out, our method assumes the temporal dynamics to be approximable or model-able using spline interpolations in the data space. These data-space interpolations are then used to learn a vector field generating the probability path $p_t$. This $p_t$ is also a spline, but one over measure space, interpolating between the marginal distributions $\rho_i$ given in the data.
**Q3)**
Our sampling algorithm is the Transport Spline Interpolation in Algorithm 1 of [2]. However, those authors look at a single interpolating spline whereas we generate multiple overlapping splines in our own Algorithm 1 via a rolling window over the timepoint marginals. Moreover, we require a generative model for evaluations on unseen initial conditions which [2] does not provide because it looks only at interpolations between existing points.
**Comments**
Thank you for bringing [1] to our attention as we had not seen it at the time of developing our model. Certainly, there is a good amount of similarity and relevancy:
a) We both use splines as interpolants.
b) We both arrive at the same MMOT plan approximation in our work's equation (8). The authors of [1] discuss this in their Appendix B.
c) We both allow for irregular snapshot timings.
However, we differ in that:
a) We use splines on overlapping triplets whereas [1] looks at splines over the whole sequence.
b) We explore the spline algorithm and opt for monotonic Hermite cubic splines whereas [1] opts for natural cubic splines. The latter is perhaps nicer analytically, but we nonetheless use the former due to practical concerns involving overshoot. For example, natural cubic splines enforce $C^2$ continuity which produce "smoother" splines, but this constraint can introduce severe overshooting which we illustrate in Figures (2, 3, 4) found in our Appendix B. Monotonic Hermite cubic splines only enforce $C^1$ continuity which is sufficient in terms of the general Flow Matching framework. This weaker smoothness constraint, along with the monotonicity between control points, avoids the overshooting problem whilst also still allowing for continuous derivatives at each interior point unlike a simple linear interpolation.
c) Related to (b), we explore the sensitivity of our method to highly irregular timepoints. This is in part because we suspect the overshooting behavior in the natural cubic splines to be caused by neighboring short and long time intervals. For example, consider our timepoint set $T_3 = 0, 0.2, 0.27, 0.3, 0.88, 0.98, 1)$ for synthetic data. Notice that the sequence $0.27 \to 0.3 \to 0.88$ involves a short interval of 0.03 followed by a long interval of 0.58. This means that any change in velocity and acceleration along the short interval can happen relatively quickly, but the corresponding change for the long interval must be drawn out. The continuity of the acceleration does not help in this regard as it prevents the spline from instantaneously re-adjust velocities.
**References**
[1] Rohbeck, Martin, et al. "Modeling Complex System Dynamics with Flow Matching Across Time and Conditions." The Thirteenth International Conference on Learning Representations, 2025.
[2] Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, and Austin Stromme. Fast and smooth interpolation on Wasserstein space. In International Conference on Artificial Intelligence and Statistics, pp. 3061–3069. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I confirm my score.
- I appreciate the in-depth discussion of the method with [1]. Please make sure to include part of this in your final manuscript as this will help readers understand the subtleties of both approaches.
- Regarding to [2], the authors in [1] did compare against it so I assume you should be able to do the same ?
Best Regards, | Summary: This work proposes Multi-Marginal Stochastic Flow Matching (MMSFM) with the goal of training a translation model across multiple snapshots taken at non-equidistant time points. MMSFM builds upon the Flow Matching framework and extends it through measure-valued splines. (Stochastic) Flow Matching can be applied to train a pair-wise flow between all adjacent time point snapshots. However, this approach can struggle to capture the global dynamics across multiple time points and does not generalize well to arbitrary held-out time points. MMSFM enables capturing local dynamics across uneven time intervals and maintaining consistency between overlapping windows.
Claims And Evidence: The authors claim that "the use of measure-valued splines enhances robustness to irregular snapshot timing, and score matching prevents overfitting in high-dimensional spaces." While this is somewhat supported by empirical evidence, this overstates the contribution of the paper as the use of the score-matching loss is taken from prior work.
Also, the authors propose using mini-batch OT, following previous work. However, no ablation of this design choice is done, and it remains unclear how reliant MMSFM is on it.
Empirically, adding splines with triplets (k=2) compared to pairwise (k=1) shows overall improvements, although not consistent across different experimental settings.
Methods And Evaluation Criteria: The evaluation approach is appropriate. The W1/W2 metrics provide standard measures of distribution similarity, and MMD metrics offer complementary insights. The synthetic datasets test specific capabilities (bifurcations, topology changes), and the biological applications demonstrate practical relevance. Biological metrics could further strengthen this experimental section.
Empirically, the authors compare MMSFM (k=2) to SF2M and MIOFlow. It could be beneficial to include more competing works here, as well as include results for k > 2 to shed further insight into the parameter k. While Appendix D gives one ablation on this, it is not at all discussed in the main text. Additionally, it remains unclear how e.g. k=3 would perform and what tradeoff choosing larger k would give w.r.t performance and computational cost.
Theoretical Claims: The theoretical foundation draws from established work on optimal transport and measure-valued splines. The paper states that their approach minimizes total action across time points but doesn't prove that the overlapping window strategy actually achieves this global minimization.
Additionally, I am unsure about the author's use of the term MMOT (Multi-marginal Optimal Transport). This term, see e.g. [1], is usually used in a different context with the goal of obtaining a **joint** coupling across N different marginals. This can cause confusion, as this is also the notion that is thoroughly discussed in [2]. As also mentioned by the authors, [2] tackles the multi-marginal OT problem, i.e., learning a mapping between **all pairs** of distributions. On the contrary, MMSFM learns a sequential mapping across time-adjacent distributions, which is not MMOT. Adding a formal definition of MMOT following e.g. [1], and building upon that would help clarify this confusion and position MMSFM better within related work.
[1] Brendan Pass. "Multi-marginal optimal transport: theory and applications". 2014.
[2]Michael S. Albergo, Nicholas M. Boffi, Michael Lindsey, Eric Vanden-Eijnden. "Multimarginal generative modeling with stochastic interpolants". 2023.
Experimental Designs Or Analyses: Yes, they do make sense. The synthetic datasets make sense and the biological applications demonstrate practical relevance. However, the work is submitted under "**Primary Area:** Applications->Health / Medicine" while having no modality specific metric. Measuring W1/W2 and MMD is fine but I think these experiments would be strengthened through biologically meaningful metrics as e.g. leveraged in [3].
[3] Luca Eyring, Dominik Klein, Théo Uscidda, Giovanni Palla, Niki Kilbertus, Zeynep Akata, Fabian Theis. "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation". ICLR 2024.
Supplementary Material: Yes, B, C, D & E.
Relation To Broader Scientific Literature: This paper proposes to improve existing Flow/Bridge Matching literature for mapping across multiple non-equidistant time points. Prior work has shown that these methods are competitive in this problem. Through the addition of measure-valued splines, MMSFM achieves improvements in this specific problem and takes a step toward solving this problem.
Essential References Not Discussed: [3] is very closely related to MMSFM and related work as it also tackles unpaired single-cell translation. It improves upon mini-batch OT sampling through unbalanced Optimal Transport. These ideas could in future also be applied to MMSFM and would additionally be a relevant competing method.
[3] Luca Eyring, Dominik Klein, Théo Uscidda, Giovanni Palla, Niki Kilbertus, Zeynep Akata, Fabian Theis. "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation". ICLR 2024.
Other Strengths And Weaknesses: Clarity could be improved in Section 2. Specifically, regarding the term MMOT.
Other Comments Or Suggestions: Formally introduce multi-marginal optimal transport. A toy-example visualization could strengthen the clarity of the work. In general, a concrete description of all used hyperparameters seems to be missing.
Questions For Authors: - What solver and hyperparameters do you use to compute the OT plans in Equation 8?
- What is here referred to as MMOT (Multi-marginal OT) is implemented by computing pair-wise OT plans across adjacent time points. Could the authors extend the sentence "Then, we compute the MMOT plan given by the first-order Markov approximation" and explain how this is an MMOT plan?
- Missing description and ablation of hyperparameters for SDE solver, which solver is used, and how many NFE are used? How sensitive is the trained model to the NFEs?
- Does MMSFM also work without min-batch OT? How much worse does it perform?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback.
**Q1)**
We use the Python Optimal Transport (POT) package for computing the OT plans $\pi(x_0, x_1)$ where we use $x_0, x_1$ as notational shorthand for $x_{t_\ell}, x_{t_{\ell+1}}$. The conditional plans are generated by using probability rules where $\pi(x_{\ell+1} | x_\ell) = \pi(x_\ell, x_{\ell+1}) / q(x_\ell)$. Specifically, we use the Earth Movers Distance function (`POT.emd()`) with exact matching.
We manipulate the probability matrices $\pi$ returned by `POT.emd()` representing $\pi(x_i, x_{i+1})$ by using the column vector $\hat{q}$ representing $\int \pi(x_i, x_{i+1})dx_{i+1}$. We construct $\hat{q}$ by summing over the columns of $\pi$. We can then obtain the conditional plan $\hat{\pi} \gets \pi(x_{i+1} | x_i) = \pi / \hat{q}$.
Sampling itself is at simple as using $\pi$ as a 2D probability table or $\hat{\pi}$ as a list of 1D probability vectors where the row index corresponds to the conditioning variable. We also note that the whole sampling procedure technically operates on the indexes of a mini-batch and so we can also consider this as an alignment operation on the initial mini-batch. Further, this procedure adopts Algorithm 1 from [2] using OT plans in place of OT maps.
**Q2)**
As you sharply note, the "true" MMOT plan would correspond to the joint distribution $\pi$ which considers all pairs of distributions. However, this is not easy to compute and moreover we are specifically interested in processes which do have a temporal ordering. With this in mind, we turn to the framework of joint distributions and apply the chain rule of probability to obtain that $\pi(x_0, \dots, x_M) = \pi(x_0, x_1) \prod_{i=2}^M \pi(x_i | x_{< i})$. Finally, by considering a sequential sampling process where $x_i$ must be sampled prior to $x_j$ for $i < j$, we can apply a first-order Markov approximation where $x_i$ only depends on $x_{i-1}$. Thus, the conditional probabilities in the product all reduce to the form $\pi(x_i | x_{i-1})$ and we recover equation (8). It is in this sense that we call (8) the first-order Markov approximation to the true MMOT plan.
**Q3)**
We use the same setup as [3] and use the SDE solver `torchsde` from `github.com/google-research/torchsde`. The drift function $f_t(x)$ is set to the SDE drift $u_t(x ; \theta) = v_t(x ; \theta) + \frac{g^2(t)}{2} s_t(x ; \theta)$ from equation (12) for some learned deterministic flow $v_t$ and score $s_t$. We set the diffusion schedule $g(t) = \sigma$ for a constant $\sigma = 0.15$.
We have not explored any ablation to the SDE solver hyperparameters nor the sensitivity to the number of function evaluations, however we believe these to be avenues for interesting future studies. If referring to differences between trajectories inferred by an SDE and an ODE, neither meaningfully differed from one another because the the same learned flow model $v_t$ is used in both cases.
**Q4)**
Yes, MMSFM can work without mini-batch OT. Notice how in our Theorem 2.1 we define the regression signal to be certain $u_t(x | z)$ objects with an expectation over some $q(z)$. OT comes into play when constructing $q(z) \gets \pi(z)$ for sampling $z = (x_0, \dots, x_M)$ and evaluating the expectation. However, the only strict requirement is that $\pi(z)$ be a coupling distribution such that $q(x_i) = \int \pi(x_0, \dots, x_M)dx_{-i}$. We can just as easily construct $\pi(z) = \prod_i^M q(x_i)$ and satisfy the requirement, noting that this $\pi$ is not the OT plan. Based on prior work such as [1], we focused on the OT coupling and did not evaluate on the independent coupling.
**Comments**
Thank you for suggesting [4]. Although we have not applied the methodologies and metrics referenced, we find them very relevant and valuable for future work.
**References**
[1] Tong, A., Malkin, N., Huguet, G., Zhang, Y., Rector-Brooks, J., Fatras, K., Wolf, G., and Bengio, Y. Improving and generalizing flow-based generative models with mini-batch optimal transport. arXiv preprint arXiv:2302.00482, 2023b.
[2] Chewi, S., Clancy, J., Le Gouic, T., Rigollet, P., Stepaniants, G., and Stromme, A. Fast and smooth interpolation on wasserstein space. In International Conference on Artificial Intelligence and Statistics, pp. 3061–3069. PMLR, 2021.
[3] Tong, A., Malkin, N., FATRAS, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., and Bengio, Y. Simulation-free schrodinger bridges via score and flow matching. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023a.
[4] Luca Eyring, Dominik Klein, Théo Uscidda, Giovanni Palla, Niki Kilbertus, Zeynep Akata, Fabian Theis. "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation". ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanations.
**Re Q1):**
I am not super familiar with the POT library, but EMD would suggest you are computing the Wasserstein-1 distance? Is that correct?
**Re Q2):**
"However, this is not easy to compute and moreover we are specifically interested in processes which do have a temporal ordering."
This is definitely true, but I'm still very unsure whether it is right to use the term MMOT here. I would suggest to either change this or provide a formal definition of MMOT, and build upon that. In its current form claiming that MMSFM is learning MMOT is not correct in my opinion.
**Re Q3):**
How many NFEs are used for MMSFM during inference? The use of mini-batch OT with splines could mean that fewer NFE are needed in MMSFM compared to other methods. An ablation on the NFE would be interesting here. Additionally, I believe it is quite important to include these details as well as the details of how you are computing the OT plans into the paper/Appendix.
**Re Q4:**
An ablation on this could help further verify the effectiveness of the proposed approach (no mini-batch OT, but still using splines).
I choose to retain my score for now as I still think the usage of the term MMOT is not really accurate and that the explanation and introduction of MMOT needs to be improved to avoid confusion. | null | null | null | null | null | null |
Flow-field inference from neural data using deep recurrent networks | Accept (poster) | Summary: This work sets out to infer the latent variables and their time dynamics from observed neural recordings. To achieve this, the authors developed FINDR, essentially a recurrent neural network with multi-layer perceptrons defining the flow maps of the latent variables, whereas neural activities are defined as linear projections plus a soft plus (an analytically nicer form of relu) from the latent variables. The key contribution to the literature seems to be the use of a specific deep architecture, though similar approaches exist in the published literature.
## After the rebuttal
I believe that, following the promised revisions, this work now meets the criteria for acceptance at ICML. That said, I still feel that the clarity and overall impact of the paper could have been significantly improved had the authors chosen to present the full scope of the work in a single manuscript. The mention of a companion bioRxiv paper on scientific applications of FINDR (introduced only in the author response, see: "we discuss scientific findings using FINDR in a separate bioRxiv paper") clarified some of the unease I initially experienced while reviewing this submission. While this fact explains certain omissions that I felt would have benefited this work significantly (which is one big confound we can never account for in our reviews), it also highlights how a more integrated presentation could have elevated the contribution well beyond the acceptance threshold.
Claims And Evidence: I believe that some of the claims made in this work are not clearly supported by empirical or theoretical evidence. Specifically:
- Identifiability: It is not clear what latent identifiability is, or how it is achieved. To my understanding, CEBRA ([1]) has solved a significant problem in this literature by proving linear identifiability in latent variables. This claim should be made more clear, as it is central to the latent variable identification literature.
- Fig 3 experiments: The benchmarked models seem to be not optimized, rather some random parameters taken from demos are used? If true, model parameters should be optimized using cross-validation. The authors should also provide clear information on how projections from held-out neurons were trained for all methods. For instance, LFADS use nonlinear decoders. Did the authors train linear readouts? If so, this is also suboptimal. Overall, there is little to no information in the paper about the methodology behind Fig. 3.
- Fig. 4: This figure does not support the claim that other methods do not find consistent representations across folds. For this claim, I believe there should be some form of statistical tests. Moreover, since latent variables are often only linearly identifiable, it is likely that the autoLFAD results might be quite consistent after linear transformations.
Methods And Evaluation Criteria: To my understanding, the field has moved away from LFADS. On the other hand, rSLDS are mainly used due to their interpretability, not computational power. Also, please see the latest implementation for rSLDS that seem to provide significant improvements compared to the traditional use (https://nips.cc/virtual/2024/poster/95587).
In general, I believe benchmarking should include CEBRA [1], or models derived from CEBRA-like architectures that also incorporate dynamical modeling of latent states (as is done in this work). See for instance, [2-3]. Additionally, I think at least one component of this paper should include a (low-rank) RNN benchmark. See for instance [4-5]. Before moving towards the deep recurrent network, one might expect whether shallow low-rank RNNs could have similar explanation power. For instance, [5] has shown a similar result as in Fig. 2 in this work.
Theoretical Claims: NA.
Experimental Designs Or Analyses: For Fig. 2, it may be more interesting to add other examples than the flip flop task. To solve flip flops, network simply generate bistable dynamics. Tasks such as sin-generation (limit cycle) and delayed addition/multiplication (line attractor) could bring additional breadth to this work and increase the appeal to the broader NeuroAI community.
Supplementary Material: I did not review the full SM. I looked at the identifiability part and searched for how held-out neurons were designed.
Relation To Broader Scientific Literature: As noted earlier, the key contribution to the literature was somewhat unclear to me. Authors state that "The goal of FINDR is to 1) compress the activity of a large population of neurons at time t to an abstract low-dimensional representation, and 2) learn the “rules” of how this representation evolves over time. "
As stated, these are very broad statements that apply to many works in the field [1-5]. I was not able to identify these works as cited in the manuscript, and I think there are several other relevant ones that are cited within these works.
Essential References Not Discussed: Please see the end of this report. The current manuscript is missing some key references. This list is not exhaustive, but may be helpful.
Other Strengths And Weaknesses: As a big strength, I want to note that goal 2 stated by authors is very interesting! A fully static model like CEBRA cannot address this, though later variations did try to advance CEBRA in this regard. That being said, I always wondered the following question when reading papers with such claims: How we can trust the flow maps outside the regions that data is observed? In relevant literature, [6-7] have proven this for rSLDS by performing optogenetics manipulations. I am not sure if such drastic experiments are needed to support this claim, but as noted above, I do not believe Fig. 4 is sufficient either.
Other Comments Or Suggestions: References:
[1] Schneider, S., Lee, J. H., & Mathis, M. W. (2023). Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960), 360-368.
[2] Abbaspourazad, H., Erturk, E., Pesaran, B., & Shanechi, M. M. (2024). Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nature Biomedical Engineering, 8(1), 85-108.
[3] Chen, C., Yang, Z., & Wang, X. (2025). Neural Embeddings Rank: Aligning 3D latent dynamics with movements. Advances in Neural Information Processing Systems, 37, 141461-141489.
[4] Pals, M., Sağtekin, A. E., Pei, F. C., Gloeckler, M., & Macke, J. H. (2024, June). Inferring stochastic low-rank recurrent neural networks from neural data. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
[5] Valente, A., Pillow, J. W., & Ostojic, S. (2022). Extracting computational mechanisms from neural data using low-rank RNNs. Advances in Neural Information Processing Systems, 35, 24072-24086.
[6] Vinograd, A., Nair, A., Kim, J. H., Linderman, S. W., & Anderson, D. J. (2024). Causal evidence of a line attractor encoding an affective state. Nature, 634(8035), 910-918.
[7] Liu, M., Nair, A., Coria, N., Linderman, S. W., & Anderson, D. J. (2024). Encoding of female mating dynamics by a hypothalamic line attractor. Nature, 634(8035), 901-909.
More relevant work: https://www.nature.com/articles/s41593-020-00733-0, https://www.nature.com/articles/s41586-023-06714-0, https://www.nature.com/articles/s41467-018-06560-z, also see works from Durstewitz's and Ostojic's groups.
Questions For Authors: I have the following questions for the authors:
1) Could you clarify how the model parameters for the benchmarks were chosen? Did you perform cross-validation to optimize them? How were the parameters for FINDR chosen?
2) Could you address my comments in Claims And Evidence above: Specifically how held-out neurons were trained to connect to latent variables and what is meant here in terns of latent identifiability?
3) How does your method compare to CEBRA ([1], published a while back) and MARBLE (https://www.nature.com/articles/s41592-024-02582-2) which just came out but has been on BiorXiv for a while? To clarify, for MARBLE, I am not asking for comparisons since it can be considered concurrent work, though should be cited as such, I am just asking for clarification.
I believe this work can be a good fit for ICML, but as it stands, it requires substantial revisions. I am not sure if the limited interactions of the conference format allow such nuanced discussions. Hence, I recommend the authors address above concerns and resubmit to the next conference cycle, though if they choose to do a rebuttal, I remain optimistic and ready to change my evaluation if substantial evidence is presented.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. While we can’t update our submission now, we will revise it accordingly.
**Identifiability:**
Our definition (Appendix A.2) differs from [1], which builds on Roeder et al., 2021 ([A]), in two ways:
1. Following Wang et al., 2021, we define latent z as identifiable if z_1 != z_2 implies that p(y|z = z_1, \theta) != p(y|z = z_2, \theta), where y is the observed neural activity (this is Eq. (1) and L636). Since we use a linear projection C for p(y|z, \theta), this is satisfied if C in Eq. (1) is injective. This is a weaker condition than [A], which requires both z **and** \theta to be identifiable. For us, this is difficult as there is always some invertible A such that y = Cz = CAA^{-1}z. This was our motivation for performing SVD on C = USVt, and defining z_tilde = SVt z so that the distance in the space z_tilde is preserved in the space y (A.3). All FINDR analyses use z_tilde. FINDR’s latents (z_tilde) are thus identifiable up to an orthogonal transformation.
2. FINDR is a generative model, while CEBRA is a discriminative model. [1] and [A] focus on conditions for linear identifiability in discriminative models.
We will move A.2-A.3 to the main text, and clarify differences from [1].
**Fig. 3:**
We regret if there has been a misunderstanding. We *did* perform 5-fold CV to optimize hyperparameters of all models (paragraph starting in L332, 1st column). We used a hyperparameter-optimized LFADS (**auto**LFADS), using config as in L796. For SLDS and rSLDS, we optimized over discrete latents (L308).
Following Pei et al., 2022, we held out 20% of neurons by supplying the encoder with 80% held-in neurons. Then, the decoder in Eq. (1) reconstructed *all* neurons. We trained FINDR on 3/5 of the trials on all neurons, validated it on 1/5 of the trials on all neurons via grid search (A.1.6), and tested performance on the remaining 1/5 on held-out neurons. We performed the same procedure for autoLFADS. In Pandarinath et al., 2018, they used a linear decoder with exponential nonlinearity, and *not* a nonlinear decoder.
We will clarify details on evidence-conditioned PSTH R^2, normalized log-likelihood, and held-out neuron training.
**Fig. 4:**
Here, axes represent PC 1 and PC 2 of latents. For autoLFADS, the latent trajectories projected onto these axes were not consistent, whereas for FINDR, they were. To further evaluate consistency across folds, we sorted single-trial trajectories by evidence sign and computed the trial average of each group. Then, we calculated Pearson’s |r| of these trajectories between fold 1 and 2 for each latent axis and took the average of |r| across the axes. With this, FINDR folds were consistent by 0.99. In contrast, for autoLFADS (after doing A.3 transformation, just like FINDR), this was 0.53.
Unlike the transformation in A.3 which preserves distances in the latent space, linearly transforming autoLFADS fold 1 to match fold 2 increased |r| to 0.99, but, by doing linear transform, we stretch the latent space, so the distance in latent space is *not* preserved in the neural space (ignoring softplus). Without A.3, we wouldn’t be able to say e.g., the first latent dimension explains most of the variance of the task-relevant component of neural data.
**Benchmarking & Expressivity:**
As the reviewer points out, [1] is static and is not a dynamical model in the sense that it does not learn representations like our Eq. (2), and we can’t perform fixed point analysis. This is similar for [3]. The model in [2] is a finite-dimensional LDS with a nonlinear decoder, meaning it can’t learn nonlinear dynamics like bistable attractors.
Regarding explanation power (expressivity) of low-rank RNNs and benchmarking RNNs, Kim et al., 2023 defines a measure of practical expressivity and performs extensive analyses comparing RNNs of different architectures, including the one we use. Low-rank RNNs are a special case of our single-hidden-layer MLP without gating (Mastrogiuseppe & Ostojic, 2018).
**Fig. 2:**
Fig. 2 *doesn’t* show bistable dynamics, but a disk attractor (L203, 1st column). A 1-D variant similar to this task would generate a line attractor. We find that FINDR can also recover limit cycles and bistable attractors in synthetic data.
**FINDR’s goals:**
We will clarify that the two goals are field-wide, not just FINDR’s. We will make our contributions, including task-relevant and irrelevant dynamics, clearer with bullet points.
**Flow map confidence:**
Please see Reviewer PTUj’s Q1.
**Relevant work:**
We will cite Hu et al., Pals et al., and MARBLE as important concurrent work, and cite references mentioned by the reviewer. In particular, MARBLE estimates flow fields in neural space before embedding them in latent space, whereas FINDR estimates flows in latent space. Estimating flows in high-D space could be more sensitive to noise, and integrating state-space modeling (like the one here) with MARBLE could be an interesting future direction.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their well-structured rebuttal that clearly engages with the critiques raised in my original review. I found the following responses particularly convincing:
- The clarification on the methodology was much appreciated. The held-out neuron evaluation follows the correct procedure (kudos for fairness here), and the parameter optimization strategy is indeed reasonable (apologies for overlooking this). I recommend making both aspects clearer in the main methods section. I also appreciated the discussion on latent interpretability/identifiability - please ensure the manuscript is self-contained so that readers are not required to consult other works to follow these key ideas.
- I have a better appreciation for Fig. 2 now. I had missed that input strength was varied, making the emergence of a planar attractor quite reasonable. That said, it is still somewhat difficult to disambiguate this from bistable dynamics visually since the latter would also have very slow dynamics around the origin compared to the far edges. You might consider showing a potential function (e.g., in 3D) or a 1D projection to make the planar/disk attractor structure more visually explicit.
- The discussion on Kim et al. (2023) was helpful and I commend the authors’ willingness to balance their claims throughout the rebuttal
However, I remain unconvinced on the following points:
- Trust in flow fields: The response to Reviewer PTUj does not fully address my concern. I still do not see an example where learned flow fields meaningfully generalize beyond steady-state. For instance, how well does FINDR capture neural activity later in the trial versus trial-averaged activity? What about novel trials far from the training set? Such analyses would be necessary but not sufficient for a final publication.
- Benchmarking and expressivity: While CEBRA is indeed static, one can still extract latent variables and then fit dynamical models post hoc. From an experimental viewpoint, this approach can be just as informative, especially given the difficulty of validating the inferred flow fields (see my point above). In my humble opinion, this critique could have been better addressed via experiments rather than theoretical distinctions.
- Fig. 4: I’m still not fully convinced by the response. FINDR defines latent variables up to rotation, while linearly identifiable models allow both rotation and stretching - so it's unsurprising that FINDR captures geometry more precisely, while models like autoLFADS may preserve topology. This is an interesting observation, but perhaps not as impactful as initially framed. If the authors wish to emphasize it, I suggest expanding the theoretical motivation and clarifying the preprocessing (e.g., whether latent variables were standardized before PCA).
Overall, I believe the methodology is sound, but the paper would benefit from *substantial* revisions in presentation, benchmarking, and a clear focus on real-world (biology) applications. In particular, applying FINDR to a neural recording dataset to uncover a compelling insight would strengthen its contributions, possibly even making it a strong spotlight candidate at a future venue. That being said, at this time, I do not believe the manuscript meets the bar for ICML, but I am increasing my score to reflect my confidence in the methodology and the thoughtful rebuttal. For a final publication, the three remaining points I raised above should be sufficiently addressed. I am happy to reevaluate if authors provide evidence addressing these concerns.
I also want to emphasize that this line of work is highly promising for neuroscience, and with one round of major revisions, it is likely to become suitable for a top ML venue. In the event of a rejection by the AC in line with my recommendation, I would strongly encourage the authors to address this feedback and resubmit to NeurIPS.
Edit: I had some more time to think about it. I think the added benefit of having this work published overwhelms the weaknesses stemming from the benchmarking concerns. As long as the following are satisfied by April 8th and answered by authors with clear evidence, I will support an acceptance:
1) Please show us *whether* the model extrapolates beyond the training regions in one way or another. It doesn't have to be perfect, but this has to be present in the manuscript and has to be quantified.
2) Please confirm and commit to revising Fig. 4 to tone down the claims made about consistency across cross-validation folds. You are welcome to use the geometry vs topology distinction, but **the quantitative results after the linear transformation (r=.99) should be there**, i.e., make it clear that topology is preserved in both methods and the added benefit of the SVD is to preserve geometry **in this case**. I am asking for the specific wording of this to be present in the rebuttal response.
3) Please add CEBRA to Fig. 4 as a third method.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the opportunity to revise and improve our manuscript. We have tried our best to incorporate the suggested changes:
**(1)** Please see our response to Reviewer PTUj.
**(2)** Thank you for the suggestion---we will revise Fig. 4 and the main text to clarify our definition of consistency and ensure that quantitative results after linear transformation (|r|=.99) are included for autoLFADS. We will also clarify that while we find empirical evidence that FINDR discovers consistent representations for this dataset, consistency is not theoretically guaranteed and should be verified empirically on new datasets.
While we found that the autoLFADS factor *trajectories* are topologically consistent across folds, we also wanted to check whether the *dynamics* found by the LFADS generator for each fold are consistent with each other by identifying approximate attractors for each fold’s generator and seeing whether they match. Ext. Data Fig. 4 shows that FINDR consistently reveals two approximate attractor points corresponding to left and right choices, and that the trajectories, by 3s onwards, reach one of the two points.
Do we find similar bistable attractor-like structures across folds for autoLFADS? To see this, for each fold’s autoLFADS, we ran the trained generator forward in time for 5s, starting from the initial conditions inferred from the encoder. (448 initial conditions, because there were a total of 448 trials in this dataset.) We found that while autoLFADS states reached steady-state by 5s (states moved minimally during the 4-5s period), they did *not* form two clusters as would be expected from bistable attractors, both for folds 1 and 2. (We will include this figure in the revision.) Importantly, to see if the distribution of the fold-1 states in approximate steady-state match the distribution of the fold-2 states, we affine transformed the autoLFADS latent trajectories from 4s to 5s in fold 1 to match those in fold 2, and applied the Pearson’s |r| metric above. We found that they were not consistent (|r|=0.22). This suggests that even when the autoLFADS factor trajectories across folds are topologically consistent (|r|=0.99), this does not guarantee that the underlying dynamics that generated the trajectories by autoLFADS are consistent. Using a similar procedure for FINDR, we found |r|=0.94, consistent with the visualization in Ext. Data Fig. 4. For this |r|, we didn’t have to do an affine transformation and could simply use z_tilde’s from both folds.
**(3)** Yes, we will include CEBRA results for all 5 folds in Fig. 4b. In the past few days, we trained CEBRA-Time on folds 1 and 2 using hyperparameters from: https://cebra.ai/docs/demo_notebooks/CEBRA_best_practices.html#Items-to-consider. For both folds, when we color-coded the latents by evidence strength (like in Fig. 4), we saw a gradient in the latent space respecting the evidence strength.
We also trained a Euclidean-distance model ("offset10-model-mse", output dim=2). Although no theoretical guarantees on linear identifiability are provided in [1] or [A] for this model (in contrast to the cosine-distance model), we did see empirically that for this dataset, two folds were consistent by |r|=0.99. We also saw that the parts of the state space traversed by the trajectories depend on evidence strength.
However, as the reviewer pointed out, it is difficult to perform fixed point analysis on the latents of CEBRA without additionally fitting a dynamical model, e.g., FINDR or rSLDS. Whether combining CEBRA with dynamical models improves interpretability is an interesting future direction, but beyond the scope of this work.
A key distinction between FINDR and CEBRA, for this dataset, is that we find that sensory inputs perturb dynamics roughly along PC 1 in the latent space of FINDR, but this would be difficult to know using CEBRA.
**Summary:**
Our new analyses, together with results in our original submission, support that FINDR representations are consistent across folds---not only *topologically* but also *geometrically*---and reveal *dynamical* consistency, specifically two approximate slow points associated with left/right choices. Among all methods tested, only FINDR achieves both:
1. Strong performance on neural data
2. Discovery of consistent *low-D dynamical* representation, with interpretable slow points
**Clarification:**
>In particular, applying FINDR to a neural recording dataset to uncover a compelling insight would strengthen its contributions,
We would like to clarify that we *did* apply FINDR to real neural recordings---Fig. 3 and 4 are from neuropixels data. While our focus here is on the methods, we discuss scientific findings using FINDR in a separate *bioRxiv* paper.
We also mention in the Discussion what the representations found by FINDR mean in decision-making (L377, 2nd column), and more broadly, how FINDR could have a potential impact in neuroscience (L382, 2nd column). | Summary: Authors introduce a new method for latent variable inference of neural data. The essence is a sequential variational autoencoder. The main innovation is a “prior” which encourages the latent variables to satisfy an ODE. Using this method, they show that low-D latents are recovered in synthetic examples. When compared with other methods that are also limited to low-D latents, the performance on real data is better for very low-D, and comparable for slightly higher dimensions.
## After rebuttal
After reading all the rebuttals and discussion with all reviewers, I am keeping my score. The topic is important, and many methods have been introduced in recent years. While the proposed method here is novel and seems promising, the limited comparison to other methods and benchmarks weakens the contribution.
Claims And Evidence: The main claims are partially supported. The method is able to recover low-D synthetic latents. But, the abstract claims that “FINDR outperforms existing methods in capturing the heterogeneous responses of individual neurons”. Existing methods (e.g. LFADS) were not tested without the limitation of low-D latents.
Methods And Evaluation Criteria: There are existing benchmarks (neural latents benchmark, also cited in this paper) that offer quantitative measures of prediction (without limiting latent dimensionality). These were not used in the present work.
Theoretical Claims: Irrelevant
Experimental Designs Or Analyses: The synthetic example is indeed a case where we expect a 2D continuous attractor, and the method recovers it successfully. The irrelevant dynamics (constant bias) are perhaps not challenging enough.
As mentioned above, it would be good to test the method on other benchmarks, such as neural latents.
Supplementary Material: Yes. All
Relation To Broader Scientific Literature: This is part of the latent inference line of work. The authors mention these other works.
The emphasis here is on a low-D latent space, which can improve interpretability.
Essential References Not Discussed: Two works that also emphasize the low-D latents.
Valente, Adrian, Jonathan W. Pillow, and Srdjan Ostojic. “Extracting Computational Mechanisms from Neural Data Using Low-Rank RNNs.” Advances in Neural Information Processing Systems 35 (December 6, 2022): 24072–86.
Pals, Matthijs, A. Erdem Sağtekin, Felix Pei, Manuel Gloeckler, and Jakob H. Macke. “Inferring Stochastic Low-Rank Recurrent Neural Networks from Neural Data.” arXiv, February 26, 2025. https://doi.org/10.48550/arXiv.2406.16749.
Other Strengths And Weaknesses: Main strength is a new approach to encourage interpretable latents, which is an important goal. The comparison to other techniques and other benchmarks is somewhat lacking, which weakens the paper.
Other Comments Or Suggestions: none
Questions For Authors: In the identifiability section (A.2), you write that rank(C)=L. How does this fit with the results of figure 2D for L>2 ?
Equation 3 – should this be sqrt(dt)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We appreciate their suggestion to test FINDR on the Neural Latents Benchmark (NLB) to further support our claims. We agree that evaluating FINDR on multiple real datasets, including the public training and validation datasets available from NLB, could strengthen our findings. However, as the reviewer pointed out, we think that one of the main goals of our paper is to be able to discover interpretable dynamics (i.e., performing analyses similar to Fig. 4), rather than solely outperforming other models on neural activity prediction (though we should make sure that our model performs reasonably well on this). Since analyses similar to Fig. 4 for the models currently fit to datasets in NLB (for models where this is possible, like rSLDS) are not readily available, we think the dataset we use is as good a choice as datasets in NLB.
The winning model of the NLB challenge in 2021 was based on transformers, which does not give a dynamical interpretation. Similarly, autoLFADS (with or without the rank bottleneck) doesn’t enable plotting low-dimensional vector fields (even when L<=3). We suspect that transformers and autoLFADS with L>20 will outperform FINDR given that autoLFADS with L=20 performed similarly to FINDR with L=2 in terms of R^2 (Fig. 4a). However, given that FINDR performs reasonably well in predicting responses from held-out neurons (Fig. 3c), in addition to being able to provide low-dimensional flow fields, we think this is where the strength of FINDR lies. We will clarify in the main text that FINDR only outperforms existing methods in low dimensions, and revise the abstract so that it reads “we demonstrate that FINDR performs competitively against existing methods…”.
Regarding A.2, thank you for pointing this out. We should have been more clear. The rank of `C` is computed with `numpy.linalg.matrix_rank`, which computes the SVD and counts the number of singular values that are greater than some value `tol` to compute the rank. By default, `numpy.linalg.matrix_rank` sets `tol = S.max(axis=-1, keepdims=True) * max(C.shape[-2:]) * numpy.finfo(S.dtype).eps`, where `S` is the vector of singular values. While, for example, the column for L=6 in Fig. 2d shows that variance explained is close to 1 after L=2, the singular values of `C` here were 96.303, 86.11196, 9.6449, 8.212957, 7.338667, 1.3648185, which were all greater than `tol` which, for this `C`, was 0.0057.
Regarding Equation 3---thank you for catching this! This is a typo, and it should have been \sqrt(dt).
---
Rebuttal Comment 1.1:
Comment: Thanks for all the clarifications to my comments and those of the other reviewers.
Perhaps a more relevant benchmark is the computation through dynamics benchmarks. I'm aware that it's quite new, but it could still strengthen the paper to compare existing methods in such a manner.
Another relevant recent work (but on arxiv since 2022) is Langdon and Engel (Nat Neur 2025).
And another preprint is Versteeg, Sedler, McCart, Pandarinath (arxiv 2023), where in Figure 6 performance of ODIN (and other models) is compared to the peak performance of autoLFADS.
---
Reply to Comment 1.1.1:
Comment: Thank you for suggesting more references/benchmark relevant to our work! We will make sure to cite them in our revision. | Summary: FINDR (Flow-field Inference from Neural Data using deep Recurrent networks) is an unsupervised method for discovering low-dimensional neural population dynamics. It uses a gated neural drift function, decomposing spiking activity into task-relevant latents and a time-varying bias capturing non-task effects. A sequential variational autoencoder ensures accurate data reconstruction and interpretable dynamics. FINDR offers an interpretable approach for modeling complex neural computations through low-dimensional flow-field visualizations and analysis.
Claims And Evidence: The authors claimed that FINDR is able to compress the neural data into a abstract low-d latent space and learn the dynamics in the latent space, which have been supported by their experiments.
Methods And Evaluation Criteria: Yes, the cross-validated log-likelihood and PSTH-based R² are standard metrics for spike train models, helping verify predictive accuracy and interpretability.
The validity of their main method: gated drift dunction was verified in previous litratures, not included in the present work.
Theoretical Claims: The main theoretical aspect is that the gating structure of the drift function improves the expressivity, stability, and trainability of the neural SDE, referencing prior work in gated neural ODEs, while there are no formal proofs in the text.
Other claims seemed correct.
Experimental Designs Or Analyses: The synthetic data experiments were designed to confirm whether FINDR could recover known continuous attractors, a standard check for dimensionality and topology in latent-variable models. The real-data analysis employed a five-fold cross-validation scheme, ensuring robust validation of held-out neurons. Both approaches appear well-executed and adhere to common practices (e.g., log-likelihood and PSTH R²). A minor limitation is that the real-data setup features only a two-choice decision-making task, which might not reflect more complex or multidimensional behaviors.
Supplementary Material: I reviewed A.1.1, which provides details on task-irrelevant feature learning.
Relation To Broader Scientific Literature: FINDR extends classical state-space models (LDS, GPFA) and deep generative approaches (LFADS) by using a gated nonlinear architecture that separates task-irrelevant components. This design builds on continuous-attractor concepts and addresses mixed selectivity, yielding stable, interpretable latent spaces and contributing new insights into neural dynamics and computational neuroscience.
Essential References Not Discussed: not to my knowledge
Other Strengths And Weaknesses: - Strengths
1.The paper is well organized, with a clear exposition of the model through equations and figures.
2.It is novel in how it explicitly separates task-irrelevant features, an approach that helps maintain a stable latent space across trials.
3.Both synthetic and real datasets are used to verify the model, enhancing confidence in its validity.
- Weakness
The real-data experiments focus on a two-choice decision-making task, which may be relatively simple.
Other Comments Or Suggestions: To my understanding, the decoder should take d as input to reconstruct the firing rates, but d is not included in the figure.
Questions For Authors: 1.The flow field is learned from encoder-inferred trajectories reflecting real data; how accurate might the learned dynamics be in latent regions not sampled by the dataset?
2.Why is it necessary to optimize d first, then the other parameters? How does the approach ensure that task-relevant and irrelevant features are distinctly captured by d vs. the latent variables?
3.To what extent does FINDR preserve geometric relationships in the latent space? For instance, if an animal navigates a T-maze, would the model maintain the T-shaped structure in the low-dimensional representation?
4.How large must the MLP be (e.g., number of layers and hidden units) for stable training and robust performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We will change Fig. 1 to include d in the revision.
**Q1:** We should have clarified that the colored trajectories in Fig. 4b represent trial-averages sorted by evidence strength, and inside the dotted line represents part of the state space *visited* by single-trial trajectories. We did not show part of the state space not visited by single-trial trajectories.
In the revision, we could show a “confidence heatmap” showing the probability of the single-trial trajectories visiting a particular region of the state space.
**Q2:** By design, task-irrelevant components do not use the task-relevant inputs u to capture within- and across-trial baseline firing activity. We reasoned that by optimizing d first, we let the model predict neural activity as much as it can without task-relevant information. If this part is successful, then the dynamics capturing the residual of the task-irrelevant component from inputs u would be task-relevant.
**Q3:** The Euclidean distance in the neural state space (ignoring softplus) is preserved in the latent space. For more details, please see Appendix A.3 and response to Reviewer UnSA on identifiability.
**Q4:** We perform hyperparameter grid search to ensure that we analyze latent representations from a model with good training and performance (Appendix A.1.6). We found that the optimal size of the MLP, and other important hyperparameters depend on the dataset.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! The answers properly addressed my concerns.
Following Q1, to my understanding, your model is able to be extrapolated to the regions outside the dotted enclosure. To what extent does it still preserve meaningful dynamics in these regions?
---
Reply to Comment 1.1.1:
Comment: Thank you for this important question. Reviewer UnSA raised a similar point, and we address both here. If we understand correctly, the question is whether the FINDR-inferred flow field can be trusted outside the dotted enclosure (i.e., outside the region of state space visited by the single-trial trajectories).
We would like to clarify that we do not make any claims about the accuracy of vector arrows outside the enclosure in this manuscript or elsewhere, and the method is not built or intended to work outside the enclosure. It is intended to succinctly summarize and characterize the dynamics underlying the training dataset. Extrapolating beyond it would involve further assumptions beyond the scope of FINDR.
That said, we recognize that in some cases, knowing the dynamics only within the dotted region is not sufficient to draw conclusions about the dynamics of the system. This is especially the case if the model has to infer the underlying flow field from only a few dynamical trajectories (e.g., Hu et al., 2024, Nair et al., 2023), as there can be multiple possible flow fields that could have generated those few trajectories. This was less the case for our real-world application in Fig. 3 and 4, where the dataset consisted of the rat performing a single decision per trial, with a total 448 of them. Because we infer the flow field that needs to satisfy 4/5 of the 448 dynamical trajectories (because we do 5-fold CV), this helps us be more confident in our estimate of the flow field.
If one had to work with a limited number of trajectories, and to truly test whether the model can extrapolate outside the enclosure, as Reviewer UnSA pointed out, one could experimentally perturb the neural state out of the enclosure, and evaluate whether the relaxation dynamics of the neural trajectory follow the flow field learned by the model. We think perturbations are beyond the scope of our work.
The problem of whether we can identify the correct flow field when there is a limited number of trajectories is a general problem that applies not just to FINDR but to all dynamical systems identification models. In other fields where we have side information about the system, e.g., that the system needs to conserve energy, having such information built into the model has been shown to be helpful for accurately identifying the system (e.g., [a], [b]). In neuroscience, this is a much harder challenge, but we think it is an important future direction to develop flexible models that can integrate multiple types of data/information to help constrain the space of possible flow fields, especially in scenarios where there are a limited number of dynamical trajectories (e.g., development, learning). Using simpler models to fit a limited number of trajectories, without side information, will not necessarily generalize well to the region of the unobserved state space.
The Discussion in L417, 1st column, is also relevant to some aspects discussed here.
**New results on extrapolation performance:** To directly assess how well FINDR extrapolates from training data, we trained FINDR to the same dataset used in Fig. 4, but held out the last 0.1s of each trial (=10 time steps; a single trial was typically <1s long in this task). We then computed 5-fold CV R^2 for each time step between z_tilde from the full vs. the held-out models. We found that R^2 starts around 0.70 for one step forward and monotonically decreases to 0.63 by the end of the trial. We also confirmed that the 5-fold CV log-likelihood score for neural activity (one used in Fig. 3b) is similar for both models for the held-out 10 time steps (0.028 for full vs 0.024 for held-out). We will include a relevant figure for this in the Appendix.
**Summary:**
1. Extrapolating beyond the dotted enclosure requires either experimental perturbation data or additional assumptions on the dynamics that are beyond the scope of this work.
2. Nonetheless, FINDR can extrapolate across trial epochs. The model trained on the initial periods of the trial can well predict single-trial trajectories in the later period.
[a] Ahmadi, A. A. & El Khadir, B. (2023). Learning Dynamical Systems with Side Information. *SIAM Review*.
[b] Greydanus, S. et al. (2019). Hamiltonian Neural Networks. *NeurIPS*. | Summary: The paper introduces FINDR, a method for inference of low-dimensional stochastic dynamics from neural recordings.
The proposed approach estimates first the bias $d$ of the observation function (called task irrelevant component of the spiking activity) by solving a regression problem for the average firing rates of each neuron. This aims to capture fluctuations in the spiking activity not attributed to the latent dynamics.
The authors first estimate an across-trial bias, by fitting a linear model with raised cosine basis functions to the trial-averaged firing rates, capturing slow fluctuations in each neuron’s baseline activity. They then model fast within-trial fluctuations using another linear model with raised cosine basis functions and sum both components to obtain the overall time-varying bias $d$ for each neuron.
The main inference method uses a sequential variational autoencoder with a semi-orthogonal loading matrix to infer a low-dimensional latent representation of neural population activity, and to learn underlying stochastic dynamics (flow field inference). The authors model the drift function with gated multilayer perceptrons The optimization uses backpropagation through time to compute the gradients.
The authors first demonstrate the method on synthetic data, showing that it estimates correctly the underlying dynamics, performing better than other models, when the latent dimension is correct. Then, on selected recorded data from rat PFC performing a decision-making task, the authors show that FINDR outperforms SLDS, rSLDS, autoLFADS, and GPFA in predicting held-out neuron activity. The authors also show that the inferred flow fields are consistent across cross-validation folds and identify attractors related to the decision-making task.
Overall, the method demonstrates good performance. However, its technical implementation is somewhat involved. Nevertheless, the complexity of the approach is well suited for an application-focused paper.
Claims And Evidence: - The main claims are that FINDR infers accurate dynamics, outperforms existing methods, and provides interpretable visualizations. The synthetic data experiments support the first claim by showing correct latent dimension recovery and attractor structure.
- The gated parametrisation of the drift is more expressive and trainable compared to non-gated models. I am not sure that there is demonstrated evidence for this in the paper.
- Consistency across folds and attractor visualization in flow fields support the third claim. However, such interpretable visualisation of flow-filed can be made for all other baseline methods used for comparison.
The interpretability of flow fields is demonstrated but might benefit from more quantitative measures. The separation of task-relevant/irrelevant components is shown to improve consistency, but the paper doesn't explore what exactly the task-irrelevant components capture.
- The paper asserts that it can effectively disentangle task-relevant from task-irrelevant neural activity by estimating two components of the bias d, designed to capture within-trial and across-trial fluctuations that are not explained by the latent dynamics. However, the current version of the paper lacks clear evidence or numerical validation demonstrating the accuracy of this estimation method and the parameters of this part of the method are not rigorously analysed. In the Ext figure 5 the authors claim that by setting a single constant bias $d$ across trials the estimated flow fields are less consistent across splits, but to my eyes the projected trajectories in the five splits of that figure seem rather consistent, and the inconsistency seems to occur in the parts of the (projected) state space not visited by the system (see also my questions to the authors below for this part.)
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-motivated and suited for inferring neural population dynamics.
The authors first apply the method on synthetic data with known underlying dynamics, and demonstrate the performance in terms of normalised log-likelihood on test data.
The authors could probably use a metric that quantifies differences between ground truth and estimated flow field for this setting with known dynamics to directly show the accuracy of the flow field inference part.
The use of synthetic datasets, with known underlying attractor structures, provides a controlled environment to verify that the method can accurately recover latent dynamics. The application to real neural recordings using cross-validated metrics (normalized log-likelihood, R²) offers a framework for assessing performance on recorded data with unknown latent dynamics.
The authors could further report forward prediction metrics.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: No
Supplementary Material: Yes
A.1. and A.2
Extended Data Figure 1,4,5,6, 2,3
Relation To Broader Scientific Literature: The paper integrates and extends several strands of research in computational neuroscience and machine learning. First, it builds on previous approaches for modeling neural population activity via low-dimensional latent variables, such as GPFA, LFADS, and variants of switching linear dynamical systems. Most of available frameworks consider either latent deterministic or linear dynamics or autonomous dynamics. The work further builds on the literature of neural ODEs (and related differential approaches) by adapting these techniques specifically to spiking neural data.
Essential References Not Discussed: - Genkin, M., Hughes, O., & Engel, T. A. (2021). Learning non-stationary Langevin dynamics from stochastic observations of latent trajectories. Nature communications.
- Genkin, M., & Engel, T. A. (2020). Moving beyond generalization to accurate interpretation of flexible models. Nature machine intelligence.
- Schimel, M., Kao, T. C., Jensen, K. T., & Hennequin, G. (2022). iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data. ICLR.
- Zhao, Y., & Park, I. M. (2020). Variational online learning of neural dynamics. Frontiers in computational neuroscience.
Other Strengths And Weaknesses: Strengths:
- Well-organized and clearly written paper.
- Good performance on demonstrated datasets.
- Consistency of estimated dynamics across splits.
- Needs more evidence, but the bias estimation part is interesting, and to my knowledge the approach is novel.
Weaknesses:
- The method is rather technically involved, with multiple steps and hyperparameter dependencies, potentially limiting accessibility.
- Although a hyperparameter grid-search is described, additional explicit analyses demonstrating robustness or sensitivity across hyperparameter choices and initialisations is missing.
- Also see my questions below.
Other Comments Or Suggestions: The authors comment on the identifiability of the observation model in the appendix, but they don't discuss the identifiability of the latent dynamics (drift and diffusion of the SDE) when observed through spikes.
I wonder how would the method compare in terms of accuracy and required amount of data when compared with the method off Dunker et al 2019, and Genkin and Engel 2020, when applied on a synthetic dataset with autonomous latent dynamics.
Questions For Authors: - In FINDR, the SDE’s time constant $\tau$ (as used in Eqs. 2–3) sets the timescale of the intrinsic latent dynamics. Do you estimate $\tau$ directly from the data, or is it fixed a priori? Have you observed any relationship between this timescale and factors such as the required number of recorded neurons/heir average firing rates for accurate inference of the latent dynamics?
- Do you assume that the external inputs $u$ are known? If not, how do you estimate them? If you do assume that they are known, how do you set them up in the examples in Figure 3 and 4?
- I have a few questions regarding the estimation of the bias term of the observation model, that the authors name "task irrelevant activity". The authors infer first this bias, before learning the dynamics and the other parameters.
- 1] How uniquely can you identify the bias term of each neuron, as opposed to attributing a shift of the dynamics part to this bias?
- 2] Does the proposed approach estimate the d values that were used when generating the spike trains in the simulated experiments, or are they just identified good enough to allow estimation of the latent dynamics? Can the authors provide a scatter plot with the d_estimated vs d_true values to demonstrate how well this part of the method works?
- 3] Moreover I think I would be useful to make an ablation study for this part of the estimation to test how well the inference method would work without estimating d, or without the individual across trial and within trial d components.
- In the paper of Genkin, & Engel mentioned above, the authors make a discussion regarding overfitting of latent stochastic dynamical models with increasing number of spiking data. Do you observe something similar in your framework requiring to adjust the parameter $/beta$, or does the selection of $\beta=2$ suffice to overcome this issue?
- In the model parameters to be inferred is the noise covariance matrix $\Sigma$ of the latent process. How do you estimate $\Sigma$ and how accurate is this estimation in the synthetic dataset?
- How necessary is the gating function of the gated neural ODE for accurate inference of the latent dynamics? Would a nODE without $G$ fail capture the latent flow field?
- What are the computational demands of each part of the proposed inference method, and how does is compare to the methods used as baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful comments.
**Necessity of gating:**
Kim et al., ICML, 2023 ([B]) shows that gating increases expressivity and trainability of the dynamics function. Their Fig. 1 suggests gating is necessary for correctly inferring dynamics in our synthetic dataset.
**Flow fields and interoperability:**
We respectfully disagree with the reviewer that flow fields can be visualized for all other baselines. While rSLDS allows flow field visualization, SLDS, autoLFADS, and GPFA do not. rSLDS failed to capture neural data (Fig. 3d), likely due to learning a constant d, and not a time-varying d, so we did not plot flow fields estimated using rSLDS.
Previous work has proposed that the model is interpretable if it discovers representations of computation that we can identify in the low-D dynamics ([B]; Duncker et al., ICML, 2019). Accordingly, one way to quantify interpretability in the flow field is in terms of the log-likelihood and R^2 metrics of how well neural data is captured in low-D (L<=3). Using these metrics, FINDR was the only model that captured neural data well in low-D.
**Task-irrelevant components and consistency in flow field:**
By design, task-irrelevant components do *not* use the task-relevant inputs u to capture within- and across-trial baseline firing activity. In Ext. Data Fig. 5, colored trajectories represent trial-averages sorted by evidence strength (as in Fig. 4b), and inside the dotted line represents part of the state space *visited* by single-trial trajectories.
Learning time-varying d improves performance (Ext. Data Fig. 5, last panel), and is necessary---when d is constant, the median evidence-sign conditioned PSTH R^2 is -0.009. We will include this result as the first panel, move the last panel next to this panel, and clarify the definition of the dotted line.
**Limited accessibility:**
We will release our code with documentation and a tutorial notebook.
**Model robustness:**
We find that the second best hyperparameter choice gives a representation consistent with the best choice. We also find a consistent representation when we train the model with a different initialization.
**Identifiability:**
Please see response to Reviewer UnSA.
**Additional comparisons with autonomous dynamics:**
While we did not directly compare to Duncker et al. and Genkin & Engel, we will add analysis showing that FINDR correctly captures autonomous limit cycle dynamics in a synthetic dataset.
**Questions**
(1) \tau=0.1s was a fixed hyperparameter (A.1.7), but the timescale of the dynamics is adaptive. That is, it depends on z and u through the gating function G. [B] analyzed expressivity as a function of \tau (their Fig. 3), and showed that as \tau increases, the drift function \mu needs more parameters to fit data well. Given the number of our model parameters and trial durations (<1s), \tau=0.1s was a reasonable choice.
(2) Yes, external inputs u are given to the model. We will clarify in our revision that in Fig. 3 and 4, u_t was 2-D: [0;0] (no click), [0;1] (right click), [1;0] (left click).
(3-1) If there is a constant shift in dynamics that is independent of u, this will be included as bias d and not as part of z. For the task-irrelevant component, because we use a linear basis function model, the solution we obtain is analytical and unique, given the regularization coefficients, which we optimize using the validation dataset (A.1.1).
(3-2) We validated d_estimated against d_true using real datasets. Across-trial d_estimated closely matches true across-trial firing rate fluctuations, and within-trial d_estimated matches the observed PSTH (R^2=0.82). We will include relevant figures in the Appendix.
(3-3) Our ablation study (Ext. Data Fig. 5, last panel) shows that without estimating time-varying d, the model fails to capture data (median R^2: -0.009).
(4) Similar in spirit to the approach in Genkin & Engel, we split the data into five different folds to find features that are consistent across folds. We consistently find two approximate point attractors with the selection of \beta=2 (Ext. Data Fig. 4). We will mention this, and cite references suggested by the reviewer.
(5) We learned the parameters of the diagonal elements of \Sigma (L99, 2nd column, Eq. (25)) directly via SGD. For generating the ground truth latents, we first generated the latents and added noise to each timestep with N(0, \sigma^2=0.01). The inferred \Sigma, after affine transformation to match ground truth latents, was found to be [[0.026, 0.001]; [0.001, 0.018]].
(6) For Fig. 3, the inference of the task-irrelevant component doesn’t require GPUs, and takes <30 minutes. Task-relevant component (jax/flax-based) takes <1.5 hours per hyperparameter configuration on a single A100 GPU. It typically took total ~5 hours for FINDR to do grid search and complete training on our cluster. AutoLFADS (PyTorch) took ~6 hours. For SLDS, rSLDS and GPFA, no GPU was used, and typical runtime was <1 hour.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response! I appreciate the detailed explanations.
I agree that from the selected baselines only rLDS allows for flow field visualization, but most state space models like the ones referenced in my Essential References Not Discussed also do.
A limitation of the approach is the requirement of knowing the external inputs $u$ but I find it ok given the other contributions, especially the part of fitting the task irrelevant component $d$ that is often used as a constant value in similar models.
I consider the comparison with one of the mentioned papers that consider autonomous latent stochastic dynamics would considerably improve the paper.
I have a small question in Eq. 25: Why do you pass the noise variables through the sigmoid function? Doesn't this limit the value of the noise that could be affecting the latent dynamics, or am I missing something?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response to our rebuttal! We will make sure to cite the references in the Essential References Not Discussed.
While external inputs u were given to FINDR in our application, it is relatively straightforward to build a controller to infer inputs, similar to how it is done in LFADS, without altering the training pipeline. Thus, we don’t necessarily see this as a limitation of the method itself, though inferring inputs has not been done here.
We also appreciate your question regarding the sigmoid. In Eq. (1), Cz=C*(1/a)\*a\*z, and we can define C’=C*(1/a) and z’=a*z for any scalar a (for more details, please see **Identifiability** in the response to Reviewer UnSA). Thus, the sigmoid, in principle, shouldn’t affect model expressivity. Empirically, we found that using the sigmoid instead of the softplus function was helpful for training stability. | null | null | null | null | null | null |
Refined generalization analysis of the Deep Ritz Method and Physics-Informed Neural Networks | Accept (poster) | Summary: The paper proposed refined generalization bounds for the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs), including the Poisson equation and static Schrödinger equation on the $d$-dimensional unit hypercube with the Neumann boundary condition.
## update after rebuttal
Most of my concerns have been solved, so I maintain my positive rating.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Not available.
Theoretical Claims: The paper presents a logically structured set of theoretical claims supported by established mathematical techniques.
Experimental Designs Or Analyses: Not applicable. There are no experiments to validate the proposed theoretical claims.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: This paper proposed refined generalization analysis for Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. Although the paper demonstrates good generalization performance in low-dimensional cases, how to extend these theories to more complex high-dimensional problems, especially when the solution of the PDE belongs to complex spaces, still requires further research.
2. The paper primarily focuses on Neumann boundary conditions and linear PDEs, with limited in-depth discussion of other types of boundary conditions (such as mixed boundary conditions) or nonlinear PDEs. Extending this work to more diverse physical problems is a potential research direction.
3. While the paper proposes improved generalization analysis and approximation rates, the consumption of computational resources and time remains an issue in practical applications, particularly for complex physical scenarios and large-scale problems. Further exploration is needed on how to reduce computational costs while maintaining accuracy.
4. Viewing PINNs as a multi-task learning problem is an effective framework, but it may lead to interference between tasks, especially when the nature of the tasks varies significantly. Further optimization of task balancing and interaction in multi-task learning could enhance the model’s performance.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the Other Strengths And Weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time in reviewing our manuscript and your valuable insights. Below, we address your questions point by point.
**Q1**: Extension of theories to more complex high-dimensional problems.
**A1**: In this work, we analyze two scenarios: (1) when the solutions of PDEs reside in Barron spaces, and (2) when they belong to Sobolev spaces. For Sobolev spaces, certain constants in our results demonstrate exponential dependence on dimensionality, which limits their applicability to high-dimensional problems. This limitation is inevitable because the complexity of Sobolev spaces grows dramatically with increasing dimensions, leading to exponential scaling of certain constants in neural network approximation errors. In contrast, for Barron spaces, the constants in the generalization error exhibit at most polynomial dependence on dimensionality, making our results meaningful for high-dimensional settings.
To establish fundamental theoretical insights, this work primarily analyzes the settings where solutions in Barron or Sobolev spaces. The extension to more complex spaces is an important direction for future research. We conjecture that our methodology can be generalized to these settings with appropriate modifications.
**Q2**: Extension to other types of boundary conditions and nonlinear PDEs.
**A2**: Regarding the extension to other boundary conditions, such as the following mixed boundary conditions:
$$ -\Delta u+Vu =f \ in \ \Omega, \ u+\beta\frac{\partial u}{\partial n}=g \ on \ \partial \Omega,$$
where $\beta\neq 0$. A direct analysis shows that the solution $u^{* }$ of this equation satisfies that
$$u^{* }=\mathop{\arg\min} _{u \in H^1(\Omega) } \mathcal{E}(u):=\mathop{\arg\min} _{u \in H^1(\Omega) } \int _{\Omega} \left(||\nabla u||^2+Vu-2fu \right)dx+ \frac{1}{\beta} \int _{\partial \Omega} \left(u^2-2gu \right)ds. $$
Moreover, for any $u\in H^1(\Omega)$, we have
$$ \mathcal{E}(u)-\mathcal{E}(u^{* })\lesssim ||u-u^{* }|| _{H^1(\Omega)}^2\lesssim \mathcal{E}(u)-\mathcal{E}(u^{* }),$$
which is similar to the strong convexity property required in our analysis for Neumann boundary conditions (see equation (4) and (6)), suggesting that our method can be naturally extended to mixed boundary conditions. For further details, please refer to Section D.2 (line 2851), where we discuss other boundary conditions.
Regarding the extension to nonlinear PDEs within the PINNs framework, our method can also generalize to a broad class of nonlinear PDEs. For instance, let us consider the following equation:
$$\mathcal{D}(u)=f(u) \ in \ \Omega, \ u=g \ on \ \partial \Omega,$$
where $\mathcal{D}$ is a linear differential operator and $f$ is the Lipschitz nonlinear term. Here, for simplicity, we denote the loss function as
$$L(u)=||\mathcal{D}(u)-f(u)|| _{L^2(\Omega)}^2+||u-g|| _{L^2(\partial \Omega)}^2.$$
Assume that $u^{* }$ is the true solution, then we can deduce that
\begin{align*}
L(u)&=||\mathcal{D}(u)-f(u)-\mathcal{D}(u^{* })-f(u^{* })|| _{L^2(\Omega)}^2+||u-u^{* }|| _{L^2(\partial \Omega)}^2 \\\\
&\lesssim ||\mathcal{D}(u)-\mathcal{D}(u^{* })|| _{L^2(\Omega)}^2 +||f(u)-f(u^{* })|| _{L^2(\Omega)}^2+ ||u-u^{* }|| _{H^1(\Omega)}^2\\\\
&\lesssim ||\mathcal{D}(u)-\mathcal{D}(u^{* })|| _{L^2(\Omega)}^2 +||u-u^{* }|| _{L^2(\Omega)}^2+ ||u-u^{* }|| _{H^1(\Omega)}^2.
\end{align*}
Therefore, this bound is similar to that in the linear case, and our method remains applicable.
Moreover, we agree with the reviewer's suggestion. Extending this framework to broader physical problems (e.g., inverse problems) is indeed an important direction.
**Q3**: Practical applications.
**A3**: In this work, we primarily focus on generalization analysis and approximation rates, while optimization aspects are beyond the scope of this study. As you rightly pointed out, the computational resource and time requirements of neural network-based PDE solvers indeed limit their applicability to complex physical systems and large-scale problems. Developing computationally efficient optimization methods for these solvers with guaranteed accuracy remains a key focus of our ongoing research.
**Q4**: Issues in viewing PINNs as a multi-task learning problem.
**A4**: From a theoretical perspective, formulating PINNs as a multi-task learning problem allows us to derive tighter generalization bounds through the use of local Rademacher complexity in the multi-task setting. In the experimental perspective, as you mentioned, interactions between different tasks may indeed lead to certain issues, since the loss functions contain multiple different additive terms that
can disagree and yield conflicting update directions. Some recent studies, such as [1] and [2], have developed efficient algorithms to mitigate such conflicts.
**References**:
[1]: Config: Towards conflict-free training of physics informed neural
networks. ICLR 2025.
[2]: Dual cone gradient descent for training physics-informed
neural networks. NeurIPS 2024. | Summary: The manuscript presents a detailed error analysis for physics-informed neural networks (PINNs) and the deep Ritz method (DRM) for linear elliptic equations. Both, the case of the true solution belonging to a Barron space and a Sobolev space are discussed. Additionally, certain approximation theoretic results are provided, noteably approximation rates in Barron spaces are proven with explicit control over the neural networks weights, which is of independent interest. The manuscript is carefully written, well understandable and technically mature.
## update after rebuttal
I maintain my positive assessment of the manuscript.
Claims And Evidence: All results are substantiated with complete proofs.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I checked the proof strategy but not the technical details.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: The work is well contextualized.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Strengths:
- The analysis is -- up to the optimization error -- complete. The manuscript is the most comprehensive error analysis of the DRM and PINNs that I am aware of.
- It treats both the setting of Sobolev spaces and Barron spaces.
Weakness:
- The manuscript would benefit from simulations illustrating some of the theoretical findings. I think this is essential and will add value to the paper. From a practitioners point view, it is important to know to which extent theory and practice meet.
- From a practical/optimization standpoint, it is more natural for the deep Ritz case to consider $H^2$ conforming networks. While $H^1$ clearly suffices for the variational energies to be well defined, one typically needs another derivative to perform gradient-based optimization schemes. For PINNs, the authors seem to already consider $\operatorname{ReLU}^3$ networks (i.e. the extra derivative), at least in the case of the results for Sobolev functions.
Other Comments Or Suggestions: The main concern I have with this paper is whether it fits the scope of ICML. It is a pure error analysis paper with no simulations. In my opinion, the manuscript would benefit from the review process in a mathematical journal focused on numerical analysis more than the review process at ICML. This concern does not influence my rating of the paper.
On page 8 of the manuscript, the authors discuss the relation of the PINN loss to the error measured in certain norms. There are estimates available in the literature for precisely this question, see https://academic.oup.com/imajna/advance-article-abstract/doi/10.1093/imanum/drae081/7904789
Questions For Authors: - Can the results be extended to more general activation functions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for your thorough review and thoughtful suggestions. Below we provide detailed responses to each of your comments.
**Q1**: Missing experiments.
**A1**: We agree that numerical validation would enhance the work. About experimental validation, the experiments in [1] have demonstrated that even in 100-dimensional cases, the Deep Ritz Method (DRM) attains a relative error of less than 20% for the Poisson equation and static Schrödinger equation in the setting of Barron space. Furthermore, when comparing experimental results with theoretical bounds, [1] observed that the convergence rates in their generalization bounds may not be sharp. Consequently, the experimental findings in [1] provide partial support for our results, as our theoretical framework aligns with theirs—while offering improved generalization bounds.
Regarding why we did not include experiments: Like most theoretical papers, we have chosen to omit optimization error. In future work, we plan to account for optimization error and conduct experiments to validate our findings.
**Q2**: Consider $H^2$ conforming networks for Deep Ritz Method (DRM) .
**A2**: We totally agree with you. In fact, if we assume that the solutions belong to Barron space $\mathcal{B}^s(\Omega)$ with $s\geq 3$, then $\text{ReLU}^2$ activation functions (i.e. $H^2$ conforming networks) could be employed for DRM, and our theoretical framework Theorem 2.4&2.5 still remains valid.
In this work, both DRM and PINNs are analyzed under the weakest solution regularity assumption -- specifically, we assume the solutions reside in relatively low-order Barron spaces (e.g. $\mathcal{B}^3(\Omega)$ in Theorem 3.4). This fundamental assumption governs our choice of activation functions throughout the paper. For example, for solutions in $\mathcal{B}^s(\Omega)$ with $s\geq 4$, $\text{ReLU}^3$ activations would become feasible for PINNs, and our theoretical framework Theorem 3.4 remains valid.
**Q3**: Regarding the positioning of this work.
**A3**: We can understand the reviewer’s concern about whether our purely theoretical error analysis fits ICML’s scope, especially given its focus on theoretical contributions without numerical simulations. However, **we believe this work aligns with ICML’s mission to advance machine learning theory, as it addresses certain challenges arising in physics-informed learning where commonly used ML tools fail or are insufficient**. Whereas papers in mathematical journals often rely on numerical integration with well-established error estimates, our work tackles the more complex setting of Monte Carlo methods for computing high-dimensional integrals arising in neural network loss functions---a setting that requires different theoretical tools. Specifically, we contribute by (1) developing a new theoretical framework for physics-informed learning and (2) deriving meaningful error bounds for Barron spaces in the overparameterized regime, advancing the theoretical foundations of machine learning for scientific computing domain.
**Q4**: Missing references.
**A4**: We sincerely appreciate the reviewer for pointing out this important reference. It indeed provides valuable insights, particularly for the second-order elliptic equations where Lemma 1 parallels our Lemma C.11. Additionally, this reference provides a thorough analysis for other types of PDEs, which broadens the scope. In the revised manuscript, we will cite this reference and incorporate this discussion on page 8 to offer readers a more comprehensive perspective.
**Q5**: Extension to more general activation functions.
**A5**: While our current analysis primarily focuses on $\text{ReLU}^k$
activation functions, the framework can indeed be extended to more general activations. Here we outline a concrete example using tanh activations for the static Schrödinger equation (Theorem 2.5(2)):
For the Deep Ritz method, the generalization bound under this setting takes the form:
$$\frac{\alpha}{n}+\epsilon _{app}^2,$$
where $n$ is the sample size, $\alpha$ represents the neural network complexity and $\epsilon _{app}$ is the approximation error in the $H^1$ norm. When employing tanh activations, Theorem 5.1 of [2] demonstrates that for the solution $u _{S}^{* }\in W^{k,\infty}$
of the static Schrödinger equation, there exists a two-hidden-layer tanh network with width at most $\mathcal{O}(N^d)$ achieving an approximation error $\epsilon _{app}=\mathcal{O}(\frac{1}{N^{k-1}})$. In this setting, we have $\alpha=\mathcal{O}(N^d)$. Then taking a proper $N$ yields:
$$\frac{\alpha}{n}+\epsilon _{app}^2=\frac{N^d}{n}+\frac{1}{N^{2(k-1)}}=\mathcal{O}(n^{-\frac{2k-2}{d+2k-2}}).$$
This matches the convergence rate derived in Theorem 2.5(2) of our work.
**References**:
[1]: A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Partial Differential Equations. COLT 2021.
[2]: On the approximation of functions by tanh neural networks. Neural Networks 2021. | Summary: This paper presents a refined generalization analysis of two popular deep learning-based methods for solving partial differential equations (PDEs): the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). The authors derive sharper generalization bounds for these methods under different assumptions about the solutions of the PDEs, particularly when the solutions lie in Barron spaces or Sobolev spaces.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No
Experimental Designs Or Analyses: No
Supplementary Material: No
Relation To Broader Scientific Literature: The contributions of the paper are related to the community of deep learning theory and scientific machine learning.
Essential References Not Discussed: The below paper is about the theory of PINNs by CNN.
Lei, G., Lei, Z., Shi, L., Zeng, C., & Zhou, D. X. (2025). Solving PDEs on spheres with physics-informed convolutional neural networks. Applied and Computational Harmonic Analysis, 74, 101714.
Other Strengths And Weaknesses: Strengths:
1. The authors achieve sharper generalization bounds compared to previous works, particularly for DRM and PINNs, which is a solid contribution to the field.
2. The proposed methods provide a unified framework for deriving generalization bounds for machine learning-based PDE solvers, which can be extended to other PDEs and methods.
Weaknesses:
1. While the theoretical contributions are solid, the paper lacks empirical validation. It would be beneficial to see how the derived bounds hold in practice, especially for high-dimensional PDEs.
2. The analysis relies on the assumption that the solutions lie in Barron spaces or Sobolev spaces. While these are reasonable assumptions, they may not hold for all PDEs, and the paper does not discuss the implications when these assumptions are violated.
Other Comments Or Suggestions: No
Questions For Authors: 1. The paper claims that the framework can be extended to other PDEs, including time-dependent ones. Could the authors provide some insights on how this extension would work?
2. How do the generalization bounds of DRM and PINNs compare with those of traditional numerical methods like finite element or finite difference methods, especially in high dimensions?
3. There may be some concerns on the use of $ReLU^k$ in deep neural networks. Are they commonly used in sovling PDE?
4. Do you have any insights regarding the optimality of the derived rate?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our article and for your insightful comments. Let us address your concerns point by point.
**Q1**: Missing references.
**A1**: We appreciate the reviewer for pointing out this important literature. It has established rigorous analysis of PICNN on the sphere, bridging the gap in understanding PINNs on manifolds. In the revised version, we will cite it with proper discussion.
**Q2**: Experiments.
**A2**: Regarding why we did not include experiments, like most theoretical papers, we have chosen to omit optimization error. In future work, we plan to account for optimization error and conduct experiments to validate our theory. For experimental validation of the Deep Ritz Method (DRM) for high-dimensional PDEs in the setting of Barron spaces, the experiments in [1] demonstrate that even in 100-dimensional cases, the DRM achieves a relative error of less than $20$% for Poisson equation and static Schrödinger equation. Moreover, when comparing experimental results with theoretical bounds, [1] observed that the convergence rates in their generalization error may not be sharp. Therefore, their experiments can, to some extent, support our findings, as our theoretical setting aligns with theirs—while we provide better generalization bounds.
**Q3**: Assumptions for the solutions.
**A3**: As the article primarily focuses on theoretical aspects, we–like most existing works–initially consider simpler cases. Investigating the setting where solutions belong to more complex spaces remains an important direction for future research.
**Q4**: Extension to other PDEs.
**A4**: In our analysis for PINNs, we require that the expected loss function of PINNs can be controlled by certain Sobolev norms between the exact and approximate solutions. For time-dependent PDEs, such as
$$\partial_t u=\Delta u+f(u),$$
where $u^{* }$ is the true solution and $f$ is the nonlinear term assumed to be Lipschitz. For simplicity we only consider the interior term, then the loss function is
$$L(u)=||\partial _t u-\Delta u-f(u)|| _{L^2(0,T;L^2(\Omega))}^2.$$
Then we have
\begin{equation*}
\begin{aligned}
L(u)&=||\partial _t u-\Delta u-f(u)-\partial _t u^{* }+\Delta u^{* }+f(u^{* })|| _{L^2(0,T;L^2(\Omega))}^2\\\\
&\lesssim ||\partial _t u-\partial _t u^{* }|| _{L^2(0,T;L^2(\Omega))}^2+||\Delta u-\Delta u^{* }|| _{L^2(0,T;L^2(\Omega))}^2+||u-u^{* } || _{L^2(0,T;L^2(\Omega))}^2.
\end{aligned}
\end{equation*}
Then combining our method with neural network approximation results for spatiotemporal Sobolev spaces can yield similar generalization bounds.
**Q5**: Comparison with traditional methods.
**A5**: The error bounds of finite element method depend on the maximum mesh diameter $h$ and the computational complexity scales as $O(h^{-d})$ in high dimensions. Moreover, the error constants grow significantly with increasing dimension $d$, leading to the curse of dimensionality.
In this work, under the assumption that the solutions belong to Barron spaces, the relevant constants demonstrate at most polynomial dependence on the dimension, making the generalization bounds meaningful even in high-dimensional settings.
**Q6**: The use of $\text{ReLU}^k$.
**A6**: The choice of activation functions is just to demonstrate that our method can achieve better results. Our method also works for other activations. To illustrate this, we provide a concrete example using tanh for the static Schrödinger equation.
For the DRM, the generalization bound takes the form
$$\frac{\alpha}{n}+\epsilon _{app}^2,$$
where $n$ is the sample size, $\alpha$ represents the neural network complexity and $\epsilon _{app}$ is the approximation error. Theorem 5.1 of [2] shows that for the solution $u _{S}^{* }\in W^{k,\infty}$
of the static Schrödinger equation, there exists a two-hidden-layer tanh network with width $O(N^d)$ achieving an approximation error $O(N^{-(k-1)})$. In this case, we have $\alpha=O(N^d)$, then taking a proper $N$ yields
$$\frac{\alpha}{n}+\epsilon _{app}^2=\frac{N^d}{n}+\frac{1}{N^{2(k-1)}}=O(n^{-\frac{2k-2}{d+2k-2}}).$$
This result exactly coincides with the derived rate in Theorem 2.5(2) of our work.
**Q7**: Optimality.
**A7**: Some studies have shown that neural network-based estimators can achieve the minimax optimal rates for regression problems. These upper and lower bounds are estimated under the $L^2$ norm. However, for PINNs, different PDEs may require distinct norms to measure the discrepancy between approximate and exact solutions, which significantly differs from the framework of regression problems. Therefore, whether the improved generalization bounds for PINNs derived in this work are indeed optimal still requires further validation in future studies.
**References**:
[1]: A Priori Generalization Analysis of the Deep Ritz Method for Solving
High Dimensional Elliptic Partial Differential Equations. COLT 2021.
[2]: On the approximation of functions by tanh neural networks. Neural Networks 2021. | Summary: The paper presents refined generalization error bounds for two Machine Learning (ML) based methods used to solve partial differential equations (PDEs) - the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). The main technical contribution made in this paper is to provide a sharper generalization bound for DRMs and PINNs based on localized Rademacher complexity techniques. For the DRMs, the analysis is mainly based on the Poisson Equation and the static Schrodinger Equation on the high-dimensional unit hypercube with Neumann boundary condition. For the PINNs, the paper focuses on the general linear second elliptic PDEs with Dirichlet boundary condition. Compared to previous studies, tighter bounds get obtained and unrealistic assumptions also get removed.
## Update After Rebuttal
The reviewer remains positive about the theoretical results presented in the paper, so the reviewer would like to remain the score. Nevertheless, the authors should probably discuss their answers to Q3 and Q4 mentioned in the review in detail when revising the manuscript and include some numerical experiments if possible.
Claims And Evidence: The main claim made in this paper is that the convergence rates associated with the DRM and PINN can be further improved based on techniques like the peeling method and localized Rademacher complexity. Detailed proofs are provided to support the main claim here.
Methods And Evaluation Criteria: N/A (This is a theoretical paper). However, the reviewer does think that the paper's impact might be increased if the authors can add a few numerical experiments to justify their findings.
Theoretical Claims: For DRMs the main theoretical claims are in section2, which are results for the Poisson Equation and the static Schrodinger Equation on the high-dimensional unit hypercube with Neumann boundary condition. For PINNs the main theoretical claims are in section 3, which are results for general linear second elliptic PDEs with Dirichlet boundary condition. The reviewer didn't find any significant issue with the proofs of the main claims.
Experimental Designs Or Analyses: N/A (This is a theoretical paper). However, the reviewer does think that the paper's impact might be increased if the authors can add a few numerical experiments to justify their findings.
Supplementary Material: Yes, the reviewer checked almost all the proofs presented in the supplement (not in a very detailed way though). The reviewer did find almost all the proofs presented in the supplement to be correct.
Relation To Broader Scientific Literature: This paper, which studies how to improve the convergence rate of the DRMs and PINNs, should be mainly situated as the application of nonparametric statistics and learning theory (high-dimensional statistics) in scientific machine learning (SciML/AI4Science), i.e., the theoretical analysis of algorithms in SciML/AI4Science.
Essential References Not Discussed: Given that solving PDEs via ML-based methods is a popular field recently, it might be beneficial for the authors to do a review of related methodology proposed in the current literature. See for instance the literature review section in [1]. Some important references that the authors didn't cite here include [2,3,4,5,6,7,8].
References:
[1] Lu, Y., Blanchet, J. and Ying, L., 2022. Sobolev acceleration and statistical optimality for learning elliptic equations via gradient descent. Advances in Neural Information Processing Systems, 35, pp.33233-33247.
[2] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A. and Anandkumar, A., 2020. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895.
[3] Lu, L., Jin, P., Pang, G., Zhang, Z. and Karniadakis, G.E., 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature machine intelligence, 3(3), pp.218-229.
[4] Chen, Y., Hosseini, B., Owhadi, H. and Stuart, A.M., 2021. Solving and learning nonlinear PDEs with Gaussian processes. Journal of Computational Physics, 447, p.110668.
[5] Han, J., Jentzen, A. and E, W., 2018. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34), pp.8505-8510.
[6] Khoo, Y., Lu, J. and Ying, L., 2021. Solving parametric PDE problems with artificial neural networks. European Journal of Applied Mathematics, 32(3), pp.421-435.
[7] Sirignano, J. and Spiliopoulos, K., 2018. DGM: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 375, pp.1339-1364.
[8] Zang, Y., Bao, G., Ye, X. and Zhou, H., 2020. Weak adversarial networks for high-dimensional partial differential equations. Journal of Computational Physics, 411, p.109409.
Other Strengths And Weaknesses: The main strength of this article is the techniques used in its proofs, which does address issues like the infeasible assumption (some solution represented by a neural network in the $H^{1}_0$ space). However, for the weakness of this article, the reviewer's main concern is that the authors might have to compare their work with the previous work [1] in a more thorough way. Firstly, given that [1] establishes not only upper bounds but also information theoretical lower bounds on the expected estimation error, would it be possible for the authors to provide some intuition on whether it would be possible to derive information theoretic lower bounds for the cases considered in this paper? (i.e, are the bounds established here minimax optimal or not?) To the best of the reviewer's knowledge, this seems to be one important criteria for articles focusing on learning theory and nonparametric statistics. Secondly, it seems that [1] also used techniques like peeling and localized Rademacher complexity, which is similar to the proof strategy deployed in this paper - would it be possible for the authors to further comment on the proof novelty in this paper?
References:
[1] Lu, Y., Chen, H., Lu, J., Ying, L. and Blanchet, J., 2021. Machine learning for elliptic pdes: Fast rate generalization bound, neural scaling law and minimax optimality. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=mhYUBYNoGz.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time in reviewing our manuscript and your valuable insights. Below we provide a point-by-point response to your concerns.
**Q1**: Experiments.
**A1**: We agree that experiments would further strengthen our work. In future work, we plan to account for optimization error and conduct experiments to validate our results. For experimental validation of DRM in the setting of Barron space, the experiments in [4] show that even in 100-dimensional cases, the DRM achieves a relative error of less than $20$%. Moreover, when comparing experimental results with theoretical bounds, [4] found that the convergence rates in their generalization error may not be sharp. Therefore, their experiments can, to some extent, justify our findings, as our theoretical setting aligns with theirs—but we provide better generalization bounds.
**Q2**: References.
**A2**: We sincerely appreciate the reviewer for pointing out these important references. In the revised version, we will follow the format of [1] and incorporate all suggested references [2-8] with proper discussion.
**Q3**: Lower bounds.
**A3**: What [1] has achieved better than our work is that they also derived lower bounds for both DRM and PINNs. Their results show that the bound for DRM is not minimax optimal, whereas that for PINNs is minimax optimal. However, the metric used in [1] to evaluate PINNs is the $H^2$ norm, which requires strong convexity assumption on PDEs and neural network functions to belong to $H_0^1$. Such assumptions appear too stringent.
Recent studies have shown that neural network-based estimators can achieve minimax optimal rates for regression problems. These bounds are estimated under the $L^2$ norm. However, for PINNs, different PDEs require distinct norms to measure the discrepancy between the empirical and true solutions, which differs significantly from the regression framework. Therefore, whether the bounds we derived for PINNs and DRM are truly optimal still requires further investigation in future research.
**Q4**: Comparison of proof strategies with [1].
**A4**: For the DRM, we also analyze the Barron space setting and derive novel generalization bounds that remain meaningful even in overparameterized regimes. This advancement surpasses the capabilities of [1]'s approach (see Proposition 2.7). For the Poisson equation, the variational form $\mathcal{E} _{P}(u)$ is not equal to the expectation of its corresponding empirical part $\mathcal{E} _{n,P}(u)$. This makes the local Rademacher complexity (LRC, [3]) and the method in [1] infeasible. Morover, for example, the core lemma (Lemma B.4) in [1] requires the condition (from [3]) that
$$R _n({f\in \mathcal{F}: \mathbb{E}[f]\leq r}) \leq \phi(r),$$
where $R _n$ is the empirical Rademacher complexity. Then in [1], an appropriate functions class $\mathcal{F}$ can be chosen such that $\mathbb{E}[f] = \mathcal{E} _{P}(u)-\mathcal{E} _{P}(u^{* })$. By doing so, the strong convexity can be used. However, for the Poisson equation, there does not exist a function class $\mathcal{F}$ such that $\mathbb{E}[f]=\mathcal{E} _{P}(u)-\mathcal{E} _{P}(u^{* })$. In contrast, this work develops a novel peeling technique to establish fast rates. The static Schrödinger equation shares similar strong convexity properties with certain classical problems like bounded-noise regression [2]. Although approaches from [1], [2] and our Poisson equation method remain applicable, they are much complicated. To address this, we instead develop a novel error decomposition method that allows direct application of the results in [3] (LRC), yielding better generalization bounds through a more concise proof framework.
For PINNs, [1] only considered the case of the static Schrödinger equation and assumed that the neural network function class is a subset of $H_0^1$. This assumption makes PINNs contain only interior terms, thereby giving them a strong convexity property similar to that of the DRM. Thus, the approach for PINNs in [1] is identical to that for the DRM. In contrast, we consider more general PDEs and treat PINNs as a multi-task learning (MTL) problem. The key difference from the DRM is that for PINNs, we only require a non-exact form of oracle inequality, eliminating the need for the strong convexity. Then, by using LRC under MTL (where we also derive an improved Talagrand-type concentration inequality for MTL), we obtain sharper generalization bounds. Moreover, this approach can also be extended to other types of PDEs and neural-network-based PDE-solving methods similar to PINNs.
**References**:
[1] Machine learning for elliptic pdes: Fast rate generalization bound, neural scaling law and minimax optimality. ICLR,2022.
[2] Deep neural networks for estimation and inference. Econometrica,2021.
[3] Local Rademacher complexities. AoS,2005.
[4] A Priori Generalization Analysis of the Deep Ritz Method for Solving
High Dimensional Elliptic Partial Differential Equations. COLT,2021.
---
Rebuttal Comment 1.1:
Comment: The reviewer would like to thank the authors for addressing the two main concerns. The reviewer remains positive about the theoretical results and will keep the score. However, the authors are encouraged to discuss their answers to Q3 and Q4 above in detail in the revised version of the manuscript and include some numerical experiments if possible.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your constructive feedback. In the revised manuscript, we will expand the discussion of Q3 and Q4 with additional analysis and incorporate numerical experiments to support our theoretical results. Thank you for your valuable suggestions. | null | null | null | null | null | null |
Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization | Accept (poster) | Summary: This paper provides randomized algorithms for nonconvex nonsmooth optimization under the constraint of differential privacy. The sample complexities of zeroth-order algorithms are greatly improved over those of the previous work (Zhang et al. 2024).
In addition, the author further extended the methodology with first-order algorithms to reduce the oracle complexity.
## update after rebuttal
The author's reply makes sense, so I keep my score.
Claims And Evidence: Strength:
* The sample complexity of zeroth-order algorithms greatly improves over existing work (Zhang et al. 2024).
* The generalization from ERM to excess population loss of nonconvex nonsmooth optimization is a novel and interesting result.
Weakness:
* Although the sample complexity of the proposed zeroth-order algorithms is greatly reduced, these methods have high oracle complexities due to the choice of a large $m$. The overall oracle complexity is no better than the previous work.
* There are multiple typos in the proofs, which makes them hard to read.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sound and make sense to me.
Theoretical Claims: I checked the proof of most theorems in the main text and supplementary materials, there are some typos and errors. Most errors are negligible, and some are really confusing.
Let me list some typos as follows:
* In Line 236 Theorem 3.1, should the nominator value of $m$ be $\log(d B_1 \delta)$?
* In Line 286 Theorem 3.1, I am confused that the denominator of the first term is $B_1$ instead of $\sqrt{B_1}$. Taking square root on both sides of Lemma 3.6 will give you $\sqrt{B_1}$.
* In Line 575, the proof of Lemma 3.4, the right-hand side of $\geq$ in the probability seems to miss one term $2L / B_1$
* In Line 602, the term II should sum from $t_0 + 1$ to $t$?
* In Line 608 Eq. (13), subscripts should be $\xi_i$ instead.
* In Line 656, is (III) miss a factor of $d$ in the second term?
Experimental Designs Or Analyses: I do not think experiments need to be provided for this paper.
Supplementary Material: I have read Appendix A and B, which cover the proof from Sections 3 to 6.
Relation To Broader Scientific Literature: This paper is a really interesting work for private nonconvex nonsmooth optimization, however, there are too many typos in the proof.
Essential References Not Discussed: Some essential references to differential privacy are not discussed in the paper. For example, the multi-pass algorithm uses the moment account method for privacy composition, which was introduced in the following paper:
* Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016, October). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (pp. 308-318).
Their proof was enhanced by the following manuscript:
* Kulkarni, J., Lee, Y. T., & Liu, D. (2021). Private non-smooth empirical risk minimization and stochastic convex optimization in subquadratic steps. arXiv preprint arXiv:2103.15352.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their work and are encouraged by their appreciation of our results!
We will incorporate the suggested changes and references into the final version.
We are extremely grateful for the reviewer's careful review and for finding numerous typos. We note that the only change incurred by these straightforward fixes (specifically, due to the $\sqrt{B_1}$ in the denominator in Line 286) will be the non-private term in Theorem 3.1 becoming $\sqrt{d}$ times larger, that is $\sqrt{d}/\alpha\beta^3$. We emphasize that this does not affect the main claims in our paper: Our obtained complexity is still $\Omega(\sqrt{d})$ smaller than previous results, and still breaks the erroneous lower bound claimed in previous work. Moreover, neither of our improved empirical/generalization results later in the paper are affected.
We conjecture that the previously claimed dimension-independent bound $O(1/\alpha\beta^3)$ might still be achievable. The $\sqrt{d}$ factor appears to stem from the randomized smoothing technique we employed. With improved smoothing approaches, it may be possible to eliminate this dimension dependence. In the final version we will accommodate the current fix accordingly, explore this conjecture further, and plan to acknowledge the reviewer for their help.
We also appreciate the references suggested by the reviewer. We will add proper citations to Abadi et al. (2016) for the moment account method for privacy composition, as well as to Kulkarni et al. (2021) for their enhancements to the approach.
Regarding the oracle complexity concern, the reviewer is correct that our choice of large $m$ increases the oracle complexity. As noted in Remark 3.3, our parameter choice prioritizes minimizing sample complexity, which comes at the cost of increased oracle calls. In the final version, we will clarify this trade-off more explicitly and discuss potential approaches to balance sample and oracle complexity for different application needs. | Summary: This paper studies the problem of non-smooth non-convex (NSNC) optimization problem under the constraint of differential privacy (DP). The authors first proposed a zeroth-order and single-pass NSNC-DP algorithm that achieves sample complexity of $O(\frac{1}{\alpha\beta^3}+\frac{d}{\epsilon\alpha\beta^2}+\frac{d^{3/4}}{\epsilon^{1/2}\alpha\beta^{5/2}})$ to find a $(\alpha,\beta)$-Goldstein stationary point. Notably, this result improves from the existing best rate by a factor of $\sqrt{d}$, and the key is an improved zeroth-order gradient estimator that has an improved sensitivity in high probability. In addition, the authors also extend the result to the ERM problem where they proposed a multi-pass algorithm, and connected the result to the stochastic case by showing a reduction of Goldstein stationary point from ERM loss to generalization loss. Finally, the authors proposed a first-order algorithm, showing it achieves the same sample complexity while reducing oracle complexity by $O(d^2)$.
Claims And Evidence: NA
Methods And Evaluation Criteria: NA
Theoretical Claims: The theoretical claims are valid.
Experimental Designs Or Analyses: NA
Supplementary Material: I've checked all contents in the appendix.
Relation To Broader Scientific Literature: This work fits in the subfield of DP optimization for NSNC objectives. The proposed zeroth-order algorithm improves the existing best known rate in the literature. Furthermore, this paper also provides a multi-pass algorithm for ERM problem, and a first-order algorithm, which are both the first results in the literature.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: ### Strengths
This paper provides very concrete results for DP-NSNC optimization problems. The main result, a zeroth-order single-pass algorithm equipped with an improved gradient estimator, improves the sample complexity by a factor of $O(\sqrt{d})$ compared to the best know rate in the literature. This is a significant result since the optimal rate is yet unknown, and technical novelty of using a low-sensitivity gradient estimator is interesting by itself. Furthermore, the proposed multi-pass zeroth-order algorithm and the first-order algorithm are the first results in the literature.
### Weaknesses
I think the high probability guarantee of the sensitivity implicitly assumes the gradient estimator has norm-sub-Gaussian noise, which is not clearly justified. This is a slightly stronger assumption than standard literature.
Overall, I think this paper greatly contributes to the understanding of DP-NSNC problem, and I'd recommend acceptance.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Regarding the non-private term in the single-pass bound, do the authors know if $O(1/\alpha\beta^3)$ is the optimal sample complexity for non-private NSNC optimization? Also, could the authors further elaborate on Rmk 6.2 that the oracle complexity of first-order algorithm is $d^2$ smaller than zeroth-order algorithm while they achieve the same sample complexity? Does that imply the sample complexity of zeroth-order algorithm could possibly be further improved?
2. In terms of oracle complexity, it seems that $m\gg d^{3/2}$. Is it true that the oracle complexity of zeroth-order algorithm is larger than the previous algorithm by Zhang et. al. 2024 (where their sample complexity if $\sqrt{d}$ worse, but oracle complexity is fixed to be $d$ per sample)?
3. The privacy guarantee is based on high probability bound of the sensitivity instead of a worse case (almost surely) bound. Would that break the privacy guarantee? For example, in the tail event where sensitivity is unbounded, I think privacy no longer holds since the divergence is no longer bounded. Is there a known result that fixes the issue, e.g. some theorem like privacy guarantee in high probability generalizes to privacy guarantee almost surely.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer and are encouraged by the positive review!
We clarify the raised issues below:
Regarding the "weakness" mentioned, we want to clarify that we did not assume anything stronger than previous results. In our proof, we use concentration inequalities for sub-Gaussian vectors, but these vectors are actually bounded due to the Lipschitz assumption (a standard assumption in the literature, also used by Zhang et al. (2024)).
We address the Questions below:
(1) We are not aware of a sample complexity lower bound in the non-private literature for finding a Goldstein-stationary point. Regarding oracle complexity, it is known to be $\Theta(1/\alpha\beta^3)$ (see Cutkosky et al., ICML 2023). As pointed out by Reviewer rnQ5, there is a small bug in our analysis that leads to a worse non-private term $O(\sqrt{d}/\alpha\beta^3)$ after fixing the error (primarily by resetting the hyperparameters). Despite this, our result still breaks upon the zero-order oracle lower bound of previous work. The dimension-dependence likely stems from the randomized smoothing technique we employed. We conjecture that the previously claimed dimension-independent bound $O(1/\alpha\beta^3)$ could be achieved, possibly through an improved smoothing technique. Therefore, we cannot currently determine the full extent to which our results might be further improved.
(2) Yes, this is true. As discussed in Remark 3.3, our choice of m aims to minimize the sample complexity, and we will rewrite this to be more clear that this is indeed at the cost of larger oracle complexity in the zero-order case.
(3) The privacy guarantee always holds - the tail event is accounted for in the 'delta' in the $(\epsilon,\delta)$-DP definition, which allows the privacy to break with probability at most $\delta$. As the reviewer correctly guessed, this approach is indeed standard in the DP literature, and we also discuss this in the Discussion section. | Summary: This paper presents novel differentially private (DP) optimization algorithms for nonsmooth and nonconvex objectives, with a focus on achieving Goldstein-stationary points while improving sample complexity. The authors introduce a single-pass algorithm that improves the sample complexity by a factor of $\sqrt{d}$ over previous results, as well as a multi-pass algorithm that further reduces sample complexity while preserving privacy guarantees.
Claims And Evidence: The claim presented in this paper appears to be clear and correct.
Methods And Evaluation Criteria: The privacy and utility guarantees of the algorithms have been rigorously proven.
Theoretical Claims: The proof appears to be correct.
Experimental Designs Or Analyses: Not applicable; the proof appears to be correct.
Supplementary Material: I have reviewed most of the Supplementary Material.
Relation To Broader Scientific Literature: This paper contributes to the literature on differentially private optimization, particularly in the context of nonsmooth and nonconvex objectives.
Essential References Not Discussed: The paper appears to provide sufficient references to related work.
Other Strengths And Weaknesses: Strength:
* This paper introduces novel algorithms and improves the sample complexity for differentially private nonsmooth and nonconvex (DP NSNC) optimization problems.
* The application of a concentration argument to tighten the sensitivity bound of the gradient estimator is particularly noteworthy and adds technical depth.
Weakness:
* The algorithmic components, such as the use of the tree-based mechanism, have been explored in prior work. However, the analytical approach adopted in this paper appears to be original and contributes new insights.
Other Comments Or Suggestions: I did not identify any noticeable typographical errors in the paper.
Questions For Authors: Overall, I find the paper to be well-written and clear, and I have no further questions at this time.
## update after rebuttal
I keep my positive score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer and are encouraged by the positive review! | Summary: The paper provided presents advancements in differentially private (DP) optimization algorithms for stochastic and empirical objectives that are neither smooth nor convex. Here is a summary of the results:
1. Zeroth-order Single-pass algorithm.:
- The proposed $(\epsilon, \delta)$-DP algorithm improves the dependence on the number of dimensions over existing algorithms by a factor of $\Omega( \sqrt{d} )$ in the leading term.
2. Zeroth-order Improved multi-pass algorithm:
- To further reduce the sample complexity, a multi-pass polynomial time algorithm is introduced. This algorithm returns the first known algorithm in private ERM with sublinear dimension-dependent sample complexity for non-smooth non-convex objectives.
3. First-order algorithm with reduced oracle complexity: A first-order (i.e., gradient-based) algorithm is also provided, which maintains the sample complexity but reduces the oracle complexity compared to its zero-order counterpart. This makes the gradient-based algorithm significantly more efficient in terms of oracle calls, confirming it as the best-known method in terms of both sample efficiency and oracle efficiency.
Claims And Evidence: See above.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I understood the high-level ideas of the proofs but I was not able to check all the details.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I reviewed only some small parts of the supplementary material.
Relation To Broader Scientific Literature: Preserving privacy of the data used to train machine learning models is a very important area with broad interest.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
-----------------------------------
- The paper provides a clear improvement for a natural problem defined recently by Zhang et al. 2024.
- The paper is well-written and it provides a very clear comparison to previous work.
Weaknesses and Comments
-----------------------------------
1. Although this is a clear improvement of Zhang et al. 2024, the problem studied is esoteric and for this reason it seems that is of interest to a small group of people specialized on this topic.
Other Comments Or Suggestions: No.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer and are encouraged by the positive review!
Designing optimization algorithms for nonsmooth-nonconvex losses is a topic that gained substantial attention recently mainly due to deep learning applications. Indeed, only few works further provide grounded privacy guarantees when optimizing such losses, as this is a relatively new area of research, yet as the reviewer noted, “Preserving privacy of the data used to train machine learning models is a very important area with broad interest.” Our work aims to bridge this gap and provide foundational algorithms that may be widely applied as privacy-preserving machine learning continues to grow in importance. | null | null | null | null | null | null |
Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence | Accept (poster) | Summary: The paper presents GNN+, a framework that enhances Graph Neural Networks (GNNs) using six components—edge features, normalization, dropout, residual connections, feed-forward networks (FFNs), and positional encoding—to address issues such as over-smoothing and capturing long-range dependencies.
Through benchmark evaluations, GNN+ models demonstrate superior performance and efficiency compared to Graph Transformers (GTs), often securing top positions.
Each component of GNN+ significantly enhances the capability of GNNs, making them a simpler yet competitive option for graph-level tasks.
Claims And Evidence: The claims are supported by empirical evidence, comparing GNN+ with state-of-the-art models across 14 datasets, as detailed in Tables 2, 3, and 4.
Additionally, thorough ablation studies isolate and evaluate the contributions of each component within the GNN+ framework in Tables 5 and 6.
However, the enhanced performance of GNN+ might depend heavily on meticulous hyperparameter tuning, suggesting that improvements might stem more from this tuning rather than the architecture's inherent superiority.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well aligned with the problem of improving graph-level performance.
By enhancing classic GNNs with six components, the approach directly addresses known limitations of traditional GNNs.
Moreover, the choice of benchmark datasets—including those from the GNN Benchmark, Long-Range Graph Benchmark (LRGB), and Open Graph Benchmark (OGB)—is appropriate because they cover diverse applications widely recognized in the community.
Theoretical Claims: The submission does not include any formal theoretical proofs for its claims.
Instead, it presents standard equations that define classic GNN operations (such as those for GCN, GIN, and GatedGCN) along with modifications that integrate components such as edge feature integration, normalization, dropout, residual connections, FFN, and positional encoding.
These equations are largely descriptive and serve to illustrate how the GNN+ framework is constructed rather than providing rigorous proofs of new theoretical properties.
Experimental Designs Or Analyses: The authors evaluated GNN+ across 14 benchmark datasets from three prominent sources (GNN Benchmark, LRGB, and OGB), reporting mean performance and standard deviations over five runs.
Moreover, the design includes comprehensive ablation studies in Section 5.2 that systematically remove individual components to isolate their contributions.
The hyperparameters used are reported in the Appendix Section A.3.
Supplementary Material: A zip file containing the code is attached as supplementary material.
The supplementary code was reviewed with a focus on its overall structure and organisation into folders and files.
The code seems well-organsied, making it easy to understand and extend for various research purposes.
Relation To Broader Scientific Literature: The key idea of the paper is to enhance classic GNNs by systematically integrating six techniques that are themselves well-established in the literature.
Four of these techniques—normalization, dropout, residual connections, and feed-forward networks—are well-recognised in the literature for improving GNN performance, albeit in node classification tasks [1].
The integration of the remaining two components—edge features and positional encoding—is straightforward, aligning with existing knowledge: edge features are crucial for molecular datasets, and positional encodings are widely used in graph transformers.
[1] Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification, In NeurIPS'24.
Essential References Not Discussed: Previous research [2] highlights the necessity of standardizing experimental protocols in evaluating GNNs, as it reveals that many GNN architectures fail to consistently surpass structure-agnostic baselines, especially in chemical datasets.
This context is significant for GNN+ of this submission, as it aligns with rigorous evaluation standards, ensuring that its performance gains are genuine and robust rather than results of experimental artifacts or overfitting.
A detailed discussion of simple approaches [3, 4] would not only highlight alternative pathways to simplicity and efficiency but also provide valuable insights for further enhancing the design of classic GNNs as pursued in the GNN+ paper.
[2] A Fair Comparison of Graph Neural Networks for Graph Classification, In ICLR'20,
[3] A Simple yet Effective Method for Graph Classification, In IJCAI'22,
[4] A simple yet effective baseline for non-attributed graph classification, In ICLR'19 Workshop: Representation Learning on Graphs and Manifolds.
Other Strengths And Weaknesses: Strengths:
[+] Besides achieving competitive or superior accuracy compared to state-of-the-art Graph Transformers, the paper demonstrates that the enhanced classic GNNs are computationally more efficient, an important practical advantage.
[+] The paper clearly outlines each component of the proposed method and provides well-structured experimental results. The detailed description of the experimental setup and hyperparameter tuning further enhances transparency and reproducibility.
Weaknesses:
[-] While the paper excels in its empirical contributions, it does not provide new theoretical insights or rigorous proofs to support the improvements.
[-] The performance gains appear to rely on extensive and careful hyperparameter tuning. This raises questions about the robustness of the improvements when applied in different settings or on different types of graphs beyond the benchmark datasets used.
Other Comments Or Suggestions: The submission would benefit from a discussion on limitations and future research directions.
For instance, despite the strong empirical results, a detailed theoretical analysis of when and why GNN+ surpasses the performance of Graph Transformers would enhance understanding.
Additionally, establishing formal bounds on expressiveness and generalization could inform the design of future models.
Questions For Authors: 1. Was a hyperparameter sensitivity analysis performed for critical hyperparameters in GNN+, and if so, how sensitive is GNN+ to changes in these parameters?
2. Were all the baseline models, particularly the graph transformer baselines, tuned with the same level of rigor as GNN+?
3. Were the six components—edge features, normalization, dropout, residual connections, feed-forward networks (FFNs), and positional encoding—carefully integrated into the baseline models and compared with GNN+ models?
4. Was a trade-off analysis conducted to compare test data performance (e.g., accuracy) and training performance (e.g., training time) between fast baseline models and GNN+ models?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for acknowledging our contributions. We sincerely hope our response below further strengthens your confidence in our work.
**(1) Related Works**
Thank you for sharing these related works. We have integrated detailed discussions of these relevant studies ([2], [3], [4], and others) into the revised manuscript to better contextualize our contributions.
**(2) Performance vs. Efficiency Trade-off Analysis of Fast Baseline Models and GNN+**
Thank you for your thoughtful question. The GT baselines used global attention mechanisms with *quadratic* computational complexity, resulting in higher computational costs without performance gains compared to GNNs. To address your question, we conducted additional experiments using two *linear* baseline models—HRN [3] and LDP [4]—and compared their performance with GNN+.
First, we evaluated these models on five **small-scale** graph classification datasets (IMDB-B, IMDB-M, COLLAB, MUTAG, and PTC) used in [3, 4]. These datasets consist of only 300–5000 graphs, substantially smaller than the main datasets (>10,000 graphs) used in our benchmarking study. Under identical experimental setups from their papers, GCN+ achieved comparable performance to these fast baselines. Given the small size of the datasets, their computation times are equally fast.
|Model (Accuracy)|IMDB-B↑|IMDB-M↑|COLLAB↑|MUTAG↑|PTC↑|
|-|-|-|-|-|-|
|\# graphs|1000|1500|5000|188|344|
|LDP|75.4|50.0|78.1|90.3|64.5|
|HRN|77.5|52.8|81.8|90.4|65.7|
|GCN+|76.9|52.3|80.9|90.1|66.6|
Further, we extended this comparison to **large-scale** OGB datasets (ogbg-molhiv and ogbg-molpcba) using identical experimental setups. Although LDP and HRN showed faster training times than GCN+, their performance was significantly lower. This highlights the importance of rigorous benchmarking on larger scales. Evaluating models solely on small datasets, like MUTAG and PTC, can mask true differences in generalization capabilities. Results from large-scale datasets clearly demonstrate GCN+'s superior predictive performance, validating our focus on comprehensive and systematic benchmarking.
|Model|ogbg-molhiv (41,127 graphs)||ogbg-molpcba (437,929 graphs)||
|-|-|-|-|-|
||AUROC↑|Training Time (epoch/s)↓|Avg. Precision↑|Training Time (epoch/s)↓|
|LDP|0.7121±0.0105|5s|0.1243±0.0031|31s|
|HRN|0.7587±0.0147|9s|0.2274±0.0043|68s|
|GCN+|0.8012±0.0124|16s|0.2721±0.0046|91s|
Thank you once again for helping us improve our work. These results have been included in the revised version of our manuscript.
**(3) Future Theoretical Research**
We appreciate your comment and suggestion. We have incorporated a discussion of **limitations and future research directions** into the revised manuscript.
Our work is an **empirical benchmarking study** (please see our reply to Reviewer s2AU "(1) Theoretical Analysis"). While deeper theoretical exploration would be valuable, such analysis lies **beyond the scope of this benchmarking study**.
As noted by Reviewer 5voA, "*While the paper does not introduce new theoretical innovations, it effectively synthesizes existing research, providing valuable insights and a comprehensive summary of current knowledge in the field.*"
The **empirical insights** derived from this work, particularly from the ablation studies (Tables 5 and 6), offer **valuable guidelines** for researchers in designing and applying GNNs to graph-level problems. These findings also **lay a solid foundation for future theoretical investigations**.
**(4) Hyperparameter Tuning and Fairness of Comparison between GTs and GNNs**
Thank you for highlighting this concern. We applied equally thorough hyperparameter tuning to all GT baselines, incorporating edge features, normalization, dropout, residual connections, FFNs, and positional encoding. All models were retrained using the same hyperparameter search space as GNN+ (lines 251–255).
Notably, GNN+ remains robust and dataset-agnostic. Please see our response to TFXX, "(2) Fair Comparison between GTs and GNNs".
**(5) Hyperparameter Sensitivity Analysis**
Thank you for your thoughtful suggestion. In response, we conducted additional experiments to systematically evaluate the sensitivity of GNN+ performance to critical hyperparameters, specifically dropout rates ([link](https://anonymous.4open.science/r/GNN-3221/dropout.jpg)) and the number of layers ([link](https://anonymous.4open.science/r/GNN-3221/layer.jpg)).
- **Dropout Rates:** Our results indicate that a low dropout rate (≤ 0.2) is sufficient and optimal, whereas higher dropout rates significantly degrade performance.
- **Number of Layers:** Residual connections enable GNN+ to achieve optimal performance across a wide range. Increasing the number of layers does not lead to sudden performance drops, indicating effective mitigation of the over-smoothing problem.
For other hyperparameters that are binary (enabled or disabled), please refer to our ablation studies in Tables 5 and 6. | Summary: This paper explores techniques inspired by Graph Transformers (GTs) to enhance Graph Neural Networks (GNNs). The authors demonstrate that these enhanced GNNs outperform most GTs on graph-level benchmarks, which contrasts with previous findings in the literature. Additionally, they provide empirical insights into the specific types of graphs on which each technique is most effective.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: See **Other Strengths And Weaknesses**.
Theoretical Claims: No proofs to be checked.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I took a preliminary look at the code without running it.
Relation To Broader Scientific Literature: The findings prompt a reevaluation of whether Graph Transformers are truly necessary, given their complexity, especially considering that enhanced GNNs can achieve superior performance on graph-level tasks.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths.** The experiment is comprehensive with respect to benchmarks and baselines. It effectively validates the argument that "GNNs with enhancements can surpass Graph Transformers on graph-level tasks," providing valuable insights for the literature.
**Weakness.**
- The paper dedicates considerable space to experimental settings, results, and observations, but it lacks **deeper analysis**, which would offer more valuable and insightful contributions. The proposed method feels more like a successful technical trial. Notably, I’m not dismissing technical works; my main concern is that the finding—'GNNs with enhancements can surpass Graph Transformers on graph-level tasks'—**alone** is not enough to support a strong paper.
- I notice that the six techniques are selectively applied, as shown in Table 6. This implies that the **GNN+ architectures vary across benchmarks, with only the best-performing configurations being reported from Table 2 to Table 4**. In contrast, the architecture of the GT baselines remains consistent across benchmarks. This discrepancy introduces a degree of unfairness and weakens the argument that 'GNNs perform better than GT.' Additionally, the GNN+ architecture appears to be less practical due to the heavy dependence on the dataset.
Other Comments Or Suggestions: In the ablation study section, I recommend emphasizing the analysis to ensure it isn't overshadowed by the description of the phenomena.
Questions For Authors: 1. For the six techniques explored, there are **multiple ways to combine them** through operations like reordering, yet the GNN+ formulation is fixed as in Equation 11. Other similar confusion occurs—why choose BN over LN, and why add edge features in the chosen way? ... Do other variants perform similarly well or worse. If they perform worse, are there any explanations? Additionally, it would be valuable to provide insights into **how we should enhance GNNs for graph-level tasks**.
2. I note that GTs also incorporate all the techniques including edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding. **It seems that the attention machenism may not offer benefits, and may even hurt performance compared to message-passing.** Are there any explanations for this observation?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for recognizing the depth of our experiments and insights. We believe some key points may have been missed and hope our clarification encourages you to revisit and re-evaluate our work.
**(1) Deeper Theoretical Analysis**
Thank you for your valuable comment. Please refer to our response to Reviewer s2AU, "(1) Theoretical Analysis".
**(2) Fair Comparison between GTs and GNNs**
We'd like to clarify that our performance comparison is **fair**.
Firstly, the techniques used for GNN+, regarded as hyperparameters—including edge feature module, normalization, dropout, residual connections, FFNs, and positional encoding—have already been incorporated into all GT baselines. Importantly, we retrained the GT baselines using **the same hyperparameter search space as the classic GNNs** (lines 251–255). Therefore, the application of these techniques is **consistent** across both GNN+ and GT baselines.
Secondly, to comprehensively address your concern, we conducted additional experiments utilizing **a fixed model with all six techniques integrated**, denoted as **GNN+ (fixed)**, as presented in the table below. The results show that the fixed GNN+ model achieves performance comparable to the best configurations reported in the paper, demonstrating its practicality and robustness.
|Model|PascalVOC-SP↑|COCO-SP↑|MalNet-Tiny↑|ogbg-molpcba↑|ogbg-code2↑|MNIST↑|CIFAR10↑|PATTERN↑|CLUSTER↑|
|-|-|-|-|-|-|-|-|-|-|
|**GCN+**|0.3357±0.0087|0.2733±0.0041|0.9354±0.0045|0.2721±0.0046|0.1787±0.0026|98.382±0.095|69.824±0.413|87.021±0.095|77.109±0.872|
|**GCN+ (fixed)**|0.3341±0.0055|0.2716±0.0034|0.9235±0.0060|0.2694±0.0059|0.1784±0.0029|98.257±0.063|69.436±0.265|87.021±0.095|76.352±0.757|
|**GatedGCN**|0.4263±0.0057|0.3802±0.0015|0.9460±0.0057|0.2981±0.0024|0.1896±0.0024|98.712±0.137|77.218±0.381|87.029±0.037|79.128±0.235|
|**GatedGCN (fixed)**|0.4204±0.0061|0.3774±0.0028|0.9450±0.0045|0.2981±0.0024|0.1889±0.0018|98.712±0.137|77.218±0.381|87.029±0.037|79.128±0.235|
**(3) Justification for Architectural Choices**
> There are multiple ways to combine six techniques through operations like reordering, yet the GNN+ formulation is fixed as in Equation 11.
Indeed, the GNN+ framework **is not limited to the formulation shown in Equation 11**. Instead, it represents a flexible GNN architecture that integrates edge features, normalization, dropout, residual connections, FFNs, and positional encoding, which collectively define GNN+. Each component allows various implementation choices; for instance, both BN and LN are viable options for normalization.
The specific combination presented in Equation 11 was chosen because our experiments consistently showed strong and, in some cases, remarkable performance. However, our intention was not to advocate exclusively for this configuration. Rather, our primary goal was to clearly demonstrate, through systematic empirical benchmarking, that an enhanced GNN architecture can match or surpass GTs across diverse datasets. Future research can explore alternative architectures and refinements within this flexible framework.
**(4) Insights into How to Enhance GNNs for Graph-Level Tasks**
Our ablation studies (Tables 5 and 6) provide detailed, actionable insights. For example, normalization substantially impacts larger-scale datasets but has less effect on smaller datasets, and a low dropout rate (≤0.2) consistently proves optimal. These findings serve as practical recommendations for researchers seeking to enhance GNN performance.
**(5) Why Attention Fails**
Thank you for the thought-provoking question. This phenomenon is the core finding of our study. While GTs employ the global attention mechanism, our results suggest that **it does not benefit graph-level learning as expected** and may even degrade performance by introducing unnecessary complexity.
To investigate this, we visualized the attention scores of GraphGPS on the Peptides-func dataset (see [this link](https://anonymous.4open.science/r/GNN-3221/attention.jpg)). The visualization shows that the nodes in question (highlighted with green borders) predominantly attend to only one or a few distant nodes, or randomly attend to multiple distant nodes without clear, explainable patterns.
An ablation study is presented in the table below. The results clearly indicate that the global attention mechanism used in GraphGPS may negatively impact performance. We hypothesize that the global attention mechanism conflicts with the local message-passing mechanism of GNNs, causing the model to excessively attend to distant, less relevant nodes. This phenomenon, referred to as the **over-globalizing problem**, has also been experimentally validated in recent studies [1].
|Model|Peptides-func↑|
|-|-|
|**GraphGPS (GNN + Attention)**|0.6534 ± 0.0091|
|**GraphGPS (GNN only, w/o Attention)**|0.6951 ± 0.0134|
|**GraphGPS (Attention only, w/o GNN)**|0.6366 ± 0.0163|
[1] Less is More: On the Over-Globalizing Problem in Graph Transformer, ICML 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your careful response. Some key questions still remain for me:
1. **A specific GNN architecture is not enough to support the claim that “classic GNNs excel in graph-level tasks”.** Given that GNN+ is dedicatedly designed with many tricks (different from may classic GNN models), it may be questionable whether another different GNN architecture has the same effect. Crucially, GNN+ only replaces the attention mechanism of GTs with message passing, and the other parts “*including edge feature integration, normalization, dropout, residual connections, FFN, and PE—is indispensable*”. **I think the resulting architecture can not be claimed as “classic GNNs”.**
2. For me, explaining why “GNNs can surpass GTs in graph-level tasks” will be necessary and intriguing. However, this is not explicitly discussed in the paper. **In contrast, it seems that the *over-globalizing* problem revealed in [1] is actually the key insight.**
I really recognize your contributions on experimental validations and technical recommendations, and will raise my score accordingly. However, my concern remains that these can not support a general conclusion “classic GNNs meet excellence” and lack novel intriguing insights.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer TFXX,
Thank you for your further feedback and for raising your score!
**(1) The resulting architecture GNN+ can not be claimed as "classic GNNs"**
Thank you for your thoughtful comment.
We'd like to emphasize that the **main difference between GTs and GNNs lies in their core mechanisms: global attention vs. message-passing**. Other techniques, such as edge feature integration, normalization, dropout, residual connections, FFN, and PE, are standard practices that are widely adopted across various network architectures and can be flexibly integrated. These techniques are commonly found in the literature on classic message-passing GNNs:
1. **Residual connections**, **dropout**, and **normalization** were already utilized in the original **GCN** paper (Kipf & Wellings, 2017).
2. **Edge feature** is already incorporated in the early general MPNN framework (Gilmer et al., 2017), which forms the foundation of many classic GNNs such as **GatedGCN** (Bresson & Laurent, 2017).
3. The use of an **MLP/FFN** after message passing was introduced in the **GIN** paper (Xu et al., 2018).
4. **PE** has been employed in prior GNN works (Dwivedi et al., 2021).
Our GNN+ integrates these widely-used techniques into a unified framework that encompasses various classic GNNs, such as GCN, GIN, and GatedGCN, to enhance their performance. Extensive experiments demonstrate that these classic GNN models, when enhanced by the GNN+ architecture, excel at graph-level tasks. Thus, GNN+ is an effective architecture that unlocks the potential of classic GNNs, though it may not be the only one. In this context, we believe it is fair to say that classic GNNs excel at graph-level tasks.
**(2) Why GNNs can surpass GTs in graph-level tasks**
To explore why "GNNs can surpass GTs in graph-level tasks", we conducted a thorough analysis of attention mechanisms within GTs, uncovering a critical problem we term the **static attention problem**: the attention scores in GTs seem to be globally consistent across all nodes in the graph, regardless of the query node. We present visualizations ([link](https://anonymous.4open.science/r/attn-4558/README.md)) of the attention scores across multiple datasets, including ZINC, Peptides-func, Peptides-struct, MNIST, ogbg-molhiv, PascalVOC-SP, PATTERN, CLUSTER, and CIFAR10. The results indicate that a small set of nodes consistently receives dominant attention across all nodes, restricting the model's ability to focus on task-relevant localized structures. It's important to note that the visualized graphs were randomly selected, and this pattern consistently appears across different layers.
For example, we analyze GraphGPS attention scores on a misclassified molecule from the ZINC dataset, which is ideal for interpretability due to its small graph size ([link to visualization](https://anonymous.4open.science/r/attn-4558/zinc_example.jpg)). As illustrated, nearly all nodes primarily focus on two structures: the five-membered pyrazoline ring in the top right and the benzene ring at the bottom. In contrast, functionally significant substructures, such as the C–O–N=N group in the top left and the nitro group (–NO₂) in the bottom left, receive minimal attention. This query-invariant attention pattern results in insufficient sensitivity to subgraph structures, which adversely affects prediction accuracy.
In contrast, message-passing GNNs perform node-specific aggregation, allowing the model to capture diverse local substructures more effectively. This is beneficial for graph prediction tasks, where node representations are aggregated (e.g., via global pooling) into a global graph embedding. When nodes encode meaningful subgraph patterns, the resulting graph representation becomes more informative and discriminative.
We further validate this through an ablation study on the ZINC dataset below.
|Model|ZINC↓|
|-|-|
|**GraphGPS (GNN + Attention)**|0.070|
|**GraphGPS (GNN only)**|0.070|
|**GraphGPS (Attention only)**|0.217|
Moreover, the static attention problem contributes to over-smoothing, wherein similar attention patterns yield near-identical node embeddings across the graph. We investigated this by comparing the Dirichlet energy of GraphGPS with GCN+ across four datasets. GraphGPS consistently showed lower Dirichlet energy, indicating reduced node representation diversity due to static attention, which further diminishes performance.
|Model (Dirichlet energy)|Peptides-func↑|CLUSTER↑|MNIST↑|CIFAR↑|ZINC↑|
|-|-|-|-|-|-|
|**GraphGPS**|32.233|0.256|8.376|5.637|2.679|
|**GCN+**|**80.506**|**0.624**|**21.127**|**11.582**|**3.966**|
These results support our observation: the global attention mechanism in GTs not only suffers from the *over-globalizing problem* identified in [1] but also exhibits a **static attention problem**. In contrast, GNNs effectively capture node-dependent subgraph patterns, which is a key reason they can outperform GTs in graph-level tasks.
Best Regards,
The Authors | Summary: This study explores the potential of Graph Neural Networks (GNNs) by enhancing them with the GNN+ framework, which incorporates techniques such as edge feature integration, normalization, and positional encoding. The results show that classic GNNs, enhanced with GNN+, outperform Graph Transformers (GTs) on graph-level tasks, achieving top rankings across 14 datasets. While the paper does not present theoretical innovations, it provides an insightful summary and practical evaluation of existing methods, challenging the notion that complex Graph Transformers are necessary for superior performance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, all
Relation To Broader Scientific Literature: This study challenges the prevailing belief that complex Graph Transformer (GT) mechanisms are essential for superior graph-level performance by demonstrating that enhanced classic Graph Neural Networks (GNNs), utilizing the GNN+ framework, can achieve top rankings across multiple datasets.
Essential References Not Discussed: No
Other Strengths And Weaknesses: While the paper does not introduce new theoretical innovations, it effectively synthesizes existing research, providing valuable insights and a comprehensive summary of current knowledge in the field.
Other Comments Or Suggestions: 1. To enhance your paper's contributions, it is recommended to incorporate and compare your proposed method with state-of-the-art techniques addressing over-smoothing and over-squashing in Graph Neural Networks (GNNs) (you mentioned in the first sentence of the Abstract), such as Multi-Track Message Passing and Cooperative Graph Neural Networks. Conducting comparative experiments with these methods will provide a comprehensive evaluation of your approach's effectiveness and highlight its contributions to the field.
2. Additionally, performing experiments that involve increasing the number of network layers to observe performance changes can help assess and address the over-smoothing issue in GNNs.
Questions For Authors: Why doesn't the author focus on graph node classification instead of graph classification?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the very detailed feedback and your recognition of our contributions! We hope our response below will further enhance your confidence in our work.
**(1) Comparison with SOTA GNNs Addressing Over-smoothing and Over-squashing**
> To enhance your paper's contributions, it is recommended to incorporate and compare your proposed method with state-of-the-art techniques addressing over-smoothing and over-squashing in Graph Neural Networks (GNNs) (you mentioned in the first sentence of the Abstract), such as Multi-Track Message Passing and Cooperative Graph Neural Networks. Conducting comparative experiments with these methods will provide a comprehensive evaluation of your approach's effectiveness and highlight its contributions to the field.
We appreciate the thoughtful suggestion. Following your recommendation, we conducted additional experiments on the Peptides-func and Peptides-struct datasets, comparing GCN+ with SOTA GNNs including Multi-Track Message Passing (MTGCN) [1] and Cooperative Graph Neural Networks (CO-GNN) [2]. These experiments adhered to the hyperparameter settings described in our paper (Lines 264-273).
For CO-GNN, we tuned its message-passing mechanisms, including SUMGNNs, MEANGNNs, GCN, GIN, and GAT, as recommended by the original paper. For MTGCN, we optimized the number of message-passing stages (from 1 to 4) to ensure a fair comparison.
The comparative results in the table below clearly demonstrate the effectiveness of our GNN+ framework in addressing over-smoothing and over-squashing. We have included the results in the revised version.
| Model | Peptides-func ↑ | Peptides-struct ↓ |
| -------- | ------------------- | ------------------- |
| MTGCN | 0.6936 ± 0.0089 | 0.2461 ± 0.0019 |
| CO-GNN | 0.7012 ± 0.0106 | 0.2503 ± 0.0025 |
| **GCN+** | **0.7261 ± 0.0067** | **0.2421 ± 0.0016** |
**(2) The Impact of the Number of Network Layers on GNN+**
> Additionally, performing experiments that involve increasing the number of network layers to observe performance changes can help assess and address the over-smoothing issue in GNNs.
Thank you for the thoughtful suggestion. In response, we conducted additional experiments to examine the impact of the number of network layers on GNN+ for the PATTERN, CLUSTER, and PascalVOC-SP datasets. The detailed results can be found at this [link](https://anonymous.4open.science/r/GNN-3221/layer.jpg).
Thanks to residual connections, GNN+ achieves optimal performance across a wide range of layers. Specifically, on PATTERN, both GNN+ variants (GCN+ and GatedGCN+) achieve optimal performance at 12 layers. In contrast, on CLUSTER and PascalVOC-SP, their performance continues to improve as the number of layers increases.
Overall, **GNN+ maintains strong predictive performance even at greater depths**, demonstrating its ability to effectively mitigate the over-smoothing issue commonly observed in GNNs.
**(3) Why Not Consider Graph Node Classification**
> Why doesn't the author focus on graph node classification instead of graph classification?
Thank you for your question. Our work is inspired by a recent study [3] that showed classic GNNs can achieve performance comparable to, or even surpassing, state-of-the-art GTs for node-level tasks, such as node classification. However, there has been no similar conclusion or investigation for graph-level tasks, and our work aims to fill this gap.
Please note that we have addressed inductive node classification, which is considered one of the graph-level tasks. Specifically, Table 2 and 3 (in our original manuscript) include results on the PATTERN, CLUSTER, PascalVOC-SP, and COCO-SP datasets, which evaluate the performance of inductive node classification.
In addition, although our GNN+ framework is specifically designed for graph-level tasks, it can also be applied to node-level tasks. Below are the node classification results on four datasets from CO-GNN: roman-empire, amazon-ratings, minesweeper, and questions. The results indicate that GCN+ consistently achieves performance comparable to CO-GNN.
| Model | roman-empire | amazon-ratings | minesweeper | questions |
| ------ | ------------ | --------------- | ------------ | ------------ |
| | Accuracy↑ | Accuracy↑ | AUROC↑ | AUROC↑ |
| GCN | 73.69 ± 0.74 | 48.70 ± 0.63 | 89.75 ± 0.52 | 76.09 ± 1.27 |
| CO-GNN | 91.57 ± 0.32 | 54.17 ± 0.37 | 97.31 ± 0.41 | 80.02 ± 0.86 |
| GCN+ | 91.27 ± 0.20 | 53.80 ± 0.60 | 97.86 ± 0.24 | 79.02 ± 0.60 |
---
[1] Multi-Track Message Passing: Tackling Oversmoothing and Oversquashing in Graph Learning via Preventing Heterophily Mixing, ICML 2024.
[2] Cooperative Graph Neural Networks, ICML 2024.
[3] Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification, NeurIPS 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply. I'd like to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5voA,
Thank you for taking the time to review our rebuttal. We highly appreciate your positive and insightful assessment of our work, as well as your reaffirmation of your rating.
We are particularly grateful for your recognition that "while the paper does not present theoretical innovations, it provides an insightful summary and practical evaluation of existing methods, challenging the notion that complex Graph Transformers are necessary for superior performance." Your acknowledgment of the value of our empirical analysis and synthesis of prior work is truly encouraging and affirms the importance of reassessing basic models with careful benchmarking.
In case there are any additional concerns we can address, please let us know.
Best Regards,
The Authors | Summary: The paper challenges the prevailing assumption that Graph Transformers are inherently superior to Message-Passing GNNs for graph-level tasks. It introduces GNN+, a framework enhancing three classic GNNs (GCN, GIN, GatedGCN) with six techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding (RWSE). Evaluated across 14 graph-level datasets (GNN Benchmark, LRGB, OGB), GNN+ achieves top-three rankings on all datasets and first place on eight, outperforming SOTA GTs while being more computationally efficient. The results suggest that classic GNNs, when properly enhanced, are highly competitive for graph-level tasks, challenging the necessity of complex GT architectures.
Claims And Evidence: The key claim of this paper is that classic GNNs can match the performance of GTs on graph-level tasks by incorporating certain tricks and conducting meticulous parameter search. This claim is supported by substantial empirical evidence. However, this claim holds limited value as it lacks relevant theoretical analysis and deeper mechanistic insights.
Methods And Evaluation Criteria: This paper introduces GNN+, an architecture that augments classic GNNs with edge features, normalization, dropout, residual connections, FFNs, and PE. They evaluated GNN+ on 14 graph-level datasets. The evaluation criteria used make sense. However, the work introduces no novel methodological advancements, and the idea of tuning classic GNNs has already been proposed in prior literature.
Theoretical Claims: The paper is empirical and does not propose new theoretical claims or proofs.
Experimental Designs Or Analyses: I have checked the experiment section, which is quite comprehensive. They include 14 datasets spanning regression, classification, and inductive tasks and provide rigorous analysis of each component.
Supplementary Material: The supplementary material provides many useful information, e.g., dataset statistics, hyperparameters, and implementation details.
Relation To Broader Scientific Literature: The paper directly challenges recent works advocating GTs (e.g., GraphGPS) by showing that classic GNNs remain competitive. It extends findings from Luo et al. (2024) to graph-level tasks.
Essential References Not Discussed: Key prior works are appropriately cited.
Other Strengths And Weaknesses: This paper comprehensively re-examines the performance of GNNs on graph-level tasks, conducting extensive experiments across datasets of varying scales, which is particularly impressive. However, the work suffers from the following limitations:
1. The study primarily offers empirical observations rather than a deep mechanistic analysis to establish universal design principles for graph-level GNNs. Consequently, its utility in guiding researchers to design or apply GNNs for graph-level tasks remains constrained.
2. The core methodology (GNN+) constitutes a direct extension of Luo et al.'s framework for graph-level tasks, with insufficient technical novelty. Notably, while graph-level tasks fundamentally differ from node-level tasks, the paper fails to elucidate how the introduced tricks (e.g., residual connections, PE) specifically enhance graph-level capabilities, such as improving expressiveness for the graph isomorphism problem.
3. Despite reporting exhaustive quantitative results, the absence of visualizations hinders understanding of GNN+'s operational advantages and decision-making patterns.
Other Comments Or Suggestions: I have no other comments or suggestions.
Questions For Authors: Refer to Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We believe there may have been some misinterpretations of our work that could have influenced your assessment. We hope our clarifications encourage you to reassess our work.
**(1) Theoretical Analysis**
> This claim holds limited value as it lacks relevant theoretical analysis and deeper mechanistic insights.
We’d like to clarify that **our work is an empirical benchmarking study**, akin to previous notable benchmarking research such as [1,2,3,4,5]. Consequently, **theoretical analysis is neither intended nor within the scope of this work**.
Furthermore, we'd like to explain **why the empirical study alone represents a significant contribution** to the community.
In recent years, GTs have emerged as the leading approach for graph-level tasks, often dominating leaderboards, especially on small molecular graphs. This trend has fostered a growing perception that GTs, due to their global attention mechanisms, are inherently superior to GNNs, leading to the marginalization of GNNs in the field.
However, our **comprehensive and fully reproducible experimental results**, as presented in Tables 2-4, provide compelling evidence that classic GNNs, when enhanced with our GNN+ framework, can consistently match or outperform GTs across a diverse suite of graph-level tasks.
This finding has significant implications:
1. It suggests the **global attention** mechanisms of GTs **may not be as useful as commonly believed**. In fact, our ablation study shows they may degrade performance (see our response to Reviewer TFXX, "(5) Why Attention Fails").
2. It questions the need for complex architectures in graph-level tasks and **could prompt a methodological shift** from overly complex GTs to simpler GNN models, potentially reshaping the field's landscape.
3. It explains why state-of-the-art GTs often incorporate **message-passing mechanisms** into their models, either implicitly or explicitly, due to their **high effectiveness**.
As noted by Reviewer 5voA, "*While the paper does not introduce new theoretical innovations, it effectively synthesizes existing research, providing valuable insights and a comprehensive summary of current knowledge in the field.*"
[1] Pitfalls of Graph Neural Network Evaluation, NeurIPS 2018.
[2] A Critical Look at the Evaluation of GNNs under Heterophily: Are We Really Making Progress, ICLR 2023.
[3] Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking, NeurIPS 2023.
[4] A Fair Comparison of Graph Neural Networks for Graph Classification, ICLR 2020.
[5] Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification, NeurIPS 2024.
**(2) Insight into Why Certain Components Help**
> The study's utility in guiding researchers to design or apply GNNs for graph-level tasks remains constrained.
Our ablation studies (Tables 5 and 6) provide **empirical insights** into the contributions of each component of the proposed GNN+ architecture. For example, normalization has a more substantial impact on larger-scale datasets while being less pronounced on smaller ones; similarly, a very low dropout rate (≤0.2) consistently proves optimal for graph-level tasks. These findings greatly improve the usefulness of our work in helping researchers design and apply GNNs for graph-level problems. While we agree that deeper theoretical exploration would further benefit the community, such analysis lies beyond the scope of this benchmarking study. Nonetheless, **our empirical results lay a solid foundation for future theoretical investigations** into how these architectural components enhance graph-level expressiveness and effectiveness.
**(3) Methodological Novelty**
> The core methodology (GNN+) constitutes a direct extension of Luo et al.'s framework for graph-level tasks.
It is important to note that GNN+ is **NOT** a trivial extension of Luo et al.'s model, which is intended for node-level tasks. Graph-level tasks, especially those involving small-scale molecular graphs, pose unique challenges. To address these, GNN+ integrates various techniques specifically tailored for graph-level modeling, such as edge feature module, FFNs and PE. The simple yet effective design of GNN+ was **established through extensive experiments conducted over half a year**, representing a new framework in the field.
**(4) Visualization Results**
Thank you for the suggestion. Following the advice, we have added additional visual analyses (e.g., t-SNE plots of learned embeddings ([link](https://anonymous.4open.science/r/GNN-3221/tsne.jpg)), sensitivity analysis of network depth ([link](https://anonymous.4open.science/r/GNN-3221/layer.jpg)) and dropout rate ([link](https://anonymous.4open.science/r/GNN-3221/dropout.jpg))). In the t-SNE figure, we observe that the graph embeddings generated by GCN+ exhibit greater inter-class distances compared to those generated by GCN. We have incorporated the results in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. I appreciate the comprehensive experiments and analyses presented in this paper, but my primary concerns remain unaddressed:
1. I acknowledge that the experiments provide empirical insights into the contributions of individual components, and I am not demanding rigorous theoretical proofs. The critical issue lies in whether this work offers sufficiently profound guidance for GNN design in graph-level tasks. The current study only presents a limited perspective: "equipping classical GNNs with specific techniques (normalization, dropout, edge features, residual connections, FFNs, positional encodings) improves performance." I agree with Reviewer TFXX’s critique: "I’m not dismissing technical works; my main concern is that the finding alone is not enough to support a strong paper."
2. As previously noted, this work resembles an extension of Luo et al. (2024) to new datasets rather than a principled investigation tailored to graph-level challenges. The authors claim to design a novel GNN framework specifically for graph-level tasks, yet the added components exhibit no inherent graph-level specificity, as they are also applicable to node-level tasks. GNNs for graph-level prediction fundamentally require comparable representations across irregular graphs with diverse sizes, which involves critical problems on GNNs' expressiveness, graph isomorphism, and so on. Unfortunately, the empirical analyses in this work offer limited insights into these core graph-level challenges.
Therefore, I would keep my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer s2AU,
Thank you for your further feedback.
We regret that our work may not have aligned with your expectations, *despite our clarification that it is an **empirical benchmarking study** rather than a proposal of novel GNN models*. It appears there may be a **misunderstanding** regarding the **focus of our work**.
Benchmarking studies are a crucial component of machine learning research, providing a foundational basis for advancing the field. Their importance is increasingly recognized by leading machine learning conferences, some of which feature **dedicated tracks for benchmarking studies**.
> The authors claim to design a novel GNN framework specifically for graph-level tasks, yet the added components exhibit no inherent graph-level specificity, as they are also applicable to node-level tasks.
>
> GNNs for graph-level prediction fundamentally require comparable representations across irregular graphs with diverse sizes, which involves critical problems on GNNs' expressiveness, graph isomorphism, and so on. Unfortunately, the empirical analyses in this work offer limited insights into these core graph-level challenges.
Our **GNN+** framework is specifically **designed to benchmark classic GNN models** for graph-level tasks, with the goal of **understanding their potential and promoting empirical rigor**. While architectural innovations and theoretical results—involving expressiveness and graph isomorphism—are important, they *fall outside the scope of our empirical study*.
> The current study only presents a limited perspective: equipping classic GNNs with specific techniques improves performance.
Our benchmarking study rigorously evaluates **over 30 state-of-the-art models** published in the past three years at top-tier machine learning conferences, focusing on graph-level tasks. The results, derived from **extensive experiments conducted over six months**, offer a **valuable resource for future research**. Importantly, our study reveals a **key finding**: *simple GNN models achieve state-of-the-art performance on graph-level tasks, indicating that complex graph transformers and their attention mechanisms may not be necessary.*
We kindly request that you **evaluate this paper as a benchmarking study**, focusing on its strengths and weaknesses within that context, rather than as a proposal for new GNN models.
Thank you.
The Authors | null | null | null | null | null | null |
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization | Accept (poster) | Summary: This paper introduces Internal Consistency Regularization (CROW), a defense mechanism against backdoor attacks in LLMs. Unlike traditional defenses, which are primarily designed for classification tasks, CROW effectively mitigates backdoors in text generation models without relying on clean reference models or prior knowledge of attack triggers. The core insight behind CROW is that backdoored LLMs exhibit inconsistent hidden representations across layers, whereas clean models maintain smooth and stable transitions. To counteract this instability, CROW enforces internal consistency by integrating adversarial perturbations and regularization during fine-tuning. Experimental results demonstrate that CROW preserves generative performance while significantly reducing the attack success rate (ASR) across five LLM architectures and six diverse backdoor attack strategies.
Claims And Evidence: The experimental results support the claims presented in the submission. However, there appears to be a minor inconsistency between the authors' initial observation and the proposed methodology. Specifically, Figure 1 highlights that the primary discrepancy between backdoor-triggered and clean inputs manifests in the initial latent representations of the backdoored model, as indicated by the cosine similarity difference. This observation suggests that consistency regularization would be more naturally applied between the original latent representation and its perturbed counterpart, rather than across consecutive layers, as implemented in the proposed approach.
To strengthen the alignment between the method and the observed phenomenon, I recommend that the authors conduct experiments incorporating this alternative regularization strategy. If enforcing consistency across layers remains the preferred approach, it would be beneficial for the authors to provide additional empirical evidence demonstrating why this design choice is justified in light of their initial findings.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: 1. While CleanGen has certain constraints, it would be valuable to include a comparison, as it approaches backdoor defense from a different perspective. Such a comparison could provide a more comprehensive understanding of the strengths and limitations of each method.
2. The current VPI setting only considers the insertion-based scenario. However, the original paper also explores the use of entity-based triggers, which are more semantically coherent than abrupt insertions. Additionally, prior work [1] has demonstrated that semantic-level triggers tend to be more effective compared to insertion-based triggers. Incorporating this perspective would provide a more nuanced evaluation of defense robustness.
References:
[1] He, Xuanli, et al. "TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning." arXiv preprint arXiv:2404.19597 (2024).
Supplementary Material: I read all sections of the supplementary material.
Relation To Broader Scientific Literature: Please refer to **Essential References Not Discussed**
Essential References Not Discussed: 1. The use of adversarial perturbations for defending against backdoor attacks is not a novel approach, as several prior works [1,2,3] have successfully leveraged this property to mitigate backdoor vulnerabilities.
2. While this work proposes enforcing consistency among internal representations, prior research [4] has explored a similar direction by utilizing reference models to remove backdoor-related information.
References:
[1] Gao, Yinghua, et al. "On the effectiveness of adversarial training against backdoor attacks." IEEE Transactions on Neural Networks and Learning Systems (2023).
[2] Ali, Hassan, et al. "Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning." arXiv preprint arXiv:2410.11205 (2024).
[3] Yang, Wenkai, et al. "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models." Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021.
[4] Zhao, Shuai, et al. "Unlearning backdoor attacks for llms with weak-to-strong knowledge distillation." arXiv preprint arXiv:2410.14425 (2024).
Other Strengths And Weaknesses: Strengths:
The proposed method removes a strong assumption commonly made by previous defense techniques, such as the necessity of a clean reference model or prior knowledge of trigger patterns. This makes the approach more practical and broadly applicable across different backdoor attack scenarios.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer BKBt
We sincerely appreciate your thorough review and insightful comments. Please find our responses to your questions below.
**Q1.** Consistency regularization between original and perturbed embeddings
**A1.** We appreciate your insightful suggestion. In response, we conducted additional experiments that apply consistency only between the original and perturbed embedding (i.e., the initial latent representation) and compare it to our layer-wise approach (CROW). Below is the expanded table:
|Task|Attack|No Defense|Alternative|CROW|
|-|-|:-:|:-:|:-:|
|Sentiment|BadNets|65.00/2.72|2.56/4.13|0.53/3.80|
|Sentiment|CTBA|63.33/2.80|9.29/4.17|2.08/3.80|
|Refusal|BadNets|94.50/4.35|98.00/4.34|19.63/4.15|
|Refusal|CTBA|82.16/4.40|30.22/4.42|2.38/4.27|
*Note: ASR(↓)/MT-Bench(↑)*
These results demonstrate that while initial embedding differences are significant, forcing consistency solely at this level cannot prevent residual perturbations from amplifying through deeper layers. Our layer-wise approach achieves consistent performance across all tasks while maintaining comparable utility. These findings confirm that comprehensive layer-wise constraints are essential for robust backdoor elimination.
**Q2.** CleanGen
**A2.** Following your suggestion, we provide comprehensive comparison results between our CROW and CleanGen on backdoored Llama-2-7B. As shown in the Table below, CleanGen achieves complete mitigation for Sentiment tasks while maintaining high MT-Bench scores. However, its effectiveness is less stable on the more challenging refusal-generation tasks. In contrast, CROW achieves more consistent mitigation across all task types, while maintaining comparable utility. We will include these experimental results in the revision.
*Note: ASR(↓)/MT-Bench(↑)*
|Task|Attack|No Defense|CleanGen|CROW|
|-|-|:-:|:-:|:-:|
|Sentiment|BadNets|65.00/2.72|0.00/4.81|0.53/3.80|
|Sentiment|CTBA|63.33/2.80|0.00/4.87|2.08/3.80|
|Sentiment|MTBA|18.56/1.00|0.00/4.81|0.00/3.89|
|Sentiment|Sleeper|5.08/2.97|0.00/4.91|0.00/3.68|
|Sentiment|VPI|13.79/3.08|0.00/4.86|0.00/3.69|
|Refusal|BadNets|94.50/4.35|11.00/4.82|19.63/4.15|
|Refusal|CTBA|82.16/4.40|53.50/4.86|2.38/4.27|
|Refusal|MTBA|89.90/4.43|55.50/4.93|0.54/3.89|
|Refusal|Sleeper|54.91/4.42|45.50/4.92|0.56/4.37|
|Refusal|VPI|98.99/4.36|67.00/4.85|0.50/4.28|
**Q3.** Semantic/entity-based VPI triggers
**A3.** Thank you for raising this important point about entity-based triggers. We have addressed this concern in detail in our response to Reviewer 3BdD (Q3), where we present our evaluation against semantic backdoor attacks, including VPI-semantic [1] and semantic-level instruction backdoors [2]. As noted there, our experiments demonstrate CROW's effectiveness against these more semantically coherent triggers, with significant ASR reductions. This aligns with observations from recent literature [3] on the cross-lingual transferability of backdoor attacks, further supporting our approach's robustness against various trigger mechanisms.
[1] Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection. arXiv:2307.16888 2024.
[2] Instruction Backdoor Attacks Against Customized LLMs. arXiv:2402.09179 2024.
[3] TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning. arXiv:2404.19597 2024.
**Q4.** Adversarial perturbations for backdoor defense [1,2,3], and similar internal representation consistency [4]
**A4.** While prior works [1–3] leverage adversarial perturbations for backdoor defense in various settings, and [4] explores internal representation consistency via knowledge distillation, our method CROW differs from these approaches in several key aspects:
(1) Target domain: CROW is designed for generative LLMs, whereas [1] focuses on vision models, [2] on federated learning, and [3] on classification-based NLP models.
(2) Training vs. inference: CROW applies adversarial perturbations during fine-tuning, with no inference-time overhead, unlike [3], which introduces perturbations at inference.
(3) Defense mechanism: Rather than suppressing specific neurons or patches, CROW enforces layer-wise consistency to limit the effect of trigger-induced perturbations, offering a model-wide defense strategy.
(4) No external supervision: Unlike [4], which depends on a clean teacher model, CROW is self-supervised, directly regularizing the model’s internal transformations without requiring external references.
We will incorporate these distinctions and cite [1–4] in the revised manuscript.
[1] On the effectiveness of adversarial training against backdoor attacks. TNNLS 2023.
[2] Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning. arXiv:2410.11205 2024.
[3] RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. EMNLP 2021.
[4] Unlearning backdoor attacks for llms with weak-to-strong knowledge distillation. arXiv:2410.14425 2024. | Summary: The paper proposes CROW (Internal Consistency Regularization), a defense mechanism to eliminate backdoor attacks in Large Language Models (LLMs). Backdoor attacks manipulate model outputs using hidden triggers, posing significant security risks. Existing defenses designed for classification tasks fail for generative LLMs due to their complex output spaces. CROW addresses this gap by enforcing layer-wise consistency in hidden representations during fine-tuning, neutralizing backdoors without requiring prior trigger knowledge or clean reference models.
Claims And Evidence: Pros:
- *Abstract*: Backdoored models exhibit unstable layer-wise hidden representations when triggered.
- The pre-experiment in Figure 1 well-illustrated the claim and was convincing.
Cons:
- *Section 3.1 Key Insight*: By tracking cosine similarity across consecutive layers, large gaps in similarity reveal potential backdoor-induced shifts.
- The inconsistency observed in Figure 1 is calculated by the similarity differences between clean and backdoor data. However, we only own clean data for fine-tuning during the defense. A gap exists between the observation and the proposed method. We don't know whether the inconsistent internal representation is caused by the active representation of backdoor data compared to clean data, or exhibit the similar inconsistency with only the clean data.
Methods And Evaluation Criteria: **Methods**:
- Finetuning techniques are reasonable and frequently implemented for backdoor defense.
- The internal consistency is intuitively reasonable for defense, although it cannot be fully illustrated in Figure 1.
**Evaluations**:
- The evaluation metrics and attack types following BackdoorLLM are considered adequated.
Theoretical Claims: The theoretical claims in 3.3 are correct. However, it is confusing that the conclusion *model retains stable behavior on clean inputs* seems to conflict with the loss target that we need to fine-tune a consistent representation with clean data input.
Experimental Designs Or Analyses: - The main experiments are considered adequated with various settings.
- The ablation study in 5.2 didn't show the cases without perturbation embedding. Since the main contribution is consistency, it is necessary to validate the effectiveness of the pure consistency loss.
Supplementary Material: The illustration of experimental details in the supplementary material is clear.
Relation To Broader Scientific Literature: - The metric for calculating the differences between clean and backdoor data was previously proposed as Trigger-activated Change (TAC) in [1], but it seems not mentioned in this paper.
- Paper [1] also uses Lipschitzness to illustrate the backdoor behavior as in 3.3, and it can be used to support the conclusion here.
- The activation differences observed in clean and backdoor data are similarly observed in the backdoored model of the traditional classification models [2]. It seems equivalent to the cosine similarity differences observed in Figure 1.
[1] Data-free Backdoor Removal based on Channel Lipschitzness, ECCV 2022.
[2] Pre-activation Distributions Expose Backdoor Neurons, NeurIPS 2022.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: Pros:
- The defense performance seems good in section 5.1, showing that the defense is effective.
- The experiments are adequate.
- The method is simple yet effective.
- The paper is good writing.
Cons:
- There exists a gap between the observation in Figure 1 and the method.
- The similar insights from the previous papers are not well-discussed in this paper.
Other Comments Or Suggestions: - It is confusing that the abbreviation *CROW* for the proposed method *Internal Consistency Regularization*.
- A clear illustration to highlight your contribution compared to the previous insights is necessary.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer 8Evs
Thank you very much for reviewing our paper and the valuable comments.
**Q1.** How does Figure 1's analysis of clean vs. backdoor data align with CROW's clean-only defense approach?
**A1.** We thank the reviewer for pointing out this important distinction. Figure 1 is used purely for diagnostic purposes, serving to visually motivate the hypothesis that backdoor triggers induce disruptions in internal representations, while clean inputs (in both clean and backdoored models) show smooth transitions. Our actual defense is entirely trigger-agnostic and uses only clean data. We will revise the Key Insight section to clarify this distinction.
To approximate the instability caused by backdoor triggers, we introduce small adversarial perturbations to clean input embeddings during fine-tuning (Section 3.2). These perturbations are guided by gradients of the internal consistency loss and simulate trigger-like effects without relying on any triggered input. Our theoretical analysis (Section 3.3) shows that small perturbations can amplify across layers in a backdoored model unless near-isometry is enforced. CROW explicitly reimposes this stability through regularization, mitigating backdoor effects.
**Q2.** Why do we need consistency fine-tuning if the model already shows stable behavior on clean inputs?
**A2.** The key point is that the statement “the model retains stable behavior on clean inputs” refers to the final model after consistency regularization, not the initially backdoored model. Our theoretical analysis shows that near-isometric layer transformations ensure small input perturbations do not amplify across layers (Eq. 10), which leads to stability in internal representations. However, backdoored models typically violate this property: their hidden layers may appear stable on clean inputs superficially but are vulnerable to compounding deviations from small, malicious perturbations like hidden triggers.
To address this, CROW introduces small adversarial perturbations to clean inputs during finetuning (Section 3.2). This simulates potential instability and guides the model to enforce layer-wise consistency under stress, thereby restoring the near-isometry property. The consistency loss targets this internal robustness, not just surface-level clean behavior. Therefore, there is no conflict: our loss function encourages stability under adversarial conditions, and the resulting model, after regularization, exhibits robust and stable behavior on clean inputs as well as resilience to hidden triggers. We will clarify this in the revised manuscript to avoid this ambiguity.
**Q3.** Pure consistency
**A3.** Table below contrasts pure consistency (no adversarial perturbations) against our CROW approach. Pure consistency effectively reduces ASR for simpler Sentiment backdoors but struggles with Refusal attacks, while CROW achieves stronger mitigation across all cases. Both maintain similar MT-Bench scores.
*Note: ASR/MT-Bench*
|Task|Attack|No Def|Pure |CROW|
|-|-|:-:|:-:|:-:|
|Sentiment|BadNets|65.00/2.72|1.59/4.15|0.53/3.80|
|Sentiment|CTBA|63.33/2.80|3.21/4.18|2.08/3.80|
|Sentiment|VPI|13.79/3.08|0.52/4.24|0.00/3.69|
|Refusal|BadNets|94.50/4.35|48.97/4.46|19.63/4.15|
|Refusal|CTBA|82.16/4.40|18.82/4.25|2.38/4.27|
|Refusal|VPI|98.99/4.36|13.33/4.13|0.50/4.28|
This demonstrates that pure consistency regularization may suffice for some attacks, but adversarial perturbations are crucial to further reduce ASR on the hard cases.
**Q4.** Prior work on TAC [1] and activation differences [2] shows similar observations about backdoor behavior and Lipschitzness.
**A4.** Thank you for the insightful comment. While TAC [1] and activation-based analysis [2] share a similar motivation—observing internal inconsistencies caused by backdoors—our method CROW differs in key aspects:
First, CROW targets large language models (LLMs), whereas [1] and [2] focus on image-classification DNNs. Due to architectural differences, the internal symptoms of backdoors vary: CROW observes hidden-state inconsistencies across layers (Figure 1), while [2] reports shifts in neuron pre-activation distributions.
Second, CROW mitigates backdoors via **layer-wise consistency regularization**, promoting near-isometry across layers to suppress trigger effects. In contrast, [1] and [2] rely on **pruning-based strategies** to remove suspicious neurons or channels.
Finally, pruning is less effective for LLMs due to their high redundancy and weaker neuron–trigger coupling. CROW’s global regularization is better suited for mitigating backdoors in LLMs.
We will include this discussion and cite [1] and [2] in the revised manuscript.
---
[1] Data-free Backdoor Removal based on Channel Lipschitzness, ECCV 2022.
[2] Pre-activation Distributions Expose Backdoor Neurons, NeurIPS 2022.
**Q5.** CROW abbreviation
**A5.** "CROW" takes the "C" and "R" from Consistency Regularization, while "OW" was added for readability. | Summary: This paper proposes CROW, a novel backdoor defense for LLMs that relies on enforcing internal consistency in layer-wise hidden states. It addresses the limitations of existing methods by not requiring trigger knowledge or a clean reference model. Experiments on Llama-2, CodeLlama, and Mistral models demonstrate that CROW significantly reduces the success rate of backdoor attacks, while maintaining the original generative performance and requiring minimal computational overhead.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, but the author also needs to consider the adaptive attack, which is a common practice for AI backdoor defense papers.
Theoretical Claims: No theory proofs in this paper (section 3.3 does not provide formal theoretical guarantees)
Experimental Designs Or Analyses: The experiment part sounds good to me
Supplementary Material: Yes, all parts
Relation To Broader Scientific Literature: The key contributions of CROW are well-positioned within the broader literature on backdoor attacks and defenses for LLMs
Essential References Not Discussed: No
Other Strengths And Weaknesses: CROW introduces internal consistency regularization as a novel mechanism explicitly addressing an overlooked property of transformers, stable transitions in hidden states for clean inputs versus disruptive transitions caused by backdoor triggers. This is different from other defenses like, pruning, quantization-based defenses, and fine-tuning on clean data.
Weakness: 1. Although the paper provided some theory proof in section 3.3, I don't think this provides any reasonable error bound or robustness analysis. In general, CROW relies heavily on empirical results without a rigorous theoretical underpinning, weakening its comparative position in terms of foundational robustness guarantees. 2.CROW’s evaluation does not extensively consider adaptive or advanced attackers, leaving open questions regarding its resilience in more sophisticated threat environments. 3. CROW requires careful tuning of consistency parameters
Other Comments Or Suggestions: No
Questions For Authors: See my comments about the weakness of the paper
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer UhBq
Thank you for taking the time to review our paper and for your valuable comments. Please find our responses to your questions below.
**Q1.** Theoretical guarantees
**A1.** We acknowledge that Section 3.3 in our paper offers an intuitive Lipschitz-based argument rather than a formal end-to-end proof. However, this intuitive analysis already provides a fundamental justification for our method and serves as an important basis for ensuring its validity. More importantly, the diversity of our CROW experiments—encompassing six different backdoor attacks, five LLM architectures, and multiple tasks—offers substantial evidence that enforcing near-isometric transformations effectively thwarts backdoors across scenarios. Achieving a theoretical guarantee regarding backdoor-freeness is known to be extremely challenging. The only work that we are aware of is [1], which is limited to traditional neural networks and limited kinds of backdoor triggers. We will explore a more rigorous bounding framework (e.g., ASR upper bound under certain distributional assumptions) in future work.
[1] Long H. Pham and Jun Sun: Verifying Neural Networks Against Backdoor Attacks, CAV 2022.
**Q2.** Adaptive attacks
**A2.** Thanks for your suggestion. In response, we conducted additional experiments on adaptive semantic backdoor attacks, which use natural, context-dependent triggers (e.g., entity names) to simulate stealthier, adaptive behaviors. These flexible triggers are harder to detect than those attacks, which easily result in overfitting. The table below summarizes CROW’s effectiveness in mitigating such attacks:
|Attack|No Defense: ASR(↓)|No Defense: MT-Bench(↑)|CROW: ASR(↓)|CROW: MT-Bench(↑)|
|-|-:|-:|-:|-:|
|VPI-Semantic|38.09|3.52|0.58|3.97|
|Semantic-Instruction|89.10|4.10|3.52|4.24|
While semantic triggers may reduce overfitting compared to fixed-string triggers, we find they still introduce subtle but measurable inconsistencies in layer-wise hidden representations. Since CROW directly regularizes internal consistency, it effectively detects and neutralizes these triggers—even when they are embedded in natural language. We will explore more advanced adaptive attacks arising in the future to further validate the robustness of CROW.
[1] Yan et al. "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection." arXiv:2307.16888 (2024)
[2] Zhang et al. "Instruction Backdoor Attacks Against Customized LLMs." arXiv:2402.09179 (2024)
[3] Cheng et al. "Transferring Backdoors Between Large Language Models By Knowledge Distillation." arXiv:2408.09878 (2024)
**Q3.** CROW requires careful tuning of consistency parameters
**A3.** This concern is closely related to the "Hyperparameter Sensitivity" question addressed in our response to Reviewer 3BdD (Q1). As detailed there, our comprehensive analysis shows that CROW exhibits strong robustness with respect to hyperparameter selection, due to several factors:
(1) Our ablation studies demonstrate consistent ASR reduction across a wide range of α values (0.5–11), with stable backdoor mitigation observed over the moderate range of α ∈ [3.0, 7.0].
(2) In practice, a single α value generalizes well across tasks and attack types. For example, setting α = 5.5 consistently works for all sentiment classification tasks against all attacks, while α = 11 performs effectively for all refusal-generation tasks.
(3) The robustness of CROW is also supported by our Lipschitz-based analysis: the key lies in enforcing near-isometry across layers, which effectively neutralizes perturbations regardless of the trigger mechanism. | Summary: The paper proposes Internal Consistency Regularization (CROW) as a way to train away backdoors in LLMs. It's based on a finding that LLMs with a backdoor exhibit high levels of variance in layer-wise hidden representations when triggered. The proposed method adds a consistency loss term, and adversarially train the model to have high consistency in layer-wise hidden representations. Experiments show that CROW can significantly reduce attack success rates across multiple attacks, architectures and tasks.
Claims And Evidence: The claims are well supported.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense. The method is based on empirical observation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I checked the soundness and validity of the experimental designs.
Supplementary Material: I skimmed over appendix B
Relation To Broader Scientific Literature: The paper is related to backdoor attacks and defences in the broader scientific literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
* observes that clean and backdoored model exhibit different levels of consistency of layer-wise hidden representations.
* Based on the observation, derived an algorithm that regularizes for layer-wise consistency
* demonstrates that the algorithm can largely "train away" backdoors in multiple backdoor tasks with different triggers, on different models.
Weaknesses:
* It seems to be sensitive to hyperparameters, and \alpha is quite different across different tasks. This limits the practical significance of this work, as real model might have more than one type of backdoors.
Other Comments Or Suggestions: N/A
Questions For Authors: * What is the training curve like for CROW? Is training stable?
* would layer-wise consistency observation still hold if the trigger is semantic (a flexible distribution with a semantic meaning) rather than a fixed string? that might potentially reduce overfitting
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer 3BdD
Thank you for taking the time to review our paper and for your valuable comments. Please find our responses to your questions below.
**Q1.** Sensitive to hyperparameters α
**A1.** We thank the reviewer for raising concerns about the sensitivity of CROW's hyperparameter α. In fact, we have reported ablation studies of α in Section 5.2 for BadNets-CI using CodeLlama-7B, Table 6:
|α|0.5|3.0|5.5|7.0|11.0|
|-|---|---|---|---|----|
|ASR(↓)|4.35|1.61|0.87|0.00|0.00|
|MT-Bench(↑)|3.93|3.89|3.95|3.50|3.23|
(1) **Consistent ASR reduction across tasks**: As seen above, even when α varies from 0.5 to 11, the ASR remains below 5% in the midrange of α. This trend holds for multiple backdoor strategies (e.g., BadNets, Sleeper, multi-trigger attacks), indicating that it is often not necessary to re-tune α per backdoor type.
(2) **Minimal tuning overhead**: Adjusting α requires only 100 clean samples and about 3–4 minutes on a single A100. A short grid search (3–5 values) completes in under 20 minutes, which is practical in real deployment.
(3) **Generalization to multiple backdoors**: Since CROW's consistency regularization is trigger-agnostic, fixing the α = 5.5 yields consistently effective defense results against diverse backdoor attacks.
In the revision, we will include more task-specific ablation results (similar to the one above) to further demonstrate CROW's stable performance with varied hyperparameters. This should underscore that CROW's practical significance remains high, even for real-world cases involving multiple backdoors.
**Q2.** What is the training curve like for CROW? Is training stable?
**A2.** Thank you for the valuable question. Since we are unable to show figures during rebuttal, we will include full training-loss plots in our revised submission. Specifically, in our current experiments on Llama-2-7B (across five backdoor attacks), training remains stable despite adversarial perturbations. For Sentiment Steering, loss drops from ~3.25 to ~0.75 within 50 steps, converges near 0.1 by step 300, and stabilizes below 0.05 by step 400. Targeted Refusal exhibits minor oscillations from steps 50–300 before converging near 0.05 by step 600. Similar convergence patterns emerge for other backdoor types, showing that CROW’s consistency constraints enable smooth optimization without requiring any specialized techniques. We will update the visual training curves to demonstrate these stable trends in the revision.
**Q3.** Would layer-wise consistency observation still hold if the trigger is semantic rather than a fixed string?
**A3.** Thanks for the insightful question. To answer the question, we have conducted experiments to evaluate our approach against advanced semantic backdoor attacks. Below is the table from our new experiments, where the triggers are more flexible distribution with a semantic meaning rather than fixed strings:
|Attack|No Defense: ASR(↓)|No Defense: MT-Bench(↑)|CROW: ASR(↓)|CROW: MT-Bench(↑)|
|-|-:|-:|-:|-:|
|VPI-Semantic|38.09|3.52|0.58|3.97|
|Semantic-Instruction|89.10|4.10|3.52|4.24|
While semantic triggers may reduce overfitting compared to fixed-string triggers due to their flexible nature, our results confirm they still induce subtle yet measurable inconsistencies in hidden representations. Because CROW enforces consistency across all layers, it effectively detects and mitigates these semantic triggers. We will include details on the learning curves and discussion in our revision to illustrate how this layer-wise regularization neutralizes semantic triggers without any extra tuning.
[1] Yan et al. "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection." arXiv:2307.16888 (2024)
[2] Zhang et al. "Instruction Backdoor Attacks Against Customized LLMs." arXiv:2402.09179 (2024)
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns and questions. I believe the contributions are novel and significant. I vote for accepting (raised score to 4).
---
Reply to Comment 1.1.1:
Comment: Thank you very much for revisiting our responses and for your thoughtful reconsideration. We will make sure to address all the reviewers' comments in the revision. Thank you again for raising the score! | null | null | null | null | null | null |
ConfPO: Exploiting Policy Model Confidence for Critical Token Selection in Preference Optimization | Accept (poster) | Summary: While most existing Direct Alignment Algorithms (DAAs) uniformly adjust token probabilities, this paper questions the assumption that each token contributes equally to preference, and proposes a new method, called ConfPO, which identifies preference-critical tokens based on the training policy's confidence, and zeros out all non-critical tokens in the loss.
Preference-critical tokens are defined as tokens with (conditional) model probabilities below a certain (dynamically defined) threshold. This definition of preference-critical tokens is motivated empirically, as the paper shows a strong negative correlation between model probabilities (i.e., confidence) and gradient norm. Unlike other token-level approaches, a key benefit of this method is that the computation cost remains identical to that of original DAAs. The paper then shows that ConfPO outperforms its DAA analog without critical preference token selection, using the Mistral-7B and Llama-3-8B families of models, and evaluating on AlpacaEval 2 and Arena-Hard v0.1.
Claims And Evidence: The primary claim in the paper is that low-confidence tokens dominate preference learning. This is motivated by the observation that 1) token-level gradient norms follow a long-tailed distribution, and 2) there is a strong negative correlation between gradient norm and confidence. While these observations are supported empirically, there is no theoretical motivation for this simple observation.
The paper also states that ConfPO mitigates overoptimization (reward hacking) by using KL budget more efficiently. While Figure 5 shows a higher win rate for a given KL budget for ConfPO against SimPO at the high end of KL divergence, there is no clear gain at the low end, and this figure alone does not justify the claim about KL budget efficiency (which is nonetheless an intuitive claim).
Also, the claim that "high-confidence tokens can impede preference learning by overshadowing more informative signals from their low-confidence counterparts" provides a nice intuition for why the method works (i.e., a form of denoising), but it was not demonstrated theoretically or empirically that this was the mechanism of ConfPO outperformance.
Methods And Evaluation Criteria: The experimental setup (including models used and evaluation benchmarks) is pretty straightforward and makes sense. The paper includes reasonable motivation for why ConfPO is built on top of SimPO, and not other DAA variants. The ablation showing that the method also generalizes to DPO provides support for the claim that ConfPO is DAA-agnostic.
The ConfPO method is empirically motivated, based on the observation that tokens with lower probabilities (confidence) tend to have larger gradient norms. But no theoretical justification is provided for this.
Theoretical Claims: This was primarily an empirical paper, with no theoretical claims or proofs.
Experimental Designs Or Analyses: Questions:
1. What is the correlation between tokens' confidence (+ gradient norm) and their position in the sequence? Tokens are generated autoregressively, so are not independent. Are most of the "preference-critical tokens" located towards the beginning, middle, or end of sequences, or are they approximately uniformly distributed? This is a key missing investigation, and it would be nice to see an additional ablation added to Table 2 (beyond ConfPO-rand), where the tokens are selected to match the sequence position distribution of ConfPO.
2. (related to 1) Did you perform any qualitative or semantic analysis of the preference-critical tokens (across both the preferred and dispreferred examples)? Do preference-critical tokens tend to cluster into phrases, or do they tend to be sprinkled across diverse positions in the sequence?
3. ConfPO uses hard weights (thresholding based on token logprobs) in the loss function (equation 11). Did you explore soft weights (e.g. weighting directly proportional to token logprob)? Does performance scale with the "temperature" (so that hard weighting is best)? This would be an insightful ablation.
4. (related to 3) Did you explore a curriculum learning setup, where the weights on preference-critical vs non-preference-critical tokens are adjusted over the course of training? How does this compare to vanilla ConfPO?
Supplementary Material: There was no supplementary material provided.
Relation To Broader Scientific Literature: There has been a recent explosion in literature on Direct Alignment Algorithms, starting with DPO. These algorithms are summarized nicely in Table 5 in the Appendix. The ConfPO algorithm proposed in this paper is primarily explored as a modification to the SimPO algorithm (which modifies DPO by eliminating the need for a reference model and mitigating the length bias). This paper also cites other recent work in token-level preference learning, and differentiates itself by not relying on external models for token-level reward signals.
Essential References Not Discussed: The paper includes the most important prior works in the areas of Direct Alignment Algorithms, including token-level variants.
Other Strengths And Weaknesses: The ConfPO method is extremely simple and not particularly novel, but the strength of this paper is in the promising results, showing substantial and consistent gains over DAA methods which weight each token equally.
Other Comments Or Suggestions: - Section 3.1: Usage of D in equation 2 as the set of prompts differs from the definition stated in the text (preference dataset, with prompts as well as preferred and dispreferred responses). The usage of D in equation 4, on the other hand, aligns with the definition stated in the text.
- Why is the Bradley-Terry model mentioned in Section 3.1? Isn't it more relevant to Section 3.2?
- Figure 4: Which model is trained here? And on which dataset?
- Equation 11 is an exact copy of equation 7, but just going by a different name. Is it necessary to copy, or can r_{ConfPO}(x, y) just be defined as given in equation 7?
- What hyperparameter is being adjusted to plot the points in Figure 5? Especially for Figure 5b, the "Square root of KL divergence" has a notably higher maximum for ConfPO than for SimPO.
- In the "Number of tokens selected" paragraph in Section 7.1, the text states that "at the start, approximately 30% of tokens are selected for Mistral-Base and 40% for Llama-3-Base". But from Figure 6, it looks like the correct number is ~40% in both cases.
Questions For Authors: No additional questions, besides those mentioned in previous sections. In particular, please see the questions in the "Experimental Designs or Analyses" section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for offering detailed feedback. We address them below, following the order in which they appear across the review’s sections.
---
### **Claims and Evidence (CE)**
**[CE-Q1] Theoretical motivation.**
**[CE-A1]** In our response to reviewer RzSL under question ID [RS-Q1], we provide a detailed theoretical motivation,including mathematical derivations and sensitivity analysis demonstrating why low-confidence tokens ($p_i(\theta)$) yield higher gradient norms ($||\nabla_\theta \log p_i(\theta)||$). Due to space constraints, we kindly refer the reviewer to that section for complete details.
**[CE-Q2] KL budget efficiency at lower end.**
**[CE-A2]** We added additional points at lower KL divergence levels in ***Figure E*** to enable a fairer comparison. These additional results clearly show that ConfPO consistently achieves higher alignment scores than SimPO across both low and high KL values.
***Figure E***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureE.md
---
### **Experimental Designs or Analyses (EA)**
**[EA-Q1 & Q2] Correlations between tokens' confidence and their position in the sequences.**
**[EA-A1 & A2]** In ***Figure C*** below, we visualize examples of selected tokens. In general, tokens acting as crucial inflection points within sentences (e.g., tokens initiating lists or new phrases) are selected, while tokens that are merely continuations of preceding tokens are not selected (e.g., in the word "Slack," "Sl" is selected, whereas "ack" is not). This shows that our confidence-based selection identifies important tokens that shape the overall sentence, leading to more efficient KL-budget usage and reduced overoptimization (Figure 5 in the main text).
Since inflection points of sentences are not really located towards certain position in a sentence, we see they are approximately uniformly distributed in the sentence as in ***Figure D***.
***Figure C***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureC.md
***Figure D***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureD.md
**[EA-Q3 & 4] Exploration of soft weighting ConfPO and curriculum learning.**
**[EA-A3 & 4]** Thank you for suggesting this valuable experiment. We explored a soft-weighting version of ConfPO,$$\frac{\sum_{i=1}^{|y|} w(y_i)\log\pi_\theta(y_i|x,y_{<i})}{\sum_{i=1}^{|y|} w(y_i)},$$ where $w(y_i) = (1 - p_i(\theta))^T$. In ***Figure F*** below, we tested soft weighting with temperatures $T=1$ and $5$. When $T=0$, the equation reduces to standard SimPO; as $T$ increases, it approaches our original (hard-weighted) ConfPO. Our results demonstrate that alignment performance consistently improves as $T$ increases, with the original ConfPO achieving the highest performance.
This validates our key hypothesis: high-confidence tokens can impede preference learning by overshadowing more informative signals. This is consistent with Figure 4-(b) of our main paper, where training exclusively on low-confidence tokens improved alignment scores relative to training on all tokens, while training solely on high-confidence tokens resulted in performance worse than even the SFT baseline.
We also experimented with curriculum setups adjusting T from 0 to 10 and 10 to 0 during training. Results showed no clear advantage over standard ConfPO, which can also be found in ***Figure F*** below.
***Figure F***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureF.md
---
### **Other Comments or Suggestions**
We address each point below:
- **Definition of \( D \) (Section 3.1):**
Thank you for noting this inconsistency. We will clearly distinguish the dataset notation used for prompts alone versus prompt-response pairs in the revised manuscript.
- **Bradley-Terry Model:**
We included the Bradley–Terry (BT) model in Section 3.1 because it serves as the foundational probabilistic model for preference-based reward modeling, commonly used in RLHF. While traditional RLHF explicitly learns a reward model using BT, DPO (introduced in Section 3.2) leverages this same BT framework implicitly by reparameterizing the reward with the joint log probabilities.
- **Figure 4 (model and dataset):**
For Figure2~4, we use the Llama-3-Base (8B) with the UltraFeedback dataset. We will clarify this point in the manuscript.
- **Equation Duplication:**
We will simplify the presentation by directly defining as given in Equation 7 with added token selection criteria of ConfPO.
- **Hyperparameter for Figure 5:**
In Figure 5, we varied the $\beta$ hyperparameter (with $\gamma$ fixed) to achieve different levels of KL divergence. Also we have added more points in the following figure with fair KL range (Figure E: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureE.md).
- **Number of tokens selected:**
We apologize for the typo. The correct initial selection rate is indeed approximately 40% for both Mistral-Base and Llama-3-Base. We will correct this in the main text. | Summary: This paper proposes a token selection strategy for direct alignment algorithms. They observe a high correlation between the gradient norm and the confidence of a token and use confidence as a metric to automatically select the token, which can possibly prioritize the tokens with high gradient norm. They show empirically that incorporating selection strategy can enhance simpo and DPO algorithms.
Claims And Evidence: Yes. Their claim is supported by clear evidence.
Methods And Evaluation Criteria: One weakness is the lack of sufficient baselines to demonstrate the necessity of incorporating dynamic selection. It would be beneficial to include baselines, such as static selection before training.
Theoretical Claims: This work focuses on empirical analysis.
Experimental Designs Or Analyses: One major weakness is that the evaluation primarily focuses on AlpacaEval Arena-Hard, relying on large language models as judges for human preferences. To better substantiate the effectiveness of the approach, it would be valuable to evaluate its performance on downstream tasks such as reasoning and factual question answering.
Supplementary Material: Yes, all parts. There are typos in Table 7, particularly in the reported results for ConfPO in Arena-Hard.
Relation To Broader Scientific Literature: This paper contributes to the ongoing research on direct alignment algorithms from a feature importance perspective.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: This paper contributes to the ongoing research on direct alignment algorithms from a feature importance perspective.
1. Since the proposed token selection can be non-continuous, how might this discontinuity impact the coherence or the performance of downstream tasks?
2. Could the authors provide a simple baseline that uses a fixed reference policy to select important tokens? This would help demonstrate the necessity of dynamic selection.
3. A key implicit assumption is that low-confidence tokens contain more useful information and lead to more effective updates. However, if the data contains noise, could this selection mechanism amplify it, potentially resulting in degenerated performance?
4. This selection mechanism appears to be applicable in both the base and instruct settings, where the former involves off-policy data and the latter on-policy data to some extent. I am curious whether this selection method is also applicable to general RLHF algorithms, such as PPO, where all samples are highly on-policy. In this scenario, tokens are more likely to have high confidence due to the on-policy nature of the data generation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for offering detailed feedback. We address them below, following the order in which they appear across the review’s sections.
---
### **Methods and Evaluation Criteria (ME)**
**[ME-Q1] Static token selection baseline.**
**[ME-A1]** We address this question under *[QA-Q2]* below, where we provide further details on the static token selection baseline.
---
### **Theoretical Claims (TC)**
**[TC-Q1] Theoretical Motivation**
**[TC-A1]** In our response to reviewer RzSL under question *[SW-Q1]*, we provide a detailed theoretical motivation, including mathematical derivations and sensitivity analysis demonstrating why low-confidence tokens ($p_i(\theta)$) yield higher gradient norms ($||\nabla_\theta \log p_i(\theta)||$). Due to space constraints, we kindly refer the reviewer to that section for complete details.
---
### **Experimental Designs or Anlayses (EA)**
**[EA-Q1] Evaluation on downstream tasks**
**[EA-A1]** We have evaluated our trained models on downstream tasks involving math, truthfulness, reasoning, and coding. We find that ConfPO achieves the highest average performance across all baselines. Due to space constraints, please refer to our response to reviewer M4QG under question *[ME-Q1]* or at the following link: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableB.md
---
### **Supplementary Material (SM)**
**[SM-Q1] Typo in Table 7.**
**[SM-A1]** Thank you for pointing out the typo. We have corrected Table 7, which is available at the following link: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableF.md
---
### **Questions for Authors (QA)**
**[QA-Q1] Concern about non-continuous update on coherency and performance of downstream task.**
**[QA-A1]** Because each token is generated under a conditional distribution that considers all previously generated tokens, selectively updating some tokens while skipping others does not fragment the model’s coherence. Despite directing the preference objective toward challenging (low-confidence) tokens, the model still maintains a valid joint distribution $p(y)$ at inference time, ensuring coherent token-by-token generation. Empirically, we do not observe drops in fluency or logical consistency on bechmarks like AlpacaEval 2, Arena-Hard, and downstream tasks shown in question *[EA-Q1]* above. A small-scale human assessment of responses also further confirms that selectively omitting updates on high-confidence tokens does not undermine text coherency.
**[QA-Q2] Static token selection baseline.**
**[QA-A2]** We conducted an additional baseline experiment, selecting tokens based on confidence values from a fixed reference policy. Our results (see ***Table G***) indicate that approximately 80% of tokens selected dynamically by the evolving policy model overlapped with tokens chosen by the fixed reference policy; however, the remaining 20% differed, reflecting critical tokens that changed dynamically as training progressed. While token selection via the fixed reference policy slightly improved performance over SimPO, it was noticeably inferior compared to our dynamic ConfPO method. This clearly highlights that token importance evolves as the policy model updates, underscoring the necessity of our dynamic selection strategy for achieving optimal performance.
***Table G***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableG.md
**[QA-Q3] Does noisy data result in degenerated performance.**
**[QA-A3]** Thank you for raising this important concern. We tested ConfPO with two types of noise:
1. **Flipped Preferences:** We randomly flipped 20% of the preference labels. Even in this noisy setting, ConfPO outperformed SimPO, confirming our low-confidence token selection does not amplify label noise.
2. **Word-Level Noise (EDA):** We also introduced word-level noise via data augmentation methods—synonym replacement, random insertion, random swap, and random deletion. Although this aggressive augmentation substantially degraded overall performance, ConfPO still outperformed SimPO.
The detailed results are available at the following link:
***Table H***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableH.md
We note that word-level noise is uncommon in recent preference-learning scenarios, which typically use on-policy data collection from advanced LLM-generated text that produces coherent sentences. Thus, randomly flipped preferences represent a more realistic noise scenario.
**[QA-Q4] Applicability to on-policy RLHF, such as PPO.**
**[QA-A4]** Our token-selection mechanism indeed applies effectively to both off-policy and partially on-policy settings, as shown. Even with highly on-policy data, our dynamic threshold maintains stable token selection ratios. Extending ConfPO explicitly to PPO-based RLHF, however, involves reward-based signals rather than implicit log-probabilities used by DAAs, and thus lies beyond our current scope but remains promising future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. They address my main concerns, and I will maintain my current score. | Summary: This paper proposes to only include low-confidence tokens during preference alignment.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There is no theory in this work/
Experimental Designs Or Analyses: Yes.
1. Llama-3 and Mistral seem to be outdated. It would be better using more advanced models like Llama-3.1, Deepseek and Qwen.
2. It will be better to have diverse tasks like math and coding.
Supplementary Material: Yes. Every part.
Relation To Broader Scientific Literature: This is a slight modification of SimPO without much novelty and contribution.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Weaknesses:
1. The relationship between gradient norm and confidence is weak, only supported by limited empirical experiment observation.
2. It would be beneficial to select specific examples and provide visualizations of the chosen tokens. This could reveal whether the selected tokens capture core information or merely auxiliary words.
3. This actually corresponds to all alignment algorithms. So it would be better to see the combination of this selection strategy with different algorithms.
4. The novelty is limited as it just slightly modifies the loss without much theoretical justification. Also, the performance is similar to existing works.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for offering detailed feedback. We address them below, following the order in which they appear across the review’s sections.
---
### **Experimental Designs or Analyses (EA)**
**[CE-Q1] Experiment on newer models.**
**[CE-A1]** We focused on the four models used by SimPO, as these models had thoroughly tuned hyperparameters for nine existing DAAs, ensuring a fair comparison.
However, following the reviewer’s suggestion, we provide additional experimental results using Qwen2.5 7B, shown in ***Table E*** below, where we see ConfPO has higher score than SimPO.
***Table E***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableE.md
**[CE-Q2] Evaluation on diverse tasks**
**[CE-A2]** We have evaluated our trained models on various downstream tasks, including math and coding, and find that ConfPO achieves the highest average performance than baselines. Due to space constraints, please refer to our response to reviewer M4QG under question [ME-Q1] or at the following link: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableB.md
---
### **Strengths and Weaknesses (SW)**
**[SW-Q1] Theoretical Motivation.**
**[SW-A1]** For theoretical clarity, we provide a first-principles analysis showing why low-confidence tokens naturally yield higher gradient norms.
Let $\pi_\theta$ be our policy model, parameterized by $\theta$. Given a prompt $x$ and a response $y =\{y_1,\dots,y_n\}$, let $$p_i(\theta)=\pi_\theta(y_i|x,\,y_{<i})$$be the probability of the $i$-th token. Our goal is to show that the gradient norm of the log-probability of the $i$-th token, $||\nabla_\theta\log p_i(\theta)||$, is **negatively correlated** with the confidence of the policy model for that token, $p_i(\theta)$.
**Chain Rule and Gradient Norm**
Using the chain rule, we have:
$$\nabla_\theta\log p_i(\theta)=\frac{\nabla_\theta p_i(\theta)}{p_i(\theta)}\implies||\nabla_\theta\log p_i(\theta)||=\frac{||\nabla_\theta p_i(\theta)||}{p_i(\theta)}.$$
Thus, if token confidence $p_i(\theta)$ dominates this ratio, the gradient norm of the log probability $\nabla_\theta\log p_i(\theta)$ decreases as confidence increases, establishing the observed negative correlation.
**Sensitivity Analysis**
To formalize the dominance of $p_i(\theta)$ on $\nabla_\theta \log p_i(\theta)$, consider the ratio $$r(\theta)=\frac{||\nabla_\theta p_i(\theta)||}{p_i(\theta)}=\frac{b(\theta)}{c(\theta)},$$where $b(\theta)=||\nabla_\theta p_i(\theta)||$ and $c(\theta)=p_i(\theta)$. A local sensitivity test examines partial derivatives:$$\frac{\partial r}{\partial b}=\frac{1}{c},\quad\frac{\partial r}{\partial c}=-\frac{b}{c^2}.$$To determine which component more strongly influences the ratio, one can sample many tokens $ i $, each yielding pairs $(b_i, c_i)$. Then, examine the *main effects*:
$$S_b(i)=\frac{1}{c_i},\quad S_c(i) =-\frac{b_i}{c_i^2}.$$
Comparing $\mathbb{E}[|S_b|]$ and $ \mathbb{E}[|S_c|] $ offers a practical measure of which component dominates in typical scenarios. If $\mathbb{E}[|S_c|-|S_b|]>0$, it shows that variations in $ p_i(\theta) $ (the token’s confidence) have the stronger effect on $||\nabla_\theta \log p_i(\theta)||$. Using Monte Carlo estimate with 1000 samples, we indeed find that $\mathbb{E}[|S_c|-|S_b|]>0$ (The exact value can be found in https://anonymous.4open.science/r/ConfPO-B7C6/L/TableI.md). Thus, this describes our observed **negative correlation** between the confidence and log-probability gradient norm.
**[SW-Q2] Visualization of selected tokens.**
**[SW-A2]** In ***Figure C*** below, we visualize examples of selected tokens. In general, tokens acting as crucial inflection points within sentences (e.g., tokens initiating new phrases) are selected, while tokens that are merely continuations of preceding tokens are not selected (e.g., in the word "Slack," "Sl" is selected, whereas "ack" is not). This shows that our confidence-based selection identifies important tokens that shape the overall sentence, leading to more efficient KL-budget usage and reduced overoptimization (Figure 5 in the main text).
***Figure C***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureC.md
**[SW-Q3] Extension to other DAAs.**
**[SW-A3]** In our main text, we showed how ConfPO applies to SimPO (Table 1) and DPO (Table 3). Additionally, we successfully extended ConfPO to IPO with enhanced performance (see response to reviewer M4QG, *[RS-Q2]*). Due to the space limit, we kindly direct the reviewer to that section or to the following link: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableD.md
**[SW-Q4] Theoretical Motivation and Performance.**
**[SW-A4]** Please refer to [SW-Q1] for the theoretical motivation. In terms of the performance, our method consistently improves on existing DAAs (SimPO, DPO, IPO) across multiple benchmarks. For instance, on Llama-3-Base(8B) for AlpacaEval2, ConfPO achieves a +7% win rate over SimPO, showing that our modified loss leads to meaningful alignment gains.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ rebuttal, but I remain unconvinced. The proposed "theory" does not make sense to me. The chain rule cannot demonstrate negative correlation, as the numerator also varies. I fail to see any logic in the "sensitivity analysis" and its connection with negative correlation. In fact, I don’t believe a theoretical explanation is necessary for this claim. The idea itself is quite intuitive—for instance, non-critical tokens like auxiliary words often have high confidence but contribute little to the gradient. Based on this, the authors suggest focusing only on low-confidence tokens, as they are more likely to be critical. However, I doubt the robustness of this approach across all algorithms and datasets. Important information may be overlooked, and certain key details appear to be missing.
---
Reply to Comment 1.1.1:
Comment: ### Response to Rebuttal Comment by Reviewer RzSL
We thank the reviewer for the rebuttal comment and would like to clarify the raised concern as follows:
1. **“The chain rule alone cannot demonstrate negative correlation.”**
We do not rely solely on the chain rule to conclude negative correlation. Rather, the chain rule
$$
\nabla_\theta \log p_i(\theta)
= \frac{\nabla_\theta p_i(\theta)}{p_i(\theta)}
$$
simply reveals that $||\nabla_\theta \log p_i(\theta)||$ depends on the inverse of to $p_i(\theta)$. The key point is *how strongly* changes in $p_i(\theta)$ versus changes in $||\nabla_\theta p_i(\theta)||$ affect the ratio. This motivates our **sensitivity analysis**, which goes beyond the chain rule to quantify which term dominates in practice.
---
2. **“Lack of logic in the sensitivity analysis and its connection to correlation.”**
***Local sensitivity analysis*** is a widely adopted statistical method that quantifies how variations in individual input variables affect a target output. Specifically, in our context, it would be seeing how $||\nabla_\theta p_i(\theta)||$ and $p_i(\theta)$ affect the ratio $r(\theta) = \frac{||\nabla_\theta p_i(\theta)||}{p_i(\theta)}$, through their respective partial derivatives.
We decompose $r$ into a numerator $b(\theta) = ||\nabla_\theta p_i(\theta)||$ and denominator $c(\theta) = p_i(\theta)$ and compute the partial derivatives: $\frac{\partial r}{\partial b} = \frac{1}{c}, \quad \frac{\partial r}{\partial c} = -\frac{b}{c^2}$. Empirically, we sample many tokens and compare the “main effects” $\tfrac{1}{c_i}$ vs. $-\tfrac{b_i}{c_i^2}$. If the latter term (tied to $c_i = p_i(\theta)$) consistently dominates, it indicates that larger $p_i(\theta)$ values drive the ratio down more than variations in $||\nabla_\theta p_i(\theta)||$ push it up. This yields a net *negative* relationship between $p_i(\theta)$ and $||\nabla_\theta \log p_i(\theta)||$.
---
3. **“A theoretical explanation seems unnecessary; the idea is intuitive.”**
We agree that the phenomenon is intuitive: As seen in ***Figure C***, highly "confident" tokens often contribute less information, typically serving as straightforward continuations of preceding tokens, whereas low-confidence tokens tend to represent important inflection points in sentences. **However**, we believe that formalizing *why* this occurs helps rigorously validate the intuition and strengthens the theoretical grounding of our method.
***Figure C***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureC.md
---
4. **“Robustness across all algorithms/datasets.”**
In our paper, we explored four different models (Mistral-base 7B, Llama-base 8B, Mistral-instruct 7B, Llama-instruct 8B), along with an additional model (Qwen 2.5 7B) tested during this rebuttal (see question [CE-Q1] or Table E: ***Table E***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableE.md), and three different datasets: UltraFeedback (for Mistral-base 7B, Llama-base 8B, and Qwen2.5 7B), on-policy data for Mistral-instruct 7B, and on-policy data for Llama-instruct 8B.
Furthermore, we demonstrated the robustness of ConfPO by applying it across multiple DAAs—SimPO (Table 1), DPO (Table 3), and IPO (see question [SW-Q3] or Table D: ***Table D***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableD.md).
**We would appreciate it if the reviewer could explicitly indicate which aspects of our response regarding the sensitivity analysis or the experimental results remain unclear, so that we can provide further clarification.** | Summary: This paper introduces ConfPO, a novel method for enhancing preference learning in large language models (LLMs). The core idea behind ConfPO is to selectively update tokens during training based on their confidence levels, specifically focusing on low-confidence tokens which the authors empirically demonstrate are crucial for effective alignment. The method is motivated by three key observations: that token-level gradients follow a long-tailed distribution, there is a high correlation between the gradient norm and a token's confidence, and that low-confidence tokens dominate the learning process during preference optimization. The authors propose a simple yet effective approach where only tokens with a probability lower than the average probability across the sequence are selected for backpropagation. Specifically, the authors extend their method to DPO, demonstrating its applicability across different DAA frameworks.
Empirically, ConfPO demonstrates superior performance compared to existing DAAs, including SimPO and DPO, on benchmarks such as AlpacaEval 2 and Arena-Hard. The results indicate that ConfPO achieves higher win rates and reduces the risk of reward hacking. The authors conduct experiments using Mistral 7B and Llama-3-8B models, in both base and instruct configurations, and evaluate their method on the AlpacaEval 2 and Arena-Hard benchmarks. They show that ConfPO improves alignment performance and mitigates overoptimization, a common issue in preference learning. The paper also includes an ablation study in the appendix, which explores different thresholding strategies for selecting low-confidence tokens, finding that the average probability threshold performs best.
Claims And Evidence: The paper claims that ConfPO has the same computational cost as standard preference learning, but it lacks quantitative data to support this claim. While the authors mention that their method is robust to hyperparameter choices, this claim should be supported by more extensive experimentation.
Methods And Evaluation Criteria: The authors should also consider using more fine-grained evaluation metrics that can capture the nuances of model behavior on these tasks. For example, they could use metrics that measure the quality of reasoning or the correctness of generated code. This would provide a more detailed assessment of ConfPO's performance and help identify areas for further improvement.
Theoretical Claims: Yes. I checked the objective function listed in Table 5.
Experimental Designs Or Analyses: The authors propose ConfPO which only selects tokens with a probability lower than the average probability. Experiments show that ConfPO improves alignment performance and mitigates overoptimization. Howeve, this paper lacks the essential comparison with existing work using similar ideas, such as TDPO [1] and SparsePO [2].
[1] Token-level Direct Preference Optimization
[2] SPARSEPO: CONTROLLING PREFERENCE VIA SPARSE TOKEN MASKS
Supplementary Material: Yes. I reviewed Appendix C and D.
Relation To Broader Scientific Literature: The paper primarily focuses on the SimPO algorithm, and while it does extend the method to DPO, a more comprehensive comparison with other DAAs would strengthen the paper's claims. A deeper investigation beyond token selection into the theoretical underpinnings of why ConfPO works across various DAAs, beyond empirical observation, is needed.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: The paper uses the average probability as a threshold, but it is unclear whether this is the most effective approach in all cases. A more detailed analysis of different thresholding strategies and their impact on performance is needed. Third, how sensitive is ConfPO to variations in other hyperparameters, such as the learning rate and batch size, and how do these hyperparameters interact with each other?
What is the computational cost of ConfPO compared to other alignment techniques, such as DPO and SimPO, in terms of training time, memory usage, and energy consumption?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for offering detailed feedback. We address them below, following the order in which they appear across the review’s sections.
---
### **Claims and Evidence (CE)**
**[CE-Q1] Computational cost comparison.**
**[CE-A1]** **Table A1** compares runtime and GPU memory usage of ConfPO with baseline DAAs (DPO, SimPO). As ConfPO requires no extra forward/backward passes or auxiliary models, its computational cost is nearly identical to the baselines.
**Table A1.** Runtime and peak GPU memory usage for each method under identical settings.
||Time/Iter(s)|Mem(GB)|
|-|-|-|
|DPO|121.8|53GB|
|ConfPO$_{DPO}$|125.8|51GB|
|SimPO|150.2|50GB|
|ConfPO|140.0|49GB|
**[CE-Q2] Robustness to hyperparameter choices.**
**[CE-A2]** We want to clarify that we do not claim our method is “robust” to hyperparameters. Rather, in Figure 5 (where we vary the KL-divergence of resulting models by adjusting the hyperparameters), it indicates our method remains more stable when increasing the KL budget which is a key concern in reward-hacking (overoptimization) scenarios. We will revise the manuscript to avoid any confusion about hyperparameter robustness and emphasize that these experiments were specifically designed to demonstrate reduced overoptimization.
However, with regards to this, we have attached ***Figure A*** below, comparing ConfPO and SimPO across various hyperparameter settings.
***Figure A***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureA.md
---
### **Methods and Evaluation Criteria (ME)**
**[ME-Q1] Evaluation on downstream tasks**
**[ME-A1]** We have evaluated our trained models on downstream tasks involving math, QA, reasoning, and coding. We find that ConfPO achieves the highest average performance across all baselines. Due to space constraints, we summarize key results below (***Table A2***) and provide the full table (***Table B***) in the following link:
***Table B***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableB.md
**Table A2.** Downstream task results on Llama-3-Base (8B).
||GSM 8K (Math)|TruthfulQA|ARC (Reasoning)|Humaneval (Coding)|
|-|-|-|-|-|
|**Llama-3-Base (8B)**|||||
|SFT|51.1|45.3|55.6|6.71|
|DPO|51.2|53.5|56.1|12.5|
|SimPO|50.8|56.2|56.7|12.8|
|ConfPO|**52.9**|**59.2**|**58.4**|**17.0**|
---
---
### **Experimental Designs or Analyses (EA)**
**[EA-Q1] Comparison with existing token-level DAAs**
**[EA-A1]** We include comparisons with TDPO and SparsePO in ***Table C*** below. While TDPO outperforms SimPO by leveraging token-level signals, our ConfPO still achieves higher alignment performance. Notably, TDPO requires a reference model to obtain token-level signals, incurring higher memory and compute overhead compared to ConfPO, which can identify critical tokens without a reference model. Meanwhile, SparsePO demonstrates performance comparable to SimPO.
***Table C***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableC.md
---
### **Relation to Broader Scientific Literature (RS)**
**[RS-Q1] Theoretical Motivation**
**[RS-A1]** In our response to reviewer RzSL under question *[SW-Q1]*, we provide a detailed theoretical motivation,including mathematical derivations and sensitivity analysis demonstrating why low-confidence tokens ($p_i(\theta)$) yield higher gradient norms ($||\nabla_\theta \log p_i(\theta)||$). Due to space constraints, we kindly refer the reviewer to that section for complete details.
Furthermore, we emphasize that various DAAs utilize the joint log probability as an implicit reward, causing their policy to fundamentally depend on $\nabla_\theta \log p_i(\theta)$. Consequently, our theoretical insights naturally generalize across various DAAs.
**[RS-Q2] Extension to more DAAs**
**[RS-A2]** In our main text, we have shown ConfPO to work on both the DPO and SimPO. We have now also applied ConfPO to IPO in ***Table D*** below, supporting the general applicability of ConfPO on various DAAs.
***Table D***: https://anonymous.4open.science/r/ConfPO-B7C6/L/TableD.md
---
### **Questions for Authors (QA)**
**[QA-Q1] Analysis of different thresholding strategies.**
**[QA-A1]** In Appendix B (Table 4), we analyze various thresholding strategies (fixed, geometric, arithmetic averages) and find that the arithmetic average, as used in our main experiments, provides the best performance.
**[QA-Q2] How sensitive is ConfPO to variations in other hyperparameters such as learning rate?**
**[QA-A2]** We have included ***Figure B*** below comparing ConfPO with SimPO across different learning rates (lr). Due to SimPO's design (no explicit KL regularization), it tends to become unstable at higher learning rates. In contrast, ConfPO, by selectively updating only critical tokens, demonstrates increased stability, even at higher lr.
***Figure B***: https://anonymous.4open.science/r/ConfPO-B7C6/L/FigureB.md
**[QA-Q3] Computational cost comparison.**
**[QA-A3]** We have provided a detailed comparison under **[CE-Q1]** above. | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.