Title: K$^2$IE: Kernel Method-based Kernel Intensity Estimators for Inhomogeneous Poisson Processes
Decision: Accept (poster)

Review 1

Summary: The paper proposes K²IE, a kernel method-based intensity estimator for inhomogeneous Poisson processes, combining the computational efficiency of classical kernel intensity estimators (KIEs) with edge-correction capabilities from reproducing kernel Hilbert spaces (RKHS). By reformulating the problem using a penalized least squares loss in an RKHS, the authors derive an estimator that matches the form of classical KIEs but uses equivalent RKHS kernels h(⋅,⋅) to implicitly handle edge effects. Theoretical analysis shows the solution satisfies a specialized representer theorem with unit dual coefficients. Experiments on synthetic 1D/2D datasets demonstrate comparable accuracy to Flaxman’s kernel method-based estimator (FIE) but with significantly improved computational efficiency.
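For concreteness, the classical KIE form referenced in this summary (a smoothing kernel placed on each event, with unit rather than fitted coefficients) can be sketched as follows. This is a minimal 1D illustration with a Gaussian kernel and no edge correction; all names are illustrative, not from the paper:

```python
import numpy as np

def kie(x, events, beta):
    """Classical 1D kernel intensity estimator:
    lambda_hat(x) = sum_n k_beta(x - x_n), with unit (not fitted) coefficients.
    Gaussian smoothing kernel with scale parameter 1/beta."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    events = np.asarray(events, dtype=float).reshape(1, -1)
    k = beta / np.sqrt(2.0 * np.pi) * np.exp(-0.5 * (beta * (x - events)) ** 2)
    return k.sum(axis=1)

events = np.array([0.2, 0.5, 0.9])
grid = np.linspace(-5.0, 6.0, 4001)
# With no edge correction, the estimated intensity integrates to N over the line.
mass = np.trapz(kie(grid, events, beta=4.0), grid)
```

Over the whole real line the estimate integrates to roughly $N$, but on a bounded observation window mass leaks past the boundary; that leakage is the edge effect the equivalent kernel $h(\cdot,\cdot)$ is meant to absorb.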
Claims And Evidence: Supported Claims:
1. K$^2$IE achieves computational efficiency comparable to classical KIEs (evidenced by CPU time in Tables 1-2).
2. Edge correction via equivalent RKHS kernels works effectively in multi-dimensional settings (supported by 2D results in Table 2).

Problematic Claims:
1. The assertion that K²IE "combines the computational efficiency of KIEs with the effectiveness of Flaxman’s estimator" is partially unsubstantiated. While K²IE is faster than FIE, its edge-correction superiority over KIE is only qualitatively shown (Figure 1), lacking quantitative comparison.
2. The claim that K²IE "does not require model fitting" is misleading, as hyperparameter tuning (e.g., γ, β) is still necessary.
Methods And Evaluation Criteria: Strengths:
1. The degenerate approximation using random Fourier features (Eq. 17) is a practical approach for solving the Fredholm equation.
2. Evaluation metrics ($L^2$, $|L|$) align with standard practices for intensity estimation.

Weaknesses:
1. Experiments are limited to synthetic data with high SNR. Real-world datasets are absent, raising concerns about generalizability.
2. The cross-validation setup uses p-thinning, which may introduce bias if events are temporally/spatially correlated.
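The p-thinning scheme mentioned in weakness 2 amounts to an independent Bernoulli split of the observed pattern; a generic sketch (our own helper names) of why it is valid for Poisson data and fragile under correlation:

```python
import numpy as np

def p_thinning_split(points, p, rng):
    """Independent Bernoulli(p) thinning of a point pattern.
    For a Poisson process the retained and removed parts are independent
    Poisson processes with intensities p*lambda and (1-p)*lambda; under
    temporal/spatial correlation that independence (and hence the
    cross-validation logic) breaks down."""
    keep = rng.random(len(points)) < p
    return points[keep], points[~keep]

rng = np.random.default_rng(0)
pts = rng.random((10000, 2))  # toy pattern on the unit square
train, test_pts = p_thinning_split(pts, p=0.7, rng=rng)
```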
Theoretical Claims: Theorem 1’s proof relies on path integral representations and operator inversions (Eq. 13–17). While the derivation is logically consistent, the critical step of connecting the least squares loss to the Gaussian process representation lacks rigor. Specifically: 1.The path integral representation of the RKHS norm (Kim, 2021) is cited but not explicitly justified in the context of Poisson processes. 2.The equivalence between q(⋅,⋅) and h(⋅,⋅) (Eq. 8) is asserted without proving uniqueness or convergence under finite-dimensional approximations.
Experimental Designs Or Analyses:
1. The comparison with KIE uses edge-corrected kernels for KIE but does not clarify whether KIE’s edge correction was optimized similarly to K²IE’s hyperparameters.
2. The regularization parameter γ’s impact on edge correction is not analyzed. For example, does γ→0 degrade h(⋅,⋅) to an uncorrected kernel?
3. Negative intensity values (Section 3.1) are dismissed without quantitative analysis (e.g., frequency/severity of negative estimates in experiments).
Supplementary Material: The appendix provides derivations for the degenerate kernel approximation (Eq. 17–19) and additional experimental details. However, key theoretical proofs (e.g., Theorem 1’s operator inversion steps) are omitted, limiting reproducibility.
Relation To Broader Scientific Literature: The work bridges classical kernel smoothing (Diggle, 1985) and modern RKHS-based methods (Flaxman et al., 2017). By connecting the least squares loss to the representer theorem, it extends the theoretical framework of Walder & Bishop (2017) for Cox processes. However, it does not engage with recent advances in neural point processes or scalable Bayesian methods.
Essential References Not Discussed: Scalable Bayesian Methods: Lloyd et al., Variational Inference for Gaussian Process Modulated Poisson Processes (ICML 2015) – omitted despite being a key prior work.
Other Strengths And Weaknesses:
1. The connection between least squares loss and unit dual coefficients is novel, though the core idea (equivalent kernels for edge correction) builds directly on Flaxman et al. (2017).
2. Provides a computationally efficient alternative to FIE but does not surpass classical KIE in low dimensions.
3. The writing is dense, with inconsistent notation (e.g., λ used for both true and estimated intensity functions).
Other Comments Or Suggestions: The authors should carefully check all details to ensure the manuscript contains no typos.
Questions For Authors: 1. The equivalence between \( q(\cdot,\cdot) \) and \( h(\cdot,\cdot) \) in Equation 8 is asserted but lacks a proof of uniqueness or convergence under finite-dimensional approximations. Can this be rigorously shown without invoking path integral heuristics (Kim, 2021)?
2. Section 3.1 acknowledges potential negative estimates but dismisses them without quantitative analysis (e.g., frequency/severity in experiments). How prevalent are negative values in practice?
3. The edge-correction mechanism via solving Equation 8 directly extends prior work. What fundamentally new theoretical insight does K²IE offer?
4. The claim "no model fitting" is misleading since \(\gamma\) and \(\beta\) require cross-validation. How does hyperparameter tuning affect computational efficiency claims?
5. Equation 16 uses random Fourier features (RFF) to approximate \( k(\cdot,\cdot) \). How does RFF rank \( M \) trade off edge-correction accuracy?
6. Does \(\gamma \to 0\) reduce \( h(\cdot,\cdot) \) to an uncorrected kernel? How does \(\gamma\) balance edge effects and overfitting?
7. Experiments are limited to synthetic 1D/2D data. How does K²IE perform on real-world data with irregular domains?
8. Figure 1 qualitatively compares kernels but lacks quantitative metrics (e.g., \( L^2 \) error near boundaries). Why?
9. \( \lambda(\cdot) \) denotes both true and estimated intensity functions (Section 3.1). Standardize notation (e.g., \( \lambda^*(\cdot) \) vs. \( \hat{\lambda}(\cdot) \)).
10. Scalable Bayesian methods (e.g., Lloyd et al., ICML 2015) are ignored. How does K$^2$IE compare to modern Bayesian nonparametrics?
I am willing to raise my scores if the authors provide good answers to the above questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the reviewer for giving valuable comments. Below, we provide a detailed response to each question. We will include all discussions in the revised manuscript.
**The equivalence between ( q(\cdot,\cdot) ) and ( h(\cdot,\cdot) ) in Equation 8 is asserted but lacks a proof ... Can this be rigorously shown without invoking path integral ...?**
Yes, the equivalence between $q(\cdot,\cdot)$ and $h(\cdot,\cdot)$ can alternatively be established via Mercer's theorem, which aligns with the "rigorous" approach proposed by (Flaxman, 2017). Due to space limitations, we provide only a brief overview of the derivation here. Let $\lambda(x)$ be a function in the RKHS associated with the kernel $k(x,x')$, and let $||\lambda||^2_{H_k}$ denote its squared RKHS norm. Then, following the notation in Eq. (7) in Section 2.2, we can express $\lambda(x) = \sum_{m=1}^{\infty} b_m \eta_m$ and $||\lambda||^2_{H_k} = \sum_m b_m^2 /\eta_m$, where $b_m$ are coefficients. Substituting this into the penalized least squares loss in Eq. (11), we obtain: $-2 \sum_{n} \lambda(x_n) + \sum_m \frac{\eta_m+1/\gamma}{\eta_m} b_m^2$. We can see that the 2nd term corresponds to the squared RKHS norm under a rescaled kernel defined by $q(x,x') = \sum_m \frac{\eta_m}{\eta_m + 1/\gamma}e_m(x) e_m(x')$. It is evident that $q(\cdot,\cdot)$ is consistent with $h(\cdot,\cdot)$ in Eq. (8). A full derivation of Theorem 1 will be included in the text.
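In finite dimensions, the eigenvalue rescaling $\eta_m \mapsto \eta_m/(\eta_m + 1/\gamma)$ sketched in this derivation corresponds to the matrix identity $Q = K(K + \gamma^{-1}I)^{-1}$. The following toy check (our own Gram matrix, not the paper's kernels) verifies that the spectral and closed-form routes agree:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((6, 1))
# Toy Gaussian Gram matrix standing in for the Mercer operator of k(x, x')
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1)
gamma = 5.0

# Spectral route: rescale each eigenvalue eta_m -> eta_m / (eta_m + 1/gamma)
eta, E = np.linalg.eigh(K)
Q_spectral = (E * (eta / (eta + 1.0 / gamma))) @ E.T

# Closed-form route: Q = K (K + I/gamma)^{-1}
Q_direct = K @ np.linalg.inv(K + np.eye(6) / gamma)
```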
**How prevalent are negative values in practice?**
Following the reviewer’s question, we conducted an analysis of how frequently K$^2$IE produces negative values using the 2D synthetic dataset $\lambda_{\text{2D}}^{1.0}$. Specifically, we evaluated the estimated intensity values at 500 x 500 grid points within the observation domain and computed the ratio of negative values. The mean ± standard deviation of this ratio across 100 trials was $0.059 \pm 0.016$. This result indicates that K$^2$IE can indeed produce negative estimates in practice—particularly in regions with sparse data—highlighting the necessity of post-hoc clipping like $\max(\hat{\lambda}(x), 0)$ in applications where negative intensity values are not permitted. It is also worth noting that when Laplace RKHS kernels are used, the equivalent kernel $h(x, x')$ is functionally non-negative in one-dimensional input settings (see Section 3.2 in (Kim, NeurIPS2024)).
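The grid-based ratio reported here can be computed along these lines. The estimator below is a toy 1D stand-in whose smoothing kernel dips below zero, mimicking how an equivalent kernel can yield negative estimates; all names are illustrative:

```python
import numpy as np

events = np.array([0.4, 0.5, 0.6])

def lam_hat(x):
    """Toy estimator whose smoothing kernel (a damped cosine) goes negative,
    standing in for an equivalent kernel h(x, x') without non-negativity."""
    d = x[:, None] - events[None, :]
    return np.sum(np.cos(8.0 * np.pi * d) * np.exp(-0.5 * (d / 0.1) ** 2), axis=1)

grid = np.linspace(0.0, 1.0, 501)
vals = lam_hat(grid)
neg_ratio = np.mean(vals < 0)          # fraction of grid points below zero
clipped = np.maximum(vals, 0.0)        # post-hoc clipping max(lam_hat, 0)
```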
**What fundamentally new theoretical insight does K$^2$IE offer?**
1. To the best of our knowledge, our paper is the first to prove that minimizing the least squares loss with a squared RKHS norm regularizer yields kernel intensity estimators (KIEs).
2. K$^2$IE demonstrates that the equivalent kernel used in Flaxman (2017) can serve as an edge-corrected smoothing kernel for KIE. This insight enhances computational efficiency, as it eliminates the need to solve the dual optimization problem (9) in Flaxman's model.
**The claim "no model fitting" is misleading ... How does hyperparameter tuning affect computational efficiency claims?**
We will rephrase "no model fitting" as "no optimization of dual coefficients". Regarding hyperparameter tuning, K$^2$IE offers a significant advantage over the reference models. Specifically, KIE and FIE require MC integration and solving a dual optimization problem for each cross-validation, respectively, whereas K$^2$IE requires neither, which is beneficial especially in multi-dimensional settings.
**How does RFF rank ( M ) trade off edge-correction accuracy?**
Please refer to our 2nd response to Reviewer XSJm regarding the ablation study on $M$. We want to emphasize that the edge correction (i.e., the integral over the domain in Eq. (8)) is performed exactly under the RFF approach (see the 2nd paragraph of Sec. 3.2.2). Rather, $M$ governs the approximation accuracy of shift-invariant kernels.
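As a generic illustration of how $M$ governs the shift-invariant kernel approximation (standard random Fourier features for a Gaussian kernel; this is not the paper's exact construction):

```python
import numpy as np

def rff_features(X, M, lengthscale, rng):
    """2M-dimensional random Fourier features whose inner products
    approximate the Gaussian kernel exp(-||x - x'||^2 / (2 l^2))."""
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], M))
    Z = X @ W
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(M)

rng = np.random.default_rng(0)
X = rng.random((50, 2))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_true = np.exp(-0.5 * sq_dists / 0.3 ** 2)

Phi = rff_features(X, M=2000, lengthscale=0.3, rng=rng)
K_rff = Phi @ Phi.T
err = np.max(np.abs(K_rff - K_true))   # shrinks roughly like 1/sqrt(M)
```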
**How does (\gamma) balance edge effects and overfitting?**
We appreciate the important question. In classical KIEs, taking $\beta \to \infty$ ($\beta^{-1}$ is the scale parameter) leads to $\hat{\lambda}(x) = \sum_n \delta(x-x_n)$. K$^2$IE exhibits the same behavior regardless of $\gamma$, but taking $\gamma \to \infty$ also yields the same solution regardless of $\beta$ in K$^2$IE. Although $\gamma$ as well as $\beta$ controls the degree of overfitting, it seems that $\gamma$ acts globally and $\beta$ locally. A thorough investigation into the distinct roles of $\gamma$ and $\beta$ remains an important direction for future work. We hope this explanation is satisfactory to the reviewer.
**Experiments are limited to synthetic 1D/2D data.**
Please see our 1st response to Reviewer XSJm.
**Standardize notation**
We will standardize the notation according to the suggestion.
**How does K$^2$IE compare to modern Bayesian nonparametrics?**
Please see our 3rd response to Reviewer upvZ.
**does not clarify whether KIE’s edge correction was optimized**
As in Sec. 4, KIE's hyperparameters were optimized similarly to K$^2$IE's, but based on test likelihood.

Review 2

Summary:
The authors consider modelling the intensity function of a Poisson point process as belonging to an RKHS, and then fitting this based on a regularised squared error objective. This yields a method which is similar to other kernel intensity estimators (which were not motivated via RKHS), in that no "model fitting" or "parameters" need to be found once the data and hyperparameters are fixed. The technique achieves comparable predictive performance while being more computationally efficient than existing methods.
## update after rebuttal
I'm happy with the authors response --- the comments about the clipping and squared loss were particularly helpful. It looks like all reviewers are leaning accept or accept. I will maintain my current score of accept.
Claims And Evidence: The claims are clear and supported by evidence.
Methods And Evaluation Criteria: Yes, the evaluation criteria make sense and are standard for this area. The benchmark with the rectangular domains is particularly nice.
Theoretical Claims: - This paper bridges an important theoretical and conceptual gap between two "kernel" approaches to intensity estimation --- RKHS (e.g. Flaxman et al.) and kernel intensity estimators (like kernel density estimators). In particular, with a regularised least squared loss, RKHS methods actually give a variant of kernel intensity estimation methods! This is actually quite an inspiring result - I wonder what other loss functions with RKHS models yield?
Experimental Designs Or Analyses: Yes, they are sound.
Supplementary Material: I did not check supplementary material.
Relation To Broader Scientific Literature: This is well-placed within broader scientific literature -> stats/ml -> point processes and machine learning. I list some possible related works for the authors' consideration, however these are definitely not essential to cite.
- In Bayesian context, the intensity is modelled as a squared Gaussian process and Random Fourier features are used in "Sparse Spectral Bayesian Permanental Process with Generalized Kernel", as well as a Laplace approximation to the posterior.
- Yet another type of "kernel method" for estimating intensity functions of inhomogeneous Poisson point processes is using "Exact, Fast and Expressive Poisson Point Processes via Squared Neural Families". These are squared neural networks with closed-form integrated intensity functions, trained using maximum likelihood.
- Is your squared loss related to equation (12) of "PSD Representations for Effective Probability Models"? They are looking at density estimation rather than intensity estimation, but apart from that it looks a little bit similar.
Essential References Not Discussed: All essential works are cited, to the best of my knowledge.
Other Strengths And Weaknesses: **Strengths:**
- This paper bridges an important theoretical and conceptual gap between two "kernel" approaches to intensity estimation --- RKHS (e.g. Flaxman et al.) and kernel intensity estimators (like kernel density estimators). In particular, with a regularised least squared loss, RKHS methods actually give a variant of kernel intensity estimation methods! This is actually quite an inspiring result - I wonder what other loss functions with RKHS models yield?
- Text, equations and figures are easy to follow, and seem to be without any major errors.
- I particularly liked the composite domains built from multiple rectangles in Figure 3.
**Weaknesses:**
- As pointed out by authors in section 3.1, the method leads to intensity functions which can be negative, due to the fact that they do not utilise a nonnegative "link function". This can lead to undesirable effects from a modelling perspective (e.g. predict a negative number of events). The authors use a post-hoc clipping of the intensity function below zero, after the fitting procedure, however this then clearly breaks the optimality of the solution.
- The equivalent kernel has to be approximated (e.g. Fourier features, MC), leading to approximate estimation procedure.
Other Comments Or Suggestions: See above.
Questions For Authors: - Question: Could you add some more discussion for equation (10), beyond providing the citations? It looks like the squared L2 distance between the intensity function and a constant 1 function, which is then approximated by replacing the integrated intensity by a sum of intensities evaluated at the data. Is that the "correct" interpretation? Why is this a helpful loss function?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the highly positive comments. Below, we provide a detailed response to each comment.
**I list some possible related works for the authors' consideration, however these are definitely not essential to cite.....**
We appreciate the suggestion of including these important references. "Sparse Spectral Bayesian Permanental Process with Generalized Kernel" and "Exact, Fast and Expressive Poisson Point Processes via Squared Neural Families" both propose intensity estimation methods that employ the quadratic link function, $\sigma(\cdot) = (\cdot)^2$, to ensure non-negativity of the estimators. We will appropriately cite these works in the Other Related Works section.
**Is your squared loss related to equation (12) of "PSD Representations for Effective Probability Models"?**
This reference proposes a quadratic kernel model of the form $f_{\text{PSD}}(x) = \sum_{n,n'=1}^N A_{nn'}k(x_n,x)k(x_{n'},x)$, which leverages the non-negativity property of the positive semi-definite matrix $A$. The model is applicable to both density estimation and intensity estimation tasks. While the reference also considers learning the model within the framework of penalized least squares loss minimization, it focuses on fitting the parameter $A$ using iterative optimization methods and does not address whether the functional form of $f_{\text{PSD}}(x)$ is optimal for that loss--indeed, it likely is not. In fact, (Marteau-Ferey et al., NeurIPS2020) showed that $f_{\text{PSD}}(x)$ minimizes a certain functional loss, $L(f(x_1),\dots, f(x_N))$, with appropriate regularization. However, this functional loss does not include the least squares loss, which involves the integral of the latent function. Although identifying the functionally optimal estimator under the least squares loss with similar regularization is an intriguing and important question, the work "PSD Representations for Effective Probability Models" does not appear essential in this specific context.
**Weaknesses: As pointed out by authors in section 3.1, the method leads to intensity functions which can be negative, .... The authors use a post-hoc clipping of the intensity function below zero, after the fitting procedure, however this then clearly breaks the optimality of the solution.**
In fact, applying post-hoc clipping via $\text{max}(\hat{\lambda}(x),0)$ consistently improves the accuracy of the estimator, since for any input $x$, the inequality $|\text{max}(\hat{\lambda}(x),0) - \lambda(x)| \leq |\hat{\lambda}(x) - \lambda(x)|$ holds due to the non-negativity of the true intensity function $\lambda(x)$. The reason why the clipping, despite the fact that it breaks the optimality of the solution, improves the accuracy of K$^2$IE is clear: the non-negativity condition of intensity function is not taken into consideration in the problem of minimization of the penalized least squares loss (11). For more details, see our 3rd response to Reviewer XSJm.
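The pointwise inequality invoked here, $|\max(\hat{\lambda}(x),0) - \lambda(x)| \leq |\hat{\lambda}(x) - \lambda(x)|$ for non-negative $\lambda(x)$, is easy to confirm numerically on arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(42)
lam_hat = rng.normal(size=100_000)            # arbitrary estimates, may be negative
lam_true = np.abs(rng.normal(size=100_000))   # any non-negative true intensity

err_raw = np.abs(lam_hat - lam_true)
err_clip = np.abs(np.maximum(lam_hat, 0.0) - lam_true)
# Clipping never hurts pointwise, and strictly helps wherever lam_hat < 0 < lam_true.
```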
**Question: Could you add some more discussion for equation (10), beyond providing the citations?**
In response to the reviewer's suggestion, we will include the following explanation of the least squares loss (10), which we believe is satisfactory. Let $\mathbb{E}$ be the expectation regarding data points generated from the true intensity function $\lambda(x)$. Then we consider the expectation of the integrated squared loss between the estimator, $\hat{\lambda}(x)$, and the true intensity $\lambda(x)$, defined by
$$\mathbb{E} \Bigl[ \int_{\mathcal{X}} \bigl|\hat{\lambda}(x) - \lambda(x) \bigr|^2 dx \Bigr] = \mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}^2(x) dx \Bigr] - 2\mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}(x) \lambda(x) dx \Bigr] + \mathbb{E} \Bigl[ \int_{\mathcal{X}} \lambda^2(x) dx \Bigr].$$
The third term can be omitted, as it is independent of the estimator. The second term can be decomposed into two parts as
$$2\mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}(x) \lambda(x) dx \Bigr] = 2\mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}(x) \sum_{n=1}^N \delta(x-x_n) dx \Bigr] + 2\mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}(x) \bigl( \lambda(x) - \sum_{n=1}^N \delta(x-x_n) \bigr) dx \Bigr],$$
where the second term vanishes due to Campbell’s theorem:
\begin{equation}
2\int_{\mathcal{X}} \mathbb{E} [\hat{\lambda}(x)] \lambda(x) dx - 2\sum_{n=1}^N \mathbb{E} [\hat{\lambda}(x_n)] = 2\int_{\mathcal{X}} \mathbb{E} [\hat{\lambda}(x)] \lambda(x) dx - 2\int_{\mathcal{X}} \mathbb{E} [\hat{\lambda}(x)] \lambda(x) dx = 0.
\end{equation}
Putting everything together, we obtain the following (correct) interpretation of Eq. (10):
$$\mathbb{E} \Bigl[ \int_{\mathcal{X}} \bigl|\hat{\lambda}(x) - \lambda(x) \bigr|^2 dx \Bigr] = \mathbb{E} \Bigl[ \int_{\mathcal{X}} \hat{\lambda}^2(x) dx - 2\sum_{n=1}^N \hat{\lambda}(x_n) \Bigr] + C,$$ where $C$ is the constant term.

Review 3

Summary: This paper introduces K2IE, a kernel method-based kernel intensity estimator for inhomogeneous Poisson processes, which formulates the intensity estimation as a penalized least squares loss minimization in RKHS. A key theoretical contribution is the establishment of a specialized representer theorem leading to a computationally efficient estimator with unit dual coefficients, drawing a formal connection between classical kernel intensity estimators (KIEs) and RKHS-based estimators. The method is validated on 1D and 2D synthetic datasets, demonstrating comparable predictive performance to prior methods while offering improved computational efficiency.
Claims And Evidence: The paper claims that:
1. K2IE is theoretically consistent with classical KIEs under least squares loss.
2. It provides comparable predictive performance to state-of-the-art methods while being computationally more efficient.
3. The proposed estimator handles edge effects effectively via RKHS-derived equivalent kernels.
These claims are largely supported by the theoretical derivations and empirical experiments. However, the experiments are limited to synthetic data, and evidence on real-world applicability or robustness to noise and irregular event distributions is missing.
Methods And Evaluation Criteria: The least squares loss within RKHS is a reasonable and novel formulation for this problem, especially given its computational advantages over log-likelihood loss. The comparison with KIE and Flaxman’s estimator (FIE) is appropriate, and the use of metrics like integrated squared and absolute error (L2, |L|), along with CPU time, provides a fair evaluation.
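For reference, the integrated squared and absolute errors ($L^2$, $|L|$) used in this evaluation can be approximated on a 1D grid as follows (an illustrative helper using trapezoidal quadrature, not the authors' code):

```python
import numpy as np

def intensity_errors(lam_hat_vals, lam_true_vals, grid):
    """Integrated squared error (L2) and integrated absolute error (|L|)
    between estimated and true intensities, via trapezoidal quadrature."""
    diff = lam_hat_vals - lam_true_vals
    return np.trapz(diff ** 2, grid), np.trapz(np.abs(diff), grid)

grid = np.linspace(0.0, 1.0, 1001)
lam_true = 5.0 + 3.0 * np.sin(2.0 * np.pi * grid)
lam_hat = lam_true + 1.0               # estimator with a constant bias of 1
l2, l1 = intensity_errors(lam_hat, lam_true, grid)
```

A constant bias of 1 over the unit interval gives both errors equal to 1, which makes the helper easy to sanity-check.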
Theoretical Claims: The theoretical results are sound and well-supported through rigorous derivation. The connection to Fredholm integral equations and path integral representations is novel and mathematically grounded.
Experimental Designs Or Analyses: The experiments are designed well for demonstrating performance on a range of synthetic intensities, with appropriate variations in data sparsity and observation domains. Use of both low-dimensional and moderate-dimensional settings is appreciated. Still, the omission of real-world datasets or more complex 3D/temporal domains limits broader validation.
Also, while hyperparameters are tuned via cross-validation, there is no ablation to show sensitivity to the number of random features (2M), which could impact approximation quality.
Supplementary Material: Yes; code is provided.
Relation To Broader Scientific Literature: This work is well-positioned in the literature on nonparametric Poisson intensity estimation, kernel methods, and RKHS theory. It builds directly on foundational work by Flaxman et al. (2017) and distinguishes itself by moving from maximum likelihood to least squares loss, bridging classical and modern techniques.
Essential References Not Discussed: The paper overlooks some recent work in Bayesian nonparametric methods for Poisson processes beyond what is cited, including approximate inference in deep Gaussian processes or deep kernel learning which could serve as competitive baselines.
Other Strengths And Weaknesses: Strengths:
1. Theoretical originality in bridging classical KIE and RKHS estimators.
2. Analytical tractability due to representer theorem and Fourier-based approximation.
3. High computational efficiency and clear reproducibility through open-source code.

Weaknesses:
1. Limited to synthetic data.
Other Comments Or Suggestions: na
Questions For Authors: 1. Why are real-world datasets not included in the evaluation? This would be critical to support claims of practical relevance.
2. How sensitive is the model to the number of random Fourier features (M)? An ablation study would help demonstrate robustness.
3. Can the non-negativity of the estimator be more rigorously enforced, e.g., through a post-processing projection or transformation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the highly positive and constructive comments, by which we are strongly encouraged. Below, we provide a detailed response to each of the comments.
**Why are real-world datasets not included in the evaluation? This would be critical to support claims of practical relevance.**
We focused our evaluations on synthetic datasets, which allow for precise error estimation between the true and estimated intensity functions—an appropriate setting to verify the theoretical soundness of our model. However, from the perspective of practical relevance, we fully agree with the reviewer on the importance of validating our approach on real-world datasets. In response to the reviewer’s suggestion, we have conducted an additional experiment using an open 2D real-world dataset, *bei*, in the R package *spatstat* (GPL-3). It consists of the locations of 3605 trees of the species *Beilschmiedia pendula* in a tropical rain forest (Hubbell & Foster, 1983). Following (Cronie et al., 2024), we randomly labeled the data points with independent and identically distributed marks {1, 2, 3} from a multinomial distribution with parameters $(p_1 = p_2 = 0.3, p_3 = 0.4)$, and assigned the points with labels 1 and 2 to training data and test data, respectively; we repeated this ten times for evaluation. We evaluated the predictive performance of the estimators $\hat{\lambda}(x)$ based on the test least squares loss ($L_{s}$) and the negative test likelihood of counts ($L_{c}$): $L_{s}$ was computed as $L_{s} = \int_{X} \hat{\lambda}^2(x) dx - 2\sum_{n \in D_{\text{test}}} \hat{\lambda}(x_n)$, where $D_{\text{test}}$ was the test data; the observation domain $X$ was discretized into a 10 x 10 grid of sub-domains {$X_1, \dots, X_{100}$}, and $L_{c}$ was computed as $L_{c} = \sum_{i=1}^{100} \bigl( \Lambda_{X_i} - N_{X_i} \log \Lambda_{X_i} + \log(N_{X_i}!) \bigr)$, where $N_{X_i}$ is the number of test data points observed in $X_i$, and $\Lambda_{X_i} = \int_{X_i} \hat{\lambda}(x) dx$. We obtained the following results across 10 trials with standard errors in brackets (the lower, the better): $L_s$ = -5.74(0.39), -6.17(0.52), -5.09(0.31) for KIE, K$^2$IE, FIE; $L_c$ = 265(14.1), 278(10.8), 287(19.7) for KIE, K$^2$IE, FIE.
The results show that our K$^2$IE achieved the best performance on $L_s$, but was outperformed by KIE on $L_c$, which could be because the hyperparameters were optimized based on the least squares loss and the log-likelihood for K$^2$IE and KIE, respectively. We will include the results based on 100 trials in the final manuscript.
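The count-based metric described in this rebuttal is a sum of per-sub-domain negative Poisson log-likelihoods. A sketch with a generic helper (our names; $\Lambda_i$ are the integrated intensities, $N_i$ the held-out counts; the $\log(N_i!)$ term enters with a plus sign in the standard Poisson NLL and is constant with respect to the estimator):

```python
import math

def count_nll(Lambda, counts):
    """Negative Poisson log-likelihood of test counts per sub-domain:
    sum_i Lambda_i - N_i * log(Lambda_i) + log(N_i!), where log(N_i!)
    is a constant with respect to the estimator."""
    return sum(L - n * math.log(L) + math.lgamma(n + 1)
               for L, n in zip(Lambda, counts))

# The per-domain term Lambda - N*log(Lambda) is minimized at Lambda = N.
matched = count_nll([2.0, 5.0], [2, 5])
mismatched = count_nll([1.0, 8.0], [2, 5])
```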
**How sensitive is the model to the number of random Fourier features (M)? An ablation study would help demonstrate robustness.**
We thank the reviewer for the constructive feedback. In response to the suggestion, we conducted an ablation study on the number of random features ($2M$) using the 2D synthetic dataset $\lambda_{\text{2D}}^{1.0}$. The integrated squared errors ($L^2$) of our K$^2$IE were 149(6.72), 74.8(12.1), 50.7(7.71), and 49.0(8.80) for $2M$ = $20$, $100$, $300$, and $500$, respectively, where standard deviations are in brackets. Similarly, the integrated absolute errors ($|L|$) were 9.86(0.25), 6.65(0.60), 5.43(0.52), and 5.31(0.57) for the same values of $2M$. These results indicate that the predictive performance of K$^2$IE improves with larger $M$, and our chosen setting ($2M=500$) is sufficiently large to yield accurate estimates.
Please note that, due to the limited time available during the rebuttal period, the reported results are based on 10 trials. We will include the results based on 100 trials in the final manuscript.
**Can the non-negativity of the estimator be more rigorously enforced, e.g., through a post-processing projection or transformation?**
As discussed in the last paragraph of Section 3.1, applying $\text{max}(\hat{\lambda}(x),0)$ or $\text{ReLU}(\hat{\lambda}(x))$ is the simplest way to enforce the non-negativity of the estimator $\hat{\lambda}(x)$. Fortunately, this operation improves the estimator's accuracy in a pointwise sense, as it satisfies $|\text{max}(\hat{\lambda}(x),0) - \lambda(x)| \leq |\hat{\lambda}(x) - \lambda(x)|$ for any input $x$ due to the fact that the true intensity function $\lambda(x)$ is always non-negative. However, when evaluating the integral $\int_{S} \lambda(x) dx$ over a compact region $S$, applying the max operation hinders closed-form integration and necessitates the use of computationally expensive Monte Carlo methods. Therefore, whether or not to employ the max operation depends on the specific application. We will incorporate the above discussion into the text. For clarity, we note that the max operation was not used in our experiments.
**overlooks some recent work in Bayesian nonparametric methods...**
We added a study with a scalable Bayesian model (see our 3rd response to Reviewer upvZ). We promise to cite deep kernel/GP approaches, but it would be helpful if you could point us to a few references worth citing.

Review 4

Summary: The paper develops a new kernel-based estimator for the intensity of inhomogeneous Poisson processes. The estimator is shown to be associated with a unique reproducing kernel Hilbert space and is compared to some previous estimation methods in a simulation study. The simulation study shows that the new method achieves better reconstructions at a lower computational cost compared to existing methods.
Claims And Evidence: The theoretical derivation seems sound. I have some issues with the simulation study, which does not include comparison to any state-of-the-art statistical methods (e.g. https://doi.org/10.1111/2041-210X.13168) and which seems to solve the wrong problem for partially observed processes.
For a partially observed process we would like reconstructions for the entire domain even when observations are only from part of the domain. For this paper that would entail in equation (11) that the integral over lambda(x) (second term) be over the observed area while the last integral/norm (i.e. the penalty term) be over the entire domain, thus having a penalty on difference between observations and lambda(x) only for observed parts of the domain but a smoothness penalty for the entire domain, including unobserved parts. For the lower row in figure 3 this would imply that the reconstruction of lambda is computed (and evaluated) also for the hash-marked regions.
Methods And Evaluation Criteria: The simulation study is reasonable, but evaluation is only based on L1 and L2 error between the reconstruction and the actual intensity in the observed area. No attempt at checking prediction uncertainties or the model's ability to predict intensities at unobserved locations is made (see comment under Claims And Evidence). No comparison to methods for log-Gaussian Cox processes is made (see Essential References Not Discussed).
Theoretical Claims: The derivation of theorem 1 seems sound. However, as the authors themselves note, the solution can produce unreasonable results, i.e. negative intensities; this is due to an improper formulation of the minimisation problem in (11). Either an additional constraint lambda>0 or a transform of lambda to ensure non-negative intensities should be included. My feeling is that the paper presents a theoretically sound solution to the wrong problem.
Experimental Designs Or Analyses: See comments under Methods And Evaluation Criteria.
Supplementary Material: Not reviewed.
Relation To Broader Scientific Literature: The paper extends existing RKHS theory for estimation of inhomogeneous Poisson processes. It provides some references to spatial statistical developments but does not include any methods from spatial statistics in the comparison (see Essential References Not Discussed).
Essential References Not Discussed: The paper lacks references to some recent log-Gaussian Cox process literature and fast numerical methods for these, e.g. https://doi.org/10.1093/biomet/asv064 and https://doi.org/10.1111/2041-210X.13168. Especially the latter could be included in the simulation study.
Other Strengths And Weaknesses: No additional comments
Other Comments Or Suggestions: No additional comments
Questions For Authors: Key points to consider:
1) Why not include a lambda>=0 constraint?
2) Can the method be used to reconstruct the intensity at unobserved regions (e.g. the hash-marked areas in Fig 3)?
3) How does the method compare to log-Gaussian Cox process methods?
4) Does the method provide uncertainty estimates for the reconstruction, e.g. V(lambda(x)|observations), or is only the best reconstruction provided?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the deep understanding of our model and the constructive comments. We provide a detailed response to each comment.
**Why not include a lambda>=0 constraint?**, **My feeling is that the paper presents a theoretically sound solution to the wrong problem.**
As pointed out, one can enforce the non-negativity of the estimator by introducing a lambda>=0 constraint or by applying a non-negative transformation. However, as far as we know, such constraints prevent us from obtaining efficient estimators like K$^2$IE. The main contribution of our work lies in showing that, by sacrificing strict non-negativity, one can obtain a feasible kernel-based estimator comparable to classical KIEs.
As discussed in Section 3.1, non-negativity can be enforced via post-hoc clipping, e.g. max($\hat{\lambda}(x)$,0). However, we totally agree with the reviewer's point that the functional optimization problem (11) does not explicitly take non-negativity into account, hence not yielding an optimal "non-negative" estimator. For future work, we would like to discuss technical issues arising when non-negativity constraints are imposed.
Consider modeling the intensity as a non-negative transformation $\sigma(x)$ of a latent function $f(x)$ lying in an RKHS. Then, the functional derivative of the objective functional in Eq. (11) leads to the following equation that the optimal $\hat{f}(x)$ solves (full derivation is omitted): $$\frac{1}{\gamma} f(x) + \int_{X} k(x,s)\sigma(f(s)) \sigma'(f(s)) ds = \sum_n k(x,x_n) \sigma'(f(x_n)),$$ where $\sigma'(x)$ is the derivative of $\sigma(x)$. When $\sigma(x) = x$, the equation reduces to a Fredholm integral equation for which Theorem 1 provides a feasible solution. However, when $\sigma(x)$ is nonlinear, even as simple as $\sigma(x) = x^2$, deriving a feasible solution becomes non-trivial.
Alternatively, one may consider enforcing non-negativity of intensity at finite virtual points {$q_1,\cdots, q_R$}, which leads to a dual optimization problem. While this approach may reduce the risk of negative estimates at $q_i$, it does not guarantee non-negativity of intensity elsewhere and undermines the computational advantages inherent in K$^2$IE due to the added complexity of dual optimization.
Finally, we discuss whether the problem in Eq. (11) is truly improper/wrong. (Kim, NeurIPS2024) showed that when RKHS kernels belong to the class of inverse M-kernels (IMKs), the corresponding equivalent kernels $h(x,x')$ are non-negative. This suggests that Eq. (11) may not be inherently improper because K$^2$IE, a sum of equivalent kernels, is also non-negative under IMKs. In one-dimensional cases, the Laplace kernel is known to be an IMK, but no construction is known for IMKs in higher dimensions, posing an interesting challenge. We will include the discussion in the text.
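For concreteness, a minimal numerical sketch of the post-hoc clipping max($\hat{\lambda}(x)$,0): the Gaussian kernels and the signed per-event weights below are hypothetical stand-ins (a sum of equivalent kernels can dip below zero when the kernels are not IMKs), not the actual K$^2$IE implementation.

```python
import numpy as np

def clipped_kernel_intensity(x, events, weights, bandwidth=0.5):
    """Weighted kernel intensity estimate with post-hoc clipping.

    `weights` is a hypothetical stand-in for per-event coefficients;
    when they can be negative, the raw kernel sum may dip below zero,
    and clipping max(lambda_hat(x), 0) repairs it.
    """
    diffs = (x[:, None] - events[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    raw = k @ weights                # signed kernel sum, may go negative
    return np.maximum(raw, 0.0)      # post-hoc clipping

x = np.linspace(-2.0, 3.0, 200)
events = np.array([0.0, 1.0, 1.2])
weights = np.array([1.0, 1.0, -0.8])  # one negative coefficient for illustration
lam = clipped_kernel_intensity(x, events, weights)
```

The clipped estimate is non-negative by construction, but, as discussed above, it is not the minimizer of a non-negativity-constrained version of (11).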
**Can the method be used to reconstruct intensity at unobserved regions?**
Yes, K$^2$IE can reconstruct intensity at unobserved regions. At submission, we had assumed the intensity estimation at observed regions. However, in light of your insightful comment, we revisited the model and confirmed that K$^2$IE in Eq. (12) could accept inputs from unobserved regions as well. Accordingly, a minor revision to Theorem 1 is warranted to reflect this more general setting. Specifically, all integral operators and the squared RKHS norm should be defined over the full domain ($X$ -> $\mathbb{R}^d$), and Eq. (13) should be updated as $q^*(x,s) = \delta(x-s) 1(s \in X) +\frac{1}{\gamma}k^*(x,s)$, where $1(\cdot)$ is the indicator. With this modification, Theorem 1, initially defined in $x \in X$, should be revised to hold for $x \in \mathbb{R}^d$. We sincerely appreciate the valuable comment. Due to time constraints, we have not yet conducted experiments to evaluate accuracy in unobserved regions. But, we are willing to include the results in the final version of the paper if you consider it essential.
**How does the method compare to log-gaussian-cox process methods?**
In response to several reviewers' suggestions, we conducted an additional experiment using the 2D synthetic dataset $\lambda_{\text{2D}}^{1.0}$ to include the result of a scalable Bayesian method. This time, we adopted a variational Bayesian model with a quadratic link function (Lloyd, ICML2015), where 10 x 10 inducing points were employed. We appreciate your kind suggestion of references (we will cite them), but we have not yet become proficient with the R package. $L^2$, $|L|$, and cpu of the Bayesian method were 58.0(8.13), 5.61(0.47), and 28.4(0.96), respectively, where standard deviations are in brackets. The result highlights the high efficiency of K$^2$IE. Note that the reported results are based on 10 trials. We will include the results based on 100 trials in the text.
**Does the methods provide uncertainty estimates for the reconstruction**
K$^2$IE does not provide uncertainty estimates, which limits our model compared to Bayesian models.
---
Rebuttal Comment 1.1:
Comment: The comments answer most of my questions.
1) For the first point, the comments made regarding positive kernels and the trade-off between a "correct" model and computational efficiency are insightful, as are the comments regarding future research directions. Including the comment on IMKs in the paper would be a good addition. I would also be slightly interested in how big of a problem it is in practice (e.g. did lambda<0 occur in any of the simulations?).
2) The extension to un-observed regions is promising and it would be interesting to see results in the paper although I fully understand if this is not possible due to time and page limitations.
3) Given the complexity of R-INLA, I fully understand the authors' comments and I'm satisfied with the alternative model comparisons.
Fast Tensor Completion via Approximate Richardson Iteration | Accept (poster) | Summary: This paper addresses the computational challenges of Tensor Completion (TC) by proposing a novel method that integrates Tensor Decompositions (TDs)—including CANDECOMP/PARAFAC (CP), Tucker, and Tensor Train (TT)—with a lifting framework.
The authors reformulate TC as a structured tensor decomposition problem, effectively reducing its computational complexity. This reformulation enables the application of efficient Alternating Least Squares (ALS) algorithms. By interpreting the algorithm as a preconditioned Richardson iteration, the authors establish rigorous theoretical convergence guarantees.
To further enhance efficiency, the method incorporates sketching techniques via leverage score sampling, strategically subsampling tensor fibers to accelerate computations.
Comprehensive experiments on synthetic and real-world datasets demonstrate that the proposed approach achieves significant speedups over state-of-the-art TC methods while maintaining competitive accuracy. The integration of lifting with sketching and convergence analysis presents a key advancement in scalable tensor completion.
## update after rebuttal
I thank very much the authors for clarifying the issues I pointed out and their promise to improve the final version accordingly. I keep my original score.
Claims And Evidence: The claims in the submission are generally well-supported by theoretical analysis and empirical results. The main claims are:
- Reduction in Computational Complexity: The paper provides theoretical justifications for the efficiency of the proposed method using leverage score sampling and the lifting approach. Running time analyses for CP, Tucker, and TT decompositions are given. Empirical results show significant speed improvements over standard ALS-based CP TC methods.
- Interpretation as a Preconditioned Richardson Iteration: Theoretical derivations (Lemma 3.5) show that the mini-ALS algorithm simulates Richardson iteration.
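For readers unfamiliar with the connection mentioned in the second claim, a minimal sketch of plain Richardson iteration for a linear system $Ax = b$ (identity preconditioner; the matrix sizes and step size here are arbitrary illustrative choices, not the paper's setup):

```python
import numpy as np

# Richardson iteration for A x = b:
#   x_{k+1} = x_k + omega * (b - A x_k)
# It converges when the spectral radius of (I - omega * A) is below 1.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))
A = G.T @ G + 5.0 * np.eye(5)        # symmetric positive definite system
b = rng.standard_normal(5)

omega = 1.0 / np.linalg.norm(A, 2)   # safe step size for an SPD matrix
x = np.zeros(5)
for _ in range(2000):
    x = x + omega * (b - A @ x)      # residual-correction step
```

A preconditioned variant replaces the residual `b - A @ x` with `M_inv @ (b - A @ x)` for a preconditioner `M`; the paper's analysis interprets mini-ALS as such an iteration.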
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable and relevant for the tensor completion (TC) problem. However, certain aspects require further clarification (see detailed comments below).
Theoretical Claims: I have not verified the proofs provided in the Appendices. However, the theoretical claims appear well-founded.
Experimental Designs Or Analyses: The experimental designs and analysis are correct and useful. However, further clarification is needed (see detailed comments below).
Supplementary Material: I have not reviewed the supplementary material.
Relation To Broader Scientific Literature: The paper effectively integrates concepts from tensor decomposition, iterative optimization, and randomized numerical linear algebra. By leveraging preconditioned Richardson iteration and leverage score sampling, it advances the field of efficient tensor completion while maintaining theoretical guarantees.
However, it focuses solely on tensor decomposition-based completion, while many other tensor completion methods exist, including deep learning-based approaches and nuclear norm minimization, among others.
Essential References Not Discussed: No, I think the papers included in the references are a good selection of relevant works in this field.
Other Strengths And Weaknesses: Strengths:
- Theoretical Contribution: A useful connection between TC and preconditioned Richardson iteration, providing rigorous convergence guarantees (Theorem 3.7) is provided. The lifting approach restores structure in TC regression problems, enabling the use of fast TD algorithms as subroutines. This bridges iterative optimization and tensor algebra in a principled manner.
- Generality: The framework applies to multiple tensor decompositions (CP, Tucker, TT), demonstrating flexibility. By using TD algorithms as black-box subroutines, the method benefits from future advancements in TD research.
- Practical Efficiency: Experiments on real-world datasets (e.g., cardiac MRI, hyperspectral imaging) show significant speedups (up to 100× faster than direct methods) while maintaining competitive reconstruction errors.
Weaknesses:
- Clarity and Accessibility: some parts requires further clarification (see my detailed comments in the next section)
- Comparison to State-of-the-Art: the paper does not benchmark against modern TC methods (e.g., tensor nuclear norm minimization, deep learning-based approaches), leaving its relative performance unclear.
Other Comments Or Suggestions: Here, I provide my detailed comments/suggestions to improve clarity and correctness of the paper:
- The general TC problem of eq. (1) is not aligned with the approach of the current paper, where a TD approach is used, meaning that the approximation of the data tensor is a low-rank tensor decomposition (CP, Tucker or TT) whose factors are the minimizing parameter \theta. In other words, the regularization term of eq. (1) is neither needed nor used in this paper.
- The statement “Rank constraints can be incorporated into (1) by including appropriate penalty terms in R(\theta)” is confusing since in this paper, the low-rank constraint is explicitly imposed by choosing the rank of the decomposition.
- Experimental results requires further clarification:
1) Figure 1 shows that the approximate-mini-als algorithm achieves lower error than the other algorithms for a range of steps around step 15. This is surprising and should be explained.
2) In the paper, it is not clearly explained what the authors refer to as the direct and parafac methods.
3) Figure 2 is confusing, for example there is no explanation about what do the plots in 3rd and 4th columns refer to. The caption should be improved.
Questions For Authors: I have some questions that could help me to better understand the experimental results:
- Why does approximate-mini-als achieve lower error than other algorithms around step 15 in Fig. 1?
- Could you please explain what do you mean the direct and parafac methods used in Fig. 2? You should precisely define those methods.
- What is the difference between the plots in 3rd and 4th columns in Figure 2? They both compare the running time of the evaluated algorithms as a function of sample ratio. Do they refer to mini-ALS and accelerated-mini-ALS, respectively? Please, confirm.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review and careful read. Please see our responses to your questions and weaknesses below.
> The general TC problem of eq. (1) is not aligned with the approach of the current paper where a TD approach is used meaning that the approximation of the data tensor is a low-rank tensor decomposition (CP, Tucker or TT) where its factors are the minimizing parameter \theta. In other words, the regularization term of eq. (1) is not needed nor used in this paper.
We agree that in the algorithm explicitly discussed in the paper, the regularization term of eq. (1) is not used. However, we emphasize that our approach can be applied to methods with regularization as well if the corresponding regression problems in such methods are structured and that structure enables faster algorithms. One example is Tucker decomposition with $\ell_2$-regularization, which reduces to solving *ridge regression* problems. For this problem, fast algorithms using sampling techniques are developed in [1], and our approach gives speed-ups for the corresponding tensor completion problem. Thank you for pointing this out — we will add a discussion about this to the next version.
> The statement “Rank constraints can be incorporated into (1) by including appropriate penalty terms in R(\theta)” is confusing since in this paper, the low rank constraint is explicitly imposed by choosing the rank of the decomposition.
We completely agree that the algorithms we explicitly describe in the paper do not incorporate rank constraints this way. The comment is meant to state that if there are regularization terms that still lead to structured problems that can be solved faster, then our lifting approach can give speed-ups for them as well, e.g., ridge regression that is discussed in the answer to the previous question. Note that the ridge regression reduces the effective dimension of the problem and can be used to implicitly induce a lower rank for the decomposition. We will make this clearer in the next version to avoid confusion.
> In the paper, it is not clearly explained what the authors refer to as the direct and parafac methods.
This is explained in the first column on page 8. The `direct` method uses the normal equations to solve the linear regression problem in each iteration of the ALS algorithm. More specifically, for a matrix $A$, a vector $b$, and a set of observed entries $\Omega$, we need to solve $x^*=\arg\min_{x} || A_{\Omega} x - b_{\Omega} ||_2^2$.
The `direct` method computes $x^*$ using the following operation $x^* = (A_{\Omega}^{\top} A_{\Omega})^{-1} A_{\Omega}^{\top} b_{\Omega}$.
The `parafac` method is the EM approach of Tomasi-Bro (2005), which is equivalent to running the mini-ALS algorithm for one step at each iteration of the ALS algorithm. We will add more detailed explanations about these algorithms in the next version of the paper.
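As a toy numerical illustration of the `direct` solve described above (the array sizes and variable names here are arbitrary, not taken from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 4))
b = A @ rng.standard_normal(4) + 0.01 * rng.standard_normal(100)
omega = rng.choice(100, size=60, replace=False)  # indices of observed rows

A_om, b_om = A[omega], b[omega]
# `direct`: x* = (A_Omega^T A_Omega)^{-1} A_Omega^T b_Omega
x_direct = np.linalg.solve(A_om.T @ A_om, A_om.T @ b_om)
# Equivalent least-squares solve, usually numerically preferable:
x_lstsq = np.linalg.lstsq(A_om, b_om, rcond=None)[0]
```

Both give the same minimizer on the observed rows; the cost of the direct route grows with the number of observed entries, which is what the lifting approach avoids.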
> What is the difference between the plots in 3rd and 4th columns in Figure 2? They both compare the running time of the evaluated algorithms as a function of sample ratio. Do they refer to mini-ALS and accelerated-mini-ALS, respectively? Please, confirm.
Yes, the third column in Figure 2 shows the running times of the `mini-ALS` algorithm for different values of $\varepsilon$, while the fourth column shows the running times of the `approximate-mini-ALS` algorithm. This is explained in the text (lines 407–425, second column on page 8). We will add a comprehensive explanation to the caption of Figure 2 in the next version.
> Figure 1 shows that the approximate-mini-als algorithm achieves lower error than the other algorithms for a range of steps around step 15. This is surprising and should be explained.
The `approximate-mini-ALS` in Figure 1 is a randomized method that uses leverage score sampling for the Kronecker product. We believe the lower error is due to randomness, which causes the algorithm to follow a different convergence path. We will provide an explanation of this in the next version of the paper.
References:
[1] Fahrbach, Matthew, Gang Fu, and Mehrdad Ghadiri. "Subquadratic kronecker regression with applications to tensor decomposition." Advances in Neural Information Processing Systems 35 (2022): 28776-28789.
---
Rebuttal Comment 1.1:
Comment: I thank the author for acknowledgment of my constructive comments and their commitment to improve the explanations in the final version of the paper.
I keep my original recommendation | Summary: This paper addresses the problem of tensor completion (TC) by proposing a novel approach that leverages approximate Richardson iteration and structured tensor decomposition (TD) algorithms. The authors introduce a lifting technique to transform the TC problem into a structured linear regression problem, enabling the use of fast TD algorithms as black-box subroutines. This approach allows for sublinear-time methods, significantly speeding up the completion process compared to traditional direct methods. The proposed method, termed approximate-mini-ALS, is theoretically analyzed for convergence and empirically validated on real-world tensors, demonstrating substantial speed improvements while maintaining comparable reconstruction accuracy.
Claims And Evidence: The authors claim that their proposed method, approximate-mini-ALS, can achieve significant speedups in tensor completion tasks without compromising on the quality of the solution. The evidence supporting these claims includes:
1. Theoretical Analysis: The authors provide a detailed convergence analysis of their approximate Richardson iteration-based algorithm, proving that it converges at the same rate as the exact Richardson iteration under certain conditions (Theorem 3.7).
2. Empirical Validation: Extensive experiments on synthetic and real-world tensors (e.g., CARDIAC-MRI and HYPERSPECTRAL datasets) demonstrate that the proposed method can be orders of magnitude faster than direct methods while achieving similar reconstruction errors.
3. Comparison with Existing Methods: The paper compares the proposed method with existing approaches such as direct ALS and the EM-based algorithm of Tomasi & Bro (2005), showing superior performance in terms of running time and solution quality.
Methods And Evaluation Criteria: The proposed method, approximate-mini-ALS, is evaluated using the RRE, which measures the relative Frobenius norm of the difference between the completed tensor and the ground truth. I think there is no problem.
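For reference, a sketch of the RRE metric as described, assuming the standard definition $\|\hat{X} - X\|_F / \|X\|_F$:

```python
import numpy as np

def rre(completed, ground_truth):
    """Relative reconstruction error: ||X_hat - X||_F / ||X||_F.
    np.linalg.norm with default arguments flattens the array, so this
    works for tensors of any order."""
    return np.linalg.norm(completed - ground_truth) / np.linalg.norm(ground_truth)

X = np.arange(24.0).reshape(2, 3, 4)  # toy "ground truth" tensor
```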
Theoretical Claims: Yes, I check the proof of Lemma 3.1, 3.2, 3.5, Theorem 3.7. They are all correct.
Experimental Designs Or Analyses: I think the experiments are reasonably well designed. In addition, the TC problem is often used for predicting missing values. The author states that the complexity of other algorithms is related to the number of observed values. I'm curious: when the number of observed values is small, does the author's method have an advantage over traditional algorithms? Is it more efficient in terms of time, or does it have better recovery performance?
Supplementary Material: I have checked all supplementary material.
Relation To Broader Scientific Literature: The paper builds upon and contributes to several areas of the broader scientific literature:
1. Tensor Completion: Extends the state-of-the-art in TC by proposing a novel lifting approach that leverages structured TD algorithms, addressing the limitations of existing methods that rely on direct matrix operations.
2. Iterative Methods: Connects TC to the Richardson iteration, a classical iterative method for solving linear systems, providing new insights into the convergence properties of TC algorithms.
3. Tensor Decompositions: Integrates with existing TD methods (e.g., CP, Tucker, tensor-train) by using them as subroutines, ensuring compatibility with a wide range of tensor structures and applications.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The strength of this paper is that it proposes a novel and fast tensor completion algorithm with originality, and provides theoretical proofs to support it. Experimental results demonstrate its effectiveness. However, a weakness is that it remains unclear whether the proposed algorithm achieves faster speed or better recovery performance under high missing rate scenarios. Additionally, the author mentions tensor networks in the supplementary material—could their method be applied to tensor network algorithms, such as FCTN decomposition or TW decomposition?
Other Comments Or Suggestions: • x^{(k)} should represent the value from the k-th iteration of the ALS algorithm, but this is not clarified in the text. I believe this could confuse the readers, so please provide an explanation.
• The author should provide the parameter settings used in the experiments or release the code to help readers better understand the work.
Questions For Authors: 1. The author should provide the parameter settings used in the experiments or release the code to help readers better understand the work.
2. TC problem is often used for predicting missing values. The author states that the complexity of other algorithms is related to the number of observed values. I’m curious—when the number of observed values is small, does the author’s method have an advantage over traditional algorithms? Is it more efficient in terms of time, or does it have better recovery performance?
3. The author mentions tensor networks in the supplementary material—could their method be applied to tensor network algorithms, such as FCTN decomposition or TW decomposition?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review, commenting on its novelty, and carefully checking the proofs of the main lemmas/theorems. Please see our responses to your questions and weaknesses below.
> I’m curious—when the number of observed values is small, does the author’s method have an advantage over traditional algorithms? Is it more efficient in terms of time, or does it have better recovery performance?
Please see the answer to the next question.
> However, a weakness is that it remains unclear whether the proposed algorithm achieves faster speed or better recovery performance under high missing rate scenarios.
The vanilla version of our algorithm can be slower than direct ALS methods when a small fraction of elements are observed, but our accelerated version gives significant speed-ups even for this case—see the 4th column of Figure 2. Note that this occurs when $\beta$ is large in Theorem 3.7, which happens if the lifted TD design matrix is not a tight spectral approximation to the TC design matrix. Our asymptotic running times imply that the speed-ups become even faster for larger tensors. In other words, for sufficiently large tensors with a small fraction of observed entries, our algorithm still provides significant speed-ups.
> Additionally, the author mentions tensor networks in the supplementary material—could their method be applied to tensor network algorithms, such as FCTN decomposition or TW decomposition?
Yes, we believe so. Recently, [1] has proposed a framework for designing sampling-based decomposition algorithms for arbitrary tensor networks. Therefore, we think someone could use that approach to design more efficient algorithms for FCTN decomposition and then, using our methods, extend it to tensor completion. Even more recently, sampling-based algorithms were designed for TW decompositions [2]. Our approach can extend their algorithm to tensor completion as well. We will add a discussion about these in the next version.
> x^{(k)} should represent the value from the k-th iteration of the ALS algorithm, but this is not clarified in the text. I believe this could confuse the readers, so please provide an explanation.
Thank you for the suggestion. We clarify this in the next version of the paper.
> The author should provide the parameter settings used in the experiments or release the code to help readers better understand the work.
Note that we included our code in the supplementary material. It is also available in a private GitHub repository. We will make the repository public and provide a link to it in the next version of the paper after the double-blind review process concludes.
[1] Malik, Osman Asif, Vivek Bharadwaj, and Riley Murray. "Sampling-based decomposition algorithms for arbitrary tensor networks." arXiv preprint arXiv:2210.03828 (2022).
[2] Wang, Mengyu, Yajie Yu, and Hanyu Li. "Randomized Tensor Wheel Decomposition." SIAM Journal on Scientific Computing 46, no. 3 (2024): A1714-A1746. | Summary: In this paper the well-known tensor completion problem for several popular low-rank tensor formats is considered. The problem is solved through a modified (randomized) ALS method in which it is assumed that we have access to all elements of the tensor (the so-called “lifting approach”); at points whose values are not initially given, the values of the tensor at the current iteration are used. Formally, the problem is written as a minimization problem not only over the elements of the decomposition cores, but also over the unknown values of the tensor (the equivalence of these formulations is proved). One of the theoretical results of the paper is an analogy between the obtained ALS iterations and approximate Richardson iteration, which allows the authors to prove convergence theorems for the method. Small tensors (10^6~10^7 elements) are used in the numerical experiments, and only the approximation time is compared with other methods (the accuracy of all methods is approximately the same).
Claims And Evidence: The paper is largely theoretical, and it does have theorems for the convergence of the method and its running time. However, the experiments presented are rather synthetic and small, and it is not clear where these techniques can be applied in real-world applications.
Methods And Evaluation Criteria: As numerical experiments, the authors take rather synthetic problems involving tensors with a small number of elements. At the same time, the presented algorithm works well only when the ratio of the number of known elements to the number of unknown elements is rather large.
Disadvantages of this approach:
- Low-rank tensor decompositions were created to, among other things, overcome the “curse of dimensionality” --- situations where the full tensor does not fit in the memory of a computing system (or it is so large that it is difficult to work with). When the elements of the tensor occupy little memory (in the experiments in the paper --- at most tens of megabytes), the need for low-rank decompositions as such is small, and the problem of restoring missing elements can be solved in many other ways (e.g., spline interpolation).
- For the same reason, the case when a very small fraction of all possible tensor elements is known is interesting (in many real-world datasets this can be tenths or hundredths of a percent). And in this regime the presented algorithm performs poorly, as can be seen from the plots.
- As a result, the authors show only the execution time of the algorithms (all of them have approximately the same accuracy). But for different algorithms, this time can strongly depend on the implementation, the use of hardware such as GPUs, the ability to parallelize and vectorize algorithms, etc. Thus, the only superiority of the presented algorithm in the experimental data, i.e., running time, is rather doubtful.
Theoretical Claims: All theorems are proved correct in my opinion, theoretical results are true.
Experimental Designs Or Analyses: Apart from the fundamental limitations of the experiments I wrote above, the numerical experiments in the paper are carried out without question.
Supplementary Material: Supplementary Material includes all the Python code that allows you to reproduce the experiments. The appendix contains all the necessary proofs and details of the experiments.
Relation To Broader Scientific Literature: In fact, this paper is an amalgamation of several existing and developed ideas: 1. the ALS method, which can be applied to all three tensor decompositions presented (and in each of them can be written as a regression problem); 2. leverage score sampling, see e.g. the work of Bharadwaj et al. (2024); 3. Richardson iterations. But usually leverage score sampling is applied in the case where any element of the tensor is accessible, not to the tensor completion problem.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Minor:
- l119: $:=R/R_k$ should not be in subscript
Other Comments Or Suggestions: -
Questions For Authors: - Have you tried your method on much larger datasets (including those for which the full tensor does not fit into computer memory)?
- which parafac implementation did you use in experiments in Fig.2?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for going over the theoretical results and experiments carefully. Please see our responses to your questions and weaknesses below.
> The experiments presented are rather synthetic and small, it is not clear where in real-world applications these techniques can be applied
Note that the `cardiac-mri` and `hyperspectral` tensors in Section 5.2 are real-world tensors, not synthetic.
> Low-rank tensor decompositions were created to, among other things, overcome the “curse of dimensionality” --- situations where the full tensor does not fit in the memory of a computing system (or it is so large that it is difficult to work with). When the elements of the tensor occupy a small memory (in the experiments in the paper --- at most tens of megabytes) the need for low-rank decompositions as such is small; and the problem of restoring missing elements can be solved in many other ways (e.g., spline interpolation).
We completely agree that this is a primary motivation for low-rank tensor decompositions. That said, another benefit of low-rank decomposition is to prevent overfitting because it can also be viewed as a form of regularization.
In our experiments, our focus has been on comparison with baselines in terms of the running time and convergence. Our experiments demonstrate that we achieve a significant speed-up even for small tensors, and this speed-up only increases for larger tensors. Moreover, the memory footprint of our method is much smaller than the baselines due to the use of approximate solvers (using sampling techniques). In previous works on tensor decompositions that use leverage score sampling, it has been demonstrated that these methods are *especially effective* when direct methods run out of memory. Thus, we believe our experiments provide sufficient evidence for practicality and improvements of our algorithms.
> For the same reason, the case when a very small fraction of all possible tensor elements is known is interesting (in many real-world datasets this can be tenths or hundredths of a percent). And in this mode the presented algorithm performs poorly, as can be seen from the plots.
We disagree with this assessment. Although the vanilla version of our algorithm is not faster than direct ALS methods when a small fraction of elements are observed, the *accelerated version* gives significant speed-ups even for this case—see the 4th column (`accelerated-mini-als`) of Figure 2 and compare it with the 3rd column (`mini-als`). Our asymptotic running times (theorems in Section 4) imply that such speed-ups will be even faster for larger tensors. In other words, for sufficiently large tensors with a small fraction of observed entries, our algorithm still provides significant speed-ups.
> But for different algorithms, this time can strongly depend on the implementation, the use of hardware such as GPUs, the ability to parallelize and vectorize algorithms, etc. Thus, the only superiority of the presented algorithm in the experimental data, i.e., running time, is rather doubtful.
Our algorithms similarly benefit from the use of parallelizability, vectorized algorithms, and the use of GPUs since they’re iterative methods under the hood. Note that `numpy.linalg` (and hence `tensorly`) calls BLAS and LAPACK low-level operations, some of which are parallelized on CPU. Further, improvements in asymptotic running times of algorithms often lead to speed-ups that might not be achievable with a better code or hardware. Our approach unlocks such improvements for tensor completion via techniques that have been developed for tensor decomposition.
> Have you tried your method on much larger datasets (including those for which the full tensor does not fit into computer memory)?
We have not tried it on tensor datasets that exceed our machine’s memory. We believe this is a fantastic future direction of research, but it is somewhat out of the scope for this paper since such implementations require special care with respect to the code and hardware.
> which parafac implementation did you use in experiments in Fig.2?
PARAFAC refers to the EM approach of Tomasi-Bro (2005), which is equivalent to running our mini-ALS algorithm for exactly one step at each iteration of the ALS algorithm. | Summary: This paper introduces an efficient tensor completion algorithm, approximate-mini-ALS, which transforms unstructured tensor completion into a structured tensor decomposition problem using a lifting strategy and approximate Richardson iteration. By leveraging leverage score sampling, the method achieves sublinear time complexity and linear convergence with a contraction factor of \(1 - 1/\beta\), theoretically guaranteeing near-optimal sampling complexity \(O(n^{3/2})\).
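To make the stated equivalence concrete, here is a rough numpy sketch of an EM-style completion sweep in the spirit of Tomasi-Bro: impute the missing entries from the current CP model, then run one ALS sweep on the imputed (fully observed) tensor. All names and details here are our illustration, not the paper's code.

```python
import numpy as np

def cp(A, B, C):
    """Reconstruct a 3-way tensor from CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def em_als_sweep(X, mask, A, B, C):
    """One EM iteration: impute missing entries from the current model
    (E-step), then run one full ALS sweep on the imputed tensor (M-step)."""
    X_imp = np.where(mask, X, cp(A, B, C))
    A = np.linalg.solve((B.T @ B) * (C.T @ C),
                        np.einsum('ijk,jr,kr->ir', X_imp, B, C).T).T
    B = np.linalg.solve((A.T @ A) * (C.T @ C),
                        np.einsum('ijk,ir,kr->jr', X_imp, A, C).T).T
    C = np.linalg.solve((A.T @ A) * (B.T @ B),
                        np.einsum('ijk,ir,jr->kr', X_imp, A, B).T).T
    return A, B, C

rng = np.random.default_rng(0)
n, R = 8, 2
X = cp(*(rng.standard_normal((n, R)) for _ in range(3)))  # ground-truth rank-2 tensor
mask = rng.random(X.shape) < 0.8                          # 80% of entries observed
A, B, C = (0.1 * rng.standard_normal((n, R)) for _ in range(3))
err0 = np.linalg.norm((X - cp(A, B, C))[mask])
for _ in range(50):
    A, B, C = em_als_sweep(X, mask, A, B, C)
err1 = np.linalg.norm((X - cp(A, B, C))[mask])
```

Per the statement above, this EM method corresponds to taking a single inner step per outer ALS iteration; mini-ALS generalizes it by taking more inner steps before re-imputing.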
Claims And Evidence: Please refer to below
Methods And Evaluation Criteria: Please refer to below
Theoretical Claims: Please refer to below
Experimental Designs Or Analyses: Please refer to below
Supplementary Material: no supplementary material
Relation To Broader Scientific Literature: Please refer to below
Essential References Not Discussed: Please refer to below
Other Strengths And Weaknesses: Strengths:
1. Innovative Combination: The lifting strategy combined with approximate Richardson iteration and leverage score sampling offers a novel approach to tensor completion, leveraging existing techniques in a non-trivial way.
2. Compared with standard ALS algorithms, the proposed algorithm achieves a significant speed-up.
Weaknesses:
1.In the experimental section, the author presents too few experimental results, which do not provide sufficient evidence to convincingly demonstrate the advantages of the proposed algorithm. The author should refer to article and increase the number of experiments.
Other Comments Or Suggestions: There are some typos, e.g.: mode- -72 unfolding...
The author should check typos carefully.
Questions For Authors: 1. The author should provide ablation experiments.
2. Whether the method can be transferred to the t-SVD framework?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for recognizing that our approach is an innovative combination. Please see our responses to your questions and weaknesses below.
> In the experimental section, the author presents too few experimental results, which do not provide sufficient evidence to convincingly demonstrate the advantages of the proposed algorithm. The author should refer to article and increase the number of experiments.
We believe our experiments provide sufficient evidence for the speed-up of our algorithms in practice (see the running time subplots in Figures 2 and 3). Please note that due to the improved asymptotic running time of our algorithm (several theorems in Section 4), the speed-up of our algorithm (relative to baselines) only gets faster for larger tensors. That said, we appreciate the suggestions for extra experiments and are curious which “article” you are referring to.
> The author should provide ablation experiments
We believe our experiments include an ablation study, as we have provided experiments (1) with and without acceleration (i.e., the fourth column vs the third column in Figure 2; `accelerated-mini-als` is faster than `mini-als`), and (2) with and without leverage score sampling (i.e., `approximate-mini-als` vs `mini-als` in Figure 1). We appreciate more specific suggestions if there are ablations that you think would improve the paper.
> Whether the method can be transferred to t-svd framework?
Some recent studies have considered more efficient algorithms for t-SVD using leverage score sampling—see [1]. Due to this connection, we believe it might be possible to adopt our method for tensor completion under the t-SVD framework as well. We will add a more detailed discussion about this to the next version of the paper.
[1] Tarzanagh, Davoud Ataee, and George Michailidis. "Fast randomized algorithms for t-product based tensor operations and decompositions with applications to imaging data." SIAM Journal on Imaging Sciences 11, no. 4 (2018): 2629-2664. | null | null | null | null | null | null |
KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies | Accept (poster) | Summary: This paper proposes KEA, an exploration strategy for off-policy RL algorithms such as SAC, DQN, and SQL. Since these algorithms have built-in exploration strategies that interact with novelty-based methods like RND and NovelD, the authors introduce a switching mechanism to decouple them. This mechanism allows the agent to alternate between different exploration strategies based on the state. Experiments on 2D navigation tasks and three tasks from the DeepMind Control Suite demonstrate the superiority of KEA over baseline methods.
**update after rebuttal**
I appreciate the additional experimental results. However, given that some tasks and baselines are still missing, and considering the efficiency concern raised by the need to recompute intrinsic rewards at each sampling step, I prefer to retain my current score.
Claims And Evidence: From line 45: However, while unvisited states may offer high intrinsic rewards, the agent can not identify them due to a lack of prior experience ... ...
I didn’t fully understand this part. Why is the agent unable to identify unvisited states? According to RND, if certain states have not been visited, the prediction error should be large, making them distinguishable. Could you clarify this point?
Methods And Evaluation Criteria: Yes
Theoretical Claims: This paper does not include formal proofs or theoretical claims.
Experimental Designs Or Analyses: Yes. I checked all experimental designs and analyses.
(1) KEA alternates between $A^{B}$ and $A^{SAC}$. Since $A^{SAC}$ is not directly learned by the original SAC algorithm but instead relies on the shaped reward $r_{int} + r_{ext}$, I wonder whether KEA's advantage primarily stems from the shaped rewards. However, I did not find any ablation studies addressing this point. I also noticed experimental results for RND-DQN and RND-SQL, which appear to be based on shaped rewards. If that is the case, why are there no corresponding results for RND-SAC?
(2) For Walker Run Sparse and Cheetah Run Sparse, the thresholds are set at 0.3 and 0.35, respectively. Were these thresholds defined by the original tasks or set by the authors? If they were chosen by the authors, is there a specific rationale or any cited references supporting these choices? Additionally, please confirm that these thresholds were not determined based on experimental results.
(3) Experiments were conducted on only four tasks. The DeepMind Control Suite offers a variety of tasks—why were only three of them selected? Is there a specific reason for this choice?
Additionally, there are other exploration methods beyond RND and NovelD, such as RIDE. While RIDE is mentioned in the Related Work section, it was not included in the experimental comparisons.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: Exploration is a key challenge in RL research. However, KEA primarily builds on existing methods like RND and NovelD. Moreover, its alternating mechanism simply relies on a threshold that requires tuning. I did not observe any additional novel components in the method.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The authors aimed to decouple different exploration strategies, which is a somewhat novel idea that could inspire further research in this direction.
Other Comments Or Suggestions: The paragraph starting from line 168 is not very clear. For example, "$A^{B}$ includes a stochastic policy that maintains high variance by slowing down its gradient updates until extrinsic rewards are obtained ...."
The statement seems to imply some details about how to train $A^{B}$. However, since there is no appendix providing the training details, this part of the text causes some confusion.
Questions For Authors: A concern with this approach is that existing methods typically use on-policy RL algorithms for RND, partly because shaped rewards may become outdated as the agent explores the environment more thoroughly.
For instance, when the agent first visits a state $S$, its intrinsic reward may be high. However, upon revisiting $S$, the intrinsic reward decreases. In on-policy RL algorithms, these outdated rewards are discarded, whereas in off-policy RL algorithms, they can be reused multiple times. How does KEA ensure that this does not negatively impact the performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We are grateful to Reviewer Numr for the valuable feedback and insightful questions.
- **Theoretical Explanation**
We have provided the theoretical explanation in our response to **Reviewer hH71**. Please refer to that section for a detailed discussion.
- **Why the Agent Cannot Identify Unvisited States**
Thank you for the insightful question. You are correct that unvisited states yield high intrinsic rewards under RND. However, since SAC updates from transitions stored in the replay buffer, unvisited states--by definition--do not appear in it. As a result, their high intrinsic rewards cannot influence the Q-values or policy updates until the agent actually reaches them. We will revise the text to make this point clearer.
- **Comparison with RND-SAC**
Thank you for the question. We did include RND-SAC results—please refer to Figure 3 and Table 1. The baseline labeled "RND" in our experiments is exactly SAC with RND (i.e., RND-SAC). As shown in Section 3.1, KEA (KEA-RND) improves the learning efficiency over RND-SAC. We will revise the notation in the final version to improve clarity and avoid confusion.
- **Task-Specific Threshold Selection**
The thresholds for Walker Run Sparse (0.3) and Cheetah Run Sparse (0.35) were set by us, not defined in the original tasks. We chose them to ensure the tasks are sufficiently challenging to highlight differences in exploration ability, without making them so difficult that all methods fail or require excessive training time. These thresholds were not tuned based on experimental results.
- **Justification for Excluding RIDE**
Thank you for the suggestion. We chose NovelD as a baseline because it outperforms RIDE in the original paper, and our focus was on evaluating coordination strategies rather than comparing a wide range of exploration methods. Due to space and time constraints, we were unable to include RIDE, but we believe our comparison with NovelD sufficiently demonstrates the effectiveness of KEA.
- **Clarification on Supplementary Material**
Thank you for the comment. We do include supplementary material, which provides an analysis of varying UTD ratios and a visualization of intrinsic rewards, entropy, and action probabilities over time to illustrate how exploration behavior evolves during training.
- **Novelty and Contribution Beyond Existing Methods**
Thank you for the feedback. KEA indeed builds on existing exploration methods like RND and NovelD, but our contribution lies in identifying and addressing the coordination issue that arises when combining these with off-policy algorithms like SAC. KEA introduces a lightweight and general switching mechanism that improves exploration efficiency without modifying intrinsic rewards or learning objectives. While threshold-based, the mechanism is simple, effective, and easy to integrate with existing methods. We believe this addresses a practical gap that has been largely overlooked.
- **Clarification on Implementation Details**
We have provided the implementation details in our response to **Reviewer nUkT**. In the final version, we will include additional implementation details in the Appendix to ensure clarity and reproducibility.
- **Update Mechanism for $\mathcal{A}^B$**
We have provided details on the update mechanism for $\mathcal{A}^B$ in our response to **Reviewer nUkT**. Please refer to that section for a complete explanation.
- **Handling Outdated Intrinsic Rewards**
Thank you for the insightful question. To address the issue of outdated intrinsic rewards in off-policy settings, we recompute the intrinsic reward for each sampled transition during training, rather than using stored values. This keeps the novelty estimates up to date and mitigates potential negative effects on performance.
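A minimal sketch of this recompute-on-sample idea (our illustration; the count-based novelty stub stands in for the RND prediction error):

```python
import random
from collections import defaultdict

class ReplayBuffer:
    """Stores transitions WITHOUT intrinsic rewards; novelty is recomputed
    at sampling time so stale stored bonuses are never reused."""
    def __init__(self, novelty_fn):
        self.transitions = []        # (s, a, r_ext, s_next)
        self.novelty_fn = novelty_fn

    def add(self, s, a, r_ext, s_next):
        self.transitions.append((s, a, r_ext, s_next))

    def sample(self, beta=1.0):
        s, a, r_ext, s_next = random.choice(self.transitions)
        r_int = self.novelty_fn(s_next)   # fresh estimate, not a stored value
        return s, a, r_ext + beta * r_int, s_next

# Count-based stand-in for an RND error: novelty decays with visits.
visits = defaultdict(int)
def novelty(s):
    return 1.0 / (1 + visits[s])

buf = ReplayBuffer(novelty)
buf.add(s=0, a=0, r_ext=0.0, s_next=1)
r_before = buf.sample()[2]   # state 1 is still novel here
visits[1] += 9               # state 1 becomes well-visited later in training
r_after = buf.sample()[2]    # same stored transition, smaller bonus now
```

The same transition thus yields a shrinking shaped reward as the agent explores, which is the behavior described above.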
---
Rebuttal Comment 1.1:
Comment: While some of my concerns have been addressed, I would still appreciate clarification on why only three tasks from the DeepMind Control Suite were chosen, given the broader set available.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up. We selected these three tasks from the DeepMind Control Suite as they are commonly used in the literature, facilitating meaningful comparisons. Due to the scope of our study and limited computational resources, we prioritized tasks that balance complexity and training efficiency. Tasks that are too easy often fail to highlight differences between our method and baselines, while those that are too difficult typically require prohibitively long training times.
In this work, we evaluate our method on three tasks: Walker Run Sparse, Cheetah Run Sparse, and Reacher Hard Sparse. During the rebuttal, we also ran an additional task—Cartpole Swingup Sparse. The result is similar to Reacher Hard Sparse: the episode return curves for KEA-RND and RND are comparable, as both methods are able to obtain extrinsic rewards within a few episodes.
To further evaluate our approach, we also include rebuttal experiments on DeepSea—a hard-exploration benchmark—which complements our DeepMind Control Suite results. The results show that our method achieves comparable performance to SOFE on the easier levels of the DeepSea environments and outperforms SOFE as the difficulty increases (more details in **Reviewer CNJg**).
## Update after rebuttal
I appreciate the additional experimental results and explanations. However, my concerns regarding the experimental results remain. The performance of the proposed approach is still mixed; in many tasks, the baselines achieve similar or better results. Furthermore, the claim that the proposed approach could "operate near the boundary between explored and unexplored regions" is not sufficiently supported by the evidence provided.
Claims And Evidence: The reviewer found that several claims lack sufficient support from the experimental results. For instance, the claim on lines 180-181, stating that the coordination ensures consistent escape from local minima, is not substantiated by any quantitative or qualitative evidence demonstrating such an escape. Similarly, the claim on lines 203-205, suggesting that the mechanism ensures operation near the boundary between explored and unexplored regions, lacks evidence to confirm this capability. While these may be the intended goals of the proposed approach, the reviewer notes that the evidence provided does not convincingly demonstrate their achievement. The reviewer suggests that the authors provide additional evidence or revise their claims to better align with the experimental results.
More importantly, the reviewer has some concerns about the experimental settings. Please see the ‘Experimental Designs Or Analyses’ sections for more details.
Methods And Evaluation Criteria: The primary technical innovation presented in this work is the switching mechanism, which alternates between the two exploration policies based on the intrinsic reward received. However, the reviewer finds this technical contribution to be somewhat limited in its scope and significance.
For Evaluation Criteria, please see the ‘Experimental Designs Or Analyses’ sections.
Theoretical Claims: No formal theoretical claim is presented.
Experimental Designs Or Analyses: The experiments section lacks enough supporting evidence. Please see details below.
1. The reviewer finds that the experimental section of the paper provides limited evidence and is not convincing. Specifically, the reviewer notes that only four tasks (one 2D maze and three tasks from the DeepMind Control Suite) were used to evaluate the proposed approach. This limited number of tasks makes it difficult to properly assess the effectiveness and generalizability of the method. To address this limitation, the reviewer suggests that the authors include results on a more comprehensive set of tasks, such as additional tasks from the DeepMind Control Suite, to provide a more thorough and convincing evaluation of the proposed approach.
2. In the limited reported experimental results, the performance of the proposed approach appears mixed. Specifically, as shown in Table 1 and Table 3, the baseline NovelD seems to outperform the proposed approach on 2D Navigation and Reacher Hard Sparse. The mixed results raise concerns about the effectiveness of the proposed method.
3. The baseline approaches used in the experiments, while classic, appear to be somewhat outdated (RND, 2018 and NovelD, 2021). Comparison with state-of-the-art methods, such as [a] and [b], is essential to demonstrate the effectiveness and advancements of the proposed approach.
[a] Rethinking Exploration in Reinforcement Learning with Effective Metric-Based Exploration Bonus, Wang et al, 2024.
[b] Improving Intrinsic Exploration by Creating Stationary Objectives, Castanyer et al, 2024
Supplementary Material: Supplementary material is not reviewed.
Relation To Broader Scientific Literature: This paper aims to unify entropy-based exploration, as used in SAC, with intrinsic reward-driven methods, such as RND.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The reviewer finds that the clarity of the paper needs improvement, as many technical details require further clarification. For example, Figure 1 is confusing: it lacks an explanation of what the colors in the map and the gray bar in the middle of the map represent, and the distinctions between each figure are unclear. These elements need to be adequately addressed to enhance the overall clarity and comprehensibility of the paper.
The statement "slowing down its gradient updates until extrinsic rewards are obtained" (L173-175) lacks clarity. The authors need to provide a formal statement and explain how this mechanism functions.
Other Comments Or Suggestions: Typos: L152 (left) As figure 2, …
Questions For Authors: Please see the above discussions.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We greatly appreciate Reviewer CNJg for the helpful comments and thoughtful suggestions.
- **Theoretical Explanation**
We have provided the theoretical explanation in our response to **Reviewer hH71**. Please refer to that section for a detailed discussion.
- **Broader Contribution Beyond the Switching Mechanism**
Thank you for the feedback. Our contribution goes beyond proposing a switching mechanism--we identify and analyze a core inefficiency in combining SAC with novelty-based exploration, and design KEA as a practical coordination solution. The method improves exploration consistency, is lightweight, and integrates easily with existing approaches.
- **Faster Convergence as a Key Strength of KEA**
Thank you for the comment. While final returns in some tasks (e.g., 2D Navigation, Reacher Hard Sparse) are similar between KEA-NovelD and NovelD, KEA-NovelD consistently achieves faster convergence. For instance, in 2D Navigation, KEA-NovelD reaches a return of 0.6 around 190k steps, whereas NovelD requires 250k steps, as shown in the experimental results. This demonstrates KEA’s advantage in sample efficiency, even when final performance is close.
- **Comparison with Recent State-of-the-Art Methods in DeepSea**
Thank you for the helpful suggestion. We agree that comparing against recent state-of-the-art exploration methods is important to demonstrate the effectiveness of KEA. To address this, we conducted additional experiments in the **DeepSea** environment [1], evaluating our method (KEA-RND-SAC) against the recently proposed SOFE (Castanyer et al., 2024), as well as DeRL (Schäfer et al., 2021).
DeepSea is a hard-exploration benchmark defined on an $N \times N$ grid, where only penalized rightward actions lead to the goal—posing a significant credit assignment challenge for agents relying solely on extrinsic rewards.
Following the setup in SOFE, the table below summarizes average returns and one standard deviation over 100,000 evaluation episodes:
| Algorithm | DeepSea 10 | DeepSea 14 | DeepSea 20 | DeepSea 24 | DeepSea 30 |
|------------|------------------|------------------|------------------|------------------|------------------|
| DeRL-A2C | **0.98 ± 0.10** | 0.65 ± 0.23 | 0.42 ± 0.16 | 0.07 ± 0.10 | 0.09 ± 0.08 |
| DeRL-PPO | 0.61 ± 0.20 | 0.92 ± 0.18 | -0.01 ± 0.01 | 0.63 ± 0.27 | -0.01 ± 0.01 |
| DeRL-DQN | **0.98 ± 0.09** | **0.95 ± 0.17** | 0.40 ± 0.08 | 0.53 ± 0.27 | 0.10 ± 0.10 |
| SOFE-A2C | 0.94 ± 0.19 | 0.45 ± 0.31 | 0.11 ± 0.25 | 0.08 ± 0.14 | 0.04 ± 0.09 |
| SOFE-PPO | 0.77 ± 0.29 | 0.67 ± 0.33 | 0.13 ± 0.09 | 0.07 ± 0.15 | 0.09 ± 0.23 |
| SOFE-DQN | 0.97 ± 0.29 | 0.78 ± 0.21 | 0.70 ± 0.28 | **0.65 ± 0.26** | **0.42 ± 0.33** |
| **KEA-RND-SAC**| 0.97 ± 0.05 | 0.89 ± 0.06 | **0.73 ± 0.13** | **0.66 ± 0.12** | **0.43 ± 0.31** |
These results show that KEA-RND-SAC achieves comparable or superior performance to SOFE across varying levels of difficulty in the DeepSea environment. We will include these new results in the final version to strengthen the empirical evaluation and highlight KEA’s effectiveness in addressing complex exploration challenges.
-----
[1] Behaviour suite for reinforcement learning, Osband et al., 2019
[2] Improving Intrinsic Exploration by Creating Stationary Objectives, Castanyer et al, 2024
[3] Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration, Schäfer, Lukas, et al., 2021
- **Improving the Clarity of Figure 1**
Thank you for the valuable feedback. We agree that additional explanation would improve the clarity of Figure 1. In the final version, we will clarify the meaning of the color map, the gray bar (which represents an obstacle), and the differences between each subfigure. Each subfigure shows the agent’s behavior in a specific region at different time stages, illustrating how intrinsic rewards and policy entropy shape exploration. We will revise the figure and caption to make this clearer.
- **Clarification on Update Mechanism for $\mathcal{A}^B$**
Thank you for the comment. During training, we set the loss weight for $\mathcal{A}^B$ to zero until extrinsic rewards are obtained, effectively freezing its updates and maintaining high action variance. Once extrinsic rewards are observed, we set the loss weight to one to gradually train $\mathcal{A}^B$. This allows $\mathcal{A}^B$ to remain exploratory early on and focus on task optimization later. We agree that providing a formal statement and clearer explanation will improve clarity, and we will revise this part accordingly in the final version.
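As a simplified sketch, this gating can be expressed as a loss-weight switch (our illustration; here the weight flips immediately once an extrinsic reward is seen, whereas a gradual schedule is also possible):

```python
class CoBehaviorGate:
    """Gate for the co-behavior agent's actor loss: the weight stays at 0
    (updates frozen, action variance stays high) until the first nonzero
    extrinsic reward is observed, then switches to 1."""
    def __init__(self):
        self.seen_extrinsic = False

    def loss_weight(self, r_ext):
        if r_ext != 0.0:
            self.seen_extrinsic = True
        return 1.0 if self.seen_extrinsic else 0.0

gate = CoBehaviorGate()
w0 = gate.loss_weight(0.0)  # no extrinsic reward yet -> updates frozen
w1 = gate.loss_weight(1.0)  # first extrinsic reward -> training enabled
w2 = gate.loss_weight(0.0)  # remains enabled afterwards
```

In training, the returned weight would simply multiply the actor loss of $\mathcal{A}^B$ before backpropagation.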
- **Correction of Typographical Error**
Thank you. We will correct the sentence in L152. | Summary: This paper presents KEA, a RL-based to enhance exploration efficiency in sparse reward environments. The authors propose a proactive coordination mechanism between novelty-based exploration methods and the stochastic policy of Soft Actor-Critic . KEA introduces a co-behavior agent and a dynamic switching mechanism to maintain exploration diversity, improve learning efficiency, and mitigate redundant sample collection.
Claims And Evidence: Evidence includes experimental results from 2D navigation tasks and continuous control tasks from the DeepMind Control Suite, demonstrating improved performance and faster convergence compared to baselines.
Methods And Evaluation Criteria: The core method includes an additional co-behavior agent, operating alongside SAC, with a dynamic switching mechanism for exploration based on state novelty measured by intrinsic rewards.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Experiments include a 2D navigation task and three continuous control tasks from the DeepMind Control Suite.
Supplementary Material: I read the visualizations that illustrate how intrinsic rewards, entropy, and action probabilities evolve during training, providing additional insights into exploration behaviors.
Relation To Broader Scientific Literature: Builds upon existing novelty and curiosity-based exploration methods.
Essential References Not Discussed: Some bandit-based exploration strategies are missing. For example, (1) Neural contextual bandits with ucb-based exploration; (2) Ee-net: Exploitation-exploration neural networks in contextual bandits; (3) Neural thompson sampling.
Other Strengths And Weaknesses: Strengths: (1) Clear methodological contributions with empirical validation; (2) Effective visualizations illustrating key concepts.
Weaknesses: (1) The novelty may not surpass the bar of ICML; adding theoretical analysis would help improve the paper. (2) The method is primarily limited to off-policy RL settings, with limited applicability to on-policy methods.
Other Comments Or Suggestions: none
Questions For Authors: How sensitive is KEA to different novelty computation methods beyond RND and NovelD?
Can the proposed exploration method provide a theoretical performance guarantee?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We are grateful to Reviewer hH71 for the constructive and valuable feedback.
- **Theoretical Explanation of Exploration Strategy Interaction Problem**
> **Problem Setup and Assumptions**
>
> Consider an MDP with states $S=\{s_0, s_1, s_2\}$, actions $A=\{a_1, a_2\}$, and deterministic transitions $T(s_1|s_0, a_1)=1$, $T(s_2|s_0, a_2)=1$. Rewards combine sparse extrinsic and novelty-based intrinsic components:
$$
r(s, a, s') = r^{ext}(s, a, s') + \beta \ r^{int}(s')\ ,
$$
>
> with $r^{int}(s')=1/N(s')$, where $N(s')$ counts state visits. We assume Soft Actor-Critic (SAC) with entropy coefficient $\alpha$, uniform initial Q-values ($Q^0(s_i,a_j)=\epsilon$), uniform initial policy ($\pi^0(a_j|s_i)=0.5$), and single-step episodes.
> **Definitions and Policy Structure**
>
> The soft Q-function, denoted as $Q(s, a)$, is updated according to the following soft Bellman operator, which incorporates both intrinsic and extrinsic rewards:
$$
Q^{t+1}(s, a) = r^{ext}(s, a, s') + \beta \ r^{int}(s') + \gamma\ V(s')
$$
>
> where the soft value function $V(s)$ is given by:
$$
V(s) = \sum_a \pi(a|s)\ [Q(s, a) - \alpha\ \log \pi(a|s)]
$$
>
> The policy probabilities are given by a softmax of the Q-values:
$$
\pi^{t}(a|s) = \frac{exp(Q^{t}(s, a)/ \alpha)}{\sum_{a'} exp(Q^{t}(s, a')/ \alpha)}
$$
> **Interaction of Exploration Methods**
>
> Initially at state $s_0$, $Q^0(s_0,a_1)=Q^0(s_0,a_2)=\epsilon$, thus $\pi^0(a_j|s_0)=0.5$. After taking action $a_1$ and transitioning deterministically to $s_1$, the updated Q-value becomes:
$$
Q^{1}(s_0, a_1)
= \beta \ r^{int}(s_1) + \gamma\ V(s_1)
= \beta + \gamma\ (\epsilon + \alpha \log2)
$$
>
> assuming $r^{ext}(s_0,a_1,s_1)=0$, and initial intrinsic reward $r^{int}(s_1)=1 / N(s_1)=1$.
>
> The updated policy probabilities at $s_0$ are then:
$$
\pi^{1}(a_1|s_0) = \frac{exp(Q^{1}(s_0, a_1)/ \alpha)}{\sum_{a'} exp(Q^{1}(s_0, a')/ \alpha)}\ ,\ \
\pi^{1}(a_2|s_0) = 1 - \pi^{1}(a_1|s_0)
$$
> **Analytical Derivation of Step Count $k$**
>
> Define the action probability ratio at step $k$ as:
$$
\eta^k = \frac{\pi^{k}(a_1|s_0)}{\pi^{k}(a_2|s_0)}
= exp(\frac{Q^{k}(s_0, a_1) - Q^{k}(s_0, a_2)}{\alpha})
$$
>
> With repeated visits to state $s_1$, the intrinsic reward decays as $r^{int}(s_1) = 1/k$, giving:
$$
Q^{k}(s_0, a_1)
= \frac{\beta}{k}+\gamma\ (\epsilon+\alpha \log(2)), \ \ Q^{k}(s_0, a_2) = \epsilon
$$
>
> For equal probabilities ($\eta^k = 1$), solving explicitly yields:
$$
k^* = \frac{\beta}{(1-\gamma)\epsilon-\gamma \alpha \log(2)}
$$
> **Interpretation**
>
> The derived equation explicitly demonstrates how the intrinsic reward factor ($\beta$) influences equilibrium behavior and highlights constraints on valid parameter ranges.
>
> Initially, intrinsic rewards increase the probability of revisiting novel states, resulting in repeated collection of similar transitions. This effect diminishes as the state novelty decays, and action probabilities return to equilibrium (0.5). The delayed shift from novelty-driven to entropy-driven exploration may thus introduce inefficiencies and slow down learning. Despite the simplified setup, this dynamic is likely broadly relevant, including cases with longer episodes and more complex novelty-based intrinsic rewards.
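The closed form for $k^*$ can be checked numerically (the constants below are illustrative and must satisfy $(1-\gamma)\epsilon > \gamma \alpha \log 2$ for a valid $k^*$):

```python
import math

# Illustrative constants satisfying (1 - gamma) * eps > gamma * alpha * log(2).
beta, gamma, alpha, eps = 1.0, 0.5, 0.5, 2.0

def eta(k):
    """Action-probability ratio pi(a1|s0) / pi(a2|s0) after k visits to s1."""
    q1 = beta / k + gamma * (eps + alpha * math.log(2))   # Q^k(s0, a1)
    q2 = eps                                              # Q^k(s0, a2)
    return math.exp((q1 - q2) / alpha)

k_star = beta / ((1 - gamma) * eps - gamma * alpha * math.log(2))
```

Here $\eta(1) > 1$ reflects the initial novelty-driven bias toward $a_1$, and $\eta(k^*) = 1$ recovers the equilibrium point derived above.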
- **Explanation of How KEA Addresses the Interaction Problem**
As discussed in our theoretical explanation, combining SAC with novelty-based exploration can lead to inefficiencies—such as repeatedly visiting a state (e.g., $s_1$) to reduce its novelty and lower the action probability ratio $\eta$. KEA addresses this by switching to the co-behavior agent, whose $\eta$ is already close to 1, enabling more efficient coordination without excessive additional sampling.
- **On-policy Limitation**
Thank you for the suggestion. We focused KEA on off-policy RL due to challenges in combining novelty-based exploration with off-policy methods, especially in transfer across continuous control tasks. KEA addresses these issues by coordinating exploration strategies, improving efficiency and performance. Adapting KEA to on-policy settings would require reconsidering or redesigning the interaction between $\mathcal{A}^{SAC}$ and $\mathcal{A}^{B}$, especially since a shared replay buffer is no longer available in such frameworks.
- **Comparison with Recent State-of-the-Art Methods in DeepSea**
To further strengthen our empirical evaluation, we performed additional experiments in the DeepSea environment—an established benchmark for hard-exploration tasks. The results show that KEA-RND-SAC performs on par with or better than recent state-of-the-art methods. For more details, please refer to our response to **Reviewer CNJg**.

---

Summary: This paper proposes an exploration technique for the sparse reward problem. The authors propose a co-behavior agent to reduce interference when combining two exploration mechanisms. In particular, the co-behavior agent performs novelty-based exploration while the standard agent explores through a traditional stochastic policy. Further, they introduce a switching mechanism to dynamically select between the two agents.
Claims And Evidence: They claim that novelty based exploration technique may interfere with the implicit exploration mechanism of stochastic policy. While they provide an intuitive explanation through the heat-map of action probabilities, this lacks a theoretical grounding. Experimental results resonate with the claims.
Methods And Evaluation Criteria: Assuming the discussed interference, the proposed approach appears as an interesting and simple solution. However, there is no evaluation/comment on the overhead (time and memory) introduced by the additional agent.
Theoretical Claims: The paper doesn't provide any theoretical insights. It would be nice to see a theoretical backup of why natural shifts between novelty-based exploration and SAC’s stochastic policy-based exploration may result in delays and inefficiencies.
Experimental Designs Or Analyses: Overall, the experiments are well designed. In Figure 5, SAC results are missing in the first two experiments. Also, further experiments on different navigation tasks or high-fidelity environments that require exploration would benefit the paper.
Supplementary Material: Supplementary materials include additional analysis and visualization that help.
Relation To Broader Scientific Literature: I found this work interesting and orthogonal to the existing works. Disentangling different exploration mechanisms with separate networks (agents) nicely juxtaposes with the current literature, which mostly incorporates novelty/count-based exploration mechanisms within the same network.
Essential References Not Discussed: The paper should briefly discuss ensemble-based exploration methods, which often use multiple agents.
Other Strengths And Weaknesses: Experimental results with other off-policy methods clearly indicates the wide applicability of this method.
The writing could be improved especially in terms of clarity.
Other Comments Or Suggestions: I found it pretty confusing why the authors name the novelty-based agent as A^SAC and the traditional stochastic agent as A^B. I would suggest altering the notation, denoting the entropy-based traditional agent as A^SAC and naming the other agent accordingly.
In line 264, "This demonstrates that our method not only maintains exploration efficiency but also improves convergence speed". I believe it is better to attribute the faster convergence to improved exploration.
Questions For Authors: Since there is a trade-off in selecting $\sigma$, are there any findings or prescriptions for selecting $\sigma$ under different reward structures? Or does it need to be manually tuned as the problem (reward structure) changes?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank Reviewer Acaj for the valuable suggestions and helpful comments.
- **Theoretical Explanation**
We have provided the theoretical explanation in our response to **Reviewer hH71**. Please refer to that section for a detailed discussion.
- **Computational Overhead of the Additional Agent**
Both agents share a unified replay buffer, which improves data efficiency and limits memory overhead. Training is slightly slower due to computing losses for both $\mathcal{A}^{SAC}$ and $\mathcal{A}^B$. At inference time, only one agent ($\mathcal{A}^B$) is active, and the action computation is equivalent to a standard SAC policy, so runtime and memory overhead remain minimal.
- **Comparison with Recent State-of-the-Art Methods in DeepSea**
Thank you for the suggestion. We conducted additional experiments in the DeepSea environment, a well-established hard-exploration benchmark. In these experiments, KEA-RND-SAC was compared against recent state-of-the-art methods and demonstrated comparable or superior performance. For more details, please refer to our response to **Reviewer CNJg**.
- **Clarification on Missing SAC Results in Figure 5**
Thank you for the feedback. SAC results in Figure 5 remain zero throughout training. Without novelty-based exploration, SAC fails in these hard-exploration tasks.
- **Clarification on Notation of Agents**
Thank you for the suggestion. We agree that the current naming could be confusing and will revise the notation in the final version to improve clarity and readability.
- **Clarification on the Relationship Between Exploration and Convergence**
Thank you for the suggestion. We agree that the faster convergence observed in our method is a result of improved exploration. We will revise the sentence accordingly in the final version to better reflect this relationship.
- **Limitations and Future Directions in Threshold Selection**
Lower thresholds encourage more stochastic exploration, while higher ones rely more on novelty-based exploration. The threshold balances these two strategies and is task-dependent. In our experiments, we hand-tuned the threshold to keep the usage of $\mathcal{A}^{SAC}$ around 85–90%. While automated methods (e.g., grid search, Bayesian optimization) could optimize this threshold, defining a general and adaptive selection mechanism remains a non-trivial challenge and a promising direction for future work.

---

Summary: This paper introduces KEA (Keeping Exploration Alive), a method designed to address coordination issues that arise when combining Soft Actor-Critic (SAC) with novelty-based exploration methods. The research identifies that when SAC's stochastic policy exploration coexists with novelty-based exploration, the complex interactions between these strategies can lead to exploration inefficiencies and redundant sampling.
The paper's main contribution is proposing a mechanism to proactively coordinate different exploration strategies: (1) introducing a co-behavior agent ($\mathcal{A}^\text{B}$) that works alongside the SAC agent augmented with existing novelty-based methods ($\mathcal{A}^\text{SAC}$); and (2) designing a dynamic switching mechanism ($\psi$) based on state novelty that intelligently determines which strategy to use. When novelty exceeds a threshold, the system switches to the co-behavior agent to maintain high-entropy exploration; when novelty falls below the threshold, the system employs the SAC agent for exploring relatively novel regions.
## update after rebuttal
After carefully reviewing the authors' responses, I find that they have adequately addressed my main concerns. I have decided to update my rating to weak accept.
Claims And Evidence: Several core claims in the paper lack sufficient supporting evidence:
1. Exploration Strategy Interaction Problem: Although the paper identifies the issue of inefficiency caused by interaction between exploration strategies through Figure 1 and 2D navigation experiments, it lacks theoretical proof.
2. Effectiveness of KEA: The paper claims KEA effectively coordinates exploration strategies, but testing on only 3 DeepMind Control environments is insufficient to support claims of broad effectiveness.
3. Generalizability of KEA: The claim that KEA can be generalized to other off-policy methods is supported by limited evidence from simple tests with DQN and SQL. Particularly for cases where the effect is not clear (e.g., standard DQN), the paper fails to provide theoretical explanations.
4. Design Rationale for Coordination Mechanism: The paper proposes a threshold-based switching mechanism but lacks theoretical analysis or extensive experiments to support this design over other possible coordination approaches. The basis for threshold selection is also insufficiently justified.
While the paper presents interesting problems and solution approaches, it lacks comprehensive and in-depth experimental evidence and theoretical support, with many key claims requiring more substantial evidence for verification.
Methods And Evaluation Criteria: The method proposed in the paper conceptually aligns with the problem it aims to solve (the coordination issue when combining SAC with novelty-based exploration methods), but there are significant deficiencies in the evaluation criteria: (1) The 2D navigation task experimental environment lacks critical details, making it impossible to assess or reproduce. (2) The evaluation is limited to only 3 sparse reward environments from DeepMind Control Suite, which is insufficient to demonstrate the method's broad applicability. (3) Comparisons are restricted to original SAC, SAC-RND, and SAC-NovelD, without benchmarking against a wider range of state-of-the-art exploration methods.
Theoretical Claims: The coordination mechanism design lacks theoretical foundation: the switching mechanism appears to be based on intuition rather than mathematical analysis, and lacks theoretical justification for why this design is superior to other possible approaches.
Experimental Designs Or Analyses: The authors conducted experiments in a 2D navigation task and three sparse reward environments from the DeepMind Control Suite, but the experimental design has significant limitations. First, the 2D navigation task, which serves as the primary concept validation platform, is severely under-described, lacking critical details about environment parameters, reward design, and state space. Although Figure 8 provides some visualization results, these are difficult to comprehensively evaluate and interpret due to insufficient background information about the task.
In the DeepMind Control Suite experiments, the authors selected three environments: Walker Run Sparse, Cheetah Run Sparse, and Reacher Hard Sparse. Results show that KEA-RND and KEA-NovelD indeed outperform their unmodified counterpart algorithms, providing some support for the method's effectiveness. However, selecting only three highly homogeneous locomotion control tasks as evaluation benchmarks is clearly insufficient and fails to demonstrate the method's effectiveness across more diverse environments. Particularly questionable is the lack of exploration into the method's performance in non-sparse reward environments, leaving unanswered whether KEA would introduce unnecessary computational overhead or even negative effects in standard reward settings.
Regarding the threshold selection experiments, the authors tested different thresholds (0.50 to 1.50) and their impact on KEA's performance. Results show that threshold settings affect the frequency of co-behavior agent usage, with higher thresholds leading to lower usage rates, while a threshold of σ=1.00 achieved optimal performance. The paper lacks theoretical analysis explaining why specific thresholds perform better and doesn't provide a general methodology for threshold selection. This lack of theoretical guidance for parameter selection means that threshold tuning in practical applications may require extensive experimentation, increasing the difficulty of applying the method.
The UTD ratio experiment is one of the more complete analyses in the paper, where the authors explored algorithm performance under different update intensities by adjusting the update frequency of SAC and RND (8 to 48 times). KEA-RND can maintain higher policy entropy in regions with high intrinsic rewards, reducing the likelihood of getting stuck in local optima. Figure 8 visualizes the evolution of exploration behavior under different UTD settings, intuitively illustrating how KEA coordinates different exploration strategies.
Supplementary Material: I reviewed Appendix A: Analysis of Varying UTD Ratios. This section provides important additional experiments that examine the impact of update frequency (UTD ratio) on the effectiveness of the KEA method.
Appendix A.1: Experimental Results details the setup of the UTD ratio experiments, which observe performance changes by adjusting the number of SAC gradient updates (8, 16, 32, 48) and RND updates (8 or 16). The results demonstrate that KEA-RND achieves higher average returns than RND across various UTD settings. Meanwhile, Appendix A.2: Visualization illustrates the changes in intrinsic rewards, entropy, and action probabilities throughout the training process. The comparative analysis shows that RND tends to get stuck in local optima in worst-case scenarios, while KEA-RND consistently reaches the goal in both best and worst-case scenarios.
Relation To Broader Scientific Literature: KEA builds upon the long-standing challenge of sparse rewards in reinforcement learning. In recent years, intrinsic reward methods have become mainstream solutions, including curiosity-driven [1] and novelty-driven [2,3] approaches. KEA does not propose an entirely new intrinsic reward mechanism, but rather identifies and addresses a specific problem that arises when combining these methods with maximum entropy reinforcement learning algorithms.
A core insight of the paper relates to the interaction between entropy-regularized algorithms like SAC [4] and novelty-based exploration. SAC optimizes both exploration and exploitation by maximizing policy entropy in the objective function, an idea that originates from early maximum entropy reinforcement learning work [5]. This paper points out that in regions with high intrinsic rewards, policies tend to become deterministic, conflicting with the entropy maximization objective and leading to inefficient exploration.
KEA's primary innovation lies in proposing an exploration strategy coordination mechanism, which relates to several research directions:
1. Hierarchical Reinforcement Learning: KEA's co-behavior agent (AB) and switching mechanism resemble high-level policies in hierarchical RL [6], but focus on exploration strategies rather than task decomposition.
1. Multi-Policy Learning: Similar to the Options framework [7], KEA employs multiple policies, but its purpose is to solve a specific exploration strategy interaction problem.
References:
[1] Pathak, Deepak, et al. "Curiosity-driven exploration by self-supervised prediction." International Conference on Machine Learning. PMLR, 2017.
[2] Burda, Yuri, et al. "Exploration by Random Network Distillation." International Conference on Learning Representations, 2019.
[3] Badia, Adrià Puigdomènech, et al. "Never Give Up: Learning Directed Exploration Strategies." International Conference on Learning Representations, 2020.
[4] Haarnoja, Tuomas, et al. "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor." 2018.
[5] Ziebart, Brian D., et al. "Maximum entropy inverse reinforcement learning." AAAI, 2008.
[6] Barto, Andrew G., and Sridhar Mahadevan. "Recent advances in hierarchical reinforcement learning." Discrete event dynamic systems 13 (2003): 341-379.
[7] Sutton, Richard S., Doina Precup, and Satinder Singh. "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning." Artificial intelligence 112.1-2 (1999): 181-211.
Essential References Not Discussed: The paper fails to cite CAT-SAC [1], which addresses nearly identical problems with a complementary approach. While KEA introduces a co-behavior agent with a switching mechanism, CAT-SAC directly modifies SAC's entropy temperature to be curiosity-aware, making it higher in unfamiliar states and lower in familiar ones.
References:
[1] Lin, Junfan, et al. "Cat-sac: Soft actor-critic with curiosity-aware entropy temperature." (2020).
Other Strengths And Weaknesses: Weaknesses:
1. Limited Experimental Evaluation: The paper only evaluates the method on three DeepMind Control sparse reward environments and one 2D navigation task, which is insufficient to fully demonstrate the method's generalizability across different problem domains.
2. Lack of Theoretical Foundation: The paper provides no theoretical justification for why the proposed switching mechanism works effectively.
3. Insufficient Comparison Baselines: The comparison with state-of-the-art methods is limited, missing opportunities to benchmark against other exploration coordination approaches.
4. Inadequate Environment Details: The 2D navigation task, which serves as a motivating example, lacks comprehensive description of its reward function, state representation, and action space, making it difficult to assess the method's effectiveness or reproduce the results.
5. Threshold Selection: The method relies on manually tuned threshold $σ$ without providing an automatic selection mechanism, potentially limiting its applicability to new environments.
6. Missing Implementation Details: The paper omits critical details about the configuration of $\mathcal{A}^\text{B}$ and $\mathcal{A}^\text{SAC}$, such as network architecture, parameter sizes, learning rates, and discount factors, which are essential for reproduction.
Strengths:
1. Elegant Solution Approach: The paper proposes a conceptually simple yet effective method for coordinating different exploration strategies, addressing a practical problem in reinforcement learning with sparse rewards.
2. Insightful Visualizations: The visualizations effectively demonstrate how the proposed method maintains exploration in high-novelty regions compared to baseline approaches.
Other Comments Or Suggestions: I recommend the following improvements to strengthen the paper:
1. Expand the background section to include a more thorough explanation of maximum entropy RL, which would help readers better understand the problem the paper aims to solve.
2. Provide detailed configuration information for both $\mathcal{A}^\text{B}$ and $\mathcal{A}^\text{SAC}$, including network architecture, parameter sizes, learning rates, and discount factors to ensure reproducibility.
3. Include comprehensive details about the 2D navigation task setup, including state/action spaces, reward function design, and environmental constraints.
4. Add pseudocode for the proposed algorithm to clarify the implementation details and make the method more accessible to other researchers.
Questions For Authors: 1. Could you elaborate on the update mechanism for the policy $\mathcal{A}^\text{B}$? The paper mentions it follows the standard SAC update, but more details would be helpful.
2. Please provide more information about the "includes a stochastic policy that maintains high variance by slowing down its gradient updates until extrinsic rewards are obtained" mechanism for $\mathcal{A}^\text{B}$. What specific approach is used to slow down these updates, and how were the parameters for this mechanism determined?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank Reviewer nUkT for the thoughtful and detailed feedback.
- **Theoretical Explanation**
We have provided the theoretical explanation in our response to **Reviewer hH71**. Please refer to that section for a detailed discussion.
- **Rationale Behind the Threshold-Based Switching Mechanism**
We provide a detailed theoretical explanation in our response to **Reviewer hH71**. Instead of requiring repeated visits to reduce the action probability ratio $\eta$ (e.g., visiting $s_1$ multiple times), KEA switches to the co-behavior agent, which already maintains a high $\eta$ value. This allows for more efficient and timely coordination between exploration strategies without the need for additional sampling.
- **Comparison with Recent State-of-the-Art Methods in DeepSea**
Thank you for the thoughtful feedback. We agree that broader evaluation is important to support the effectiveness and generality of KEA. To address this, we conducted additional experiments in the DeepSea environment, a widely used benchmark for hard-exploration tasks. These experiments evaluate KEA-RND-SAC against recent state-of-the-art methods, including SOFE (Castanyer et al., 2024) and DeRL (Schäfer et al., 2021). The results show that KEA achieves comparable or superior performance across increasing levels of task difficulty, reinforcing the value of its coordination mechanism in challenging exploration settings. For detailed results and discussion, please refer to our response to **Reviewer CNJg**.
- **Why KEA Is Not Effective with DQN**
In DQN, $\epsilon$-greedy exploration is independent of Q-values, so the issue KEA addresses—the delayed shift from novelty-driven exploration back to the method's original exploration—does not arise. Therefore, KEA is not expected to provide benefits in this case. We included DQN mainly to highlight this contrast and to emphasize that KEA is better suited to methods where exploration is influenced by intrinsic-reward-driven Q-values.
- **Clarification on the 2D Navigation Task Setup**
Thank you for the feedback. The task description is provided in Section 3.1. To clarify further: the environment is a 41×41 grid with a 4×34 obstacle in the center; the goal is fixed at (10, 0), and the agent starts randomly in the left half. The maximum episode length is 100 steps, and the episode terminates when the agent reaches the goal, the boundary, or the obstacle. A reward is given only upon reaching the goal. We will improve clarity and reproducibility in the final version.
- **Limitations and Future Directions in Threshold Selection**
Lower thresholds encourage more stochastic exploration, while higher ones rely more on novelty-based exploration. The threshold balances these two strategies and is task-dependent. In our experiments, we hand-tuned the threshold to keep the usage of $\mathcal{A}^{SAC}$ around 85–90%. While automated methods (e.g., grid search, Bayesian optimization) could optimize this threshold, defining a general and adaptive selection mechanism remains a non-trivial challenge and a promising direction for future work.
- **Relation to CAT-SAC and Distinction from KEA**
Thank you for pointing out CAT-SAC, which we will cite. While both methods aim to improve coordination in novelty-driven SAC, CAT-SAC modulates entropy temperature based on novelty, whereas KEA introduces a co-behavior agent and a switching mechanism. A key advantage of KEA is that the co-behavior agent is trained without intrinsic rewards and thus reliably converges to the optimal policy for the extrinsic task.
- **Clarification on Implementation Details**
Thank you for the feedback. Both $\mathcal{A}^B$ and $\mathcal{A}^{SAC}$ use fully connected (256, 256) actor and Q-networks. Learning rates are 0.0003 for actors and 0.001 for Q-networks, with a discount factor of 0.99. We will include full configuration details for $\mathcal{A}^B$, $\mathcal{A}^{SAC}$, and the novelty-based models in the Appendix of the final version.
- **Update Mechanism for $\mathcal{A}^B$**
Thank you for the question. $\mathcal{A}^B$ is updated jointly with $\mathcal{A}^{SAC}$. In each update step, we sample a batch from the replay buffer and compute the soft Q-value losses and policy losses for both $\mathcal{A}^{SAC}$ and $\mathcal{A}^B$ separately, then sum them for optimization. Notably, before any extrinsic reward is received (i.e., before task completion), the loss weight for $\mathcal{A}^B$ is set to zero.
- **Mechanism for Maintaining High Variance in $\mathcal{A}^B$**
During training, we set the loss weight for $\mathcal{A}^B$ to zero until extrinsic rewards are obtained, effectively freezing its updates and maintaining high action variance. Once extrinsic rewards are observed, we set the loss weight to one to gradually train $\mathcal{A}^B$. This allows $\mathcal{A}^B$ to remain exploratory early on and focus on task optimization later.
- **Including Pseudocode**
Thank you for the feedback! We will add pseudocode for KEA in the final version.
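The gated joint update for $\mathcal{A}^B$ described in the replies above can be sketched as follows (illustrative code based on our reading of the rebuttal, not the authors' implementation):

```python
# Illustrative sketch, not the authors' implementation. In practice the
# losses would be tensors from the SAC objectives; scalars suffice to
# show the gating behavior.
def joint_loss(loss_sac, loss_b, extrinsic_reward_seen):
    # A^B's loss weight stays 0 until an extrinsic reward is observed,
    # freezing A^B and keeping its policy's action variance high.
    w_b = 1.0 if extrinsic_reward_seen else 0.0
    return loss_sac + w_b * loss_b
```

Before task completion, only the $\mathcal{A}^{SAC}$ loss drives the optimizer; once extrinsic rewards are observed, both losses contribute.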
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses to my concerns. While KEA demonstrates promising results on exploration-focused tasks, I remain uncertain about its effectiveness across a broader range of continuous action space environments. I would like to see evaluation on more DeepMind Control Suite tasks to verify whether KEA performs equal to or better than standard SAC in environments that don't specifically focus on exploration challenges.
Additionally, the manual threshold tuning remains a significant limitation. Even with grid search or Bayesian optimization, determining appropriate thresholds for new environments would require substantial computational resources, limiting the method's practical applicability.
Based on these remaining concerns, I maintain my current rating (leaning towards reject).
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for engaging deeply with our work. We would like to clarify a few key points regarding the scope and design choices behind KEA.
Our primary motivation is to improve performance on hard exploration tasks, where standard RL algorithms like SAC typically struggle. Novelty-based exploration is commonly used in such cases to enhance exploration. KEA is specifically designed to address challenges in hard exploration tasks, by combining novelty-based exploration with SAC in a better way. Therefore, while we understand the interest in broader evaluations, environments that do not present significant exploration challenges are not the main focus of our study.
To further evaluate our approach under hard exploration tasks, in the rebuttal, we also include experiments on DeepSea—a hard exploration benchmark—which complements our DeepMind Control Suite results. The results show that our method achieves comparable performance to SOFE on the easier levels of the DeepSea environments and outperforms SOFE as the difficulty increases (more details in **Reviewer CNJg**).
Regarding the need to set the switching threshold $\sigma$, we found that setting $\sigma$ around 1 is generally effective and robust across different tasks. This follows common practice in novelty-based exploration methods, where intrinsic rewards are normalized using a running mean and standard deviation, leading to a distribution with mean near 0 and standard deviation close to 1. As a result, $\sigma = 1$ serves as a reasonable default, without requiring extensive tuning.
Moreover, as shown in Table 2, a small discrete set of values with a step size of 0.25, such as $\{0.5, 0.75, 1, 1.25, 1.5\}$, is sufficient for tuning when necessary. In our experiments, for each new environment, we run a single episode and select the $\sigma$ value that leads to an approximately 15% usage rate of the co-behavior agent ($\mathcal{A}^B$), which serves as a practical and efficient estimate. This procedure helps limit the computational cost.
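A minimal sketch of this selection heuristic under the stated normalization assumption (illustrative helper names, not the authors' code; novelty scores are assumed to be normalized to roughly zero mean and unit variance):

```python
import random

def usage_rate(novelties, sigma):
    # Fraction of states whose normalized novelty exceeds the threshold,
    # i.e. where the co-behavior agent A^B would be selected.
    return sum(n > sigma for n in novelties) / len(novelties)

def select_sigma(novelties, candidates=(0.5, 0.75, 1.0, 1.25, 1.5), target=0.15):
    # Pick the candidate whose A^B usage rate is closest to ~15%.
    return min(candidates, key=lambda s: abs(usage_rate(novelties, s) - target))

# With novelty normalized to roughly zero mean / unit variance,
# sigma = 1 lands near the ~16% upper tail, matching the default above.
random.seed(0)
novelties = [random.gauss(0.0, 1.0) for _ in range(10000)]
```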
We appreciate your constructive comments and hope this helps clarify our design choices and the intended scope of our contributions. | null | null | null | null |
---

Habitizing Diffusion Planning for Efficient and Effective Decision Making
Paper Decision: Accept (poster)

Summary: This paper introduces a novel and general framework that can accelerate existing diffusion-based planning models. The motivation is that most diffusion-based planning methods are very slow due to the iterative denoising steps during deployment. In this work, a VAE-like learning framework is proposed to learn a distilled policy from the pre-trained diffusion models. The framework contains a prior encoder (state as input), a posterior encoder (state and action as input), and a latent decoder based on the posterior latent. The typical MSE reconstruction loss is used to distill the diffusion policy into the VAE policy, and an extra KL-divergence loss is devised to align the decision spaces of the prior and posterior encoders. A critic is trained to evaluate the quality of the samples, and during inference, the generated sample with the highest critic score is selected for deployment. The authors conduct extensive experiments on the D4RL benchmark and compare with deterministic policies, diffusion policies/planners, and accelerated diffusion-based methods. The proposed method demonstrates much better computational efficiency with solution quality similar to the strongest baselines.
## update after rebuttal
I have read the authors' response as well as the other reviewers' comments. Overall, I think the work is novel and acceptable for ICML, though a few flaws exist. I updated my final rating to "weak accept."
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes (although there is no proof in this paper).
Experimental Designs Or Analyses: Yes. I checked all the experiments.
Supplementary Material: No. This paper doesn't have supplementary materials.
Relation To Broader Scientific Literature: The diffusion model framework proposed in this paper can work at very high frequency, opening the door to fast online deployment for systems that require real-time performance.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Pros:
* The idea is very straightforward and novel.
* The paper is easy to read.
* Extensive experiments have been conducted (including reasonable ablation studies).
Cons:
* Missing vital baselines: should compare with flow-based methods [1, 2], vanilla VAE, and consistency models [3, 4].
* The proposed method requires a pre-trained diffusion model and also needs to train a critic, so it requires more training time than "train-from-scratch" methods.
* Though the critic can improve per-step quality, the overall episode return does not seem to improve much and even degrades later.
References:
[1] Zhang, Zhilong, et al. "Flow to better: Offline preference-based reinforcement learning via preferred trajectory generation." The Twelfth International Conference on Learning Representations. 2023.
[2] Zhang, Qinglun, et al. "FlowPolicy: Enabling Fast and Robust 3D Flow-based Policy via Consistency Flow Matching for Robot Manipulation." arXiv preprint arXiv:2412.04987 (2024).
[3] Chen, Yuhui, Haoran Li, and Dongbin Zhao. "Boosting continuous control with consistency policy." arXiv preprint arXiv:2310.06343 (2023).
[4] Prasad, Aaditya, et al. "Consistency policy: Accelerated visuomotor policies via consistency distillation." arXiv preprint arXiv:2405.07503 (2024).
Other Comments Or Suggestions: N/A.
Questions For Authors: Why not report the computation time for BC and SRPO baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and acknowledging that the "***idea is very straightforward and novel / easy to read / extensive experiments / reasonable ablation studies***." We believe the following response can address your concerns.
> Q1: Comparison with other generative decision-making baselines
Thank you for your suggestions. Since references 2 and 4 involve visuomotor policies and lack the same benchmarks, we compared our method with:
- Flow to Better: Offline preference-based reinforcement learning via preferred trajectory generation
- Boosting continuous control with consistency policy
The policy of FTB is also deterministic, which is the same as BC/SRPO.
| Environment | HI (Ours) | FTB | CPIQL | CPQL |
|-------------|-----------|-----|-------|------|
| HalfCheetah-ME | **98.0 ± 0.0** | 85.2 ± 0.7 | 81.0 ± 1.7 | 97.8 ± 0.5 |
| HalfCheetah-MR | **48.5 ± 0.0** | 38.4 ± 1.3 | 48.0 ± 1.4 | 46.6 ± 0.8 |
| HalfCheetah-M | 53.5 ± 0.0 | - | 54.6 ± 1.0 | **56.9 ± 0.9** |
| Hopper-ME | 92.4 ± 2.0 | **111.1 ± 2.0** | 110.6 ± 1.4 | 110.4 ± 3.2 |
| Hopper-MR | **102.0 ± 0.0** | 89.6 ± 4.9 | 100.6 ± 1.5 | 97.7 ± 4.6 |
| Hopper-M | **102.5 ± 0.1** | - | 99.7 ± 2.0 | 99.9 ± 4.5 |
| Walker-ME | **113.0 ± 0.0** | 109.3 ± 0.3 | 110.9 ± 0.2 | 110.9 ± 0.1 |
| Walker-MR | **102.0 ± 0.0** | 79.1 ± 1.4 | 91.8 ± 2.8 | 93.6 ± 5.6 |
| Walker-M | **91.3 ± 0.1** | - | 86.2 ± 0.6 | 82.1 ± 2.4 |
| **Tasks Average** | **89.2** | - | 87.0 | 88.4 |
| **Shared Tasks (w/o M dataset) Average** | **_92.7_** | 85.5 | 90.5 | **92.8** |
| **Frequency** | 1329.7 | **11892.6** | 578.2 | 570.1 |
We conducted comparisons on common MuJoCo locomotion tasks (since these works did not present results on the other benchmarks in our paper, and FTB lacks results on the Medium dataset). Habi achieves the best performance in most tasks and demonstrates a higher overall score across environments.
Importantly, we do not want to convey the message that Habi (a variational Bayesian method) is better than flow-based methods. Indeed, they are **orthogonal** and can be combined (e.g., Kingma et al., 2016. "Improved Variational Inference with Inverse Autoregressive Flow"). We appreciate your question and will take a deeper dive into potential combinations with flow-based methods in our future work.
> Q2: The proposed method requires a pre-trained diffusion model and also needs to train a critic - which requires more time in training than other "train-from-scratch" methods.
Thank you for your question. This work positions Habi as a general acceleration framework designed to speed up given diffusion planners. Our starting point is to maintain diffusion planning performance while significantly improving decision frequency.
Indeed, Habi's training process is lightweight; e.g., Habitual Training takes only around 2 hours on one A100 GPU.
> Q3: Though the critic can improve per-step quality, the overall episode return does not seem to be improved much and even degrades later.
There is probably a misunderstanding. We assume you are talking about Figure 7 (please let us know if not the case). Figure 7 illustrates the impact of increasing candidates on performance given the same critic. When there are too many candidates, performance may decline. Therefore, around 5 candidates is recommended. This phenomenon occurs because too many candidates amplify the critic's judgment errors for individual actions.
> Q4: Why not report the computation time for BC and SRPO baselines?
We acknowledge that BC and SRPO are super fast, as they use a deterministic policy function with single-pass inference (i.e., an MLP). We did not report their computation time since the bottleneck of inference time no longer comes from action computation, but rather from other parts such as `env.step(action)`. Thanks for the advice, and we will revise the paper to clarify this.
However, probabilistic generative models like diffusion policies and planners typically achieve superior performance by modeling complex, multi-modal action distributions -- but at significant computational cost. Our work focuses specifically on accelerating probabilistic generative decision-making methods while maintaining their performance advantages over deterministic approaches. Habi addresses this trade-off by accelerating probabilistic generative models while preserving their stochastic nature and performance benefits.
Thank you again for your constructive feedback! We hope the responses above have addressed all your comments. Please kindly let us know if you have additional suggestions, and we would be more than happy to discuss them. | Summary: This paper presents Habi, a framework that habitizes diffusion planning into faster decision-making models by using a VAE-based approach inspired by biological habit formation. While the method demonstrates impressive speedups and maintains comparable performance to diffusion planners, the technical innovation is limited as it primarily applies standard VAE techniques to policy distillation without substantial modifications. Additionally, the claimed advantages over direct distillation are marginal in several tasks and lack thorough component-wise analysis. I have some concerns about the experimental setup and fairness of comparisons, as Habi fundamentally depends on pre-trained diffusion planners and should be more accurately positioned as an "acceleration technique" rather than a standalone decision-making algorithm.
Claims And Evidence: 1. I have some concerns about the experimental setup and fairness of comparisons. The paper frames Habi as a complete decision-making method, but its fundamental nature is a two-stage process that depends on pre-trained diffusion planners. This dependency is not adequately acknowledged, creating a misleading comparison. Habi should be more accurately positioned as an "acceleration technique" rather than a standalone decision-making algorithm. This would better reflect its true nature and enable more appropriate comparisons with other acceleration approaches.
2. For a truly fair evaluation, the paper should provide comprehensive cost analyses that include both the initial planner training and subsequent habitization process, giving readers a complete understanding of the total computational investment required. Moreover, the authors claim superiority over standard distillation approaches, but evidence is unconvincing. In several tasks (MuJoCo and Maze2D), performance improvements are marginal (only 2-5% better). Without comprehensive component-wise analysis, it's unclear what drives these improvements or whether they are statistically significant.
Methods And Evaluation Criteria: 1. The comparison and the experiment setup are not convincing. The comparison between Habi (which utilizes pre-trained planners) and methods that learn from scratch (like AdaptDiffuser) is fundamentally imbalanced. (See Claims Q1 and Q2)
2. The "Direct Distill" baseline lacks detailed implementation specifications, making it impossible to verify whether it represents state-of-the-art distillation techniques. Comparisons with established knowledge distillation methods (e.g., Hinton's approach with soft targets) are conspicuously absent.
3. The paper introduces a Critic component for action selection, but Table 5 shows that in many environments, performance without the Critic (N=1 case) is already strong. This raises questions about the necessity of this additional component and complicates the architecture without clearly justified benefits.
Theoretical Claims: 1. While the habitization process draws inspiration from cognitive science, the theoretical mapping between brain processes and the VAE framework is superficial. The paper does not sufficiently establish why ELBO optimization should effectively model the transition from goal-directed to habitual behavior.
2. The core technical contribution is essentially applying a standard VAE for policy distillation. The ELBO loss (L = L_recon + β_KL·L_KL) comes directly from standard VAE theory with minimal modification. The paper reinterprets this as "habitization" without substantial theoretical innovation.
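For reference, the β-weighted ELBO objective the review refers to can be sketched in a few lines (an illustrative plain-Python sketch, not the paper's implementation; the diagonal-Gaussian posterior, unit-Gaussian prior, and MSE reconstruction term are assumptions of this sketch):

```python
import math

def elbo_loss(x, x_recon, mu_q, sigma_q, beta_kl):
    """L = L_recon + beta_KL * L_KL for a diagonal-Gaussian posterior
    q = N(mu_q, sigma_q^2) against a fixed standard-normal prior."""
    # Reconstruction term: mean squared error
    l_recon = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    # Closed-form KL( N(mu_q, sigma_q^2) || N(0, 1) ), summed over dimensions
    l_kl = sum(0.5 * (s * s + m * m - 1.0 - math.log(s * s))
               for m, s in zip(mu_q, sigma_q))
    return l_recon + beta_kl * l_kl

# A posterior matching the prior with perfect reconstruction gives zero loss
print(elbo_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.0], [1.0, 1.0], 0.1))  # → 0.0
```

As the review notes, this is the standard VAE objective; the question is whether the surrounding framework (learned conditional prior, critic) constitutes more than a reinterpretation.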
Experimental Designs Or Analyses: 1. The comparison and the experiment setup are not convincing. (See Claims Q1 and Q2)
2. While the paper includes analysis on the number of candidate samples (N), it lacks comprehensive ablations for other critical components such as network architecture choices, latent space dimensionality, and the impact of different diffusion planners as teachers.
Supplementary Material: Yes and all.
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: 1. Innovation largely limited to applying existing techniques (VAE, critic-based selection) in a new context
2. Positioning as a biologically-inspired method appears to be primarily a narrative framing rather than a substantive technical innovation
3. The lack of real-world deployment testing or vision-based tasks limits confidence in practical applicability
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate your detailed and comprehensive comments, the following responses are to address your concerns.
>Q1: Position Habi as an "acceleration technique" rather than a standalone algorithm.
Thank you for the clarification. Actually, we indeed position Habi as a general acceleration framework for diffusion planners in this paper. Our goal was to create an elegant framework that significantly improves decision frequency while maintaining effectiveness on SOTA diffusion planners through brief habitual training (1-2 hours).
Our focus is on decision-frequency, with performance comparisons to demonstrate Habi's effectiveness.
>Q2: Comparisons with established distill methods (e.g., soft targets)
Diffusion models' probability distributions aren't directly accessible, making traditional distillation methods using cross-entropy loss (e.g., soft targets) inapplicable. This motivated Habi's development.
We included comprehensive comparisons with two categories of acceleration approaches: numerical acceleration methods (DiffuserLite) and distillation-like frameworks (DTQL). The Direct Distill baseline uses the same architecture as Habi but performs imitation learning directly on state-action pairs.
>Q3: Performance improvements seem marginal for simple tasks (Maze2D, MuJoCo)
There is probably a misunderstanding. Our rigorous evaluation methodology (5 training seeds × 500 evaluation seeds) demonstrates these improvements are **_statistically significant_** and reproducible, which is supported by:
$$\mu_{avg} = \frac{1}{G} \sum_{i=1}^G \mu_i$$
$$stderr_{avg} = \frac{\sqrt{\frac{1}{G}\sum_{i=1}^G\left(\mu_i^2 + N_{seed}\,(stderr_i)^2\right) - \mu_{avg}^2}}{\sqrt{G \cdot N_{seed}}},$$
where $G$ is the number of tasks and $N_{seed}$ is the number of seeds.
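A minimal sketch of this aggregation (our reading of the formula above; variable and function names are illustrative):

```python
import math

def aggregate(mus, stderrs, n_seed):
    """Pool per-task means mu_i and standard errors stderr_i (each computed
    over n_seed seeds) into an overall mean and standard error across G tasks."""
    g = len(mus)
    mu_avg = sum(mus) / g
    # Recover each task's second moment: E[x^2] = mu_i^2 + N_seed * stderr_i^2
    second_moment = sum(m * m + n_seed * s * s for m, s in zip(mus, stderrs)) / g
    pooled_var = second_moment - mu_avg ** 2
    return mu_avg, math.sqrt(pooled_var) / math.sqrt(g * n_seed)
```

With zero per-task noise, e.g. `aggregate([10.0, 14.0], [0.0, 0.0], 4)`, the pooled standard error reflects only the between-task spread.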
**Performance Comparison**:
|Task|Method|HI(Ours)|HI w/o Critic|Direct Distill|
|--|--|--|--|--|
|MuJoCO|Absolute|89.24±0.29|87.46±0.32|84.98 ± 0.26
|-|Δ vs Ours|-|-1.78 (p=1.88e-5)|-4.26 (p<1e-10)|
|Maze2D|Absolute|164.53±1.07|161.33±1.11|159.87±1.10
|-|Δ vs Ours|-|-3.20 (p=0.019)|-4.66 (p=0.001)|
Moreover, the improvements become more pronounced in ***complex environments***: in the Kitchen and Antmaze domains, Habi exhibits ***19.1–29.4%*** stronger performance.
>Q4: Necessity of Critic.
Thank you for your question. The critic is essential. Without it, performance in complex environments like Antmaze suffers an 18.5% degradation. Tested with over 500 seeds, the critic consistently provides performance improvements. Meanwhile, the critic is a shallow MLP that doesn't add significant computational burden.
> Q5: Questions on network structures, diffusion planners, and latent dimensionality.
Thank you for your suggestion. For high decision speed, MLPs are already sufficiently simple. DV and DQL adequately represent the two types of diffusion planning. Regarding latent dimensionality, our experiments show:
|Env|dim(z)=64|128|256|512|
|--|--|--|--|--|
|Antmaze-L-D|62.2±2.1|60.7±2.0|65.2±2.0|70.6±2.0|
|Antmaze-L-P|79.3±1.8|80.3±1.7|81.7±1.7|85.9±1.5|
|Antmaze-M-D| 88.5±1.4|89.6±1.3|88.8±1.4|78.6±1.8|
|Antmaze-M-P| 78.0±1.8|82.3±1.7|85.3±1.5|86.7±1.5|
|**Avg**|77.0|78.2|80.3|80.5|
|Maze2D-L|204.0±1.9|204.3±1.9|199.2±2.0|202.4±1.9|
|Maze2D-M|151.1±1.5|151.8±1.4|150.1±1.5|149.2±1.6|
|Maze2D-U|143.6±1.7|144.5±1.7|144.3±1.7|144.1±1.7|
|**Avg**|166.2|166.9|164.5|165.2|
Latent dimension has **_modest impact_** on performance. For Antmaze, there's slight improvement as dimension increases. For Maze2D, performance remains stable. As our method does not involve an information bottleneck, we selected dim(z)=256 as a balanced choice providing good performance without unnecessary computational overhead.
>W1: Innovation & Q5: Difference between Habi and VAE
We understand your concerns. As acknowledged by other reviewers (***qHBM***, ***JDXx***), Habi remains simple and elegant for easy use, while Habi and VAE differ fundamentally:
**Latent Bottleneck**: VAE's latent dimension is smaller than input dimension, forming an information bottleneck; Habi uses a large latent dimension (256) as it doesn't target compact representation.
**Learnable Prior**: VAE employs a fixed unit-Gaussian prior; Habi's prior distribution conditions on the current state and is learned.
>W2. biologically-inspired rather than technical innovation
Thanks for the suggestion. While the high-level idea of our paper is brain-inspired, we agree that we should more clearly highlight the technical innovations by revising the paper.
>W3. Lack of real-world, vision-based deployment.
Thank you for the suggestion. We acknowledge it in Section 6 of our paper. This work focuses on algorithmic contributions using standard offline RL for robust benchmarking in the simulation, which is common practice in the field. | Summary: This paper has introduced a simple yet effective framework to speed up Diffusion-based planners (Habi). During training Habi learns:
- A prior encoder for context (state)
- A Posterior encoder and decoder for distilling learned planning in diffusion.
- A critic for evaluating actions.
I like the elegant idea and strong performance, while there are a few questions/claims that I hope to discuss during the rebuttal period.
Claims And Evidence: - Does the expert/teacher planner have to be a diffusion-based model?
From the framework, I didn't see any special assumptions on the pretrained planner, and it seems to me this should work for any learned model-based RL algorithm. What's the relationship between the proposed framework and diffusion-planners?
- The claim in L107-108 is a bit too strong:
"Habi can be used straightforwardly for any diffusion planning and diffusion policy models." While it seems that in the experiments the authors have only implemented HI for one base planner (correct me if I misunderstood).
Methods And Evaluation Criteria: The method and evaluation make sense for the problem and application.
Theoretical Claims: N/A
There is only a (widely known) ELBO theoretical proof for VAE shown in the supplementary.
Experimental Designs Or Analyses: This is one design choice that I hope to see some analyses:
- Why is the framework using $z_t^q$ instead of $z_t^p$ for training the Critic?
It seems to me that during inference, $z_t^p$ is the one used for Critic evaluation, so using $z_t^p$ during training seems to be a more intuitive design. Is there any theoretical/empirical support for this design?
Supplementary Material: I reviewed most parts of the supplementary material and no obvious issues were found.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Simple yet effective approach
- Well-motivated idea
- Well written paper
Weaknesses:
- Some claims are too strong to be supported in the experiments.
- It is unclear to me about the relationship between this approach and diffusion.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your thoughtful comments and acknowledging "***Simple yet effective approach / Well-motivated idea / Well written paper / elegant idea and strong performance***". We will address all of your concerns as follows.
> Q1: Does the expert/teacher planner have to be a diffusion-based model? What's the relationship between the proposed framework and diffusion-planners?
Good point! Theoretically, Habi also supports acceleration for other ***probabilistic generative*** decision-making models. However, Habi is especially well-suited for diffusion planners for several important reasons:
1. Many other generative models (like VAEs or flow-based models) are already computationally efficient; their bottleneck is typically the quality of decisions, not decision frequency [1, 2].
2. Diffusion models have emerged as the most powerful and well-performing probabilistic generative models in decision-making, as evidenced by the impact of Diffuser and related work. However, diffusion planners are inherently slow during inference due to their multi-step denoising process, which limits their practical application in real-world scenarios.
Habi addresses this crucial gap between 1) and 2) by accelerating diffusion-based decision-making models while maintaining nearly lossless performance. To our best knowledge, Habi is the first framework to successfully preserve diffusion models' superior performance while achieving decision speeds comparable to traditional generative models.
We appreciate your suggestion and will clarify this important distinction in the revised paper.
> Q2: The claim in L107-108 is a bit too strong:
"Habi can be used straightforwardly for any diffusion planning and diffusion policy models." While it seems that in the experiments the authors have only implemented HI for one base planner (correct me if I misunderstood).
Actually, we used Habi on two base diffusion decision making models: [1] and [3] (line 738, Appendix Table 3). We selected these two models since they are best performing on corresponding benchmarks.
However, we greatly appreciate this suggestion since we did not realize this clarity issue. We will make more clear statements in the main texts of the revised paper.
> Q3: Why is the framework using $z_t^q$ instead of $z_t^p$ for training the Critic?
The Prior distribution and Posterior distribution are constrained through KL divergence. Due to the nature of the KL constraint KL(q||p):
$$KL(q(z) \parallel p(z)) = \int q(z) \log \frac{q(z)}{p(z)} dz$$
The prior $p(z)$ is required to cover the posterior distribution $q(z)$, but points with low probability in the posterior may still be sampled by the prior. Using these low-probability samples to train the critic would introduce noise and potentially mislead the training process.
Therefore, using latents generated from the posterior reduces the impact of these misleading samples on critic training, ensuring more reliable and accurate critic learning. Intuitively, this shares a roughly similar idea with *Teacher Forcing* in autoregressive models: during training, instead of feeding the model's own predicted output (akin to the prior z in Habi), teacher forcing uses the ground truth (akin to the posterior z in Habi) to prevent compounding errors.
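To make the coverage intuition concrete, a univariate Gaussian sketch (illustrative only, not from the paper) shows that KL(q‖p) penalizes a prior too narrow to cover the posterior much more harshly than an overly broad one:

```python
import math

def kl_gauss(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) )
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

print(kl_gauss(0.0, 1.0, 0.0, 2.0))  # prior broader than posterior: ~0.32
print(kl_gauss(0.0, 2.0, 0.0, 1.0))  # prior narrower than posterior: ~0.81
```

The asymmetry is why the prior is pushed to cover the posterior, and why prior samples can still land in low-posterior-probability regions.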
We also make an additional experiment on maze2d to empirically validate this design choice:
|Environment| Posterior Critic | Prior Critic |
|----|----|----|
|Maze2D-Large| 199.2 ± 2.0 | 195.3 ± 2.2 |
|Maze2D-Medium| 150.1 ± 1.5 | 150.6 ± 1.5 |
|Maze2D-Umaze| 144.3 ± 1.7 | 143.0 ± 1.7 |
|**Average**| **164.5** | 163.0 |
It can be seen that using posterior z for critic training works slightly better.
Thank you again for your acknowledgement of Habi and your contribution to our paper. We hope we have addressed your concerns.
**Reference**
[1] What Makes a Good Diffusion Planner for Decision Making? Lu et al. ICLR 2025
[2] CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making, Dong et al. NeurIPS2024
[3] Diffusion policies as an expressive policy class for offline reinforcement learning, Wang et al. ICLR 2023
Thank you again for your constructive feedback! We hope the responses above have addressed all your comments. Please kindly let us know if you have additional suggestions, and we would be more than happy to discuss them. | Summary: This paper proposes the Habi algorithm (a Diffusion Planner), which combines excellent performance with high-frequency inference speed. It utilizes a VAE-like inference framework to distill information from the diffusion planning process. Extensive experiments were conducted across various environments, achieving consistent improvements.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: I think some mbrl work should be discussed, see weakness.
Other Strengths And Weaknesses: pro:
- The paper is written very clearly, and the research question is very important. I quite agree that decision speed is a critical issue in the application of diffusion models for decision making.
- The method makes sense and is supported by extensive experimental validation. It compares a series of strong baselines, such as diffusion planners and diffusion policies.
- The method is simple and elegant, and it is expected to achieve both efficiency and high performance.
cons:
- Why not use a GPU for decision-making speed tests? Why does the decision frequency of DiffuserLite seem to be significantly lower than the paper version?
- Although the article describes the core idea using habitual decision-making, is the overall concept closer to some world model and representation-related works, such as TDMPC, TDMPC2, and MRQ (Towards General-Purpose Model-Free Reinforcement Learning)? By achieving better representations, it enables the distillation of the diffusion planner, thereby achieving better performance. From this perspective, Habi is similar to a more advanced offline MBRL method. Could it be discussed in relation to advanced offline MBRL methods? For example, MOREC (Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning).
- Can the ideas of habi be applied to more realistic robot control diffusion policy or robot control environments (such as libero, rlbench, etc.)?
- Why is the diffusion planner algorithm rarely applied in real robotic control scenarios? As far as I know, most mainstream algorithms are based on diffusion policy.
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful to your thoughtful comments and acknowledging our work "***written very clearly / research question is very important / method makes sense / supported by extensive experimental validation / simple and elegant***" as well as bringing the questions. We address your questions as follows.
> Q1: GPU for decision-making speed tests.
Thanks for the suggestion. In fact, we already provided results on both CPU and GPU in Appendix Table 4. We reported the CPU results as CPUs are more portable (decent CPUs can be put into mobile phones) and thus more practical when considering local deployment of decision models on individual robots and edge devices.
> Q2: Decision frequency of DiffuserLite
We also noticed this. DiffuserLite serves as a key baseline for decision-making acceleration. We are unsure if the absolute value differences stem from hardware variations, as we followed its [official code](https://github.com/diffuserlite/diffuserlite.github.io) for testing. In our experiments, we used consistent hardware (Apple M2 Max, Nvidia A100, AMD EPYC 7V13 64-Core) and software (PyTorch=2.2.2, numpy=1.22.4) to ensure fair and consistent speed test results. Therefore, the relative decision frequencies presented in our paper fairly reflect the differences in decision speed among various generative decision-making methods.
> Q3: Relationship with world model and MBRL
Thank you for the suggestion. We treat Habi purely as an acceleration framework for generative decision-making. Its training process does not involve Q-Learning, nor does it interact with virtual reward dynamics. Habi's goal is to maintain the performance of the original generative model while accelerating decision-making within 1-2 hours of Habitual Training. Indeed, Habi is not limited to accelerating model-based planners (e.g., Diffuser, DV) but can also accelerate model-free planners (e.g., IDQL, DQL). We appreciate your suggestion and will discuss Habi's connection to offline MBRL methods (e.g., MOREC) in the related works section.
> Q4: Can the ideas of habi be applied to more realistic robot control or robot control environments?
Yes, and thanks for suggesting extensive benchmarks. Beyond the Franka robotic arm manipulation tasks, we evaluate Habi on the Adroit environment (including opening the door, driving the nail, repositioning the pen orientation, and relocating the ball) for dexterous manipulation assessment.
| Robotics Environments | Diffusion Planner [1] | Habi |
|--------------|------------------------|------|
| Door | 104.2 ± 0.7 | 105.6 ± 0.3 |
| Hammer | 124.4 ± 1.8 | 129.4 ± 0.4 |
| Pen | 122.2 ± 1.8 | 121.0 ± 2.4 |
| Relocate | 109.3 ± 0.5 | 108.8 ± 0.5 |
| Franka Kitchen-M | 73.6 ± 0.1 | 69.8 ± 0.4 |
| Franka Kitchen-P | 94.0 ± 0.3 | 94.8 ± 0.6 |
The results demonstrate that Habi maintains comparable performance to diffusion planners across various robot control tasks. In some environments like Door, Hammer, and Kitchen-P, Habi even shows slight improvements. We believe this highlights the robustness of our simple-yet-effective method and its potential for real-world robotic applications.
> Q5: Why is the diffusion planner algorithm rarely applied in real robotic control scenarios? As far as I know, most mainstream algorithms are based on diffusion policy.
We would first like to clarify an ambiguity regarding the "diffusion policy" in your question.
***--If you meant the paper Diffusion policy: Visuomotor policy learning via action diffusion from Chi et al.***
The premise of this question contains a misconception. Diffusion Policy is actually built upon Diffuser (a classic diffusion planner), with slight modifications to action modeling and visual inputs. Consequently, ***diffusion planner algorithms have indeed been deployed in robotics***, with DP being among the first to bridge the ML-robotics gap and demonstrate results on physical robots.
Furthermore, the subsequent development directions of these two approaches differ: the robotics community typically emphasizes ***Imitation Learning*** with ***expert demonstrations***, while the ML community focuses on ***Offline RL*** that can derive optimal policies from ***varied-quality data***. The core algorithms, however, share the same diffusion-based foundation.
***--If you meant the model-free diffusion policies such as Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning from Wang et al.***
We believe one of the most crucial problems of diffusion planners is their heavy computation cost, even on a decent GPU: as we have shown in Appendix Table 4, Diffuser takes 0.3–0.6 seconds per decision, which is not acceptable for real-world robots. However, research shows diffusion planners can outperform diffusion policies on many tasks [1]. This is why our work focuses on accelerating diffusion planning while maintaining its effectiveness.
[1] What Makes a Good Diffusion Planner for Decision Making? Lu et al. ICLR 2025 | null | null | null | null | null | null |
LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning | Accept (poster) | Summary: This paper presents LIFT (Low-rank Informed Sparse Fine-Tuning), which introduces the idea of Principal Weights—parameters with the largest magnitude after low-rank approximation—as the most critical ones for LLM fine-tuning. This research is very important to the community, as sparse fine-tuning methods have been largely ignored in the LLM era, even though they were effective for pre-LLM models. The authors argue that one of the reasons for this is that sparse fine-tuning struggles to identify which parameters matter, and they address this with a fundamentally new approach, i.e., using low-rank decomposition to first filter noise and then select key weights based on magnitude.
The method is novel, well-motivated, and well-executed. Sparse fine-tuning has been overlooked in LLMs, and this paper does a fantastic job of showing that it can be revived and improved with low-rank decomposition. The results are strong, the insights are compelling, and the memory efficiency makes it highly practical.
There are some minor things that could be improved (discussion of computational cost, layerwise analysis), but these are not major flaws—more like areas for further exploration. Overall, this is a clear strong accept. It pushes the field forward in an important way, and I expect it will have significant impact in both research and practical applications of LLM fine-tuning.
## update after rebuttal
Authors have addressed my concerns. Thus, I keep it as accept.
Claims And Evidence: The claims in the paper are well supported. The insight that Principal Weights are important to LLMs' pre-trained knowledge is well supported by the preliminary study in Section 4. The effectiveness of LIFT is further supported by extensive experiments in Section 5. I didn't see how many runs are reported in Section 5; the results would be more convincing if they were averaged over >3 runs. Another claim, that LIFT learns better on target domains and preserves better on source domains, is also supported in Figure 4.
One issue is that the computational overhead of LIFT isn’t fully explored. While the authors have mentioned that the comparison in their experiments are fair, it is important to report the exact memory costs in the main tables to provide a full picture to audiences.
Methods And Evaluation Criteria: The idea of Principal Weights itself is new and makes sense. To my best knowledge, denoising first, and then selecting weights for fine-tuning is non-trivial and well-motivated. Previous work of LASER (https://arxiv.org/abs/2312.13558) demonstrate that rank reduction using SVD has a denoising effect on LLMs, enhancing performance, which provides insights to the selection of Principal Weights.
Regarding the evaluation criteria, multiple commonly used benchmarks are evaluated, including commonsense reasoning, math reasoning, and GLUE. The baselines in this paper include SOTA low-rank approaches LoRA, DoRA, and PiSSA, as well as a recent sparse fine-tuning baseline, i.e., S2FT. Some might say that there are many more PEFT approaches over there, but I believe that two most important baselines are already included, (1) PiSSA: SVD-based low-rank adaptation; (2) S2FT: a strong baseline of sparse fine-tuning.
One natural follow-up question to the authors is how the rank level affects the effectiveness of LIFT.
Theoretical Claims: This paper does not have theoretical claims.
Experimental Designs Or Analyses: Yes, the authors provide strong evidence that this approach revives sparse fine-tuning, making it competitive with (and often better than) Full FT and LoRA-like approaches. One question is whether the results are averaged over multiple runs, given that LLM fine-tuning can be quite noisy. The analysis in this paper is extensive.
Most existing PEFT approaches, such as LoRA and its variants, focus on adding low-rank adapters, and they are reported to struggle in large-scale settings (https://arxiv.org/abs/2405.09673). The observation that low-rank decomposition can help sparse fine-tuning (denoising first, then selecting weights) is non-trivial and well motivated.
Supplementary Material: Yes, the authors have submitted their code and it looks fine to me.
Relation To Broader Scientific Literature: LIFT algorithm is highly related to previous literature of using low-rank approximation for LLM denoising (https://arxiv.org/abs/2312.13558, https://arxiv.org/pdf/2406.03068). Moreover, sparse fine-tuning is an active research direction before the popularity of LLMs, where the authors have provided a paragraph of related work.
Essential References Not Discussed: https://arxiv.org/abs/2405.19597
Other Strengths And Weaknesses: Strengths: I really like the title “LIFT the Veil for the Truth”, which subtly incorporates the algorithm name in a meaningful way, so kudos to the authors for that.
Weaknesses:One thing that’s missing is a layerwise breakdown. We know from previous work that FFN layers and Attention layers behave very differently in fine-tuning. Does LIFT work equally well on both? Or are some layers more important than others? A study on layerwise sensitivity would be really interesting and could also help optimize LIFT further.
Other Comments Or Suggestions: Some minor typos:
Line 144: contains more -> contain more
Line 250: state-of-the-srt -> state-of-the-art
Line 437: We hope that these problems can lead to -> We hope that these problems can inspire
Questions For Authors: Does LIFT work equally well on both FFN and attention layers, or are some layers more important than others? How about depth?
How does the rank level affect the effectiveness of LIFT?
What are the exact memory costs in the main tables, to provide a full picture to readers?
How many random seeds are reported in the main tables?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---
Rebuttal 1:
Rebuttal: Dear reviewer, thank you for the insightful and constructive feedback. We'd like to address your concerns as follows. **The supplementary figures/tables are in the [rebuttal link here](https://github.com/icml12437/ICML2025_12437).**
# Q1: Layer-wise analysis on LIFT
In Appendix G.4 of our paper, we analyze the **layer-wise effect of LIFT**, comparing the performance of LIFT when **fine-tuning a single type of layer**. Our results show that fine-tuning only MLP layers yields significantly better results than fine-tuning attention layers. We hypothesize that attention components mostly store information on token relations rather than task-specific knowledge. MLP layers, on the other hand, are more adaptive to downstream tasks, and fine-tuning them is more effective.
Based on that insight, we explore the possibility of further improving the efficiency of LIFT by **only fine-tuning MLP layers**. The table below shows the results of LLaMA-2-7B on arithmetic datasets where we only fine-tune the MLP layers (LIFT_MLP), and only fine-tune the attention layers (LIFT_Attn). We can see that LIFT_MLP has similar performance to the full version of LIFT, while LIFT_Attn has drastically worse performance. This suggests that for LIFT, fine-tuning MLP layers is more effective than fine-tuning Attention layers.
||MultiArith|GSM8K|AddSub|AQuA|SingleEQ|SVAMP|MAWPS|Avg|
|-|-|-|-|-|-|-|-|-|
|**LIFT**|98.67|47.31|92.66|**26.77**|**96.85**|**63.6**|**90.34**|**73.74**|
|**LIFT_MLP**|**99.66**|**47.61**|91.90|25.59|95.67|62.6|**90.34**|73.34|
|**LIFT_Attn**|95.00|43.75|91.14|25.59|91.73|60.1|86.55|70.55|
|Full FT|98.17|46.55|**93.67**|22.05|96.85|63.2|89.08|72.79|
|LoRA|98.00|47.76|92.41|23.62|95.08|62.9|90.76|72.93|
# Q2: How does the rank-level affect the effectiveness of LIFT?
In **Appendix G.5 and Fig. 15** of our paper, we study the influence of the rank used in low-rank approximation (LRA Rank) on the performance of LIFT. We found a correlation between LRA Rank and the number of trainable parameters: **the optimal LRA Rank increases as we select more parameters to train**. In practice, when LIFT has the same number of parameters as LoRA, we find that **using an LRA Rank similar to the LoRA rank yields optimal performance**. We note that the optimal LRA Rank may differ across models; using layer-adaptive metrics could bring further performance gains to LIFT.
# Q3: More discussions on related references
The SVFT paper [1] introduces a PEFT method called SVFT, which performs SVD on pre-trained weights and fine-tunes the resulting SVD factors. In contrast, our work considers sparse fine-tuning and proposes a novel approach for selecting Principal Weights to update. Furthermore, by storing only the optimizer states associated with these Principal Weights, LIFT significantly reduces memory overhead. We will cite SVFT in the revised version of our paper.
# Q4: Detailed training settings and memory costs
## Training Settings
In Fig. 3 of our paper, we showed the results of LIFT on the GSM8K dataset with 4 random seeds. Due to resource limitations, for the other experiments we only report results from one seed. In Table 8 in the rebuttal link, we present results on the arithmetic datasets with four random seeds (Table 2 in the paper), which further demonstrate the robust performance of LIFT: it outperforms the other baselines under all four seeds.
## Number of Trainable Parameters and GPU Memory Costs
In all experiments in the paper, we **compare the best results of different methods across a range of parameter sizes**. Specifically, when comparing LIFT with LoRA-like methods, we search the LoRA rank in {16, 32, 64, 128, 256}, run **LIFT with the same parameter counts** to ensure a fair comparison, and pick the best results. In practice, we find that LIFT and the LoRA-like methods typically perform best at **rank = 128**, similar to the results in the PiSSA paper [2].
In Fig. 6 of Appendix B, we present the memory breakdown of LIFT compared to Full FT and LoRA under optimal settings. We show that LIFT achieves memory overhead similar to LoRA, significantly lower than Full FT. Furthermore, we explored fine-tuning only the MLP layers with LIFT, which further improves the memory advantage over LoRA while maintaining test performance (details are in the Q2 response to Reviewer ceyi).
# Q5: Typos
We thank the reviewer for pointing out the typos, and we will fix them in the revised version of the paper.
### References
[1] Lingam et al, 2024
[2] Meng et al, 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors’ responses, which have addressed my concerns. I keep it as accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer yeg4,
We sincerely thank you for your positive response. We will include the additional experimental results and texts in the revised version of our paper.
Best,
Authors

---
Summary: The authors propose a novel sparse fine-tuning approach, LIFT, which identifies so-called Principal Weights. By training only these Principal Weights, LIFT outperforms full-parameter fine-tuning on multiple benchmarks, including commonsense reasoning, math reasoning, and GLUE tasks. Specifically, Principal Weights are the weights with the largest magnitude after low-rank reduction is performed.
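The selection rule described in this summary (low-rank reduce, then keep the largest-magnitude entries) can be sketched in a few lines of NumPy; the function name and the `rank`/`density` values are illustrative placeholders, not details taken from the paper:

```python
import numpy as np

def principal_weight_mask(W, rank, density):
    """Sketch of Principal Weight selection: low-rank approximate W via
    truncated SVD, then keep the largest-magnitude entries of the
    approximation. `rank` and `density` are illustrative hyperparameters."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_lr = (U[:, :rank] * S[:rank]) @ Vt[:rank, :]  # rank-`rank` approximation
    k = max(1, int(density * W.size))               # number of weights to train
    thresh = np.sort(np.abs(W_lr), axis=None)[-k]   # k-th largest magnitude
    return np.abs(W_lr) >= thresh                   # boolean mask of trainable weights

rng = np.random.default_rng(0)
mask = principal_weight_mask(rng.standard_normal((64, 64)), rank=8, density=0.05)
```

Entries where the mask is `True` would be the only ones updated during fine-tuning; everything else stays frozen.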
Claims And Evidence: The paper introduces the Low-rank Informed Sparse Fine-Tuning (LIFT) method and provides substantial evidence supporting its effectiveness. The claim of LIFT's superior performance is validated through extensive experimental evaluation across three distinct tasks: (1) Commonsense Reasoning, (2) Arithmetic Reasoning, and (3) Natural Language Understanding. Additionally, the paper robustly supports the claim that LIFT identifies Principal Weights through experiments in Figure 2.
Methods And Evaluation Criteria: The motivation for identifying principal weights via low-rank decomposition is clearly justified by Figure 2. The effectiveness of LIFT is thoroughly demonstrated through extensive experiments across diverse tasks and varying model sizes.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: (1)Good evaluation, comparing against a number of alternative state of the art models. I appreciate the fact that all experiments are run with four random seeds, which demonstrate the performance gain of LIFT is significant.
(2)It would be helpful if the paper explicitly analyzed the ratio or overlap between Principal Weights and original largest-magnitude weights across different layers. Understanding whether Principal Weights significantly differ from massive weights, and how this ratio varies across model layers, could offer deeper insights into LIFT’s effectiveness and behavior.
Supplementary Material: No
Relation To Broader Scientific Literature: The idea that low rank reveals principal weights is related to the paper “From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients” (https://arxiv.org/abs/2407.11239).
Essential References Not Discussed: https://arxiv.org/pdf/2401.16405. This is an early sparse fine-tuning method for LLMs, which is missing from this paper.
Other Strengths And Weaknesses: Strengths:
Novel idea nicely explained and motivated. It is interesting to see that crucial weights emerge after performing SVD.
LIFT demonstrates stronger generalization, balancing the learning of new task-specific knowledge with minimal forgetting of the source domain knowledge.
Weakness:
The paper's identification of Principal Weights is insightful; however, recent research (e.g., the Massive Weights in LLMs paper) suggests original large-magnitude weights are also crucial. It might be beneficial to explore combining both Principal Weights and original Massive Weights to further enhance performance, rather than relying exclusively on one set.
Other Comments Or Suggestions: Typos: state-of-the-srt -> state-of-the-art in line 250.
Questions For Authors: Considering that unstructured sparse fine-tuning is not well supported by GPUs, whereas structured sparse fine-tuning enjoys efficient GPU support, can the LIFT technique be adapted to structured sparse fine-tuning?
Code Of Conduct: Affirmed.
Overall Recommendation: 5

---
Rebuttal 1:
Rebuttal: Dear reviewer, thank you for the insightful and constructive feedback. We'd like to address your concerns as follows.
# Q1: Overlap between Principal Weights and the original largest-magnitude weights
In **Appendix G.6** of our paper, we discussed the overlap between parameters selected by LIFT and parameters selected by weight magnitude. In Fig. 16, we plot the overlap ratio between LIFT and weight-magnitude selection for different layer types as the number of trainable parameters varies. We show that the overlap is generally below 20%, though different layer types have different overlap ratios. Notably, the overlap on MLP layers is significantly lower than on attention layers. This suggests that the low-rank approximations of the Query and Key matrices are close to the original matrices, i.e., the Query and Key modules are low-rank in nature.
The above results indicate that the *Principal Weights* of LIFT and original large-magnitude weights do not overlap heavily, and how to combine the two types of weights is indeed a promising direction of future work.
# Q2: Missing discussions on recent sparse fine-tuning paper.
Here we discuss the paper "Scaling Sparse Fine-Tuning to Large Language Models" [1]. That paper proposed the SpIEL approach, which reduces the need to compute all gradients during backpropagation, scaling gradient memory linearly with the number of selected weights. In contrast, our method LIFT proposes a novel way to select Principal Weights for sparse fine-tuning and stores only the optimizer states of the Principal Weights, significantly reducing memory overhead.
**We also compare LIFT with SpIEL on the GSM8K dataset**, using the training setting of Fig. 3 in our paper. For both methods, we searched the learning rate among {5e-5, 1e-4, 2e-4, 5e-4} and the number of trainable parameters corresponding to LoRA ranks in {16, 32, 64, 128, 256} to obtain the best results. The table below shows that LIFT significantly outperforms SpIEL with both the LLaMA-2-7B and LLaMA-3.2-3B models.
|**GSM8K**|LIFT|SpIEL|Full FT|
|-|-|-|-|
|LLaMA-3.2-3B|**46.46**|43.76|44.50|
|LLaMA-2-7B|**24.24**|21.61|22.57|
Moreover, combining LIFT with SpIEL's memory-reduction technique to pursue further memory efficiency is a promising direction for future work.
# Q3: Can LIFT technique be adapted to structured sparse finetuning?
To evaluate whether LIFT can work in a structured fine-tuning fashion, we performed initial experiments with structured block sparsity, where we select a number of $4\times4$ blocks to fine-tune (**LIFT_Structured**). We select 5% of all parameters to train (corresponding to LoRA rank $\approx$ 128). We fine-tune the LLaMA-2-7B model on the MATH-10K dataset and evaluate on seven arithmetic reasoning tasks. The results are as follows.
||MultiArith|GSM8K|AddSub|AQuA|SingleEQ|SVAMP|MAWPS|Avg.|
|-|-|-|-|-|-|-|-|-|
|**LIFT_Structured**|98.33|**48.07**|93.16|25.98|95.47|**65.1**|89.92|**73.72**|
|**LIFT**|**98.67**|47.31|92.66|**26.77**|**96.85**|63.6|**90.34**|**73.74**|
|Full FT|98.17|46.55|**93.67**|22.05|96.85|63.2|89.08|72.79|
From the table above, we can see that LIFT still achieves strong performance under structured sparsity: LIFT_Structured performs almost the same as the original LIFT while outperforming Full FT. This suggests that LIFT can be adapted to structured sparse fine-tuning to achieve further computational acceleration.
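As a concrete illustration, the block selection above can be sketched as follows. This is a minimal NumPy sketch under our assumptions (each $4\times4$ block is scored by its mean magnitude), not the actual LIFT_Structured implementation:

```python
import numpy as np

def block_sparse_mask(score, block=4, n_blocks=16):
    """Tile |score| into block x block cells, rank cells by mean magnitude,
    and mark the top `n_blocks` cells as trainable (illustrative scoring)."""
    m, n = score.shape
    cell = np.abs(score).reshape(m // block, block, n // block, block)
    cell_score = cell.mean(axis=(1, 3))               # one score per cell
    top = np.argsort(cell_score, axis=None)[-n_blocks:]
    mask = np.zeros(cell_score.shape, dtype=bool)
    mask.flat[top] = True                             # select the best cells
    # expand the cell-level mask back to individual weights
    return np.repeat(np.repeat(mask, block, axis=0), block, axis=1)

rng = np.random.default_rng(0)
mask = block_sparse_mask(rng.standard_normal((32, 32)), block=4, n_blocks=16)
```

Because the trainable entries come in contiguous $4\times4$ tiles, the resulting sparsity pattern is friendlier to GPU block-sparse kernels than unstructured selection.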
# Q4: Typos
We thank the reviewer for pointing out the typo, and we will fix it in the revised version of the paper.
### References
[1] Ansell et al, 2024
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses, which address my concerns. Considering the significance of the work in PEFT domain, especially for sparse-based finetuning methods, I would like to raise my recommendation.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer AcQs,
We sincerely appreciate your positive feedback and your decision to raise your recommendation. We will include the additional experimental results and texts in our revised draft.
Best,
Authors

---
Summary: This paper proposes a novel sparse fine-tuning method, LIFT, that identifies and fine-tunes the critical parameters (Principal Weights) through SVD-based rank reduction. Extensive experiments demonstrate that LIFT significantly outperforms existing parameter-efficient fine-tuning (PEFT) approaches, including LoRA, as well as full fine-tuning. The paper also provides valuable insights through thorough analyses, including eigenspace and eigenspectrum investigations and detailed ablation studies.
Claims And Evidence: The claims presented in the paper are well-supported by comprehensive empirical results across various tasks, demonstrating consistent effectiveness.
Methods And Evaluation Criteria: The benchmarks selected in this paper are suitable for evaluating the proposed method's performance.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental setup is clearly structured and adequately covers a broad spectrum of tasks. However, the comparison baselines are primarily related to LoRA-based methods, while Sparse FT methods are less extensively compared, with only S2FT included as a baseline.
Supplementary Material: I have reviewed all supplementary materials provided.
Relation To Broader Scientific Literature: The paper introduces a straightforward yet highly effective approach for sparse FT. Additionally, it explores multiple parameter selection strategies, providing valuable insights and analysis.
Essential References Not Discussed: The paper adequately discusses relevant literature but omits discussions and comparisons with several recent sparse fine-tuning methods, including:
[r1] Sparse is Enough in Fine-tuning Pre-trained Large Language Models, ICML 2024
[r2] SMT: Fine-Tuning Large Language Models with Sparse Matrices, ICLR 2025
[r3] Scaling Sparse Fine-Tuning to Large Language Models, arXiv preprint arXiv:2401.16405, 2024.
Other Strengths And Weaknesses: Strengths:
- The paper is clearly structured, well-written, and easy to follow.
- The proposed method (LIFT) is conceptually sound, with thorough analyses and ablation studies to support its effectiveness.
- LIFT achieves superior performance compared to Full FT, while maintaining memory efficiency similar to LoRA and other parameter-efficient methods.
Weaknesses:
- The tables presented in the paper do not explicitly report the actual GPU memory consumption or the exact number of trainable parameters for each method. Additionally, details such as the specific rank setting for LoRA are unclear; from Figure 11, it seems the LoRA rank is at least 512, significantly different from the common settings (typically 8-32 for LLaMA), potentially leading to unfair comparisons. It is advisable to supplement this information explicitly.
- According to Appendix B, LIFT still has higher GPU memory usage during training compared to LoRA and its variants. The part of GPU memory consumed by activations can be significantly reduced using checkpointing techniques, which may make LoRA’s memory efficiency advantage even clearer.
Other Comments Or Suggestions: - In the analysis of eigenspace and eigenspectrum, incorporating metrics like effective rank (as presented in Figure 8 of HiRA [r4]) might better represent the full singular value spectrum.
- Expand the method comparison by including recent sparse fine-tuning methods as well as advanced high-rank fine-tuning techniques, like HiRA, which could further demonstrate LIFT’s strengths.
[r4] HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models, ICLR 2025
Questions For Authors: Q1: The paper states that LIFT balances learning and forgetting effectively, yet from Figure 5, the weights undergo significant changes, especially Principal Weights. Could such substantial modifications negatively impact the performance on Out-of-Distribution (OOD) tasks to some extent?
Q2: Could the authors add loss convergence curves for each method in the main experiments, similar to Figure 12, to further clarify the training behavior and convergence speed?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: Dear reviewer, thank you for the constructive feedback. We'd like to address your concerns as follows. **The supplementary figures/tables are in the [rebuttal link here](https://github.com/icml12437/ICML2025_12437).**
# Q1: Comparison with recent sparse fine-tuning methods
We compare LIFT with the sparse fine-tuning methods SIFT [1] and SpIEL [2] (since SMT is not open-sourced, we compare with the other two). We will cite these papers in the revised version.
**First, we compare LIFT with SIFT on GLUE tasks** following the SIFT paper, with the same number of trainable parameters (5% of total parameters). We use the RoBERTa-large model, search the learning rates of SIFT and LIFT in {5e-5, 7e-5, 1e-4, 2e-4}, and compare their best results. The table below shows that LIFT outperforms SIFT on all GLUE tasks, while also outperforming Full FT on almost all tasks.
||MNLI|SST2|MRPC|CoLA|QNLI|QQP|RTE|STSB|Avg.|
|-|-|-|-|-|-|-|-|-|-|
|**LIFT**|**90.79**|96.67|90.93|**70.44**|**94.69**|**92.38**|**87.00**|**92.58**|**89.44**|
|SIFT|89.91|**96.79**|89.95|66.29|93.04|88.49|87.00|92.27|87.97|
|Full FT|90.58|96.22|**91.91**|68.55|94.47|91.52|85.92|92.21|88.92|
**Second, we compare LIFT with SpIEL on GSM8K** (See Q2 response to Reviewer AcQs). We show that LIFT significantly outperforms SpIEL on both LLaMA-2-7B and LLaMA-3.2-3B models.
# Q2: Number of trainable parameters and GPU memory cost of LIFT
## 2.1. Number of trainable parameters
In all experiments in the paper, we **compare the best results of different methods in a range of parameter sizes**. When comparing LIFT with LoRA-like methods, we search the LoRA rank in {16, 32, 64, 128, 256}, and **LIFT with the same parameter counts** to ensure fair comparison. We find that LIFT and LoRA-like methods typically perform best at **rank = 128**.
In addition, in our analytical results (e.g., Fig. 11), we compare LIFT and LoRA both with **rank = 128**. The fact that some LoRA layers show a rank larger than 128 is likely due to numerical errors when computing the rank.
## 2.2. GPU memory cost of LIFT
We note that although the memory cost of LIFT (as in Fig. 6 in Appendix B) is slightly larger than LoRA, we can further reduce the memory cost of LIFT while preserving performance.
In Appendix G.4, we showed that fine-tuning MLP layers is more effective than fine-tuning attention layers. The table below shows the results of LLaMA-2-7B on arithmetic datasets where we **only fine-tune the MLP layers (LIFT_MLP)**. We see that LIFT_MLP has similar performance to LIFT. Furthermore, only fine-tuning MLP layers further reduces the memory usage on gradients and optimizer states. In Fig. 17 of the rebuttal link, we see that LIFT_MLP achieves better memory efficiency than LoRA under optimal settings.
||MultiArith|GSM8K|AddSub|AQuA|SingleEQ|SVAMP|MAWPS|Avg.|
|-|-|-|-|-|-|-|-|-|
|**LIFT**|98.67|47.31|92.66|**26.77**|**96.85**|**63.6**|**90.34**| **73.74**|
|**LIFT_MLP**|**99.66**|**47.61**|91.90|25.59|95.67|62.6|**90.34**| **73.34**|
|Full FT|98.17|46.55|**93.67**|22.05|96.85|63.2|89.08|72.79|
|LoRA|98.00|47.76|92.41|23.62|95.08|62.9|90.76|72.93|
# Q3: Could LIFT negatively impact the performance on OOD tasks?
We believe that although some model weights undergo significant changes during fine-tuning, only a small set of parameters is changed while most weights remain unchanged (see the center of the histogram in Fig. 5), so the model retains the fundamental capabilities that enable it to generalize to OOD settings.
In Sec. 7.1, we also analyzed the generalization performance of LIFT by training models with arithmetic datasets and evaluating on commonsense tasks (OOD). This study verifies that LIFT achieves stronger OOD performance compared to other baselines.
# Q4: Metrics like effective rank
Here we compare LIFT with HiRA [3] and evaluate the effective rank of other baselines.
The table below shows the results of LIFT and HiRA on arithmetic datasets (same as Table 2 in our paper). We search HiRA rank in {16, 32, 64, 128, 256, 512} and found that HiRA performs best at rank = 512. We see that LIFT (rank = 128) outperforms HiRA with rank = 512.
||MultiArith|GSM8K|AddSub|AQuA|SingleEQ|SVAMP|MAWPS|Avg.|
|-|-|-|-|-|-|-|-|-|
|**LIFT**|**98.67**|**47.31**|**92.66**|**26.77**|**96.85**|**63.6**| **90.34**|**73.74**|
|HiRA|98.50|46.70|91.65|25.59|95.67|61.5|89.50|72.72|
In Fig. 19 of the rebuttal link, we show the effective rank of different methods. We see that the update matrix of LIFT has a larger effective rank than LoRA and PiSSA, close to Full FT, and slightly lower than HiRA. We believe that although effective rank is a good indicator of model capacity, it is insufficient to rely solely on it to predict performance.
# Q5: Training curve of LIFT
In Fig. 18 of the rebuttal link, we show the training loss curve of all methods in Table 2 results. We see that the convergence speed of LIFT is on par with Full FT, notably faster than other PEFT methods.
## References
[1] Song et al, 2024
[2] Ansell et al, 2024
[3] Huang et al, 2025
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. While some of my concerns have been addressed, several key issues remain unresolved:
#### Explanation of Numerical Errors
> The claim that ranks exceeding 128 are due to “numerical errors” is vague and unconvincing. Please specify what numerical instability would cause a computed rank to be higher than the intended value.
#### Discrepancy in Analytical Results
> Figures 11 and 19 show that many LoRA components and layers exhibit near-zero ranks, which appears inconsistent with the stated use of **rank = 128**. This discrepancy raises concerns about whether LoRA was fully utilized during training.
#### Lack of Parameter Search Details
> You state that LoRA ranks in {16, 32, 64, 128, 256} were explored, yet the paper does not provide any evidence of this search process. In particular, no table explicitly reports the rank settings used. For transparency and fair comparison, it is important to **include performance results for each method across different ranks**.
I encourage the authors to provide these missing details to substantiate their claims. **If these issues remain unresolved, I will need to reconsider my overall assessment of the paper.**
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer ceyi,
Thank you for your thoughtful follow-up. We acknowledge that these concerns are important for substantiating our claims. We now address your points more thoroughly below. Updated figures and tables can be found at **[rebuttal link](https://github.com/icml12437/ICML2025_12437)**.
## 1. Discrepancy in Analytical Results.
First, we thank the reviewer for the insightful observation! We would like to clarify that Figures 11 and 19 originally presented results using incorrect LoRA checkpoints (with rank = 16 instead of the intended rank = 128). We apologize for this oversight. The corrected results are now provided in Figures 19 and 21 in the rebuttal link. Additionally, we have included the averaged rank and effective rank across different layer types in Tables 9 and 10 of the rebuttal link.
**We can see that the overall claim still holds:** LIFT with sparse fine-tuning retains a substantially higher rank and effective rank than LoRA.
## 2. Explanation of Numerical Rank Calculation.
**LoRA’s computed rank exceeds its target rank because the default threshold parameter of the `torch.linalg.matrix_rank` function is too low. We use a more robust threshold in our updated results.**
The built-in `torch.linalg.matrix_rank` function computes the rank by counting singular values greater than a threshold $\tau$, which has the default value:
$$\tau = \max{(m, n)} \times \sigma_{\max} \times \epsilon,$$
where $(m, n)$ is the matrix shape, $\sigma_{\max}$ is the largest singular value, and $\epsilon$ is the precision of the input data type. **If $\tau$ is lower than the rounding error committed when evaluating LoRA’s update matrix, the computed rank may exceed LoRA's target rank.**
In our experiments, the update matrices were evaluated by subtracting the base weights from the fine-tuned weights with the LoRA adapters merged. When we computed the rank in Fig. 11 using `torch.linalg.matrix_rank`, we used the default threshold, which was too low to guarantee that LoRA’s computed rank would not exceed its target rank.
**For further investigation, we raise the threshold in `torch.linalg.matrix_rank`.** Fig. 20 in the rebuttal link shows the computed ranks of LoRA updates (with Rank = 16) of Query matrices under varying threshold-raising factors. As the threshold increases (e.g., by a factor of 10), LoRA’s numerical rank begins to align with its target rank and we no longer see high values over 500.
Therefore to obtain robust rank comparison, in our updated results (Fig. 21 and Table 9), we compute the ranks of LIFT, Full FT and LoRA update matrices with a **threshold set to 10 times the default**. We observed the same overall trend as before: LIFT consistently achieves a significantly higher computed rank than LoRA.
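The effect can be reproduced with a minimal NumPy sketch (illustrative only: the matrix sizes, the update scale, and the tenfold threshold raise are chosen for demonstration, and NumPy's threshold convention here mirrors the torch formula above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 512
r = 16
A = rng.standard_normal((m, r))
B = rng.standard_normal((r, n))
true_update = (1e-5 * (A @ B)).astype(np.float32)    # rank-16, small vs. base weights
W = rng.standard_normal((m, n)).astype(np.float32)   # base weights
recovered = (W + true_update) - W                    # picks up float32 cancellation noise

s = np.linalg.svd(recovered.astype(np.float64), compute_uv=False)
tau = max(m, n) * s[0] * np.finfo(np.float32).eps    # default-style threshold
rank_default = int((s > tau).sum())
rank_robust = int((s > 10 * tau).sum())              # threshold raised by 10x
```

With these scales, the cancellation noise from `(W + true_update) - W` pushes many noise-level singular values above the default-style threshold, so `rank_default` is inflated well above the true rank of 16, while `rank_robust` falls back to (approximately) the target rank.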
## 3. Lack of Parameter Search Details
Here we report the test results under different ranks for all methods in our main experiments. Note that since LIFT focuses on improving the best performance, in the main tables of the paper **we report the best result among all ranks for each method** (highlighted in the tables below). Due to space limits, here we present the rank search results for Table 2 of the paper; we will incorporate search tables for more experiments in the camera-ready version. All reproduced baseline results are similar to previously reported numbers, such as those in the S2FT paper.
In the tables below we show the **mean accuracy on the 7 arithmetic reasoning tasks (as in Table 2 of our paper)**. We can see that for LLaMA-2-7B and LLaMA-3.2-3B, LIFT and the PEFT methods generally achieve their best performance at rank = 128, at which LIFT significantly outperforms the other baselines. When the LoRA rank is low (e.g., 16), all sparse fine-tuning methods (such as LIFT and S2FT) exhibit degraded performance, likely because sparse fine-tuning updates only a tiny, insufficient subset of parameters, whereas other PEFT approaches like LoRA modify the entire weight matrix, allowing more expressive adaptation even at lower ranks.
|LLaMA-2-7B|16|32|64|128|256|
|-|-|-|-|-|-|
|**LIFT**|70.91|71.09|72.74|**73.74**|73.67|
|Full FT|72.79|72.79|72.79|72.79|72.79|
|S2FT|67.78|71.78|72.48|**73.10**|72.63|
|PiSSA|71.57|71.82|72.54|**73.03**|72.54|
|DoRA|71.10|71.74|**72.42**|71.83|71.81|
|LoRA|70.91|71.74|72.81|**72.93**|72.24|
|LLaMA-3.2-3B|16|32|64|128|256|
|-|-|-|-|-|-|
|**LIFT**|74.51|76.06|76.61|**77.06**|76.41|
|Full FT|76.45|76.45|76.45|76.45|76.45|
|S2FT|72.17|74.86|75.12|**75.58**|75.20|
|PiSSA|75.26|74.97|75.55|**75.69**|72.65|
|DoRA|75.19|75.00|75.16|**75.59**|75.57|
|LoRA|74.82|75.59|75.66|**75.71**|75.64|
We hope these results address your concerns, and we appreciate your continued feedback. These results will be incorporated in the camera-ready manuscript. Thank you for helping us improve our work. | Summary: This paper introduces a sparse fine-tuning approach, LIFT, which identifies and updates what the authors call “Principal Weights” in LLMs. The central claim is that the most critical parameters for downstream fine-tuning can be found by first applying low-rank approximation (e.g., SVD) to each weight matrix, then selecting the largest-magnitude weights from the approximated matrix. These selected parameters are subsequently updated during training, while the rest remain frozen. The paper reports that LIFT achieves performance on par with or exceeding full fine-tuning across various tasks (e.g., arithmetic and commonsense reasoning), all while using a memory footprint comparable to parameter-efficient fine-tuning (PEFT) methods like LoRA.
======
After reviewing the full discussion again, I find that two issues still remain unresolved and, in my opinion, are unconvincing:
- The authors mention that they follow the experimental setup of WeLore, yet they do not include WeLore in their comparisons or discussions in the manuscript.
- The implementation of the method directly modifies the original model weights, and the selected positions change during training, which violates sparsity constraints. Thus, the approach aligns more with memory-efficient or sparse training methods than with PEFT.
For the reasons stated above, I maintain my current rating.
Claims And Evidence: The paper’s main claims—that “Principal Weights” identified via low-rank approximation are the most important for fine-tuning and that the resulting LIFT method outperforms both full fine-tuning and prior PEFT approaches—feel somewhat overstated. While the empirical results are suggestive, the authors don’t convincingly prove that this exact selection strategy is uniquely optimal.
Methods And Evaluation Criteria: They use typical LLM benchmarks—like arithmetic and commonsense tests—to judge performance. While these tasks do show some variety, there’s no deeper or more specialized evaluation that might reveal the approach’s limits or weaknesses. The metrics are standard accuracy-based scores, which is fine, but there’s little beyond that (e.g., no real-world or higher-level analysis).
Theoretical Claims: There isn’t much formal proof offered.
Experimental Designs Or Analyses: Their experimental setup mostly follows standard fine-tuning procedures, but there’s little discussion of hyperparameter tuning or random seed variability. It’s unclear how stable these results are across multiple runs.
Supplementary Material: The materials mainly expand on the main text’s claims—showing additional ablation and rank analyses. They don’t provide deeper or more rigorous proofs beyond what’s hinted at in the main paper.
Relation To Broader Scientific Literature: They do reference recent efforts like LoRA, low-rank approximation approaches, and sparse-adapter methods. However, the paper mostly leans on known results about low-rank structures in large language models (e.g., Eckart–Young–Mirsky) and doesn’t deeply compare against or incorporate broader theoretical and empirical work on compressive adaptation (like structured pruning, block-wise sparsity, etc.).
Essential References Not Discussed: They mention some sparse and low-rank methods (like LoRA), but skip other key variants on dynamic or structured pruning (e.g., top-k fine-tuning approaches) that also use magnitude- or gradient-based selection.
Other Strengths And Weaknesses: A notable plus is that the paper lays out an easy-to-replicate method and backs it up with multiple experiments. The authors do a decent job packaging a simple rank-then-sparsify idea into a seemingly effective pipeline. However, the novelty is fairly modest. The paper also sometimes feels repetitive when explaining the approach, and the clarity on hyperparameters and training details could be improved.
Other Comments Or Suggestions: A clearer, more concise discussion of hyperparameters for each experiment and how they’re tuned would help readers reproduce results.
Questions For Authors: You focus on arithmetic and commonsense tasks—have you tested more diverse domains (e.g., structured QA, code generation)? If so, how does LIFT fare there, especially on tasks known for high domain shift or specialized knowledge?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for the constructive feedback. We'd like to address your concerns as follows. **Please find the supplementary figures/tables in the [rebuttal link here](https://github.com/icml12437/ICML2025_12437).**
# Q1: Higher-level analysis on LIFT
In our paper, we conduct higher-level analysis on LIFT through empirical analysis and eigenspectrum analysis.
**First, we show that parameters selected by LIFT are crucial to model performance.** In Sec. 4 and Fig. 2, we show that LIFT selects principal weights that are sensitive to random perturbations and have a significant impact on model performance, validating that these weights are good candidates for fine-tuning.
**Second, we also provided in-depth analysis of the eigenspace of LIFT.** Sec. 7.2 and 7.3 show that LIFT induces larger weight changes (Fig. 5), significantly increases the rank (and effective rank; see the Q4 response to Reviewer ceyi) of weight updates (Fig. 11), and exhibits greater singular subspace divergence (Fig. 10), all of which demonstrate that LIFT has greater learning capacity in fine-tuning. Additionally, spectral norm analysis suggests LIFT enhances generalization (Fig. 8, 9): random perturbations sharply increase the spectral norm in both random-matrix and LLM settings, and LIFT's lower gradient norm curves (Fig. 12) further demonstrate its effectiveness, as supported by PiSSA [1].
# Q2: Evaluation on diverse domains
We conduct more experiments on QA and Code Generation datasets. These experiments further showcase the effectiveness of LIFT on more diverse domains.
For QA, we use the experimental setup of the StrategyQA dataset following the recent WeLore paper [2]. For PEFT methods, we consider ranks {16, 32, 64, 128, 256}; for LIFT, we use the same counts of trainable parameters. The learning rates are {1e-5, 2e-5, 5e-5, 1e-4} for Full FT and {5e-5, 1e-4, 2e-4, 5e-4} for the others. We select the best-performing config for each method and report the results below. We can see that LIFT achieves notable performance gains over all other methods on both LLaMA-2-7B and LLaMA-3-8B.
|StrategyQA|LIFT|Full FT|LoRA|DoRA|PiSSA|
|-|-|-|-|-|-|
|LLaMA-2-7B|**72.53**|70.61|71.78|71.98|71.26|
|LLaMA-3-8B|**75.85**|74.81|74.44|74.27|75.19|
For code generation, we adopt the settings from the recent SIFT paper [3], where we fine-tune LLaMA-2-7B with Alpaca dataset for one epoch, and evaluate on the **Humaneval** dataset. From the table below we can see that LIFT outperforms all other methods in both pass@1 and pass@10 settings.
|**Humaneval**|LIFT|Full FT|SIFT|LoRA|DoRA|
|-|-|-|-|-|-|
|Pass@1|**16.46**|15.24|14.02 |13.66|13.96|
|Pass@10|**31.10**|28.05|30.48|27.44|29.88|
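For reference, Pass@1 and Pass@10 on HumanEval are conventionally computed with the unbiased estimator introduced alongside the benchmark (Chen et al., 2021); a minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    # Unbiased estimator of the probability that at least one of k samples,
    # drawn without replacement from n generations of which c are correct,
    # passes the unit tests: 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

The benchmark score is this quantity averaged over all problems.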
# Q3: Is the results of LIFT robust under multiple seeds?
In Fig. 3 of our paper, we show the results of LIFT on the GSM8K dataset with 4 random seeds. In Table 8 in the rebuttal link, we present results on arithmetic datasets with four random seeds (matching Table 2 in the paper), which further confirm that LIFT's performance is robust: it outperforms the other baselines under all four seeds.
# Q4: Comparison with dynamic/structured pruning methods
In Fig. 3, we have compared LIFT with a wide range of other dynamic sparse fine-tuning methods, including Top-k Magnitude, Top-k Gradient, and Movement, and showed that LIFT outperforms all other sparse fine-tuning methods. We note that unstructured sparse fine-tuning usually serves as the performance ceiling of structured and block-wise sparsity. In addition, in our main results, we have compared LIFT with S2FT, which is the state-of-the-art structured fine-tuning method by far. These results suggest that LIFT is indeed superior to other common sparse selection methods.
# Q5: Hyperparameter Tuning
Here we explain the hyperparameter tuning for experiments. We will incorporate detailed information in the revised version of our paper.
## Training Hyperparameters
In all experiments in the paper, we **compare the best results of different methods among a range of parameter sizes**. Specifically, when comparing LIFT with LoRA-like methods, we search the LoRA rank in {16, 32, 64, 128, 256}, use **LIFT with the same parameter counts** to ensure a fair comparison, and pick the best results for each. We find that LIFT and LoRA-like methods typically have the best performance at **rank = 128**, similar to results from the PiSSA paper [1].
## LIFT Hyperparameters
The main hyperparameter of LIFT is the **LRA Rank**, which **controls the rank for low-rank approximation** of weight matrices. In **Appendix G.5 and Fig. 15** of our paper, we study the influence of LRA Rank on the performance of LIFT. We found that the LRA Rank of best performance increases as we select more parameters to train. In practice, when we tune LIFT with the same number of parameters as LoRA, we find that **using LRA Rank similar to LoRA rank yields optimal performance**.
## References
[1] Meng et al, 2024
[2] Jaiswal et al, 2024
[3] Song et al, 2024
---
Rebuttal Comment 1.1:
Comment: Thanks for sharing these new experimental results. Overall, I’m feeling better about the paper, but I still see limited novelty here. For now, I’m not inclined to raise my score, though I might change my mind if the AC-led discussion brings up fresh insights.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kFAr,
Thank you again for your follow-up. To further address your concerns, we'd like to summarize the novelty of our work as follows.
**First, we propose a novel concept of principal weights which is crucial to LLM fine-tuning, and designed a new sparse fine-tuning algorithm.** We showed that incorporating eigenspectrum and low-rank information can identify principal weights crucial to model performance. To our knowledge, our approach is the first to leverage eigenspectrum decomposition and rank reduction for sparse fine-tuning of LLMs. In terms of the novelty of LIFT algorithm, we received support from the other three reviewers (e.g. "The method is novel, well-motivated, and well-executed." from Reviewer `yeg4`, "Novel idea nicely explained and motivated." from Reviewer `AcQs`)
**Second, we conduct comprehensive experiments across a wide range of domains**, to show the effectiveness and robustness of our method compared to other state-of-the-art algorithms. We also conduct **in-depth empirical and eigenspectrum analysis** to study the reason behind our method's success. Our experimental setup has also been acclaimed by other reviewers (e.g. "Adequately covers a broad spectrum of tasks." from Reviewer `ceyi`)
**Third, our work is meaningful to the research community on sparsity and fine-tuning.** Our method showed that sparse fine-tuning can achieve strong performance on LLM fine-tuning. This is meaningful to future research in the community on sparse fine-tuning, as it has fallen behind in the modern LLM era. This significance of our work is also recognized by other reviewers (e.g. "This research is very important to the community", "It pushes the field forward in an important way" from Reviewer `yeg4`).
In addition, during the rebuttal, we firmly believe that we have addressed all of your concerns, including 1) higher-level analysis of our LIFT method, 2) more evaluation on diverse domains, 3) performance robustness under random seeds.
We sincerely appreciate your constructive feedback, and we hope you can reconsider your evaluation of our work.
Best,
Authors | null | null | null | null | null | null |
A Non-Asymptotic Convergent Analysis for Scored-Based Graph Generative Model via a System of Stochastic Differential Equations | Accept (poster) | Summary: This paper provides a non-asymptotic convergence analysis for score-based graph generative models (SGGMs), which involve coupled stochastic differential equations for graph structure and node features. The authors explore convergence bounds across three graph generation paradigms and identify factors like graph topology and feature normalization that influence convergence. Theoretical insights are paired with experimental validation using synthetic graphs, offering practical guidance on hyperparameter selection. The work deepens understanding of SGGMs' theoretical behavior and their application in fields such as drug discovery.
Claims And Evidence: The claims made in the submission are well supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: Theorem 4.1 to Theorem 4.3 in page 6 checked, no issue found.
Experimental Designs Or Analyses: Experiment design in Section 5 checked, no issue identified.
Supplementary Material: Appendex C (proof for Theorem 4.1 to 4.3 in main article) reviewed.
Relation To Broader Scientific Literature: This paper builds on prior work in score-based generative models (SGMs), which typically use a single SDE. It extends these analyses to score-based graph generative models (SGGMs) involving coupled SDEs for graph structure and node features. The paper provides novel non-asymptotic convergence bounds and insights for hyperparameter tuning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper presents a novel non-asymptotic convergence analysis for score-based graph generative models (SGGMs), extending traditional SGM theories to accommodate the complexities of coupled stochastic differential equations (SDEs), which enhances its theoretical contributions in graph generation.
2. By addressing SGGMs, the paper makes a meaningful contribution to critical real-world applications like drug discovery and protein design, where graph-based generative models have significant practical value.
3. The authors provide empirical evidence to support their theoretical findings, using synthetic graph models. This strengthens the paper's practical relevance and enhances the reliability of the proposed convergence analysis.
Weaknesses:
See questions for authors
Other Comments Or Suggestions: N/A
Questions For Authors: Although the paper is overall sound and convincing, I have some confusion regarding the equation $P(g_t|g_0)=p(X_t|X_0)P(A_t|A_0)$ on page 4. If I understand correctly, does this imply that, at time t, the influence of the node feature $X$ and the adjacency matrix $A$ on the graph are independent of each other?
While this can be explained by the assumption that the generation process is based on a standard Gaussian distribution, an intuitive explanation for why this assumption holds—or at least does not significantly impact the model's performance—would strengthen the paper further.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for the positive comments and insightful questions. These greatly help us improve our paper, and we appreciate the opportunity to address your questions here.
# Independency in the forward equation
* We would like to clarify that the independence assumption in the forward equation $\mathbb{P}(\mathbf{G}_t|\mathbf{G}_0)=\mathbb{P}(\mathbf{X}_t|\mathbf{X}_0)\mathbb{P}(\mathbf{A}_t|\mathbf{A}_0)$ is conditional on the initial graph $\mathbf{G}_0 = (\mathbf{X}_0, \mathbf{A}_0)$. In other words, while the forward noising processes for $\mathbf{X}$ and $\mathbf{A}$ are independently modeled given their initial states, the resulting variables $\mathbf{X}_t$ and $\mathbf{A}_t$ remain implicitly interdependent due to their dependence on $\mathbf{X}_0$ and $\mathbf{A}_0$ (which are themselves dependent on each other). This conditional independence assumption aligns with standard practices in the score-based graph generation literature, such as those discussed in [1, 2].
* Your question also highlights an exciting future research direction for the analysis of this paper: exploring the impact of replacing the independent noising processes with a jointly dependent one, potentially by comparing their convergence behaviors. We hypothesize that if the diffusion process could better reflect the dependencies in the original data (i.e., between $\mathbf{X}_0$ and $\mathbf{A}_0$), it may lead to improved learning. Specifically, aligning the noise structure with the data dependencies could help the score networks more effectively capture the interactions between features and topology—by providing samples with more consistent relationships throughout the diffusion process.
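A minimal sketch of the conditional-independence point: given $\mathbf{G}_0$, features and adjacency are noised by separate kernels, so $\mathbb{P}(\mathbf{G}_t|\mathbf{G}_0)$ factorizes while $\mathbf{X}_t$ and $\mathbf{A}_t$ remain coupled through $\mathbf{G}_0$. The VP-style schedule below is an illustrative assumption, not the paper's exact schedule:

```python
import numpy as np

def forward_noise(X0, A0, t, rng):
    # Each component is noised independently *given* G0 = (X0, A0), so
    # P(G_t | G_0) = P(X_t | X_0) * P(A_t | A_0), while X_t and A_t stay
    # statistically coupled through the joint distribution of (X0, A0).
    a = np.exp(-0.5 * t)       # signal coefficient (illustrative schedule)
    s = np.sqrt(1.0 - a**2)    # matching noise scale
    Xt = a * X0 + s * rng.standard_normal(X0.shape)
    At = a * A0 + s * rng.standard_normal(A0.shape)
    return Xt, At
```

At t = 0 the sample equals the data; as t grows, both marginals approach a standard Gaussian.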
Thank you again for this valuable point. We will incorporate this discussion into the revised version of our manuscript, and hope our response has satisfactorily addressed your questions.
[1] "Score-based generative modeling of graphs via the system of stochastic differential equations." *International conference on machine learning*.
[2] "DiGress: Discrete Denoising diffusion for graph generation." *The Eleventh International Conference on Learning Representations*. | Summary: The paper presents an analysis of graph diffusion processes building heavily on results from https://arxiv.org/abs/2211.01916 (at least based on my reading?) . They find that graph size is much more determinative for convergence rate than feature complexity (mirroring results from https://proceedings.neurips.cc/paper/2020/hash/99503bdd3c5a4c4671ada72d6fd81433-Abstract.html ) derive testable implications from their analysis (normalization helps, regular graphs are easier than power law graphs, graph size hurts more than feature size) which are verified in a toy example.
Claims And Evidence: The analysis looks sensible, although I did not have time to critically check each step.
The empirical analysis is a bit weak, but passes a toy example sniff test in what is primarily a theory paper
Methods And Evaluation Criteria: As above, the numerics are a toy example. I think it would be beneficial to check literature results of SOTA models that fit the paper's framework and check the graph properties for the expected correlation (maybe Kendall tau) with performance, to beef up the paper's applicability with relatively little work. Alternatively, pick tractable datasets or standard models (SBM; AB is already present; a few more) and confirm with numerics, not via the kernel generation method but via something more interpretable (the images look the same to me). Failing that, at least suitable statistical tests (see e.g. https://arxiv.org/abs/1811.12808) should be performed.
Theoretical Claims: As stated above, the claims seem supported, but no detailed anaysis has been performed.
Experimental Designs Or Analyses: see the methods and evaluation section.
Supplementary Material: no supplementary provided
Relation To Broader Scientific Literature: I think https://proceedings.neurips.cc/paper/2020/hash/99503bdd3c5a4c4671ada72d6fd81433-Abstract.html is an important paper to cite, since it offers some justification for why feature complexity might not matter. open question of _why_ graph size hurts so much though (intuitively it makes sense)
Essential References Not Discussed: none except the above suggestion.
Other Strengths And Weaknesses: nothing I can highlight
Other Comments Or Suggestions: nothing I can highlight
Questions For Authors: nothing I can highlight
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for the positive comments and thoughtful suggestions. These greatly help us in making our paper better. Below, we outline the efforts we have undertaken and plan to take in response to your suggestions.
## Suggested Reference
Thank you for pointing out the relevant reference! It will help us further strengthen our results. We will make sure to include it in the revised version of the paper.
## Empirical study improvement
Thank you very much for your thoughtful suggestions regarding the empirical study — we greatly appreciate the detailed and constructive feedback.
We have attempted to collect data from existing literature involving SOTA models, but so far have not had much success due to the variability in experimental setups. Real-world studies often differ across multiple factors simultaneously, making it challenging to isolate the effects of individual variables such as graph size or feature complexity. That said, we agree that this is a promising direction and will continue investigating it for future updates and revisions.
Regarding the evaluation metric: we chose KL divergence as our primary measure because our theoretical bounds are expressed in terms of KL divergence, allowing for a principled comparison between theory and experiment. While we acknowledge that more interpretable metrics could enhance accessibility, most evaluation methods for graph generation—especially when comparing distributions—ultimately rely on kernel-based similarity metrics (e.g., MMD, Gaussian kernel estimates, etc.). Developing more interpretable or task-specific evaluation metrics remains a meaningful open question in the field of generative modeling.
In response to your suggestion, we conducted an additional experiment using the Erdős–Rényi (ER) random graph model with connection probability $0.1$ [1]. While ER does not allow us to control structural features like degree heterogeneity (e.g., power-law vs. regular), it provides a clean testbed for examining the relative influence of graph size versus feature size, as well as the effect of normalization, under a controlled setting. The setup mirrors our experiments in the paper.
To quantify the effects, we further computed best-fit line coefficients to estimate the sensitivity of KL-divergence to changes in each variable:
| | 50 | 100 | 200 | 500 | Best-Fit Line Coefficient |
| --- | --- | --- | --- | --- | --- |
| Changing Graph Size (with fixed feature size= 50, with normalization) | 0.56 | 2.00 | 4.86 | 8.67 | 0.017 |
| Changing Feature size with normalization (with fixed graph size = 50) | 0.56 | 1.23 | 2.23 | 3.88 | 0.007 |
| Changing Feature size without normalization (with fixed graph size = 50) | 0.60 | 1.31 | 2.62 | 4.46 | 0.008 |
As shown above, the results support our theoretical predictions: graph size has a more pronounced effect on convergence than feature size, and feature normalization improves performance. These trends are consistent with those observed in our original experiments.
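The best-fit line coefficients in the table are consistent with an ordinary least-squares fit of KL divergence against size; a sketch reproducing them from the reported values (assuming a simple degree-1 `numpy.polyfit`):

```python
import numpy as np

sizes = np.array([50, 100, 200, 500])
kl_graph    = np.array([0.56, 2.00, 4.86, 8.67])  # varying graph size (normalized)
kl_feat     = np.array([0.56, 1.23, 2.23, 3.88])  # varying feature size (normalized)
kl_feat_raw = np.array([0.60, 1.31, 2.62, 4.46])  # varying feature size (no normalization)

slope_graph    = np.polyfit(sizes, kl_graph, 1)[0]     # ~0.017
slope_feat     = np.polyfit(sizes, kl_feat, 1)[0]      # ~0.007
slope_feat_raw = np.polyfit(sizes, kl_feat_raw, 1)[0]  # ~0.008
```

The larger slope for graph size matches the claim that convergence is more sensitive to graph size than to feature dimensionality, and the smaller slope with normalization matches the normalization claim.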
Thank you again for your valuable comments. We will incorporate these new results and the accompanying analysis into the revised version of the paper.
[1] Newman, Mark, Networks: An Introduction | Summary: The authors present convergence analysis for score-based graph diffusion generative models where both the generation of the feature vectors at each node of the graph as well as the graph structure is generated based on a diffusion model.
Claims And Evidence: The claims made are clear
Methods And Evaluation Criteria: Evaluation and method make sense
Theoretical Claims: I did not check the correctness of the proofs. I skimmed the proof structure and key lemmas.
Experimental Designs Or Analyses: I did not check the soundness of the experimental designs.
Supplementary Material: I skimmed through the supplementary appendix proofs.
Relation To Broader Scientific Literature: The joint generation creates extra difficulties in the convergence analysis as compared to prior work that focused on convergence analysis of generative models, but not graphical models. In particular, the two reverse diffusion processes are interdependent.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I found the main claim of the essence of the technical contribution not clear.
It is true that the two reverse SDEs are interdependent, but what was not at all clear to me was why theoretically can't one just view this as a single SDE on the pair G_t = (Xt, At).
What breaks if one views the generative process as a single SDE on Gt and applies prior results on the convergence of score-based generative models. In any single generative model, the process that generates any sub-component of the vector, is interdependent with the process that generates the remaining sub-components. So this type of difficulty exists in prior proofs too, if one views it as a single diffusion process.
The authors do not explain clearly, why such a route to a theoretical result is not viable, even if in practice one would view them qualitatively as two processes.
The proof seems to essentially be invoking prior theoretical results on convergence for single diffusion processes, but simply twice, for each diffusion. A better explanation as to which technical lemmas are novel would be very elucidating.
Other Comments Or Suggestions: No other comments.
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer, Thank you for your insightful comments! Your questions have helped us sharpen the presentation of our technical contributions, and we sincerely appreciate the opportunity to clarify them here.
## Why Not Formulate as a Single SDE on $\mathbf{G}_t$
You are absolutely right that, in principle, the coupled reverse SDEs for $\mathbf{X}_t$ (node features) and $\mathbf{A}_t$ (graph structure) could be rewritten as a single SDE over the joint variable $\mathbf{G}_t=(\mathbf{X}_t,\mathbf{A}_t)$. However, the two-process formulation is not just a modeling preference, but a technical necessity for capturing the rich structure and asymmetry in graph data. Specifically, graphs involve:
- Intra-structure dependency: among entries of $\mathbf{A}_t$.
- Intra-feature dependency: among entries of $\mathbf{X}_t$.
- Cross-dependency: between $\mathbf{A}_t$ and $\mathbf{X}_t$, which evolves through the shared time parameter and coupled score functions.
The evolution of these dependencies involves coupled score functions that dynamically influence each other. Representing these coupled processes as a single SDE would obscure their distinct dependency structures, hindering our ability to isolate and analyze the individual and cross-dependent behaviors clearly. Crucially, such an aggregation would mask critical insights into the non-isotropic effects of increasing graph size versus feature dimensionality and would obscure how topological properties (such as regular versus power-law structures) impact convergence.
Therefore, the two-process formulation not only matches more closely with the practical implementation (providing more practical insight) but also enables a finer-grained convergence analysis that reveals how these factors differentially affect generative performance—something not accessible in standard single SDE theory.
## What Is Technically Novel and Important?
We would like to first emphasize that our contribution is the first rigorous theoretical extension of these techniques to score-based graph generative models (SGGMs). Given the critical applications of SGGMs in domains such as drug discovery and protein synthesis, understanding their convergence behavior is essential for ensuring reliability and efficacy. While our overall proof framework builds on classical convergence theory for score-based generative models (SGMs), as we acknowledged in the paper, our work introduces several significant technical innovations:
- Due to the distinct dependency structure in graphs and the two-process formulation (discussed above), the derivations in Lemmas (such as B.5–B.12) are novel extension of existing SDE results. These lemmas require careful handling of the cross-dependencies between $\mathbf{A}_t$ and $\mathbf{X}_t$, which are absent in standard (single-process) SGM analyses. This leads to new forms of error decomposition and convergence bounds tailored to graph-structured data (as demonstrated explicitly in Theorems 4.1–4.3).
- Another core technical contribution lies in linking convergence behavior to graph-specific properties. Our analysis uniquely leverages spectral graph theory tools to link convergence behaviors directly to graph characteristics (e.g., graph size/feature dimensionality and topology). These tools lead to novel insights, such as identifying a preference for certain graph structures (regular vs. power-law graph), which are absent from the traditional SGM convergence literature.
We sincerely appreciate your valuable feedback, which has allowed us to refine and strengthen our paper. We will incorporate these discussions into the revision and hope our response has satisfactorily addressed your concerns and questions. | null | null | null | null | null | null | null | null |
From Uncertain to Safe: Conformal Adaptation of Diffusion Models for Safe PDE Control | Accept (poster) | Summary: This paper proposes a safe PDE control method based on the diffusion model inspired by conformal prediction. Specifically, the authors propose two new phases of post-training and inference-time fine-tuning to accommodate the quantified uncertainty score from the calibration set for the safety constraint and objective. Experiments show that the proposed method can achieve better overall results and meet safety constraints compared to baselines.
Claims And Evidence: From the perspective of PDE control theory, this paper solves a similar problem of in-domain PDE control, but the problem formulation in Eq (1) is more like offline safe RL because it misses the boundary and time conditions for the PDE dynamical system. I don't think the PDE safe control setting is the best manner to showcase the idea of a diffusion model with a conformal prediction for the safe sequential decision-making task. That is to say, as shown in (Liu et al. 2023a), there are tons of offline safe RL datasets and benchmarks to formulate the constrained optimization problem in Eq (1) and validate the methodology. The method part is disconnected from the motivation of the safe PDE control problem and can work on other general safe RL settings if it is a genuinely effective method.
Methods And Evaluation Criteria: There are some concerns and comments for method part.
- Overall, the method is basically some fine-tuning over the calibration set so that performance gets better, which is trivial and lacks in-depth insight. Post-training and inference-time fine-tuning are similar, and neither is novel or impressive to the audience.
- The distribution shift in Eq (7) is directly handled through previous weighted conformal prediction (Tibshirani et al. 2019), and post-training of Eq (12) directly follows the preliminary part of Eq (3), which greatly weakens the contribution and novelty.
- Regarding conformal prediction, I agree that exchangeability is the key assumption but it may not hold in sequential decision-making settings. However, the reasons are not only the distribution shift but also the correlation of sequential data. It seems to be problematic to use the sample $(u,w)$ as the probability density function in Eq (7) because of such correlation between $u,w$ and within each trajectory as well.
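For context, the exchangeability-based guarantee under discussion is the standard split-conformal quantile rule; a minimal sketch (of the classical rule, not the paper's weighted variant):

```python
import numpy as np

def conformal_quantile(scores, alpha):
    # Split conformal prediction: with n exchangeable calibration scores,
    # the ceil((n+1)(1-alpha))-th smallest score gives a threshold whose
    # prediction sets have marginal coverage >= 1 - alpha.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]
```

The guarantee relies on the calibration scores and the test score being exchangeable, which is exactly the assumption the review questions in sequential settings.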
Theoretical Claims: The theories and proofs directly follow the conformal prediction literature but under different settings. The theorems are not essential to the significance of the proposed method since it is very mature in conformal prediction.
Experimental Designs Or Analyses: I appreciate the different PDE settings and multiple baselines in the experiment part. However, it is expected to experiment on the PDE control benchmark https://github.com/lukebhan/PDEControlGym for off-the-shelf baselines and for fair comparison. Besides, in Table 1 and Table 3, some baselines can achieve 0% regarding safety constraints so it is expected to make the safety constraint harder to be satisfied to highlight the performance of the proposed method. I also wonder if the proposed method can do inference-time scaling, e.g. to adapt to different safety constraints without fine-tuning.
Supplementary Material: Yes, most parts are reviewed.
Relation To Broader Scientific Literature: It is benificial to scientific ML and AI4Science.
Essential References Not Discussed: Yes, some essential references are not discussed. PDE control problem has been well studied in the control and system community [1,2,3,4]. Another recent paper studies safe PDE boundary control [5], which should be discussed and compared. Also, regarding safe RL and safe control, control barrier function (CBF) based methods are missing [6,7].
---
[1] Krstic et al. Boundary control of PDEs: A course on backstepping designs, 2008
[2] Smyshlyaev et al. Adaptive control of parabolic PDEs, 2010
[3] Zhang et al. Controlgym: Large-scale control environments for benchmarking reinforcement learning algorithms, 2024
[4] Bhan et al. Pde control gym: A benchmark for data-driven boundary control of partial differential equations, 2024
[5] Hu et al. On the Boundary Feasibility for PDE Control with Neural Operators, 2024
[6] Dawson et al. Safe control with learned certificates: A survey of neural lyapunov, barrier, and contraction methods for robotics and control. 2023
[7] Zhao et al. Model-free safe control for zero-violation reinforcement learning, 2021
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: N/A
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for comments. Below are our responses.
>Q1. Problem formulation (Eq. 1): like offline safe RL, no boundary and time conditions.
- \(C(u,w)=0\) in Eq. 1 is the PDE constraint. Following your advice, we will add boundary and time conditions along with explanations.
>Q2. PDE-safe control setting may not be ideal, as safe RL can formulate the same. The method is decoupled from PDE and applicable to safe RL.
- PDE control scenario is **important and complex**. Its dimensions are high, and dynamics are nonlinear.
- We evaluate several **SOTA safe RL baselines** like CDT and TREBI.
- Our paper's motivation lie in **highlighting importance of safe PDE control and introducing it into deep learning-based control**. To address this, we design methods and develop **experimental scenarios, datasets, and baselines across domains**.
- Given **PDEs' high-dimensional and complex dynamics**, we choose diffusion models for their strong modeling power [1, 2]. To satisfy **initial conditions**, we apply conditional generation to enforce strict adherence.
- To address the concern, we provide results on **a safe RL benchmark** [3]. https://anonymous.4open.science/r/Safe-Diffusion-Models-for-PDE-Control-213C/Rebuttal.md
>Q3. The method is some fine-tuning on the calibration set and trivial. Post-training and finetuning are similar and not novel.
- Our method goes **beyond finetuning**, incorporating post-training, guidance and key uncertainty quantification via conformal prediction (CP).
- It adjusts **not only on the calibration set**: post-training uses both the training and calibration sets, while fine-tuning and guidance are applied to control targets and the calibration set.
- Post-training and inference-time finetuning adjust outputs **completely differently**: Post-training uses a reweighted loss (Eq. 12), while finetuning uses a sampling-based loss (Eq. 15) with additional guidance.
>Q4. Weighted CP directly follow Eq. 7; Eq. 12 directly follows Eq. 3.
- We do **not directly use Eq. 7**. As noted in Section 4.2, applying it requires estimating the intractable ratio between generated and dataset distributions. To address this, we design a loss (Eq. 12) and prove Theorems 4.2 and 4.3 to justify using Eq. 13 for valid coverage.
- Eq. 12 **does not directly follow Eq. 3**. Loss in Eq. 12 is designed to both enable ratio estimation and promote safer, more optimal distributions. Its form is derived in Appendix A.
>Q5. Exchangeability for CP breaks in sequential decision-making due to correlations between \(u\), \(w\), and within trajectories.
- As noted after Eq. 5, the distribution is over full state and control trajectory pairs, so dependencies between \(u\) and \(w\), and within trajectories don't affect exchangeability.
>Q6. Theories directly follow CP literature but under different settings, and are not critical to method's significance.
- Except for Theorem 4.1, theorems are largely **independent of existing CP theory**, with separate derivations.
- Theorems are **essential**: Theorem 4.2 aligns post-training distribution with the desired form, and Theorems 4.1–4.3 jointly ensure valid conformal intervals for the post-trained model.
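For readers less familiar with CP, the basic split-conformal quantile that these coverage theorems generalize can be sketched as follows (standard textbook construction, not the paper's weighted variant):

```python
import math

def conformal_quantile(cal_scores, alpha=0.1):
    # Split conformal prediction: given n exchangeable calibration scores,
    # the ceil((n+1)*(1-alpha))-th smallest score upper-bounds a fresh
    # test score with probability at least 1 - alpha.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

# Nine calibration safety scores, target coverage 80%.
q = conformal_quantile([0.1, 0.4, 0.2, 0.9, 0.3, 0.5, 0.7, 0.6, 0.8], alpha=0.2)
```

Under a distribution shift between calibration and generated data, this plain quantile no longer guarantees coverage, which is why a weighted variant with an estimated density ratio becomes necessary.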
>Q7. Experiment on PDE ContRol Gym for off-the-shelf baselines and fair comparison.
- This benchmark targets an **online** setting and is **not designed for safe PDE control**. As noted in the Introduction, interacting with environments in safe control is risky, so we focus on offline control. In the absence of safe PDE benchmarks, we **contribute safe datasets and environments**. Our experiments follow **widely used prior works** (PhiFlow: 1.6k stars) and use **the same equations** as PDE ContRol Gym, including Burgers' and NS.
- **This benchmark's baselines** are **not for safe control**, while we compare **SOTA safe RL methods**. Our comparison is **fair**, with open-sourced code in the paper and tuned hyperparameters.
- We will **cite** this valuable benchmark in Related Works of the next version.
>Q8. Some baselines achieve 0% unsafety (Tables 1 and 3), so strengthening constraints.
- There are two criteria for safe control methods: **safety and control accuracy under safety**. If only one method is safe, we can't compare the second criterion, limiting comprehensive analysis.
>Q9. Can this method adapt to different safety constraints without finetuning?
- **Different constraints**: Our method remains applicable by computing each safety score minus its bound, combining them via smooth max [4], and constraining the result to be below 0, as in the current method.
- **Without finetuning**: We can take guidance to direct generated data to meet constraints.
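To make the smooth-max combination concrete, here is a minimal numpy sketch (hypothetical function name; the log-sum-exp surrogate with temperature `tau` is one standard choice of smooth max, not necessarily the exact form used in [4]):

```python
import numpy as np

def smooth_max(margins, tau=0.01):
    # Differentiable surrogate for max over constraint margins
    # (each safety score minus its bound); the combined value should
    # be <= 0 for the trajectory to be considered safe.
    m = np.asarray(margins, dtype=float)
    return tau * np.log(np.sum(np.exp(m / tau)))

feasible = smooth_max([-0.5, -0.3, -0.2])    # all constraints satisfied
violating = smooth_max([-0.5, 0.4, -0.2])    # one constraint violated
```

With a small `tau`, the surrogate tracks the hard maximum (here roughly -0.2 and 0.4), so constraining it below 0 enforces every individual constraint at once while remaining differentiable for guidance.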
>Q10. References.
- We will add them in the paper.
**Reference**
[1] Synthetic Lagrangian turbulence by generative diffusion model.
[2] DiffusionPDE: Generative PDE-solving under partial observation.
[3] Datasets and Benchmarks for Offline Safe Reinforcement Learning.
[4] Magnetic control of tokamak plasmas through deep reinforcement learning.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed clarification. I am not an expert in offline safe RL, and I suggest the authors add more explicit discussion regarding "online" PDE ContRol Gym [1] and the "online" safety filter based safe PDE boundary control [2] in the updated version. Also, since PDE control has been an old topic for decades in the control theory community, and so has safe control, the listed literature should be included in the related work part in the updated version. Other than that, most of my concerns have been addressed. I raise my score to 3.
[1] Bhan et al. Pde control gym: A benchmark for data-driven boundary control of partial differential equations, 2024
[2] Hu et al. On the Boundary Feasibility for PDE Control with Neural Operators, 2024
---
Reply to Comment 1.1.1:
Comment: Thank you for the helpful suggestions. We will include a more explicit discussion about the "online" PDE ContRol Gym [1] and the "online" safety-filter-based safe PDE boundary control [2] in the next version. We will also add the listed references to the Related Work section and include more literature on traditional safe control.
References
[1] Bhan, et al. Pde control gym: A benchmark for data-driven boundary control of partial differential equations.
[2] Hu, et al. On the Boundary Feasibility for PDE Control with Neural Operators. | Summary: This paper introduces an approach that maintains safety constraints in PDE-constrained control problems. It employs uncertainty estimation with conformal prediction to optimize control while preserving safety. It fine-tunes a diffusion model using conformal prediction to produce safe control sequences. The experimental results in tasks like 1D Burger equation show the effectiveness of the proposed method.
## update after rebuttal
My major concerns, such as running time, have been resolved. Therefore, I maintain my score as weak accept.
Claims And Evidence: Yes. I think the claims are correct and clear.
Methods And Evaluation Criteria: Yes. The evaluation criteria follow previous works and effectively assess performance, and the method directly addresses the safety issue it is proposed to solve.
Theoretical Claims: Yes. I think it is majorly correct.
Experimental Designs Or Analyses: The paper presents a well-constructed experimental design that assesses SafeDiffCon's performance across three PDE control tasks: the 1D Burgers' equation, 2D incompressible fluid, and controlled nuclear fusion. The chosen baselines are also relatively recent. Therefore, the experimental design is valid.
Supplementary Material: N/A. No supplementary material is uploaded.
Relation To Broader Scientific Literature: This research makes a good contribution by combining diffusion models, conformal uncertainty quantification, and safe control frameworks within PDE-constrained control systems which fills a research gap.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
- This paper is well-written, and the proofs are carefully carried out.
- The evaluation is comprehensive, and the experimental results show the effectiveness of the proposed method.
**Weakness**
- Lack of evaluation of its performance in real-world scenarios.
- The performance of the proposed framework heavily relies on the collected training data.
- The introduced framework is somewhat complex, and its running time is unclear.
Other Comments Or Suggestions: 1. It would be better for the authors to provide a table about the running time of the proposed methods and the baselines.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the constructive review. Below are the responses.
>Q1. Lack of evaluation of its performance in real-world scenarios.
- In fact, the tokamak control task in the paper is a near-real-world experiment in controlled nuclear fusion, which is a highly nonlinear and strongly coupled system. This environment is trained using real data collected from the KSTAR tokamak device (https://github.com/jaem-seo/KSTAR_tokamak_simulator). Additionally, its simulator has been tested against many real discharges, showing reasonable predictions with acceptable accuracy.
>Q2. The performance of the proposed framework heavily relies on the collected training data.
- Compared to baselines, our method **does not rely more heavily on the collected training data**. The table below shows the average unsafety rate of the training set and the generated data. It can be seen that the distribution of data generated by our method differs from that of the training data. Moreover, only our method achieves full safety.
| Datasets|Training data|Generated data|
| -| -| -|
| Burgers' equation|89.7%|0%|
| Incompressible fluid|53.1%|0%|
| Tokamak fusion reactor|71.2%|0%|
- As mentioned in the third and fourth paragraphs of the Introduction, interacting with the environment in the safe PDE control problem is hazardous. Therefore, we choose a more appropriate **offline setting**, where we can only rely on the data that has already been collected.
>Q3. The introduced framework is a little complex.
- Overall framework:
- The method begins with the introduction of **uncertainty quantification** through the concept of uncertainty quantiles. This is integrated throughout the algorithm to address potential gaps between the actual and predicted safety scores, which could lead to unsafe events not anticipated by the model. By introducing them, we address the critical issue of predictive uncertainty and its impact on safety scores, enabling more robust and safe control actions.
- After pre-training, we employ a reweighted loss function to **post-train** the model’s output distribution, guiding it toward regions with better safety and more optimal objectives.
- During the **inference** phase, task-specific fine-tuning is performed on the post-trained model, allowing it to achieve better control performance and stronger safety guarantees for specific tasks, with minimal adjustments needed.
- To aid understanding, we have presented our proposed method with the framework outlined in **Figure 1, Algorithm 1, and the first paragraph of the Method section**. We are happy to **provide a clearer explanation** of the algorithm at the beginning of the Method section to enhance readability.
>Q4. Comparison of running time.
- Thanks for the good suggestion. The table below presents the **comparison of the inference time between our method and baselines** on the Burgers' equation on A800 with 8 CPUs. We would like to highlight that we have enhanced the inference efficiency using several techniques. Firstly, by introducing post-training, the data distribution is already closely aligned with the desired distribution prior to inference. Secondly, we significantly speed up the sampling process of the diffusion model using DDIM [1]. As a result, our method demonstrates a reasonable inference speed and is considerably faster than TREBI, which is also based on the diffusion model.
|Methods|BC|PID|SL-Lag|MPC-Lag|CDT|TREBI|Ours|
|-|-|-|-|-|-|-|-|
|Inference Time (min)|0.1351|0.1091|0.8842|26.2905|0.0890|13.5525|2.3575|
**References**
[1] Denoising Diffusion Implicit Models.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! I have one more question here. Which base diffusion model are you using? Is it the DDIM model?
---
Reply to Comment 1.1.1:
Comment: Thank you for your question! The diffusion model we use is trained following the standard DDPM framework [1], while the sampling process follows DDIM [2]. However, our code also provides an option for sampling using the standard DDPM procedure.
References:
[1] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models[J]. Advances in neural information processing systems, 2020.
[2] Song J, Meng C, Ermon S. Denoising Diffusion Implicit Models[C]. International Conference on Learning Representations, 2021. | Summary: This paper introduces SafeDiffCon, a method that integrates safety constraints into diffusion models for PDE control tasks. By leveraging conformal prediction to quantify model uncertainty, the approach employs post-training with a reweighted loss and inference-time fine-tuning to align generated control sequences with safety requirements. Experiments on 1D Burgers’ equation, 2D fluid dynamics, and nuclear fusion control demonstrate that SafeDiffCon uniquely satisfies safety constraints while achieving superior control performance compared to baselines like BC, MPC-Lag, and TREBI. The key innovation lies in combining uncertainty-aware conformal adaptation with diffusion models to address distribution shifts in offline settings.
Claims And Evidence: The paper claims that SafeDiffCon achieves optimal control under safety constraints, and the experiment results support the claim.
Methods And Evaluation Criteria: The post-training and fine-tuning methods are suitable for constrained problems. The benchmark datasets Burgers, NS, and Tokamak make sense for the control applications.
Theoretical Claims: The theoretical claims seem to be correct.
Experimental Designs Or Analyses: The inference efficiency, e.g., the inference time cost, is not compared. The proposed model's inference-time fine-tuning might incur significant extra time cost.
Supplementary Material: I reviewed the experiment details in Appendix E.
Relation To Broader Scientific Literature: Related to Control Theory, Numerical Methods.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: 1. Can you provide inference time cost comparison with baselines?
2. Why is the diffusion model chosen for the control problem, which is essentially different from the generation task?
3. What PDE is the Tokamak control problem driven by?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your insightful comments. Below are our responses.
>Q1. Inference time cost comparison with baselines. The proposed model's inference finetune might induce significant extra time cost.
- Great suggestion. We would like to point out that we **have improved the efficiency of inference**. On one hand, by introducing post-training, the data distribution is already close to the desired distribution before inference. On the other hand, we have significantly accelerated the diffusion model's sampling using DDIM [9].
- The table below presents a **comparison of the inference time between our method and the baseline** on the Burgers' equation on A800 with 8 CPUs. It shows that our method achieves a moderate speed and is much faster compared to TREBI, which is also based on the diffusion model.
|Methods|BC|PID|SL-Lag|MPC-Lag|CDT|TREBI|Ours|
|-|-|-|-|-|-|-|-|
|Inference Time (min)|0.1351|0.1091|0.8842|26.2905|0.0890|13.5525|2.3575|
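The DDIM acceleration mentioned above reuses a DDPM-trained noise predictor inside a deterministic update that can skip schedule steps. A toy numpy illustration (not the paper's code; `abar` denotes the cumulative schedule $\bar\alpha_t$, and the exact noise stands in for a learned predictor):

```python
import numpy as np

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    # Deterministic DDIM update (eta = 0): predict the clean sample,
    # then jump directly to the (possibly much earlier) step abar_prev.
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps_pred) / np.sqrt(abar_t)
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps_pred

# Sanity check: with the exact noise, a single step back to abar_prev = 1
# recovers the clean sample regardless of how noisy x_t was.
rng = np.random.default_rng(0)
x0, eps = rng.normal(size=3), rng.normal(size=3)
abar_t = 0.5
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
x_rec = ddim_step(x_t, eps, abar_t, abar_prev=1.0)
```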
>Q2. Why is the diffusion model chosen for the control problem, which is essentially different from the generation task?
- In fact, diffusion models have already been **widely used** in decision-making tasks, including scenarios such as PDE systems [1, 2], robotics [3, 4], and traditional reinforcement learning [5, 6, 7]. This is because the task can naturally be modeled as a probabilistic distribution over state and control sequences, and the diffusion model has many unique advantages in handling such tasks.
- It has superior modeling capabilities due to its ability to capture complex distributions and generate high-fidelity outputs by progressively refining predictions over multiple denoising steps.
- By denoising from a Gaussian distribution, it models the entire trajectory, meaning it learns to generate states at different time steps simultaneously, aiding in capturing long-range dependencies. So it can perform **global optimization**.
- Furthermore, studies have shown that it is **robust** to noise which is essential in safe control problems [8].
>Q3. What PDE is the Tokamak control problem driven by?
- Tokamak control involves the coupling of multiple PDEs. The primary equation is the Grad–Shafranov equation, along with others such as the Heat Transport Equation and the Skin Effect Equation [10].
**References**
[1] A generative approach to control complex physical systems.
[2] Wavelet diffusion neural operator.
[3] Diffusion policy: Visuomotor policy learning via action diffusion.
[4] Hierarchical diffusion policy for kinematics-aware multi-task robotic manipulation.
[5] Planning with diffusion for flexible behavior synthesis.
[6] Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning.
[7] Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling.
[8] Diffusion Models are Certifiably Robust Classifiers.
[9] Denoising Diffusion Implicit Models.
[10] Reconstruction of current profile parameters and plasma shapes in tokamaks. | Summary: The paper introduces SafeDiffCon, a method integrating safety constraints into deep learning-based control of PDE systems through diffusion models. Addressing the gap in existing methods that neglect safety, SafeDiffCon employs conformal prediction to estimate uncertainty quantiles, which guide both post-training and inference phases. Evaluated on 1D Burgers equation, 2D incompressible fluid flow, and a nuclear fusion control problem, SafeDiffCon uniquely satisfies all safety constraints across tasks, outperforming classical and deep learning baselines in control performance. Key contributions include the integration of uncertainty quantification via conformal prediction, safety-constrained diffusion training, and adaptive inference mechanisms, demonstrating robust and safe PDE control in complex scenarios.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I have checked the proofs.
Experimental Designs Or Analyses: Following previous works, the designs of experiments are clear and well-organized. The analysis is also convincing.
To improve the quality of the paper, I suggest the authors provide more ablation analysis on the other two tasks, 1D Burgers Equation and nuclear fusion control, to better demonstrate the effectiveness of the proposed method.
Supplementary Material: I have reviewed all of the supplementary material.
Relation To Broader Scientific Literature: The work builds upon existing diffusion models for physical control (like Wei et al., 2024; Hu et al., 2024) but extends them with conformal prediction techniques (Vovk et al., 2005; Tibshirani et al., 2019) to handle distribution shifts between training data and desired safe controls. The approach combines post-training with reweighted loss functions and inference-time fine-tuning, enabling models to satisfy safety constraints while optimizing control objectives. This relates to safe offline reinforcement learning methods like CPQ (Xu et al., 2022), COptiDICE (Lee et al., 2022), and TREBI (Lin et al., 2023), but differs by specifically addressing PDE-constrained systems and using uncertainty quantiles based on conformal prediction to provide safety guarantees. The work demonstrates applications in three different physical domains (1D Burgers' equation, 2D incompressible fluid dynamics, and controlled nuclear fusion), showing greater effectiveness than traditional control methods (PID, MPC-Lag) and deep learning alternatives.
Essential References Not Discussed: To the best of my knowledge, the paper has discussed all essential references.
Other Strengths And Weaknesses: Strengths:
- The paper addresses a critical gap in existing methods by integrating safety constraints into deep learning-based control of PDE systems.
- This paper is well-written and clearly organized, with a strong motivation and clear contributions.
- The experimental results are well-presented and provide a comprehensive evaluation of the proposed method in various physical domains, highlighting its effectiveness.
Weakness:
- The effectiveness of the uncertainty quantile seems to be related to the choice of various parameters, such as coverage probability $\alpha$, the split ratio of training/calibration data, and the weight of objective $\gamma$. It is suggested to provide more ablation analysis on these parameters to better understand their impact on the performance of the proposed method.
- As mentioned above, more ablation analysis on the other two tasks, 1D Burgers Equation and nuclear fusion control, would further demonstrate the effectiveness of the proposed method.
Other Comments Or Suggestions: There are a few typos in the paper that need to be corrected:
- On page 6, in Figure 2's caption, the authors refer to their method as "SafeConPhy" instead of "SafeDiffCon".
- Same on page 19, in appendix H.1, the authors refer to their method as "SafeConPhy" instead of "SafeDiffCon".
- On page 22, in the "H.7. PID" section, there's a typo in the first sentence where "Propercentageal" is written instead of "Proportional" when describing the PID control method.
Questions For Authors: - To what extent can the conformal prediction intervals in SafeDiffCon provide reliable coverage guarantees for safety scores when applied to novel control tasks during inference, especially considering the distribution shift between calibration data and the optimal control distribution?
- While the paper presents an interesting approach to safe PDE control, I'm curious about the specific choice of diffusion models as the foundation. Could you elaborate on the intrinsic advantages that diffusion models provide for your method compared to other generative or deep learning approaches? Where specifically does the synergy between your uncertainty quantification framework and the diffusion model architecture manifest? Would applying similar conformal adaptation techniques to alternative model architectures yield comparable safety guarantees and performance improvements? These are my main concerns of the paper.
## Post Rebuttal
Most of my concerns are appropriately addressed. I choose to increase the score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your recognition. Below are our responses.
>Q1. Ablation studies on other two tasks, 1D Burgers' Equation and tokamak control.
- Thanks for the suggestion. We conduct ablation studies on these two tasks. The results show that removing any module degrades the model and causes it to fail the safety constraints, so each module is effective. These results will be added to the manuscript.
Burgers':
|Methods|J|$R_{sample}$|$R_{time}$|$R_{point}$|
|-|-|-|-|-|
|SafeDiffCon|0.0011|0%|0%|0%|
|w/o post-training|0.0014|4%|50%|0.01%|
|w/o fine-tuning|0.0007|40%|20%|3%|
|w/o Q|0.0006|30%|10%|1%|
Tokamak:
|Methods|J|$R_{sample}$|$R_{time}$|
|-|-|-|-|
|SafeDiffCon|0.0094|0%|0%|
|w/o post-training|0.0153|10%|0.52%|
|w/o fine-tuning|0.0210|50%|7.74%|
|w/o Q|0.0269|28%|0.65%|
>Q2. Analysis on parameters, such as coverage probability α, the split ratio of training/calibration data, and the weight of objective γ.
- Great suggestion! We conduct an experimental analysis of these parameters. The tables show that they do not substantially impact performance, indicating that the model is robust. This will be added to the next version.
- Tokamak:
|α|J|$R_{sample}$|$R_{time}$|
|-|-|-|-|
|0.8|0.01017|0%|0%|
|0.85|0.0091|0%|0%|
|0.9|0.0094|0%|0%|
|0.95|0.0095|0%|0%|
|Split ratio|J|$R_{sample}$|$R_{time}$|
|-|-|-|-|
|0.005|0.0124|4%|0.03%|
|0.01|0.0093|0%|0%|
|0.02|0.0094|0%|0%|
- Incompressible fluid:
|γ|J|SVM|$R_{sample}$|
|-|-|-|-|
|0.01|0.3548|0|0%|
|0.1|0.4953|0.004|2%|
|0.3|0.498|0.003|2%|
>Q3. How reliable are the conformal intervals in SafeDiffCon for safety scores on novel control tasks, given the distribution shift between calibration data and the optimal control distribution during inference?
- Good questions! Novel control tasks include **three cases**: (1) new equation forms (i.e., dynamics), (2) new safety constraints and bounds, and (3) a shift between the optimal control distribution and the distribution of the calibration set.
- We primarily discuss **the third case** in the paper. Both theoretical and experimental results demonstrate that our method can adapt to this distribution shift, ensuring that the actual score is covered. However, the theoretical analysis assumes that the diffusion model's approximation of the training set distribution during pretraining is not too poor.
- We lack discussion of **the first and second cases** in the paper, which will be added in the next version. Since Eq. 5 and Eq. 11 are tied to specific dynamics and safety constraints, the conformal interval doesn't generalize to new ones. However, we can **skip the post-training and use inference-time finetuning** to compute conformal prediction intervals based on new tasks, ensuring reliable coverage guarantees.
>Q4. Why choose diffusion models? What advantages do they offer over other generative or deep learning methods? Where specifically does the synergy between your uncertainty quantification framework and the diffusion model architecture manifest? Could similar conformal techniques applied to other models achieve comparable safety and performance?
- Since our approach to uncertainty quantification is **probabilistic**, we require a model with an explicit probabilistic framework. Diffusion models meet this requirement, as they allow us to **analyze the relationship between the generated distribution and training set distribution**. This enables us to incorporate probability-related theory into its training and sampling algorithms, such as our consideration of distribution shift and the design of reweighted loss.
- With multi-step denoising, diffusion models have **strong modeling capabilities**, allowing them to handle high-dimensional, long-term problems. They have been used in weather and PDE system modeling tasks [1, 2]. Additionally, because they denoise from a Gaussian distribution and **model the entire trajectory rather than transition pairs**, they can perform **global optimization**, and have been applied in many decision-making scenarios [3, 4]. Furthermore, studies have shown that they are **robust**, meaning they can withstand noise attacks [5], which is critical in safe control. **Experimental results** also demonstrate their superiority in control problems.
- **Alternative model architectures** can also draw from ideas of conformal prediction. However, due to the strong capabilities of diffusion models and their ability to explicitly model the relationship between training and generated distributions, they integrate with conformal prediction more naturally and effectively.
>Q5. Typos.
- Thanks. We will correct them.
**References**
[1] Dyffusion: a dynamics-informed diffusion model for spatiotemporal forecasting.
[2] Synthetic Lagrangian turbulence by generative diffusion models.
[3] Planning with diffusion for flexible behavior synthesis.
[4] Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning.
[5] Diffusion Models are Certifiably Robust Classifiers. | null | null | null | null | null | null |
PIPA: Preference Alignment as Prior-Informed Statistical Estimation | Accept (poster) | Summary: The submission presents an approach called Prior-Informed Preference Alignment (PIPA) that accommodates paired and unpaired preference data, as well as answer and step-level annotations, for the purpose of doing offline preference alignment in language models. The approach is claimed to unify recent approaches like DPO and KTO into an RL-free probabilistic framework.
PIPA starts by assuming an (unpaired) preference dataset made of triplets $(x, y, c)$ sampled from $p^{data}(x, y, c)$, where $x$ is the instruction input, $y$ is the model response, and $c$ represents whether the response was desirable or not. It then formulates the preference learning problem as minimizing the discrepancy between the model's distribution and the data distribution subject to some prior constraint on the model's distribution. The nature of the constraint gives rise to different PIPA variants.
PIPA-M attempts to model the conditional distribution $p^\mathrm{data}(y, c \mid x)$ under the constraint that when marginalizing over $c$ the equality $p(y \mid x) = p^\mathrm{prior}(y \mid x)$ (the latter being defined through a pretrained LLM) holds. It introduces two parameterized functions $f^\theta(y \mid x)$ and $g^\theta(x)$ to represent the conditionals $p(y \mid x, c = 1)$ and $p(c = 1 \mid x)$, respectively. The constraint is enforced by construction by i) defining $p^\theta(y \mid x, c = 1) = f^\theta(y \mid x) g^\theta(x)$ and $p^\theta(y \mid x, c = 0) = p^{prior}(y \mid x) - f^\theta(y \mid x) g^\theta(x)$ and ii) enforcing that $p^\theta(y \mid x, c = 0) > 0$ via $g^\theta(x) = \min(g^\theta_0(x), p^\mathrm{prior}(y \mid x) / f^\theta(y \mid x))$.
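For concreteness, reading $f^\theta(y \mid x)\, g^\theta(x)$ as the joint $p^\theta(y, c = 1 \mid x)$ (a plausible interpretation of the construction summarized above, not notation taken verbatim from the paper), the marginalization constraint holds by construction:

```latex
p^\theta(y \mid x)
  = \sum_{c \in \{0,1\}} p^\theta(y, c \mid x)
  = f^\theta(y \mid x)\, g^\theta(x)
    + \bigl( p^{\mathrm{prior}}(y \mid x) - f^\theta(y \mid x)\, g^\theta(x) \bigr)
  = p^{\mathrm{prior}}(y \mid x),
```

and the clipping of $g^\theta$ simply keeps the second summand nonnegative.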
PIPA-N instead attempts to model the conditional distribution $p^\mathrm{data}(c \mid x, y)$ under the constraint that $p^\theta(y | x, c = 0) = p^\mathrm{prior}(y \mid x)$. The parameterized $p^\theta(c = 1 \mid x, y)$ is constructed using this equality constraint and conditional probability identities, and the counterpart for $c = 0$ is obtained by noting that the two must sum up to 1 since $c$ is binary.
The step-level variants for PIPA-M and PIPA-N are defined analogously, with the difference that the functions $f^\theta$ and $g^\theta$ are defined autoregressively.
Experiments are presented for all combinations of {paired,unpaired} preferences and {answer,step}-level annotations using an unpaired preference dataset for math reasoning that includes problems from GSM8K and MATH. PIPA is compared against DPO, KTO, and IPO as well as the step-level variants of DPO and KTO and is shown to outperform competing approaches in all settings. Ablations are presented to compare PIPA-M and PIPA-N, examine the effect of several design decisions, and investigate the effect of step-level annotation.
## Update after rebuttal
The discussion with the authors addressed my concerns regarding the soundness of the proposed approach, and I feel more inclined to recommend acceptance. I have adjusted my score accordingly. I remain worried about clarity, but the authors' efforts in the discussion alleviate that to some extent.
Claims And Evidence: I am not entirely convinced by the connection drawn between PIPA and DPO. Firstly, Equation 6 in Appendix A.1 represents the DPO objective only in the special case where $\beta = 1$. Secondly, and more importantly, the chosen and rejected responses are characterized as having been sampled from $p(y \mid x, c = 1)$ and $p(y \mid x, c = 0)$, respectively, but this is not how the data generating process works in practice for paired preferences: two responses are sampled, and the annotator makes a relative judgement on the two responses. This means that response $y$'s categorization into $c = 0$ or $c = 1$ is intrinsically linked to the second response $y'$ in the pair that was presented to the user: response $y$ could be bad, but so long as response $y'$ is _worse_, $y$ would be associated with $c = 1$.
Methods And Evaluation Criteria: The proposed approach makes sense in how it is constructed: having verified the derivations on my own for Sections 2.2 and 2.4, the objectives are sound and follow naturally from the definition of conditional probabilities.
One aspect of PIPA that makes less sense to me is what $g^\theta(x)$ is meant to represent. A literal interpretation would be that it is the probability of a correct answer given an instruction $x$, but that has to depend on how answers are generated from the instruction, right? If so, what generative process do we assume?
Theoretical Claims: I have not checked the proofs in the Appendix, but I did verify for myself that the objectives constructed in the main paper correctly follow from first principles of conditional probabilities.
Experimental Designs Or Analyses: The dataset used to compare approaches makes sense in the unpaired case, but I don't think creating paired data out of correct and incorrect responses to problems is a good stand-in for a paired preference dataset (see my point about the paired/unpaired correspondence in Claims and Evidence). Given the clear dichotomy between "correct" and "incorrect" in the mathematical domain, such a procedure cannot create paired data where two responses are satisfying but one is better than the other (e.g., both are factually correct, but one is more detailed than the other), or two responses are bad, but one is worse than the other (e.g., both are factually incorrect, but one uses toxic language). It's unclear to me how PIPA's way of adapting to paired data would fare in that case in comparison to approaches specifically designed for paired data. A comparison on a paired dataset would be necessary to determine that.
I am also concerned that the model selection procedure is not sufficient to ensure that each method is presented in the best light possible. Are we sure that some methods wouldn't benefit from smaller or larger batch sizes than 256? Are we sure that some methods wouldn't benefit from a smaller or larger number of parameter updates? Why was beta chosen to be 0.1 for DPO, IPO, and Step-DPO? Do we know that this is a good hyperparameter choice for AlphaMath?
Supplementary Material: I had a look at Appendix A.1.
Relation To Broader Scientific Literature: In general the submission does a good job of relating to the broader scientific literature. Dumoulin et al. (2024) is cited when discussing DPO's probabilistic interpretation, but is not mentioned among the works that "[approach] alignment from a probabilistic perspective"; why is that?
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The submission presents interesting ideas in its proposed approach.
One major weakness is its clarity. I think I now understand the ideas it tries to convey, but I had to spend a lot of time connecting the dots and navigating between sections before reaching that point. A few notable examples:
* The introduction sells the proposed approach as being applicable to both paired and unpaired data, but the preference alignment problem formulation in Section 2.1 describes the unpaired case specifically without going over the paired case. How PIPA handles paired data is only made clear in Section 3.1: "Both KTO and PIPA decouple the paired data"; "[...] we construct the paired subset by matching each correct solution to a corresponding incorrect solution from the same problem [...]". From this I infer that adapting PIPA to work on a paired dataset amounts to treating all chosen responses in the comparison pairs as having been classified as "correct/desirable" and all rejected responses as "incorrect/undesirable". This should be clarified sooner in the paper. (I also have my doubts about the validity of this adaptation; see my point in Claims and Evidence, but essentially if two responses are bad but one is less so than another, does it even make sense to say that it is "correct/desirable"?)
* The notation also contributes to the confusion: Section 2.1 talks about "estimating $p(y | x, c = 1)$", which suggests that this is the target distribution we are trying to approximate, yet the preference dataset is sampled from $p^\mathrm{data}$, and MLE usually attempts to approximate the distribution from which the empirical data is sampled. Do the authors mean that $p(y | x, c = 1)$ is the distribution with which we try to approximate $p^\mathrm{data}$? This is what Equation 1 suggests, but it should be clarified.
* The connection between PIPA-M's Equation 3 and Step 3 in Algorithm 1 is not immediately obvious. If I understand correctly, since $p^\mathrm{prior}$ does not depend on $\theta$, the derivative with respect to $\theta$ for $c = 0$ and $c = 1$ in Equation 3 are identical to that of step 3 in Algorithm 1, which is why the latter is substituted for the former to unify PIPA-M and PIPA-N in Algorithm 1. Is this correct? In any case, the substitution should be explained more clearly.
Other Comments Or Suggestions: * In Section 2.2's last unnumbered equation, doesn't the optimization reduce to _maximizing_ the expected log-likelihood rather than minimizing it?
* What does the subscript 0 in $p^\mathrm{prior}_0(y \mid x)$ mean in Equation 5?
Questions For Authors: 1. How was the dataset partitioned for training, validation, and testing? What criterion (and what dataset split) was used to determine the best learning rate in the grid search?
2. The KTO paper also presents results on GSM8k which appear to be different from the ones presented in the submission. What explains this discrepancy?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for the detailed summary of the PIPA-M and PIPA-N derivations, including their extension to the step-level setting, as well as for recognizing our comprehensive experiments on math tasks.
## 1. Connection between PIPA and DPO
### 1.1 $\beta$
The reviewer mentions that our current PIPA-N only reduces to DPO when $\beta=1$. We emphasize that DPO with $\beta \neq 1$ directly corresponds to PIPA-N with a power prior [1] in Bayesian analysis, meaning:
$$p(A|B) \propto p(B|A)^{\beta} p(A).$$
Specifically, introducing $\beta$ modifies the original DPO loss as follows:
$$\max_\theta \quad \log\sigma\left(\beta\log \frac{p^{\theta}(y^+ | x, c = 1)p^{\text{prior}}(y^- | x)}{p^{\theta}(y^- | x, c = 1)p^{\text{prior}}(y^+ | x)}\right)
= -\log\left(1+\left(\frac{p^{\theta}(y^- | x, c = 1)p^{\text{prior}}(y^+ | x)}{p^{\theta}(y^+ | x, c = 1)p^{\text{prior}}(y^- | x)}\right)^\beta\right).$$
Furthermore, when incorporating the power prior $\beta$ in PIPA-N, all terms of $h_\theta(y_1,y_2,x)$ in equation (9) transform into $h_\theta^{\beta}(y_1,y_2,x)$. It is straightforward to verify that the updated PIPA-N loss with $\beta$ precisely matches the DPO loss with $\beta$.
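The identity above follows from $\log\sigma(z) = -\log(1+e^{-z})$. As a quick numerical sanity check, here is a minimal sketch; all probability values below are hypothetical, chosen only for illustration:

```python
import math

def log_sigmoid(z):
    # log(sigmoid(z)) = -log(1 + exp(-z)); plain form, fine for moderate z
    return -math.log(1.0 + math.exp(-z))

# hypothetical probability values for one chosen/rejected pair
p_pos, p_neg = 0.30, 0.05          # p^theta(y|x, c=1) for y+ and y-
prior_pos, prior_neg = 0.20, 0.10  # p^prior(y|x) for y+ and y-
beta = 0.7

ratio = (p_pos * prior_neg) / (p_neg * prior_pos)
lhs = log_sigmoid(beta * math.log(ratio))     # DPO objective with beta
rhs = -math.log(1.0 + (1.0 / ratio) ** beta)  # power-prior PIPA-N form
assert abs(lhs - rhs) < 1e-12
```

The check holds for any positive ratio, so the equivalence does not depend on the particular values chosen here.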
### 1.2 Sampling
We respectfully disagree with the reviewer's interpretation and emphasize that our PIPA process aligns with practice. As the reviewer noted, an answer $y$ can be classified as positive ($c=1$) or negative ($c=0$) based on its comparison with the paired answer. This implies the existence of a **non-trivial joint distribution** over $(y,c|x)$. For a high-quality response, $p(y, c=1|x)$ will be greater, while $p(y,c=0|x)$ will be higher for a less favorable response. In practice, we can model positive samples as being drawn from $p(y|x,c=1)$ and negative samples from $p(y|x,c=0)$, allowing us to apply MLE to address the problem.
### 1.3 Meaning of $g^{\theta}(x)$
This represents the marginal distribution $p^{\theta}(c=1|x)$, which computes integral over all possible answers $y$. Empirically, it can be interpreted as **the difficulty of a question**. For example, if the question $x$ is "What is 2+2?", then $g^{\theta}(x)$ should be expected to be close to 1 since there's clear consensus on the correct answer. However, if the question is "What is the most beautiful color?", then $g^{\theta}(x)$ would be closer to 0.5 since this is highly subjective with no definitive answer.
## 2. Experiments
### 2.1 Paired data construction
We respectfully disagree with the reviewer’s concern that pairing correct and incorrect answers does not help the model learn preferences.
- Paired data is standard and widely used in training SOTA math models, e.g., [2]. Such pairs may start from similar analyses but diverge at certain steps, which helps the model learn the key steps from the comparison.
- Our experiments about win rate and KL trajectory (https://anonymous.4open.science/r/PIPA-DB7B) also clearly shows that our DPO training is effective.
- Practical paired-data construction should depend on the goal. For math, if the goal is to enhance reasoning ability, pairing correct and incorrect answers should be effective; if the aim is an efficient, short reasoning chain, pairing a longer correct answer with a shorter correct answer is preferable. While pairing two incorrect answers is theoretically possible, it has limited practical utility in most training scenarios. For general preference alignment, we also do not think pairing two bad answers (bad versus worse) is helpful; we believe robust filtering mechanisms should be implemented to avoid such non-productive training signals.
### 2.2 Hyperparameters
Our hyperparameter choice is based on rigorous selection to ensure fair comparison across all methods.
- **Batch size**: Since PIPA only modifies the loss function compared to baselines, **a fair comparison would be keeping the batch size and training steps to be the same across all algorithms.** And 256 is a standard choice for datasets containing tens of thousands of examples.
- **$\beta$**: See the results in Section 1 of response to 3B8U.
### 2.3 Data split
We use standard benchmarks (GSM8K, MATH) for testing and AlphaMath for training (with 10% held out for validation). The learning-rate search follows the standard paradigm based on training and validation loss.
### 2.4 Results differences
They arise from differences in the models and training data used.
## 3. Writing Improvements and Clarifications
We thank reviewers for their careful reading and suggestions. We will:
- Add Dumoulin et al. (2024) to the related work section.
- Clarify PIPA's approach for paired data earlier.
- Clarify the training objective explanation.
- Clarify substitutions in Equation 3 and step 3.
- Change minimizing to maximizing.
- Remove the 0 in Equation 5.
[1] Ibrahim J G, et al. The power prior: theory and applications[J]. Statistics in medicine, 2015.
[2] Xiong, Wei, et al. “Building Math Agents with Multi-Turn Iterative Preference Learning.” ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: Thank you. Below are my comments following your response.
**1.1** I am satisfied with your explanation. It's important for clarity that this is also made explicit in the paper.
**1.2** I think I'm starting to see the point you are making. Please let me know if this is correct: the joint $p(y, c | x)$ can be thought of as implicitly marginalizing over all possible $y'$ alternatives and reflects the marginal probability that $y$ would be preferred over any $y'$. If so, this is reminiscent of how Munos et al. (2024) define the preference between two distributions conditioned on a state $x$.
**1.3** The interpretation makes sense; however I'm still not clear on what distribution we marginalize over when computing the integral over all possible answers $y$. Can you elaborate?
**2.1** I am satisfied with the explanation in the context of math problems, but I disagree that pairing two bad answers (bad versus worse) is unhelpful in general preference alignment. The precise reason why one bad answer is better than a worse answer can still yield some useful signal: to reuse the example in my review, if two answers are factually incorrect but one uses toxic language, we can still get a useful signal on toxicity. I think this disagreement is illustrative of a broader worry I have with the proposed approach's general applicability, which is that it works best when there is a clear dichotomy in preferable and non-preferable answers. Note that this by itself is not a fatal flaw in the submission (I recognize that even in this more restricted scope valuable contributions can be made), but it is an interrogation point that I could see people having when deciding whether to adopt PIPA for their own preference learning problem.
**2.2** I remain concerned: I don't agree that keeping the batch size and training steps to be the same across all algorithms is the fair thing to do. A modified loss function could mean a completely different loss landscape requiring more or less steps to converge, for instance.
**2.3** I am satisfied with the clarification. Please make sure to mention those details in the paper.
**2.4** Can the authors elaborate on the differences? Isn't KTO the same between the paper that proposes it and this submission?
**3** I am satisfied with the authors' response.
**References**
Munos, R., Valko, M., Calandriello, D., Azar, M. G., Rowland, M., Guo, Z. D., ... & Piot, B. (2024). Nash learning from human feedback. In Proceedings of the International Conference on Machine Learning.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the thorough reading of our rebuttal and for replying to each point. We are glad that we have addressed many of your concerns.
**1.1** We are glad this resolves your concern; we will include this in the paper.
**1.2** Thank you for agreeing with our explanation. Yes your understanding is correct. We will clarify this further in the paper and cite Munos et al. (2024).
**1.3** Thank you for agreeing with our interpretation. Theoretically, the integral is taken over all possible answers $y$, that is, $p^\theta(c=1|x) = \int p^\theta(c=1|x,y) p^\theta(y|x) dy$. In our PIPA framework, we approximate $p^\theta(c=1|x)$ directly using a neural network that depends on $x$.
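This marginalization can be illustrated with a toy discrete answer space; the distributions below are entirely hypothetical and serve only to show that averaging the classifier over samples from $p^\theta(y|x)$ recovers the marginal:

```python
import random
random.seed(0)

# toy discrete answer space; all probabilities below are made up
p_y_given_x   = {"4": 0.80, "5": 0.15, "22": 0.05}   # p^theta(y | x)
p_c1_given_xy = {"4": 0.95, "5": 0.10, "22": 0.05}   # p^theta(c=1 | x, y)

# exact marginal: g(x) = sum_y p(c=1 | x, y) * p(y | x)
g_exact = sum(p_c1_given_xy[y] * p_y_given_x[y] for y in p_y_given_x)

# Monte Carlo estimate: sample answers from p(y | x), average the classifier
ys = random.choices(list(p_y_given_x), weights=list(p_y_given_x.values()), k=100_000)
g_mc = sum(p_c1_given_xy[y] for y in ys) / len(ys)

assert abs(g_mc - g_exact) < 0.01  # estimate agrees with the exact sum
```

In PIPA itself this integral is never computed explicitly; as stated above, $g^\theta(x)$ is predicted directly by a network head.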
**2.1** Thank you for your positive response to our explanation regarding pairwise data in the math context, and for recognizing the effectiveness of PIPA in good-bad pair scenarios. Regarding cases like good-better and bad-worse:
- As you mentioned in **1.2**, the PIPA framework does cover situations where labels are derived after comparison. In this paper, we primarily focus on the KTO-style PIPA for decoupled data. However, in scenarios where coupled data is necessary (as in good-better or bad-worse comparisons), **a DPO-style PIPA can also be applied**. In particular, Equation (9) in our paper reduces exactly to DPO when assuming a prior of $p(c=1|x) = 0.5$. By removing this prior, we arrive at the paired version of PIPA-N, which introduces **an additional margin term** $\frac{p^\theta(c=1|x)}{p^{\theta}(c=0|x)}$ to the DPO loss. We present experimental results for this variant and show the advantage in Section 2 of our response to 3B8U.
- Empirically, PIPA only modifies the loss functions in DPO/KTO etc. Therefore, it remains fully compatible with any dataset used by those approaches, without imposing further constraints on data selection.
- The question of whether bad-worse pairs are useful is beyond the scope of our work and pertains more broadly to the field of preference alignment. Here we agree with your point that, since methods like DPO and paired PIPA optimize for preference gaps, comparisons between bad and worse can also provide useful information for the model.
**2.2** We evaluate batch sizes of {64, 256, 1024} for DPO, IPO, and PIPA. Across all batch sizes, PIPA consistently outperforms the other methods, and the performance trends remain similar. As shown in the table, PIPA achieves higher accuracy both when comparing results at each batch size and when considering the best-performing batch size for each algorithm.
| Algorithm | Batch Size | GSM8K Accuracy | MATH Accuracy |
|-----------|------------|----------------|---------------|
| DPO | 64 | 69.60 | 45.48 |
| DPO | 256 | 68.39 | 46.94 |
| DPO | 1024 | 68.69 | 47.14 |
| IPO | 64 | 71.42 | 47.58 |
| IPO | 256 | 69.14 | 46.94 |
| IPO | 1024 | 68.69 | 46.56 |
| PIPA | 64 | **81.35** | **52.34** |
| PIPA | 256 | **79.08** | **50.82** |
| PIPA | 1024 | **75.74** | **47.60** |
**2.3** We are glad this resolves your concern; we will include this information in the paper.
**2.4** PIPA, DPO, and KTO differ only in their loss functions, while GSM8K serves purely as a testing benchmark. Therefore, differences in pre-trained models, training data, and evaluation strategies can naturally lead to variations in results.
Specifically, the KTO paper fine-tunes general-purpose models like LLaMA-3-8B and Qwen-2.5-3B-Instruct on the UltraFeedback dataset, and evaluates using an 8-shot setting.
In contrast, our experiments fine-tune stronger models (Deepseek-Math-7B-Instruct and Qwen-2.5-Math-7B-Instruct) on the high-quality AlphaMath dataset. We also adopt a zero-shot evaluation strategy. By leveraging more capable models and domain-specific data, **we aim to establish stronger baselines and narrow the room for improvement, making any gains achieved by better loss functions even more impressive.**
Moreover, zero-shot evaluation helps us isolate the effect of the loss function itself, avoiding confounding factors such as the randomness or relevance of few-shot examples. Overall, our setup is **carefully designed to provide a clean, fair and challenging benchmark for comparing different algorithms.**
**3** We are glad this resolves your concern.
Building on this framework, the authors introduce PIPA-N and PIPA-M as two design approaches, which naturally incorporate learning a value model for token-level valuation. Experimental results on paired/non-paired data, as well as outcome-level and step-level labels, demonstrate the effectiveness of the proposed methods
---
Post-rebuttal update:
I thank the authors for their rebuttal, which addresses most of my concerns. I continue to recommend acceptance of this paper, as I find the proposed framework to be a valuable contribution, and the experimental results adequately support the claims. My remaining concern lies primarily in the clarity and writing quality of the paper. With significant improvements in presentation, I would give a score of 4. As it stands, please interpret my final score as a 3.5, if such granularity is permissible.
Claims And Evidence: The claims are generally well-supported. However, there are concerns regarding which specific design choices contribute to the observed experimental results and the comparison to other baselines. Please see the #2 and #4 question on the experimental analysis.
Methods And Evaluation Criteria: The methods are generally well-founded. However, the reviewer has the following question on design choice in PIPA-N and the token-level reward:
In PIPA-N, why is learning $p^{\theta}(c=1|x)$ considered a simpler and more natural design compared to DPO and KTO? A more detailed explanation would help clarify the advantages of this approach over existing methods.
The token-level reward is defined by the equation at line 235. If $c$ represents correctness, then the definition is well-defined. However, when $c$ represents preference, what does it mean in the context of $y_{<t}$?
Theoretical Claims: It would be helpful to provide a more detailed explanation of the reduction after Theorem 2.1, particularly why $g_{0}^{\theta}$ disappears from the objective. The reviewer understands that $g^{\theta}$ is introduced to satisfy the constraint and is inherently not a variable to optimize, but this could be made clearer to the reader.
Experimental Designs Or Analyses: 1. Table 2 lacks a clear explanation of the experimental settings. The reviewer has to guess that this is the DeepSeek model and the step-wise approach for unpaired data.
2. Table 2 raises a question about the source of performance improvement. Does the observed improvement primarily stem from the learned token-level values, rather than modifications in the PIPA formula? The results with a fixed $p^{\theta}(c_t|x,y_t)$ appear similar to step-KTO, suggesting that the gains might not be attributed to changes in the formula itself.
3. Figure 3: On which dataset was the likelihood calculated?
4. The PIPA methods apply token-level rewards, whereas the baseline methods rely on outcome-level or step-level rewards. How does PIPA compare to other approaches that also utilize token-level rewards within the DPO framework (e.g., RTO, TDPO, OREO)?
Supplementary Material: The reviewer skimmed through the proofs for DPO and KTO.
Relation To Broader Scientific Literature: The paper aims to unify offline preference alignment algorithms from a prior-oriented perspective. It introduces a general framework that encompasses DPO and KTO, simplifying them into practical algorithms that outperform previous methods.
Additionally, the paper provides intuitions on the distinction between SFT and preference alignment, framing it as the incorporation of a prior on negative responses.
Essential References Not Discussed: To the reviewer’s knowledge, all significant references have been appropriately discussed.
Other Strengths And Weaknesses: The prior-oriented perspective is novel and effectively illustrates the distinction between SFT and offline preference alignment methods.
The definition and introduction of the token-level reward is novel.
Other Comments Or Suggestions: 1. Line 152 (left): The sentence is not fully clear. Should "Paramter" be "parameterize"? What does this sentence meant to say?
2. The notation $p^{\theta}$ and $g^{\theta}$ both include $\theta$, which is confusing before reaching Section 2.6.
Questions For Authors: After reading the paper, the reviewer feels that the authors attempt to cover too much content within the main text. The paper would benefit from moving some ablation studies to the appendix while expanding the explanations in Sections 2.2, 2.3, and 2.4 for better clarity.
Additionally, could the proposed framework also encompass other offline preference learning methods discussed in the related work section?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our novel, well-supported framework and comprehensive experiments on math tasks. We will improve the writing to make the presentation clearer.
## 1. Why is a learnable $p^\theta(c=1|x)$ better?
Unlike DPO (which uses a simple 0.5 prior) and KTO (which uses a complex KL-based prior), our approach makes this term learnable, a more elegant solution. As explained in our response to reviewer 4Kmz (Section 1.3), this term can be interpreted as problem difficulty and is better learned during training than fixed to a predetermined value.
## 2. Meaning of $y_{<t}$
We agree that conditional independence only works for math-like tasks with clear good/bad examples. However, for the good/better comparison scenarios you mentioned, a DPO-style paired loss function is necessary rather than the KTO-style unpaired approach presented in our work, which also means there are no step-level signals due to the uncertainty.
To get the new algorithm in the new setting, we can follow Theorem A.1's derivation and design a paired PIPA variant (not explored in our current version which focuses on KTO-style unpaired PIPA). Starting from equation (9), we can expand terms in $h_\theta$ autoregressively while directly learning $p^{\theta}(c=1|x)$.
This derivation does not need token-level labels, avoiding the concerns you mentioned.
## 3. Writing improvements and clarifications
We thank the reviewer for the careful reading and the suggestions to improve our work. We will:
- Explain $g_0^\theta$ more clearly in Thm 2.1.
- Clarify the setting in Table 2.
- Change Parameter to parameterize in Line 152.
- Make notation with $\theta$ more clear.
- Reorganize the sections and move some experiments to appendix to make presentation more clear.
Dataset used in Figure 3: The likelihood is computed on Alphamath during training.
## 4. Experiments
### 4.1 Source of advantages
Our approach benefits from both the formula and the token-level rewards; these components work together, and neither is effective without the other. Importantly, integrating token-level rewards into DPO/KTO frameworks is non-trivial, and it is not straightforward to design an algorithm that employs only token-level rewards without appropriate changes to the formula.
### 4.2 Baselines
We don't include comparisons with RTO and OREO because they require an additional learnable model, unlike our approach, which only needs an extra prediction head. This makes our method more directly comparable to the original DPO and KTO implementations with similar computational costs. We add a comparison with TDPO below using the DeepSeek model, and PIPA is better. More comparisons will be included later, given the limited response period.
| Algorithm | Learning Rate | $\beta$ | Batch Size | GSM8K Accuracy | MATH Accuracy |
|-----------|---------------|------|------------|----------------|---------------|
| TDPO | 1e-5 | 1.0 | 256 | 76.15 | 48.84 |
| PIPA-M | 5e-5 | NA | 256 | 79.08 | 50.82 |
## 5. Encompassing other methods
The answer is yes. In paired PIPA, equation (9) in Theorem A.1 can be formulated as DPO plus a margin term structured as $\log\frac{p^\theta(c=0|x)}{p^\theta(c=1|x)}$, which enables us to cover margin-based approaches similar to SimPO. Additionally, by introducing a power prior into PIPA, we can encompass DPO variants with $\beta\neq 1$. See section 1.1 in the response to reviewer 4Kmz for details.
## 6. Why math tasks?
Here we provide further explanation on our choices of using math tasks in this paper.
### 6.1 Math is a natural fit
Given that our PIPA framework provides key advantages—such as removing the requirement for paired data and naturally extending to step-wise labels—we prioritize math tasks as the optimal testbed for validation. This enables us to benchmark PIPA against general baselines like DPO and more specialized approaches such as step-DPO and step-KTO, which are designed to tackle these challenges.
### 6.2 Easy generalization to other tasks
We appreciate the suggestion to expand the scope of our study. Our PIPA framework is applicable to general preference tasks, particularly through the paired PIPA formulation outlined in Theorem A.1 of our paper. Preliminary results are presented in Table 4 in the response to 3B8U. Further exploration of paired PIPA and other tasks is left for future work.
### 6.3 The importance of paired data in math
While some math benchmarks use rule-based rewards, a preference signal that distinguishes correct from incorrect answers remains crucial. It helps the model recognize and learn from flawed reasoning or incorrect steps within the same problem [1, 2, 3…]. The impact of paired data is more pronounced when high-quality paired data is available. Thus, when designing alignment algorithms, it is essential to consider both paired and unpaired data. PIPA serves as a promising step in exploring this direction. See more discussions in Section 4.2 in the reply to 4Kmz. | Summary: **Post-rebuttal Update**: I thank the authors for their detailed explanation on the unified framework. I have adjusted my score to 3.
---
The paper presents a novel framework called Prior-Informed Preference Alignment (PIPA), which unifies various preference optimization objectives for language model alignment. The authors propose two variants, PIPA-M and PIPA-N, both formulated as constrained maximum likelihood estimation (MLE) problems with prior constraints. The PIPA framework effectively generalizes existing methods like Direct Preference Optimization (DPO) and Kahneman-Tversky Optimization (KTO) by incorporating prior-informed constraints. Empirical results on GSM8K and MATH benchmarks demonstrate that PIPA achieves consistent performance gains without additional computational costs.
Claims And Evidence: The authors claim that PIPA unifies multiple preference optimization approaches, generalizes existing methods, and improves performance. Theoretical analysis justifies the framework's design, while extensive experiments across answer-level and step-level tasks validate these claims.
However, the observed performance difference between PIPA-M and PIPA-N lacks a consistent explanation. Neither can consistently surpass the other. This raises questions about the underlying motivations for these two variants.
Methods And Evaluation Criteria: The proposed methods are sound and align well with established evaluation criteria for preference optimization in language models. The authors provide a clear derivation of their approach and conduct comprehensive experiments on relevant benchmarks (GSM8K and MATH).
The choice to test on math benchmarks may limit the broader applicability of the proposed method. I wonder why the authors chose to test on math benchmarks, where a rule-based reward signal is available and there is no pressing need to construct preference signals. Alternatively, have the authors tested PIPA on alignment tasks, where the pairwise preference makes more sense?
Theoretical Claims: The derivations appear correct to me.
Experimental Designs Or Analyses: The experiments are comprehensive and address key aspects of preference optimization across different data types (paired/unpaired and answer/step-level). However, the comparison to DPO, IPO, and KTO in the step-level setting may be somewhat unfair since these methods are not inherently designed for multi-step math/reasoning tasks. A comparison to reinforcement learning (RL) methods would be more appropriate given the rule-based reward signals available for math tasks.
Supplementary Material: I checked the derivations in the supplementary material.
Relation To Broader Scientific Literature: None that I am aware of.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Strengths:
+ The proposed framework has the potential to offer a unified understanding of preference optimization techniques, which might be of independent interest.
Weaknesses:
+ The empirical distinction between PIPA-M and PIPA-N lacks a strong theoretical grounding or unifying motivation.
+ Testing only on math benchmarks underutilizes the potential impact of PIPA on alignment tasks with human preferences.
Other Comments Or Suggestions: typos:
1. Line 140, left column: This is by definition is...
2. Line 151, left column: we are going to parameter (parameterize?)
Questions For Authors: 1. Can the authors provide a clearer explanation for the differing motivations behind PIPA-M and PIPA-N? How can these methods be unified under a common perspective?
2. There is an impression that PIPA-M and PIPA-N are quite different algorithms, and empirically there is no consistent advantage for one to be better than the other. Is there any more unified motivation so that PIPA-M and PIPA-N are motivated from the same starting point? Specifically, eqn (2) and (5) are not quite related to each other, except that they can fall under (1) as a constrained optimization problem.
3. The answer-level setting is reasonable to me. I wonder why the authors chose to test on math benchmarks, where a rule-based reward signal is available and there is no pressing need to construct preference signals. Alternatively, have the authors tested PIPA on alignment tasks, where the pairwise preference makes more sense?
4. For the step-level setting, the comparison to DPO, IPO or KTO might be slightly unfair as they are not originally designed for math or reasoning tasks. Instead, given the rule-based reward, comparison to RL algorithms is more suitable, suppose a process reward signal is available. And even if there is no process reward signal, the RL algorithms can still optimize on the rule-based outcome reward without any computation overhead.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our innovative framework and thorough experiments on mathematical tasks.
## 1. Unified derivation of PIPA-M and PIPA-N
Both PIPA-M and PIPA-N derive from the same MLE target, differing only in their prior assumptions. For data (x, y, c) where:
- x: question
- y: answer
- c: label (1=pos, 0=neg)
We want to use MLE to solve $p^{\theta}(c | x,y)$. By Bayes’ rule:
$$p^{\theta}(c=1 | x, y) = \frac{p^{\theta}(y|x, c=1) \cdot p^{\theta}(c=1|x)}{p^{\theta}(y|x)} \quad \text{and} \quad p^{\theta}(c=0 | x, y) = 1-p^{\theta}(c=1 | x,y)$$
How do we parameterize the three terms in $p^{\theta}(c=1 | x, y)$? In both PIPA-M and PIPA-N, the two terms in the numerator are learnable:
- $p^{\theta}(y|x, c=1)$ corresponds to $f^{\theta}(y|x)$ in our paper
- $p^{\theta}(c=1|x)$ corresponds to $g^{\theta}(x)$ in our paper
The key difference lies in how we handle the denominator $p^{\theta}(y|x)$:
- *PIPA-M*: Uses a fixed prior for the marginal distribution: $p^{\theta}(y|x) = p^{prior}(y|x)$
- *PIPA-N*: Uses a fixed prior for the negative conditional distribution, so we expand the marginal distribution into its conditional components:
$$\begin{align}
p^{\theta}(y|x) &= p^{\theta}(y|x,c=1)\,p^{\theta}(c=1|x) + p^{\theta}(y|x,c=0)\,p^{\theta}(c=0|x) \\
&= f^{\theta}(y|x) \cdot g^{\theta}(x) + p^{prior}(y|x) \cdot (1-g^{\theta}(x))
\end{align}$$
This concise derivation illustrates the relationship between PIPA-M and PIPA-N. **The reviewer questions the relationship between (2) and (5). We point out here that they are the same problem with different priors.**
Equation (5) for PIPA-N is exactly the MLE above. The equivalence between the above MLE for PIPA-M and the KL objective (2) is from the marginal prior and is demonstrated in Lines 144-147 of our paper. In our next revision, we will improve the presentation to provide a more unified description of both approaches.
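As a quick numerical sanity check of the PIPA-N decomposition above, a minimal sketch with hypothetical values for $f^{\theta}$, $g^{\theta}$, and $p^{prior}$ (the numbers are illustrative only, not from the paper):

```python
# Hypothetical illustrative values (not from the paper).
f_theta = 0.6   # p^theta(y|x, c=1), learnable
g_theta = 0.7   # p^theta(c=1|x), learnable
p_prior = 0.2   # p^prior(y|x), the fixed prior for the negative conditional

# PIPA-N marginal: p^theta(y|x) = f*g + p_prior*(1 - g)
marginal = f_theta * g_theta + p_prior * (1.0 - g_theta)

# Bayes' rule then gives the classification posterior used in the MLE objective.
p_pos = f_theta * g_theta / marginal
p_neg = 1.0 - p_pos
assert 0.0 < p_pos < 1.0 and abs(p_pos + p_neg - 1.0) < 1e-12
```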
## 2. Empirical understanding about performance difference
As mentioned in Lines 318-329, our empirical findings suggest different optimal scenarios for each approach. Below, we provide additional understanding for these observations:
### 2.1 From the prior
- **PIPA-N performs better for self-alignment**: When data is generated on-policy from previous model versions, our goal is self-improvement. In this context, it is reasonable to assume the current model $p^{prior}$ better resembles the negative conditional distribution.
- **PIPA-M performs better under distribution shift**: When using data from external sources, the distribution shift makes it more appropriate to use the current model as a prior for the overall distribution rather than specifically for the negative class, which would be an aggressive assumption.
### 2.2 From the loss
A deeper analysis of the loss functions for positive samples also reveals the differences:
**PIPA-M Loss:**
$$-\log\frac{p^{\theta}(y|x,c=1)p^{\theta}(c=1|x)}{p^{prior}(y|x)}$$
This effectively maximizes $p^{\theta}(y|x,c=1)$ and $p^{\theta}(c=1|x)$ directly.
**PIPA-N Loss:**
$$-\log \left(p^{\theta}(y|x,c=1)p^{\theta}(c=1|x)\right) + \log \left(p^{\theta}(y|x,c=1)p^{\theta}(c=1|x) + p^{prior}(y|x)(1-p^{\theta}(c=1|x))\right)$$
PIPA-N incorporates an **additional regularization term** that prevents $p^{\theta}(y|x,c=1)$ from overfitting to the training data. This regularization helps the model avoid getting stuck at the checkpoint that generated correct but suboptimal examples, which is particularly beneficial for self-improvement scenarios.
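The regularization effect can be seen numerically: the PIPA-N loss for a positive sample equals the negative log-posterior $-\log p^{\theta}(c=1|x,y)$ and is therefore bounded below by 0, unlike the PIPA-M loss. A minimal sketch with hypothetical probability values (illustrative only):

```python
import math

def loss_pipa_m(f, g, p_prior):
    # Positive-sample NLL under PIPA-M: unbounded below as f*g grows past p_prior.
    return -math.log(f * g / p_prior)

def loss_pipa_n(f, g, p_prior):
    # Positive-sample NLL under PIPA-N: the extra log-sum term acts as a regularizer.
    return -math.log(f * g) + math.log(f * g + p_prior * (1.0 - g))

f, g, p_prior = 0.6, 0.7, 0.2
posterior = f * g / (f * g + p_prior * (1.0 - g))
# PIPA-N's loss is exactly -log of the posterior, hence nonnegative.
assert abs(loss_pipa_n(f, g, p_prior) + math.log(posterior)) < 1e-12
assert loss_pipa_n(f, g, p_prior) >= 0.0
```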
## 3. General preference tasks
Please see Table 4 in Section 2 of the response to Reviewer 3B8U.
## 4. Why math tasks?
See Section 6 in the reply to Reviewer Zrno.
## 5. Why use DPO/KTO for baselines?
DPO/KTO and their variants (iterative, stepwise, etc.) are widely acknowledged as **the primary—and often the only—baselines** in most alignment research on mathematical reasoning [1,2,3,4…]. We respectfully disagree with the reviewer’s suggestion to use RL methods like PPO/GRPO as baselines. These are **on-policy** algorithms, whereas the original DPO/KTO and our PIPA are all **off-policy** methods using a fixed dataset that differ only in their loss functions. Comparing off-policy methods with RL approaches would be inherently unfair. When comparing DPO/KTO in our work, we keep everything the same except the loss functions. A fair comparison with RL would require considering iterative/online DPO and developing an online version of PIPA, which extends far beyond the scope of this study.
## 6. Typos
We will fix the typos in Lines 140 and 151.
[1] Pang, Richard Yuanzhe, et al. "Iterative reasoning preference optimization." NeurIPS 2023.
[2] Chen, Guoxin, et al. “Step-level Value Preference Optimization for Mathematical Reasoning.” EMNLP 2024.
[3] Wang, Huaijie, et al. "Offline Reinforcement Learning for LLM Multi-Step Reasoning."
[4] Xiong, Wei, et al. “Building Math Agents with Multi-Turn Iterative Preference Learning.” ICLR 2025. | Summary: The paper presents Prior-Informed Preference Alignment (PIPA), a unifying probabilistic framework for offline preference tuning of language models. It views alignment as a maximum likelihood estimation with constraints that tie the “correct” and “incorrect” output distributions to a reference prior. Within this perspective, the authors explain how existing algorithms like DPO and KTO are special cases with different ways of imposing prior constraints. They also propose two PIPA variants (PIPA-M and PIPA-N), each incorporating the prior differently, and how they can handle both answer-level and step-level annotations. Experimental evaluations on GSM8K and MATH confirm that PIPA consistently outperforms baselines under multiple conditions while retaining a simple, efficient training procedure.
Claims And Evidence: Overall, most of the paper’s claims are supported by evidence in the form of clear theoretical derivations (showing how DPO and KTO emerge as special cases of PIPA). However, one possible limitation is the scope of experimentation: all empirical results come from math-focused datasets (GSM8K and MATH). While these are strong tasks for stepwise feedback, it’s less clear how PIPA behaves for broader preference-alignment scenarios.
Methods And Evaluation Criteria: While the math-domain experiments are systematic and informative, a key limitation is that both the tasks (GSM8K, MATH) and the models (Deepseek-Math-7B, Qwen2.5-Math-7B) are math-focused, which may not fully generalize to broader open-ended preference scenarios (e.g., creative writing, summarization, or subjective opinion tasks). In particular, it is unclear whether step-level preference alignment would improve performance on non-math benchmarks where correctness is less binary. For example, I would suggest evaluating the proposed methods on widely used preference alignment benchmarks, such as AlpacaEval and ArenaHard.
Moreover, while the authors compare PIPA against DPO and KTO variants, they omit more recent or higher-performing preference-alignment methods like SimPO, which reportedly achieves strong results on open-ended generation tasks.
Theoretical Claims: The main theoretical claims center on
1. Showing that DPO and KTO emerge as special cases by imposing different prior constraints on the model distribution (Theorems A.1 and A.2)
2. Constructing a parameterization that satisfies a marginal or negative-prior constraint (Theorem 2.1)
The derivations are consistent with standard probabilistic modeling arguments, and there do not appear to be any major errors.
Experimental Designs Or Analyses: From an experimental design perspective, one area that could be strengthened is hyperparameter validation. While the paper mentions a grid search, it isn’t entirely clear whether the proposed methods and baselines are treated equally or whether the chosen hyperparameters reflect their optimal settings. From my experience, simple hyperparameter tuning can make it seem like a proposed method shows better performance than the baseline.
A helpful way to address potential “hyperparameter hacking” is to plot a “GT win rate” (or accuracy/performance) against the KL-divergence across a dense grid of hyperparameter values for both the new methods and the baselines (similar to approaches in other RLHF or offline alignment work (https://arxiv.org/abs/2406.02900) which uses a similar plot for observing overoptimization). Such plots would demonstrate (1) whether the selected hyperparameters lie near a reasonable optimum and (2) that the same thoroughness is applied to all methods. Without this, one might worry that the authors’ methods have undergone more meticulous tuning than the baselines, potentially skewing the comparison.
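For concreteness, the x-axis of such a plot is typically a Monte Carlo estimate of the forward KL computed from samples drawn under the reference policy; a minimal sketch with hypothetical per-completion log-probabilities (not real model outputs):

```python
# Hypothetical log-probs of completions sampled from pi_ref (illustrative only).
logp_ref = [-10.2, -8.7, -11.5, -9.9]
logp_theta = [-12.0, -9.1, -13.2, -10.4]

# KL(pi_ref || pi_theta) ~ mean over samples y ~ pi_ref of
# log pi_ref(y) - log pi_theta(y); one estimate per hyperparameter setting
# gives one point on the performance-vs-KL curve.
kl_estimate = sum(r - t for r, t in zip(logp_ref, logp_theta)) / len(logp_ref)
assert kl_estimate > 0.0  # holds here since pi_theta has drifted from pi_ref
```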
Supplementary Material: Yes. I looked in detail at Appendix A (the theoretical analysis linking DPO and KTO to PIPA), as well as Appendix B, which details the baseline methods (Step-DPO and Step-KTO) and their respective implementations and loss functions.
Relation To Broader Scientific Literature: A number of recent works (e.g., DPO, KTO) already eschew traditional RL algorithms in favor of more direct, offline approaches. However, these methods typically target either paired preference data (DPO) or unpaired data (KTO) without a unifying outlook. The paper’s “prior-informed” lens systematically shows how such offline algorithms can be viewed as special cases of one probabilistic formulation.
Essential References Not Discussed: In line 421, the authors discuss previous works showing that DPO decreases the probability of positive samples during training. They, however, left out one work that also shows this problem (“Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer” https://arxiv.org/abs/2405.16436).
Other Strengths And Weaknesses: The most original aspect is the unified viewpoint that treats offline preference alignment as a constrained MLE problem. This clarifies perspectives, bridging various existing methods—DPO, KTO, Step-DPO, etc.—under one probabilistic framework. The theoretical consolidation is meaningful: clarifying how each alignment technique differs only by the choice of prior constraint (marginal vs. negative) can help practitioners see how new or existing approaches fit in.
Other Comments Or Suggestions: It would be nice to see how the recently proposed SimPO, which shows significant performance gain compared to DPO or other DPO variants, fits into the proposed framework.
Moreover, as discussed earlier in my review under [Experimental Designs Or Analyses], it is essential to perform experiments on non-math-related tasks and validate the choice of hyperparameter by showing the Performance vs. KL-Divergence curve.
Questions For Authors: 1. As discussed earlier, could the authors show the Performance vs. KL-Divergence to validate their hyperparameter searching?
2. Could the authors show their method on non-math related tasks (Alpaca Eval, Arena Hard)
3. Also, could the authors compare their method to SimPO?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the recognition of our theoretical contribution in unifying existing preference alignment methods under a single probabilistic framework, and our experiments demonstrating PIPA's effectiveness on math reasoning tasks with stepwise feedback.
## 1. Hyperparameter validation
**Validation via learning trajectory**
Following the paper referenced by the reviewer, we plot the trajectory of the forward KL divergence and the win rate computed by the implicit reward during training on Alphamath with Deepseek (https://anonymous.4open.science/r/PIPA-DB7B). Both metrics indicate that DPO effectively learns throughout the training process without exhibiting the overoptimization issues highlighted in the referenced paper.
**Validation via additional results**
Our hyperparameters were carefully selected through a rigorous search, ensuring that they were not cherry-picked to favor our results. To illustrate this, we present additional results for batch size, $\beta$, and learning rate—three key hyperparameters in DPO. The current settings in the paper are (256, 0.1, 5e-7). We find that a $\beta$ slightly larger than 0.1 allows for a higher learning rate and yields better results; however, DPO still falls short of PIPA. Specifically:
- **Batch size:** Consistent with previous studies, we use a fixed batch size across all algorithms to ensure a fair comparison by maintaining the same number of updates. We evaluate batch sizes of {64, 256, 1024} for DPO, IPO, and PIPA, and the results consistently show PIPA’s advantage at all batch sizes. We use 5e-7 for DPO and IPO here because a larger learning rate crashes training. (https://anonymous.4open.science/r/PIPA-DB7B/table1.md)
- **$\beta$:** Initially, we set $\beta$ to 0.1 following the original Alphamath paper [1]. We then explored values from {0.5, 1.0, 2.0, 5.0} and found that $\beta$ = 1.0 yielded the best results. However, DPO still lags behind PIPA. (https://anonymous.4open.science/r/PIPA-DB7B/table2.md)
- **Learning rate:** With optimal batch size and $\beta$, we try learning rates from {5e-7, 1e-6, 5e-6, 1e-5, 5e-5}. We find increasing $\beta$ to 1.0 allows for a higher optimal learning rate of 1e-5. This adjustment improves DPO's performance on the GSM8K dataset, reaching 0.74 and narrowing the gap with PIPA to 5%. However, it does not lead to any improvements on the more challenging MATH dataset. (https://anonymous.4open.science/r/PIPA-DB7B/table3.md)
## 2. Non-math preference tasks
We employed the paired version of PIPA-N as outlined in Theorem A.1, with further algorithmic details provided in Section 2 of the response to Reviewer Zrno. Following the setup proposed in SimPO, we conducted comparisons between DPO and SimPO by fine-tuning Llama-3-8B-instruct on the Ultrafeedback dataset. We reproduced the DPO and SimPO baselines in our codebase using OpenRLHF, maintaining identical hyperparameters from the SimPO paper (e.g., batch size, epochs, $\beta$, $\gamma$), except for using LoRA (r=64, $\alpha$=16) with a higher learning rate, as the original lower learning rate is ineffective with LoRA. Our reproduction produced results matching those reported by SimPO.
For our PIPA-N, the current results are better than SimPO's. It is noteworthy that, due to the limited rebuttal period, we did not have the resources to thoroughly optimize the hyperparameters for PIPA-N. In contrast, the SimPO results benefited from careful tuning (e.g., using $\beta$=2.5 and $\gamma$=1.375 for Llama-3-8B-instruct). This suggests that PIPA-N has the potential for further improvement with more extensive parameter tuning; more results will be provided later.
| Algorithm | AE2LC | AE2WR| AH |
|-----------|-------|---------|----------|
| DPO | 41.17 | 36.25 | 30.47 |
| SimPO | 43.28 | 39.86 | 32.49 |
| PIPA-N | **44.79** | **40.53** | **33.08** |
*Table 4: Non-math tasks.*
## 3. Comparison to SimPO
Table 2 shows that SimPO performs similarly to DPO on Alphamath, both lagging behind PIPA, and Table 4 shows that PIPA-N also outperforms SimPO.
## 4. Missing citation
We will cite the paper by Zhihan Liu et al. in Line 421.
[1] Chen Guoxin, et al. AlphaMath Almost Zero: Process Supervision without Process, NeurIPS 2024. | null | null | null | null | null | null |
A Unified Theoretical Analysis of Private and Robust Offline Alignment: from RLHF to DPO | Accept (spotlight poster) | Summary: This submission studies the interplay between privacy and robustness in both RLHF and DPO, two main alignment methods for language models. It shows that, when considering a linear reward model for RLHF and a log-linear policy class for DPO, the problem of offline alignment reduces to parameter estimation for logistic regression under private and corrupted labels. Using this reduction, one can derive suboptimality upperbound for the policy learned for both RLHF and DPO, based on parameter estimation error for logistic regression under private and corrupted labels.
Summary after rebuttal: I asked the authors a question after their rebuttal about the corruption model they consider in the LTC case. However, the authors did not reply to this question. Nevertheless, I am going to keep my score based on the other results in the paper.
Claims And Evidence: The authors have provided proofs for the theoretical claims made in the submission.
Methods And Evaluation Criteria: Currently, the experiments reported in the submission are very limited in terms of the amount of experimental results, the datasets, and the alignment algorithms used. The only reported results are in Tables 1 and 2, which are just for DPO (and rDPO). No results are reported for RLHF under corrupted/privatized labels.
Theoretical Claims: I did not go through the proofs in details. However, I had a quick look at them to get an idea of their sketch.
Experimental Designs Or Analyses: I have the following comments/questions about the experimental results.
1. I think it would be better if the authors could manage to include the experimental results (Tables 1 and 2 in the appendix) in the main body of the paper.
2. Also, why are there no results included on RLHF under privacy/corruption/both? For completeness, both DPO and RLHF should be evaluated under the four scenarios of privatization (RR), corruption, LTC, and CTL.
Supplementary Material: I mostly went through the experimental setup (section D). I have asked my question above.
Relation To Broader Scientific Literature: In the context of alignment of language models, this work allows for establishing a theoretical separation result between LTC and CTL, which is important to consider when performing offline alignment of language models.
Essential References Not Discussed: It seems that most of the related works are mentioned and discussed in the submission.
Other Strengths And Weaknesses: I have mentioned my comments/questions above. After receiving the authors' answers, if needed, I will consider changing my evaluation.
Other Comments Or Suggestions: see above
Questions For Authors: 1. Is there any intuitive reason why alignment under LTC is more challenging than CTL? For example, in the first row of Table 2, $\epsilon=1$ and $\alpha=0.1$. Therefore, in LTC, each label is flipped by randomization (RR) with probability 0.27, followed by another flip due to corruption with probability $\alpha=0.1$. Therefore, roughly, the probability of each label getting flipped (compared to that in the clean preference dataset) is 0.027. By the same reasoning, the probability of each label getting flipped under CTL is also roughly 0.027. So why does the order of the two label flippings (RR and corruption) matter? From another point of view: the constant $c(\epsilon)$ appearing in the upper bound of LTC is not a large constant (for example, for $\epsilon=1$, it is 2.16). However, from the results in Table 2, we see that LTC is clearly worse than CTL. Do the authors have an intuitive idea why this is the case?
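The reviewer's premise can be checked directly for the oblivious (symmetric random-flipping) model: two independent symmetric flips compose commutatively, so the order of RR and corruption would not matter there. A minimal sketch using the reviewer's $\epsilon=1$, $\alpha=0.1$ example:

```python
import math

eps, alpha = 1.0, 0.1
q = 1.0 / (1.0 + math.exp(eps))  # randomized-response flip probability, ~0.269

def net_flip(p1, p2):
    # Two independent symmetric flips change the label iff exactly one fires.
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

# Under oblivious symmetric corruption, LTC and CTL induce identical label noise.
assert abs(net_flip(q, alpha) - net_flip(alpha, q)) < 1e-12
```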
2. The main assumptions made in this submission are (1) a linear reward model for RLHF, (2) a log-linear policy model for DPO, and (3) the label distribution in the preference dataset following an LR model. How limiting are these assumptions? Can the obtained theoretical results be extended to other RLHF/DPO models with more relaxed assumptions on the data as well as the reward/policy models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your time and comments. We will recap your valuable comments and present our detailed response. We hope our answers will resolve your concern.
**1. About experiments.** Thank you for your valuable suggestion. In the next revision, we will aim to include our experiments within the main body. Since our primary contributions are theoretical, we initially chose to present experiments using DPO as a proof of concept. However, we believe similar results can be obtained with RLHF, aligning with the findings in [R1], which considers corruption alone. We plan to incorporate additional results on RLHF in our future revision.
**2. Intuition about the separation between LTC and CTL.** Your observation regarding the random-flipping corruption model is absolutely correct: under this oblivious corruption model, the final corruption rate is the same between LTC and CTL. However, it becomes different when we move away from this corruption model. Consider the following situation. The clean data is all 0, and the corruption model simply sets the data to 1 with probability $\alpha$; otherwise, the data remains unchanged. This is exactly the corruption model considered in our experiment, which is different from the random-flipping corruption model. As mentioned in Line 737, one can see that LTC will lead to more 1s in the end compared to CTL (as the randomized response of LDP may flip 1 back to 0).
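Our reading of this one-sided corruption model can be worked through numerically; a sketch under the same $\epsilon=1$, $\alpha=0.1$ setting (illustrative arithmetic, not code from the paper):

```python
import math

eps, alpha = 1.0, 0.1
q = 1.0 / (1.0 + math.exp(eps))  # randomized-response flip probability

# Clean labels are all 0; the adversary sets a label to 1 with probability alpha.
p_one_ctl = alpha * (1.0 - q) + (1.0 - alpha) * q  # corrupt first; RR may flip 1s back
p_one_ltc = alpha + (1.0 - alpha) * q              # RR first; corruption is never undone

# LTC ends with strictly more 1s; the gap is exactly alpha * q.
assert abs((p_one_ltc - p_one_ctl) - alpha * q) < 1e-12
assert p_one_ltc > p_one_ctl
```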
**3. Generalization beyond current assumptions.** Thank you for the insightful question. Although our main results (e.g., the separation between LTC and CTL) are established under our current assumptions, we believe these findings will extend more generally—beyond linear models—to broader reward functions, policy classes, and general preference models (not limited to the BT-model). However, transitioning from linear to more general settings will likely require new analytical techniques. This challenge arises because, even in settings without privacy or corruption, employing an analysis similar to ours yields suboptimal rates [R2]. Exploring these generalizations represents an exciting avenue for future research. Nevertheless, we believe that our current results under the linear model will serve as important benchmarks for these advancements.
---
[R1] Chowdhury, S. R., Kini, A., and Natarajan, N. Provably robust DPO: Aligning language models with noisy feedback. arXiv preprint arXiv:2403.00409, 2024
[R2] Zhu, B., Jordan, M., and Jiao, J. Principled reinforcement learning with human feedback from pairwise or K-wise comparisons. In International Conference on Machine Learning, pp. 43037–43067. PMLR, 2023. | Summary: The paper considers alignment problems such as RLHF or DPO, in a robust private setting. In this setting, we are given a preference dataset where each example contains an input text $s$, two actions $a_0, a_1$, and a label $y \in \{0, 1\}$ denoting which action is preferred in the example. Commonly, we assume the label $i$ is sampled with probability proportional to $exp(r(s, a_i))$, where $r$ is a reward function on the text $s$ and action $a$ In RLHF we train an intermediate reward model on the dataset which is then used to optimize the policy, in DPO the policy is directly optimized on the dataset. In the robust, private setting there are two sources of error on the labels (1) corruption, where some fraction $\alpha$ of the labels are corrupted by an adversary after inspecting the dataset, and (2) local DP, where we apply randomized response to change the label of each example (with probability $1 / (1 + e^\epsilon)$ for some $\epsilon$) to provide privacy to user. One can consider the LTC or CTL settings, where local DP is applied and then corruption occurs, or vice-versa. The authors give an analysis for both RLHF and DPO, in the LTC, CTL, and "CLC" setting (where corruption occurs both before and after local DP).
For RLHF the analysis assumes that the reward function is the dot product of some (bounded-norm) ground truth $\theta^*$ and some known feature map $\phi(s, a)$, as was done by most of the prior work. They consider two algorithms: one that chooses a policy maximizing the expectation over the policy of this dot product for a current parameter estimate of $\theta^*$, and one that chooses a policy maximizing the minimum increase of this expectation relative to a reference policy, even when the parameter estimate is perturbed by a bounded amount. Under the linear model assumption, the authors show the labels correspond to a logistic regression model and bound the suboptimality of their algorithms. For DPO, they assume the optimal policy lies in a log-linear model class and prove similarly that the labels in DPO follow a logistic regression model, bounding the suboptimality of a policy that is a function of an estimate of the ground truth in the logistic regression model.
Using these results, the authors design algorithms for obtaining the parameter estimate of $\theta^*$ by shifting and scaling the received labels based on the privacy of randomized response. They give bounds for this parameter estimate's error, with and without corruption, and with and without a uniform coverage assumption (the covariance of the features has lower-bounded eigenvalues). Combining the parameter estimation bounds with the suboptimality bounds for the algorithm using the parameter estimate, they derive suboptimality bounds for RLHF and DPO in the LTC and CTL settings, with or without uniform coverage. The bounds consist of (1) a bias term depending on the corruption parameter $\alpha$, which is larger for LTC than for CTL (demonstrating that LTC is fundamentally harder), and (2) terms decaying to 0 as $1/\sqrt{n}$ in the dataset size $n$, matching the noiseless rate.
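The "shifting and scaling" step can be illustrated with the standard debiasing of randomized-response labels (a generic sketch of the idea, not the paper's exact estimator):

```python
import math

eps = 1.0
q = 1.0 / (1.0 + math.exp(eps))  # RR flip probability; q < 1/2 whenever eps > 0

def debias(y_tilde, q):
    # Shift by q and scale by 1 - 2q so the estimate is unbiased:
    # E[(y_tilde - q) / (1 - 2q)] = y for the clean label y in {0, 1}.
    return (y_tilde - q) / (1.0 - 2.0 * q)

for y in (0.0, 1.0):
    # y_tilde equals y w.p. 1 - q and 1 - y w.p. q; take the exact expectation.
    expectation = (1.0 - q) * debias(y, q) + q * debias(1.0 - y, q)
    assert abs(expectation - y) < 1e-12
```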
## update after rebuttal
I remain in support of accepting the paper
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not check the proofs in detail.
Experimental Designs Or Analyses: Yes, no major issues.
Supplementary Material: I looked at Appendix B to get a better detailed understanding of the algorithms, and Appendix D to quickly understand the experimental results. I skimmed the rest of the Appendix to understand what it contains.
Relation To Broader Scientific Literature: Matches the dependence on $n$ of past work in the noiseless setting. Improves on Mandal et al. in terms of dependence on the corruption parameter $\alpha$ for RLHF under a weaker corruption model, and matches the offline rate. For DPO, improves the dependence on $\alpha$ and removes dimension dependence for the bias term. First result on private and robust DPO; prior work studied private-only or robust-only settings. Improves the Chowdhury et al. result for robust DPO, which has a weaker dependence of $1/n^{1/4}$ on the dataset size. Overall, seems to match the rates of more restrictive settings (i.e., non-private or non-robust) or improve the rates of past works.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other Strengths:
* The technical results are reasonably difficult to arrive at.
* Presentation allows for one who is not super familiar with RLHF/DPO or with local DP/randomized response to understand the problem setup and algorithms reasonably well.
Other Weaknesses:
* A minor shared weakness with past work, that the analysis assumes a linear reward function. The authors do cite this as room for improvement in future works, and attempt to mitigate this with empirical studies where the linear reward function assumption may not hold.
* Minor presentation weakness: The introduction suggests a separation, which implies a lower bound for one setting and an upper bound for the other setting. However, the results are upper bounds only. It is unlikely but theoretically possible that there is no separation under a different algorithm; this should be clarified when discussing the contributions of the paper.
Other Comments Or Suggestions: -Line 24, column 2: Missing period
-Line 265, column 1: "singple" -> single
Questions For Authors: No questions that would substantially affect my evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your positive feedback. We will fix the typos in the next version. We now recap your comment and present our detailed response.
**Minor presentation weakness about LTC and CTL.** Thanks for your sharp question. You are absolutely right, and we will make this point clearer in the paper. We have briefly discussed the lower bound on the separation between LTC and CTL in the conclusion section and Appendix C. Let's recap them based on the following two scenarios:
- **With uniform coverage condition.** As mentioned in Appendix C, we believe that the current additional $c(\epsilon)$ factor in LTC is tight.
- **Without uniform coverage condition.** As mentioned in the conclusion section, in this case, the current additional factor in LTC may not be tight. However, we believe that there still exists a separation between LTC and CTL. The intuition here is that adversarial corruption after the randomized response for privacy can always flip a larger number of preference labels, as the adversary can focus its attacks on the clean data by first observing the output of LDP.
Claims And Evidence: The claims made in the paper ("a unified analysis of the interplay between privacy and robustness in both RLHF and DPO") are well matched with their theoretical results.
Methods And Evaluation Criteria: Reducing the alignment problem to logistic regression and upper bounding the estimation error is a promising and sensible approach.
Theoretical Claims: I did not check the correctness of the proofs, but they read as technically sound.
Experimental Designs Or Analyses: There are no experiments included in the main text. One experiment is included in the Appendix, showing that training with a robust loss function to noisy labels can outperform the regular cross entropy loss, which demonstrates the benefits of robust alignment training. This aligns with the robustness shown in the paper.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other strength: as the authors point out, the results of the estimation error bounds of private/robust logistic regression may be of independent interest.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the positive evaluation of our paper. We are delighted to hear that you find our methods promising and well-founded. | Summary: The paper develops a unified theoretical framework analyzing the impact of label corruption and privacy on two primary offline alignment methods: RLHF and DPO. The authors focus on the interplay between LDP and adversarial label corruption, formalizing three noise models: CTL (corruption then LDP), LTC (LDP then corruption), and CLC (corruption before and after LDP). A key conceptual contribution is a reduction from the offline alignment problem to logistic regression under these noise models. Major findings include:
- LTC is provably more challenging than CTL, even under linear models.
- The analysis yields new state-of-the-art theoretical guarantees for offline alignment under privacy-only or corruption-only regimes.
- Novel estimation bounds are derived for logistic regression under the joint noise setting, leading to suboptimality bounds for RLHF and DPO algorithms.
- The authors also provide practical algorithmic implications, including a new estimator and empirical verification of the separation between LTC and CTL on GPT2.
Claims And Evidence: The main claims of the paper—such as the separation between CTL and LTC, and the improved suboptimality bounds under joint privacy and corruption—are supported by a well-structured reduction to logistic regression under noisy labels. While the claims appear to be justified by clear derivations and formal proofs, I have not carefully reviewed the correctness of the theoretical arguments.
Methods And Evaluation Criteria: The methodological choice to reduce the suboptimality of RLHF and DPO to parameter estimation in logistic regression is well-motivated and principled, enabling a unified treatment of privacy and corruption. The authors assess performance using standard metrics, including suboptimality gap and parameter estimation error in both weighted and Euclidean norms, all under clearly stated assumptions such as linear reward models and bounded feature maps. These choices appear appropriate given the theoretical focus of the work. That said, due to my limited familiarity with several of the domains involved, I am not in a position to rigorously evaluate the methodology in its entirety.
Theoretical Claims: The derivations appear technically sound from a high level, and the proofs are well-organized. However, I am not equipped to verify their correctness in detail. I particularly appreciate the clean and interpretable separation between CTL and LTC in the error bounds, which draws an interesting parallel to similar phenomena observed in private robust mean estimation.
Experimental Designs Or Analyses: While the primary focus is theoretical, the authors include empirical experiments on GPT2-large that demonstrate the practical difference between CTL and LTC. These experiments are not extensively described in the main paper, but are useful as sanity checks. Further empirical validation across more complex or realistic settings would strengthen the practical case, but is not strictly necessary for this theory-focused submission.
Supplementary Material: NA
Relation To Broader Scientific Literature: From the perspective of the LDP literature, this paper contributes a novel extension of LDP techniques to preference-based alignment, which remains relatively underexplored in the privacy literature. The authors extend ideas from private logistic regression and randomized response mechanisms, as seen in works such as Chowdhury et al. (2023), to a more complex setting involving both privacy and adversarial corruption.
While I do not have deep familiarity with the broader literature on the other areas covered in the paper, the authors position their work as unifying and extending previous theoretical treatments of RLHF and DPO under separate noise models. Their comparison between CTL and LTC, and the development of a unified reduction to private and corrupted logistic regression, appear to be novel contributions that bridge multiple subfields.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and positive evaluation of our paper. We're glad to hear that you found our methods well-motivated, our derivations clear, and our experimental results useful. | null | null | null | null | null | null |
Convergence Analysis of Policy Gradient Methods with Dynamic Stochasticity | Accept (poster) | Summary: This paper introduces PES, a phase-based policy gradient algorithm for optimizing (hyper-)policies. Throughout the phases, the stochasticity required for exploration is gradually reduced. However, within a single phase, the stochasticity level remains constant. This allows existing convergence analyses for constant stochasticity to be leveraged to establish convergence within each phase of the PES algorithm. Subsequently, phase-wise convergence to the optimal deterministic policy is proven.
Furthermore, the widely used SL-PG algorithm is examined. In this approach, the stochasticity level is introduced as an additional parameter in the policy gradient algorithm and is optimized jointly with the deterministic policy. Under stronger assumptions, convergence is also demonstrated, but to the optimal stochastic policy. Both algorithms are evaluated in different simulation studies and compared to algorithms with static stochasticity.
## Update after rebuttal: Thanks for the answers clarifying some of my questions.
Claims And Evidence: All Theorems, Lemmas and Corollaries are proven in the appendix. I could not find unclear claims or statements in the text.
Methods And Evaluation Criteria: In theoretical analyses of PG algorithms, a static stochasticity level has almost always been considered. PES is one of the first approaches where stochasticity is gradually reduced, reflecting a common practice in real-world applications. Additionally, SL-PG investigates an algorithm in which the stochasticity is trained alongside the policy and automatically adapted. In my view, this represents a practically relevant algorithm whose convergence rates had not been analyzed before. Therefore, both methods appear well-founded.
The algorithms were evaluated in various examples in Appendix H, which can be considered standard for comparing PG-based methods.
Theoretical Claims: The authors include the case $T=\infty$ when introducing the setting in Sec. 2. However, the convergence results in Sec. 7 sometimes rely on the constant $T$. The authors should explain how this can be possible or remove the possibility of $T=\infty$. Claims in Sec. 4 are all based on Montenegro et al. 2024; I did not check the correctness in this reference. I checked the claims in Sec. 5, 6 and 7. They seem solid.
Experimental Designs Or Analyses: I did not implement the experiments to check the correctness. But the design seems fine to me.
Supplementary Material: I read through all parts of the supplementary material.
Relation To Broader Scientific Literature: No direct comparison is made regarding the convergence rate with previously analyzed algorithms that use static stochasticity. The (upper bound) convergence rates of PES with $\mathcal{O}(\epsilon^{-5})$ and SL-PG with $\mathcal{O}(\epsilon^{-3})$ are not as favorable as, for example, the $\mathcal{O}(\epsilon^{-2})$ of Fatkhullin et al. 2023 using HARPG.
The authors should address this aspect in more detail in the main part of the paper.
Essential References Not Discussed: None. But parts of the paper are very close (also in wording) to Montenegro et al. 2024.
Other Strengths And Weaknesses: Strengths:
* The authors analyze two different versions of PG methods with a non-static stochasticity setting
* They verify the performance of the theoretically analyzed algorithms in various applications
Weaknesses:
* The authors do not present examples of policy parametrizations which verify the assumptions in Sec. 4 or Sec. 7.
* The proofs are primarily based on already known methods and results. No new proof strategy is evident.
Other Comments Or Suggestions: My judgement is in the middle between weak accept and weak reject. The article is interesting but a pretty close continuation of Montenegro et al. I rated weak accept, but note that it is a rather weak "weak accept".
Questions For Authors: * Could you provide some examples of common policy parametrizations such that the assumptions in Sec. 4 and Sec. 7 are verified?
* Could you compare your convergence results with the known rates for constant stochasticity?
* Would applying momentum-based methods, as in Fatkhullin et al. (2023), also lead to a faster convergence rate in your case?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the Reviewer for reviewing our work and for recognizing that the proposed algorithms are practically relevant, have not been analyzed before, and appear theoretically well-founded. Below, we address the reviewer’s concerns.
### 1. On the dependence on $T$
For the AB results, the correct quantity that should appear is the **effective horizon $\overline{T} = (1-\gamma^T)/(1-\gamma)$**. We will clarify this point and correct the notation accordingly.
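To make this concrete, here is a minimal sketch (the function name is ours, not from the paper) showing that the effective horizon $\overline{T}$ stays finite as $T \to \infty$ whenever $\gamma < 1$:

```python
def effective_horizon(gamma, T=None):
    """Effective horizon T_bar = (1 - gamma^T) / (1 - gamma).

    For T = None (interpreted as T -> infinity, valid when gamma < 1)
    the geometric sum reduces to the familiar 1 / (1 - gamma).
    """
    if T is None:
        return 1.0 / (1.0 - gamma)
    return (1.0 - gamma**T) / (1.0 - gamma)
```

For any finite $T$ the value is below the infinite-horizon limit, so bounds stated with $\overline{T}$ remain meaningful in both regimes.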
### 2. Theoretical comparison with static stochasticity PGs
We provide a detailed comparison on this in **Appendix A**, which we plan to move and expand in the main paper, leveraging the additional page.
The best known rate is achieved by HARPG [2], a momentum-aided PG method, converging to the **optimal stochastic policy** (AB only) with $\tilde{\mathcal{O}}(\epsilon^{-2})$. However, HARPG differs from vanilla PG due to a Hessian correction step and performs poorly (Fig. 1 of [2]).
Our focus is on **vanilla PG methods**, thus a comparison with [1,3] is more appropriate. Both provide convergence to the **optimal stochastic (hyper)policy** with rate $\tilde{\mathcal{O}}(\epsilon^{-3} \sigma^{-2})$, and only the latter considers the PB setting.
That being said, **SL-PG** achieves a rate $\tilde{\mathcal{O}}(\epsilon^{-3} \sigma_{\min}^{-2})$, where $\sigma_{\min}$ is the minimum value for $\sigma$ ensured by its parameterization.
In contrast, **PES** is designed to guarantee convergence to the **optimal deterministic policy**, with a rate $\tilde{\mathcal{O}}(\epsilon^{-5})$, matching [3] but without requiring a fixed $\sigma = \epsilon$.
### 3. Differences with [3] and technical novelty
While our analysis builds upon the framework of [3], we introduce several **non-trivial contributions** that go beyond their results. For a summary on the key contributions and the differences with [3], please refer to the answer to Reviewer UaJQ. Here we focus on the technical novelty which is mainly represented by:
1. Theorem 5.1: there we **quantify the loss incurred in terms of the performance index $J$ when varying $\sigma$.** This represents the first result of this kind, and it highlights the need to employ an additive noise, as presented in Section 2, which maintains the same distribution throughout training.
2. Theorem 6.1: there we assess the **convergence rate of PES**, leveraging the results of [3] for each phase, but introducing a deterministic schedule leading to a final $\sigma$ allowing for a safe deterministic deployment.
3. Lemmas G.1--G.12, 7.1, 7.2: there we characterize **crucial quantities for proving Theorem 7.3 (SL-PG convergence)** which relies on standard procedures for convergence study. However, the preliminary results, which we believe are of independent interest, help in better understanding the conditions under which SL-PG converges, especially concerning the kind of $\sigma$ parameterization to be employed.
4. Theorems G.14 and G.15: there we study **whether and under which conditions WGD can be inherited by the SL-PG objective**, additionally deriving its sample complexity under WGD inheritance.
In conclusion, we believe our contribution is *not a mere extension: it provides, for the first time, a formal foundation for the common practice of dynamically adjusting the stochasticity in PG methods*.
### 4. Examples of parameterizations
All the assumptions solely regarding the policy parameterization and the AB and PB noise are met by linearly parameterized deterministic policies $\mu_{\theta}(s) = \theta^{\top} s$ with noise sampled from zero-mean Gaussians: (AB) $\varepsilon \sim \mathcal{N}(0, \sigma I_{d_{\mathcal{A}}})$; (PB) $\varepsilon \sim \mathcal{N}(0, \sigma I_{d_{\theta}})$. The assumptions regarding the $\sigma$ parameterization are met by $\sigma = \sigma_{\min} + 1/(1 + e^{-\xi})$, where $\xi$ is the optimization variable. An example of an MDP that also meets the remaining MDP-dependent assumptions is the LQR, in which the chosen parameterization exhibits WGD (Asm. 4.2, 7.3) [4, Lemma 3].
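As an illustration only (function names are ours, and a scalar action is assumed for the AB case), the parameterizations above can be sketched as:

```python
import numpy as np

def sigma_param(xi, sigma_min=0.1):
    """Sigmoid parameterization sigma = sigma_min + 1 / (1 + e^{-xi}),
    keeping sigma in (sigma_min, sigma_min + 1)."""
    return sigma_min + 1.0 / (1.0 + np.exp(-xi))

def act_ab(theta, s, sigma, rng):
    """Action-based (AB) exploration: mu_theta(s) = theta^T s plus
    Gaussian action noise eps ~ N(0, sigma^2)."""
    return theta @ s + sigma * rng.standard_normal()

def act_pb(theta, s, sigma, rng):
    """Parameter-based (PB) exploration: perturb the parameters with
    eps ~ N(0, sigma^2 I_{d_theta}), then act deterministically."""
    theta_pert = theta + sigma * rng.standard_normal(theta.shape)
    return theta_pert @ s
```

With `sigma = 0` both variants collapse to the deterministic policy $\mu_{\theta}(s) = \theta^{\top} s$, matching the intended deployment regime.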
### 5. PES and SL-PG over momentum-based PGs
Recovering the convergence result of HARPG (Thm. 3 of [2]), it exhibits a sample complexity of order $\tilde{\mathcal{O}}(\epsilon^{-2})$. In our setting, this translates into a rate $\tilde{\mathcal{O}}(\epsilon^{-2} \sigma^{-2})$ when considering a fixed $\sigma$ (the bound on the policy score being $\mathcal{O}(\sigma^{-2})$).
Thus, **PES over HARPG** converges to the **optimal deterministic policy** with a rate $\tilde{\mathcal{O}}(\epsilon^{-4})$, thus improving our current result at the cost of considering PG subroutines leveraging Hessian estimation. Similarly, **SL-PG over HARPG** converges to the **optimal stochastic policy** with a rate $\tilde{\mathcal{O}}(\epsilon^{-2} \sigma_{\min}^{-2})$, with the same issue. We will add a comment on this.
**References**
[1] Yuan et al. (2021)
[2] Fatkhullin et al. (2023)
[3] Montenegro et al. (2024)
[4] Fazel et al. (2019) | Summary: The paper studies the effects of exploration on the convergence of the policy gradient in RL. It proposes the PES method, which reduces the stochasticity over iterations, allowing sufficient exploration in the beginning and convergence to the optimal policy in the end. Further, it proposes another method, SL-PG, and shows the sample complexity of both methods.
Claims And Evidence: I didn't verify the results, but it seems believable.
Methods And Evaluation Criteria: I am not completely satisfied with the comparison of the methods with the existing literature.
Theoretical Claims: No.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: To my understanding, exploration in RL is a very central topic. The theoretical understanding of effects of exploration on convergence is of great importance to the community.
Essential References Not Discussed: [1] is not discussed.
Other Strengths And Weaknesses: Strengths: The paper studies the exploration in policy gradient algorithms which are close to the practice. And provides theoretical bounds.
Weakness: The paper is difficult to read and understand. The advantages of the PES algorithm over the vanilla actor-critic algorithm (global convergence with sample complexity $\mathcal{O}(\epsilon^{-4})$; see Proposition 1 of [1]) are not clear.
[1] @misc{kumar2024improvedsamplecomplexityglobal,
title={Improved Sample Complexity for Global Convergence of Actor-Critic Algorithms},
author={Navdeep Kumar and Priyank Agrawal and Giorgia Ramponi and Kfir Yehuda Levy and Shie Mannor},
year={2024},
eprint={2410.08868},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2410.08868},
}
Other Comments Or Suggestions: Suggestions: The SL-PG algorithm is crucial to the paper; pseudocode would be helpful for the reader.
Questions For Authors: Q1: We can obtain global convergence of PG (actor-critic) with a sample complexity of $\mathcal{O}(\epsilon^{-4})$ (local convergence in [1] combined with the gradient domination lemma; see Proposition 1 of [2]). However, it requires the mismatch coefficient to be finite; equivalently, it assumes sufficient state-space coverage.
I assume the paper under review wants to alleviate this issue by adding exploration to the policy? However, I see the paper still makes sufficient coverage assumptions. Could the authors compare and contrast the results and setting with [1]: what are the theoretical benefits of adding exploration, and how does this exploration relax the assumptions in [1]?
Q2: The return in RL satisfies the gradient domination condition (Agrawal et al.), so why is it taken as an assumption in the paper?
[1] @misc{chen2024finitetimeanalysissingletimescaleactorcritic,
title={Finite-time analysis of single-timescale actor-critic},
author={Xuyang Chen and Lin Zhao},
year={2024},
eprint={2210.09921},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2210.09921},
}
[2] @misc{kumar2024improvedsamplecomplexityglobal,
title={Improved Sample Complexity for Global Convergence of Actor-Critic Algorithms},
author={Navdeep Kumar and Priyank Agrawal and Giorgia Ramponi and Kfir Yehuda Levy and Shie Mannor},
year={2024},
eprint={2410.08868},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2410.08868},
}
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the Reviewer for reviewing our work and for recognizing its practical relevance. Next, we address the reviewer’s questions.
### 1. Clarification on the paper contribution
Our paper focuses on **actor-only PG methods** in **continuous state and action spaces**, using (hyper)policies with **dynamically varying stochasticity**, as commonly done in practice. Our goal is to provide a **theoretical foundation** to this practice, which had not been rigorously analyzed before.
We study **SL-PG**, where $\sigma$ is learned via gradient ascent. Under a gradient domination assumption, we prove convergence to the **optimal stochastic policy** with a rate $\tilde{\mathcal{O}}(\epsilon^{-3} \sigma_{\min}^{-2})$, but SL-PG does not allow quantifying the final $\sigma$, hence no guarantee on the deployed deterministic policy.
To address this, we propose **PES**, which **deterministically decreases $\sigma$** across phases. This allows control over the final stochasticity and guarantees **last-iterate convergence to the optimal deterministic policy** with rate $\tilde{\mathcal{O}}(\epsilon^{-5})$, matching [1] but without requiring $\sigma = \epsilon$ to be fixed for the entire training.
### 2. Lack of comparison with [2]
We acknowledge the relevance of the contribution of [2], but we believe it lies **outside the scope of our study**, since their **framework is different from the one adopted in our work**.
Specifically, [2] analyze **finite state and action spaces and employ a softmax policy parameterization**. Their theoretical results establish last-iterate global convergence guarantees for **actor-critic methods with AB exploration**.
In contrast, our setting considers **continuous state and action spaces and general (hyper)policy parameterization**. We address **both AB and PB explorations for actor-only PG methods**, and, most importantly, we explicitly analyze scenarios in which the **stochasticity $\sigma$ varies during the learning** process, being interested in providing guarantees on the goodness of the **deployed deterministic policy**.
For these reasons the methodology of [2] is not directly comparable to ours, either in terms of assumptions or scope.
However, we thank the reviewer for highlighting the lack of discussion on the actor-critic convergence literature, which we plan to include in the paper.
### 3. Answer to Q1
As previously said, our setting is different w.r.t. the one of [2,3]. Specifically for [3], we highlight the following:
1. We are in **continuous state and action spaces**, [3] consider finite actions;
2. We provide **last-iterate global convergence to the optimal stochastic (SL-PG) and deterministic (PES) policies** for actor-only PG methods, [3] provide convergence to the stationary point under stochastic policies for actor-critic methods;
3. We consider **both PB and AB explorations**, [3] only AB one;
4. We consider a setting in which the **(hyper)policy stochasticity varies** while learning, [3] consider static stochasticity;
5. We are interested in **deploying deterministic policies**, [3] stochastic ones;
6. For convergence, we need **$J_D$ to be smooth, the variance of the estimators to be finite, and weak gradient domination (WGD) on $J_D$**; [3] assume sufficient exploration (the $A$ matrix to be negative definite; e.g., in tabular MDPs this requires the policy to explore all state-action pairs), uniform ergodicity of the stationary distribution, and regularity of the policy.
Our set of assumptions is more general, as required by the more general setting of our work. Notice that the kind of exploration we propose, combined with WGD, does not require assuming sufficient state coverage, allowing us to treat continuous spaces and general parameterizations and to ensure last-iterate convergence (stronger than convergence to stationary points).
### 4. Why assuming weak gradient domination (WGD)
In general settings, as the one considered in our work, WGD does not hold and has to be assumed [1,7,8,9]. [4] prove it in **tabular MDPs** with **direct policy parameterization only**. However, WGD can be derived under different sets of assumptions. For instance, [5] show that, in **AB exploration**, **WGD is induced** on $J_A$ when the **policy satisfies the Fisher non-degeneracy condition**, while it remains an open question whether a similar result holds for PB exploration. In our framework, [1] show that **if it holds for** $J_D$, then it is **inherited by both $J_A$ and $J_P$**.
Similarly, for **SL-PG**, we show in Appendix G.4.3 that **WGD is inherited by $J_A$ and $J_P$**, even when including the additional learned variable $\sigma$, provided it holds for $J_D$ and under additional assumptions.
**References**
[1] Montenegro et al. (2024)
[2] Kumar et al. (2024)
[3] Chen and Zhao (2024)
[4] Agrawal et al. (2020)
[5] Liu et al., (2020)
[6] Mei et al. (2022)
[7] Yuan et al. (2021)
[8] Fathkullin et al. (2023)
[9] Bhandari & Russo (2024b) | Summary: This paper provides a global last-iterate convergence analysis for a widely used class of reinforcement learning algorithms, specifically deterministic policy gradient methods with dynamic stochasticity. It considers two common types of dynamic stochasticity: phased exploration scheduling (PES) and stochasticity learning scheduling. The paper establishes global convergence under standard assumptions and offers a thorough discussion of theoretical connections with prior work. Additionally, experimental results are presented to support the analysis.
Claims And Evidence: The claims made in the submission are well supported by evidence.
Methods And Evaluation Criteria: Methods And Evaluation Criteria make sense for the problem.
Theoretical Claims: I reviewed the overall structure of the theories (though I did not check the proof details line by line) and found that the theory is well-written and sound.
Experimental Designs Or Analyses: Some of the experiments are unclear. Please refer to the questions for more details.
Supplementary Material: Yes, overall proof and additional experiments.
Relation To Broader Scientific Literature: This work addresses a common and highly relevant scenario in reinforcement learning, offering valuable insights for both theorists and practitioners. It is particularly pertinent to those utilizing deterministic policy gradient methods with noise-based exploration, such as in DDPG.
Essential References Not Discussed: None specifically.
Other Strengths And Weaknesses: Strengths:
Broad scenarios: the paper considers both action-based and parameter-based stochasticity, for both phase-based and learning-based dynamic schedules.
The theoretical analysis is robust and effectively compared with prior work.
Weaknesses:
Please refer to the questions.
Other Comments Or Suggestions: Please see questions.
Questions For Authors: Overall, I find this paper to be well-written and solid. However, I have a few questions:
1. How do the experimental results help to illustrate the theoretical findings? While reviewing the results for Swimmer-v5 and InvertedPendulum-v5, the connection between the global convergence rate and the levels of stochasticity is not clear.
2. Given that dynamic stochasticity is a common scenario in practice and of interest to a broader audience, I would appreciate seeing more experimental results from environments with higher dimensions.
3. While weak gradient domination is introduced as an assumption (4.2) in this paper, I wonder if this WGD can be derived from other assumptions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for reviewing our work, and for recognizing that we address a highly relevant scenario in RL, providing valuable insights for both theorists and practitioners, and offering a strong theoretical analysis. Below, we address the Reviewer’s concerns.
### 1. Relation between theoretical results and empirical learning curves
We are happy to clarify this point. Our aim is to provide, for the first time, a **theoretical foundation to the common practice of reducing the (hyper)policy stochasticity** during the learning process. To this end, we analyze the practice of learning $\sigma$, named in our work SL-PG, and we propose PES, which decreases $\sigma$ deterministically. The former converges to the **optimal stochastic policy** with a rate $\tilde{\mathcal{O}}(\epsilon^{-3} \sigma_{\min}^{-2})$; the latter converges to the **optimal deterministic policy** with a rate $\tilde{\mathcal{O}}(\epsilon^{-5})$, both matching SOTA results that employ a fixed stochasticity during the whole learning [1,2,4], which prescribe, for deterministic convergence, setting $\sigma=\mathcal{O}(\epsilon)$, resulting in an impractical setting.

In general, we expect methods showing a small dominating term in the sample complexity bound to converge fast also in practice. Here, PES and SL-PG **provide the same guarantees as methods employing a static $\sigma$**. This happens since, in the theoretical analysis, the dominant term in the variance of the estimator is the one related to the smallest $\sigma$ seen during training, thus providing guarantees as if a static $\sigma=\sigma_{\min}$ were used for the entire learning. However, as seen in our numerical validation, PES and SL-PG **may perform better in practice since they expand the exploration possibilities by employing various $\sigma$ values, without requiring one to be selected and kept for the whole learning.**

We conclude by stressing that PES offers the additional advantage of knowing the final $\sigma$, which can be set as small as the user wants, guaranteeing a proportional loss in performance when deploying a deterministic policy.
### 2. Experiments on high-dimensional environments
We emphasize that the **primary focus** of our work is **theoretical**. Indeed, our goal is to **provide formal foundations for practices** that are already common in the use of **SL-PG** [6,7,8], and to propose **PES** as an alternative when the user aims to **obtain an almost deterministic final policy.**
This is why our experiments remain close to the theoretical setting, while, regarding the **learning of** $\sigma$, the **practical literature has already adopted this approach**, and most experiments in the deep PG field involve learning the stochasticity.
That said, for the purpose of this rebuttal, we have also **conducted additional experiments using deep RL** methods to assess the behavior of PES and SL-PG in high-dimensional settings.
(https://drive.google.com/file/d/1149yLdiUaLudIfGEA8yRZK6FjHmY9fJG/view?usp=sharing)
As shown in the paper, **PES and SL-PG perform at least as well as their static $\sigma$ counterparts**, without requiring prior tuning. With PES, the final $\sigma$ is predefined, yielding an **almost deterministic policy**. As predicted by theory, parameter-based methods may struggle with large parameter spaces.
### 3. Weak gradient domination (WGD) and its derivation
WGD is a **customary assumption** in the PG literature when establishing convergence guarantees [1,2,3,4]. Fundamentally, it enables **last-iterate convergence guarantees** by **characterizing the objective function without requiring concavity** in the parameters and while allowing for the presence of (at most $\beta$-near) local optima.
In the same setting we adopt, [1] show that WGD is **inherited by both AB and PB objectives**, denoted respectively by $J_A$ and $J_P$, whenever it is **assumed to hold for the deterministic** objective $J_D$.
Additionally, [5] show that, for AB exploration, **WGD is induced on the objective** $J_A$ when the **stochastic policy satisfies the Fisher non-degeneracy condition**, i.e., when there exists $\lambda > 0$ s.t. $\mathbb{E}[\nabla \log \pi_{\theta}(a|s) \nabla \log \pi_{\theta}^{\top}(a|s)] \succeq \lambda I$ for any $\theta, a, s$. While this result is well established for AB exploration, it **remains an open question** whether WGD **can be similarly induced** on $J_P$ through a specific class of hyperpolicies **in the PB setting**.
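As a hypothetical illustration of the Fisher non-degeneracy condition (this check is ours, not part of the paper), one can estimate $\mathbb{E}[\nabla \log \pi_{\theta} \nabla \log \pi_{\theta}^{\top}]$ by Monte Carlo for a linear-Gaussian policy, whose score is $(a - \theta^{\top}s)\,s/\sigma^2$, and verify that its smallest eigenvalue stays bounded away from zero when the visited states span the space:

```python
import numpy as np

def fisher_min_eig(theta, states, sigma, rng, n_actions=200):
    """Monte Carlo estimate of the smallest eigenvalue of
    E[grad log pi * grad log pi^T] for pi_theta(a|s) = N(theta^T s, sigma^2),
    whose score is grad_theta log pi = (a - theta^T s) * s / sigma^2."""
    d = theta.shape[0]
    F = np.zeros((d, d))
    n = 0
    for s in states:
        mu = theta @ s
        for _ in range(n_actions):
            a = mu + sigma * rng.standard_normal()
            g = (a - mu) * s / sigma**2  # score of the Gaussian policy
            F += np.outer(g, g)
            n += 1
    return np.linalg.eigvalsh(F / n).min()
```

For this policy class the matrix reduces to roughly $\mathbb{E}[s s^{\top}]/\sigma^2$, so the condition fails exactly when the state distribution is degenerate.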
Finally, for **SL-PG**, we show in Appendix G.4.3 how WGD **can be inherited by the stochastic objectives** $J_A$ and $J_P$, even when **including the additional learning variable** associated with $\sigma$, provided it holds for $J_D$, along with supplementary assumptions.
**References**
[1] Montenegro et al. (2024)
[2] Yuan et al. (2021)
[3] Fathkullin et al. (2023)
[4] Bhandari & Russo (2024b)
[5] Liu et al. (2020)
[6] Duan et al. (2016)
[7] Schulman et al. (2017)
[8] Tirinzoni et al. (2024) | Summary: The paper focuses on the convergence analysis of policy gradient methods in RL with dynamic stochasticity. It introduces PES, a phase-based algorithm that reduces stochasticity through a deterministic schedule while running policy gradient subroutines with fixed stochasticity in each phase. The paper demonstrates that PES achieves last-iterate convergence to the optimal deterministic policy. Additionally, it analyzes the common practice of jointly learning stochasticity and policy parameters (SL-PG), showing that this approach also ensures last-iterate convergence but to the optimal stochastic policy only, requiring stronger assumptions compared to PES. Experimental results on a toy simulation environment demonstrate that the theoretical claims are indeed satisfied.
Claims And Evidence: The claims regarding convergence of PG algorithms under dynamic stochastic noises are proven thoroughly with theoretical proofs.
Methods And Evaluation Criteria: This is more of a theoretical paper for convergence analysis of PG approaches in RL. Experiments and evaluation are conducted on toy simulation setting just to demonstrate the theoretical properties.
Theoretical Claims: The paper is theoretically very strong. Although the majority of the ideas are derived from Montenegro et al. (2024), the authors have proved the convergence properties step by step with several theorems.
Experimental Designs Or Analyses: Experiments are conducted on a toy simulation environment to demonstrate the impact of different numbers of phases and their durations, and to compare the proposed PES and SL-PG. While these analyses make sense for demonstrating the theoretical properties, a detailed comparison of performance against existing deep PG algorithms would have been appreciated.
Supplementary Material: I have read the supplementary material at a high level and might have missed or did not properly follow some of the mathematical proofs.
Relation To Broader Scientific Literature: While policy gradient in RL is an important and broader topic, the convergence analysis under white noise hyper-policies would of interest to a smaller portion.
Essential References Not Discussed: The distinction between the proposed work and Montenegro et al. (2024) (from which most of the theory is derived) could have been explained better.
Other Strengths And Weaknesses: The paper is theoretically strong and solves an important problem of convergence analysis of PG algorithms through a novel phase-based iterative algorithm. It also analyzes common practices of Stochasticity-Learning Policy Gradient (SL-PG). Most importantly, the paper provides a solid theoretical foundation for the use of dynamic stochasticity in policy gradient methods, bridging the gap between theory and practice. Having said that, there are a few concerns in terms of practical application:
1. Complexity of PES: While PES provides strong convergence guarantees, its phase-based approach and deterministic schedule might be complex to implement and tune in practice. The need to carefully select parameters such as the number of phases and the learning rate schedule adds to this complexity. How to select the optimal parameter configuration needs to be explained in more detail.
2. Limited Practical Validation: The paper primarily focuses on theoretical analysis and does not provide extensive practical validation of the proposed methods. More empirical results and experiments in real-world scenarios would strengthen the paper's contributions. Furthermore, benchmarking the performance against SOTA deep PG methods would be a nice addition.
Other Comments Or Suggestions: Majority of the space is allocated towards theorem and analysis. The paper is convoluted with symbols and theorems which is hard to follow in the current format. Some restructuring would be helpful for the readers. For example, adding related work section in main body, and highlighting the high-level takeaways from each theorem.
Questions For Authors: 1. Why are PES and SL-PG not compared with SOTA deep PG algorithms?
2. How can the optimal number of phases and their length be identified in a practical setting?
3. What is the main difference between the proposed analysis and the work of Montenegro et al. (2024)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the Reviewer for reviewing our work and for recognizing the strengths of our theoretical analysis, and the relevance of addressing an open problem in the convergence of PG methods with dynamic stochasticity. Below, we address the Reviewer’s concerns.
### 1. Comparison against SOTA Deep RL techniques
We wish to emphasize that the **exploration schedules** adopted in PES (i.e., a deterministic decay of $\sigma$) and SL-PG (i.e., learning $\sigma$ via gradient ascent) can, in principle, be **applied to any PG method** (also actor-critic).
Thus, deep PG methods such as PPO can, in principle, incorporate the dynamic stochasticity mechanisms of PES and SL-PG.
We stress, however, that extending the convergence analyses of PES and SL-PG to actor-critic methods would require different analyses.
Nonetheless, we now present an **empirical comparison of three variants of PPO**: $(i)$ the version with a fixed $\sigma$; $(ii)$ a PES-inspired version deterministically decreasing $\sigma$; $(iii)$ an SL-PG-inspired version where $\sigma$ is learned.
(https://drive.google.com/file/d/1149yLdiUaLudIfGEA8yRZK6FjHmY9fJG/view?usp=sharing) As shown in the linked results, **PES and SL-PG perform at least as well as their static $\sigma$ counterparts**, without requiring prior tuning. With PES, the final $\sigma$ is predefined, yielding an **almost deterministic policy**. As predicted by theory, parameter-based methods may struggle with large parameter spaces.
### 2. On the selection of schedule's parameters
For PES, the selection of the schedule's parameters is crucial. In general, following the theory, Theorems 4.4 and 6.1 suggest that longer phases allow better convergence to the optimum for a specific $\sigma$, while a higher number of phases $P$ allows a smooth transition among the ($\sigma$-dependent) objective functions. When designing the actual schedule with equal-length phases, the user may choose the total number of iterations $K$ and the desired final $\sigma$. Then, after selecting the smoothness $y$ of the schedule, $P$ is identified by inverting $\sigma_P = \sigma_{\max}P^{-y}$. If $y$ is large, the schedule will not be smooth, $P$ will be small, and $K_p$ will be large. As $y\to0$, the schedule becomes smoother and $P \to K$, leading to a continuous schedule ($K_p=1$) which often works well in practice (Fig. 1a, 4a, 4b, 5a, 5b). SL-PG may overcome these issues by adapting $\sigma$ automatically, avoiding manual tuning. However, since the final $\sigma$ is not controlled, **no guarantees can be provided on the resulting deterministic policy** after switching off the noise.
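To make this selection recipe concrete, here is a minimal sketch (hypothetical helper, not the authors' code) that inverts $\sigma_P = \sigma_{\max}P^{-y}$ to obtain the number of phases $P$ and the per-phase length $K_p$ from a total budget $K$, a target final $\sigma$, and the smoothness $y$:

```python
def pes_schedule(K, sigma_max, sigma_final, y):
    """Invert sigma_P = sigma_max * P**(-y) to pick the number of phases P,
    then split the K total iterations into equal-length phases of K_p each."""
    P = min(K, max(1, round((sigma_max / sigma_final) ** (1.0 / y))))
    K_p = K // P  # iterations per phase
    sigmas = [sigma_max * (p + 1) ** (-y) for p in range(P)]
    return P, K_p, sigmas

# Large y: few, long phases; small y: P grows toward K and K_p -> 1 (continuous schedule).
P_steep, Kp_steep, sig_steep = pes_schedule(K=10_000, sigma_max=1.0, sigma_final=0.1, y=2.0)
P_smooth, Kp_smooth, _ = pes_schedule(K=10_000, sigma_max=1.0, sigma_final=0.1, y=0.1)
```

Capping $P$ at $K$ reflects the limiting behavior described above: as $y \to 0$ the schedule becomes continuous with $K_p = 1$.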
### 3. Differences with [1]
Our paper **builds upon the foundation established by [1]**, which considers a fixed $\sigma$ throughout the analysis. [1] show that convergence to the optimal deterministic policy is achieved with $\tilde{\mathcal{O}}(\epsilon^{-5})$ samples by setting $\sigma = \mathcal{O}(\epsilon)$. This is impractical, since it means keeping $\sigma$ fixed at a small value for the whole training, leading to slow convergence in practice and requiring multiple training runs to tune $\sigma$. In contrast, for the first time in theory, we consider a setting in which $\sigma$ is variable, studying methods that eliminate the issues of [1] while providing the same convergence guarantees for PES and similar ones for SL-PG.
We explicitly **include the results of [1]** (Sec.4, Apx. D), as their framework introduces key concepts necessary to fully appreciate our contribution. Nevertheless, **all additional results that explicitly model and analyze the role of varying stochasticity are novel**:
1. The design of PES (Sec. 3);
2. The characterization of the loss incurred in the objectives $J_A$ and $J_P$ when the stochasticity level is changed (Thr. 5.1);
3. The derivation of the sample complexity required by PES to guarantee last-iterate global convergence to the optimal deterministic policy (Thr. 6.1);
4. The study of the conditions and the sample complexity under which SL-PG ensures last-iterate global convergence to the optimal stochastic policy (Lem. G.1--G.12, 7.1, 7.2, Thr. 7.3);
5. The study of whether the weak gradient domination (WGD) is inherited by $J_A$ and $J_P$, considering also the additional learning variable $\sigma$ (Thr. G.14), and the resulting sample complexity of SL-PG under WGD inheritance (Thr. G.15).
### 4. Relevance of white-noise-based exploration
White-noise (hyper)policies encompass a large variety of controllers. Indeed **most stochastic (hyper)policies used in continuous control**, such as Gaussian with fixed variance, **fall in this class** [2,3,4].
### 5. On the clarity of the manuscript
We agree that moving the **related works section to the main text** will improve clarity, and we plan to do so in the final version. We also aim to better highlight the core ideas of each result to enhance readability.
**References**
[1] Montenegro et al. (2024)
[2] Duan et al. (2016)
[3] Schulman et al. (2017)
[4] Tirinzoni et al. (2024) | null | null | null | null | null | null |
FOCoOp: Enhancing Out-of-Distribution Robustness in Federated Prompt Learning for Vision-Language Models | Accept (poster) | Summary: This work focuses on enhancing Out-of-distribution (OOD) robustness for federated prompt learning on pretrained vision-language models. The authors propose a federated OOD-aware Context Optimization framework, i.e., FOCoOp, which contains two main modules, i.e., BOS and GOC. BOS not only enhances class-level matching between image and prompts corresponding the class, but also maintains distribution-level separation between ID and OOD data. GOC further enhances the consistency in OOD generalization and detection among all clients. The authors evaluate FOCoOp in extensive experiments, validating its effectiveness in generalization and robustness in OOD shifts.
Claims And Evidence: Yes, the claims in the paper are verified with convincing empirical results, e.g., the motivation example in Fig. 1, and subsequent experiments in section 4.
Methods And Evaluation Criteria: Yes, both the proposed method and evaluation criteria are well-aligned with the problem at hand. The OOD robustness of federated prompt learning is a crucial challenge in real-world scenarios, and the proposed FOCoOp framework effectively addresses multiple types of OOD shifts simultaneously. Moreover, the evaluation criteria are directly relevant to the contributions of FOCoOp in enhancing OOD robustness, ensuring that the assessments meaningfully reflect the framework’s effectiveness.
Theoretical Claims: Yes, the theoretical claims and proofs are correct and reasonable.
Experimental Designs Or Analyses: Yes, the experiment designs and analysis are solid and comprehensive for verifying the claims and method designs. The study includes rigorous evaluations on plenty of real-world datasets, ensuring both performance and OOD robustness. The detailed analysis, such as alation studies and hyper-parameter sensitivity studies are sufficient.
Supplementary Material: I have reviewed the supplementary material. The pseudocode of the proposed algorithms is clear, but it is recommended to publish the project for reproducibility.
Relation To Broader Scientific Literature: This work builds on prior research in federated learning and vision-language models, aiming to enhancing OOD robustness. Unlike previous studies, FOCoOp optimizes prompts to enhance both class-level and distribution-level separations per client while maintaining consistency at the server, improving both performance and OOD robustness.
Essential References Not Discussed: The references are sufficient, no extra essential reference is needed to include.
Other Strengths And Weaknesses: Strengths:
S1. The novelty of considering OOD robustness for federated prompt learning is vital and potentially necessary for the era of foundation models. And the authors make contributions to it with both methodology and evaluations.
S2. The authors provide the correct and sound theoretical inductions for the proposed methods, i.e., the optimizations of Eq.(6) and Eq.(8).
S3. The proposed bi-level prompts distribution robust optimization is quite interesting and insightful. The authors constrain the perturbation of global prompts within optimal transport cost for generalization, while perturbing OOD prompts within unbalanced optimal transport cost for the awareness of open-world unseen outliers.
S4. The experiments are comprehensive in comparing both federated prompt learning methods and federated learning with Clip-based OOD detection methods.
Weakness:
W1. There are some unclear notations, e.g., (1) $c()$ function in Eq.(7), (2) $d^+$ and $d^-$ in Eq.(2) and right part of lines 239-252, and (3) typos in step 13 of Algorithm 2, the last $\epsilon_u^o$ is supposed to be $\boldsymbol{o}_{\epsilon_u^o}$.
W2. The authors introduce three set of prompts, ID global prompts, local prompts, and OOD prompts, but their relationships could be further explained for readability.
W3. Based on the empirical results, it is interesting to find that FOCoOp seems to have more contributions on OOD detection tasks while insignificant improvements on ID and ID-C generalization. The authors could provide more explanation with regarding to it.
Other Comments Or Suggestions: See weakness
Questions For Authors: Can the authors explain the confusion, i.e., local prompts are aggregated in lines 157-158, while global prompts are aggregated in lines 246-247?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: **1 We are willing to release our code and have made the project available at [https://anonymous.4open.science/r/FOCoOp/](https://anonymous.4open.science/r/FOCoOp/). It will be included in the final version of the paper.**
**2 $\mathfrak{c}()$ in Eq.(7) is the cost function, which can be computed via L2-norm distance.**
**3 d+ and d- indicate the positive and negative distance set, respectively.**
Specifically, $d^+$ means we compute the set by satisfying $\eta$ percentile of positive cosine distance, while $d^-$ means the set satisfying $\eta$ percentile of negative distance.
**4 We will carefully revise our presentation and fix the typos.**
**5. Clarification on the Role and Aggregation of Prompts.**
As described in lines 78–81, the three sets of prompts serve distinct roles:
**Global prompts** capture the shared ID global distribution across all clients.
**Local prompts** capture each client's ID personalized but heterogeneous distribution.
**OOD prompts** are trained to mismatch with ID data, preventing them from mistakenly aligning with samples that are unseen locally but present in other clients.
**During training, each client sends both global prompts and OOD prompts to the server, while leaving local prompts at clients.**
The server aggregates global prompts to learn a consistent alignment with the global distribution, and selects the most dissimilar OOD prompts to represent a consistent misalignment across clients.
Since OOD prompts are shared and distribution-agnostic, they generalize well to open-world OOD datasets, leading to superior performance in OOD detection tasks.
We will elaborate on these design choices more clearly in the final version.
**6 Explanation on FOCoOp’s greater impact on OOD detection.**
We appreciate the reviewer’s observation. **FOCoOp is the first method that is designed to enhance OOD robustness while maintaining strong performance on standard FPL tasks.** This is indeed reflected in the results: while improvements on ID and ID-C generalization are moderate (as existing FPL methods already achieve strong performance), FOCoOp demonstrates substantial gains in OOD detection.
This discrepancy arises from our explicit design: FOCoOp introduces OOD prompts and uses a DRO objective to explore a broader prompt space. While this helps generalization to some extent, the major benefit lies in detection. Specifically, we distinguish between seemly OOD prompts (which match with ID samples that are unseen locally but seen globally) and strict OOD prompts (which are mismatched with all ID data), enabling more accurate identification of OOD data.
We will clarify this distinction and its impact in future revisions.
**7 Clarification on aggregating prompts.**
In lines 157.left-158.left, we present a **general formulation of PFL** and refer to **prompts matched with local data on each client as local prompts**. As described in lines 246–247, we also explicitly introduce the method of **FOCoOp, which first aggregates global prompts across all clients**, and calibrates global prompts later by Eq. (10). We will clarify this distinction more clearly in the next version. | Summary: In this paper, the authors provide a Federated OOD-aware Context optimization framework named as FOCoOp. FOCoOp optimizes three sets of prompts for generalization, personalization, and detection, respectively. In the client local training, the authors devise a BOS module to maintain the class-level and distribution separations for discriminate client data. And a GOC module tackles challenges of enhancing the consistency of OOD robustness among clients.
Claims And Evidence: The claims are supported by clear and convincing evidence. For example, Federated prompt learning methods are inferior in OOD robustness, which is evidently reflected in Fig. 1.
Methods And Evaluation Criteria: The proposed method and evaluation make sense. Specifically, FOCoOp aims to maintain the performance and enhance OOD robustness, both of which are evaluated valid in various empirical setups. The authors also provide ablation studies for the effect of BOS and GOC modules.
Theoretical Claims: The proofs of theoretic claims are correct.
Experimental Designs Or Analyses: The experimental design and analysis are well-structured and conclusive. Firstly, the authors select a diverse range of datasets for various tasks. Secondly, they evaluate FOCoOp and its representative baselines under different levels of data heterogeneity and varying client participation. Lastly, the analyses align with the claimed contributions of enhancing OOD robustness, supported by comprehensive experimental discussions.
Supplementary Material: I have reviewed all contents in supplementary materials, which are crucial and clear. However, the authors should add the descriptions of tables and figures in section E Additional Experimental Results.
Relation To Broader Scientific Literature: The proposed FOCoOp contributes maintaining the performance and enhancing OOD robustness on federated prompt learning for vision-language models. It is quite novel and vital for the development of utilizing foundation models in a privacy-preserving and robustness-aware approach.
Essential References Not Discussed: The essential references are discussed.
Other Strengths And Weaknesses: Strengths:
1.The paper is well organized and the main idea is easy to follow.
2.The problem of enhancing OOD robustness in federated prompt learning is vital and novel for the development of foundation models. This motivates border studies of this topic in the future.
3.The proposed methods are sound and effective in addressing challenges related to the improved discrimination of in-distribution classes, handling distribution shifts in clients, and ensuring consistent OOD robustness under client heterogeneities.
4.The experiments and analyses are extensive and investigable for contribution of model designs.
5.The designed crucial modules provide motivative insights. For example, the alignment between global prompts and OOD prompts are coupled with semi-unbalanced optimal transport, and further enhance the discrimination of two sets of prompts based on coupling results.
Weakness:
1.Though the good organization of main paper, the presentation should be refined, e.g., clarifying typos.
2.Some details need more discussions: a) the contents in supplementary materials, b) the data used for federated prompt learning and testing, and c) the choice of baseline methods.
3.The need to optimize multiple prompt sets (Global, Local, OOD) results in higher computational costs compared to traditional FPL methods.
Other Comments Or Suggestions: The authors should clarify some typos, i.e.,
a) T^g_c in line 262,
b) robustness3 in line 416,
c) $\rho$ rather than p in line 742,
d) two rather than three in line 328,
e) last \epsilon in line 787.
Questions For Authors: Q1: Are the OOD shifts data used for the optimization of prompts? If so, could the authors explain how these OOD shifts data are trained in the process. If not, please explain the objective of Eq.(1) for clarity.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1 We will carefully refine the current version and correct typos.**
**2 The supplemental materials consist of five sections: (A) related work details, (B) algorithms, (C) theoretical analysis of optimization, (D) datasets and implementation, and (E) additional experimental results.**
**In Section E**, we report the results on CIFAR-100 and TinyImageNet under different Dirichlet distributions (Tables 6–7), which are consistent with Tables 1–3 and further verify OOD robustness. Table 4 has been split into Tables 8 and 9 for clarity.
Tables 10–11 provide numerical results for Figure 4 (OOD detection on various OUT datasets), while Tables 12–13 are numerical results corresponding to Figure 5 (generalization on different ID-C datasets). Figures 8–9 are also enlarged for better readability. We will add them in a future version.
**3 Clarification on data usage in FPL**
**We do not use OOD data when optimizing prompts. Please refer to Answer 4 in Review zfft for further details.**
As indicated in Eq. (1), the final evaluation targets minimizing classification errors on ID and ID-C test sets, and minimizing the failure to detect OUT data. We will clarify this more explicitly in the future. A summary of training and evaluation settings is provided below:
**Training datasets:**
CIFAR-100, TinyImageNet, Food101, DTD, Caltech101, Flowers, and OxfordPets, each split into train/test sets and simulated as heterogeneous clients using Pathological or Dirichlet partitioning.
DomainNet and Office-Caltech10 use an N-1 domain split strategy.
**Evaluation settings:**
ID ACC: Accuracy on the test sets of the above training datasets.
ID-C ACC: Accuracy on CIFAR-100-C, TinyImageNet-C, and the held-out domain in DomainNet and Office-Caltech10.
OOD FPR95 / AUROC: Evaluated on Places365, Texture, iSUN, LSUN-C, and LSUN-R.
**4 Baseline Selection.**
We select baselines from the state-of-the-art (SOTA) personalized federated learning (PFL) methods and centralized prompt-based OOD detection methods. For centralized prompt-based OOD baselines, we choose GalLop and LAPT due to their strong performance in both generalization and OOD detection.
We also include LoCoOp, as GalLop is proposed as an improvement over it.
**5 Our computation cost is comparable to existing FPL methods such as PromptFolio and FedOTP, with the global and local prompt computation cost being $\mathcal{O}(KEBCd)$. The introduction of OOD prompts adds an extra but manageable overhead of $\mathcal{O}(KEB(C+U)d)$, where $U$ is the number of OOD prompts.** Since $U$ is a controllable parameter, incorporating OOD prompts is a reasonable choice for significantly improving OOD detection performance.
All notations are defined in our main paper, i.e., client numbers $K$, local epochs $E$, averaged batch size $B$, class number $C$, and prompt embedding $d$.
Furthermore, we measured the average computation time for each communication round. An extra set of OOD prompts only increases computation cost by 4.76% for CIFAR100 and 1.61% for TinyImageNet. Most of the computational burden stems from encoding features rather than prompt learning. This indicates that employing three sets of prompts remains a practical and efficient design. FOCoOp, containing BOS and GOC, incurs a slightly larger but still controllable computation cost, increasing it by 1.58s (9.06%) for CIFAR100 and 1.64s (6.18%) for TinyImageNet.
| Method| CIFAR100(s) | Increase | TinyImagenet (s) | Increase |
|-|-|-|-|-|
| PromptFolio| 17.4765| -| 26.5969 | - |
| FOCoOp-w/o-dro&uot | 18.3078 | +0.8313 (+4.76%) | 27.0261 | +0.4292 (+1.61%) |
| FOCoOp | 19.0597 | +1.5832 (+9.06%) | 28.2402| +1.6433 (+6.18%) |
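As a back-of-the-envelope check (a sketch, not the authors' code), the relative overhead implied by the stated complexities is simply $(C+U)/C$, since all factors shared between $\mathcal{O}(KEB(C+U)d)$ and $\mathcal{O}(KEBCd)$ cancel:

```python
def prompt_cost_ratio(C, U):
    """Relative per-round prompt-matching cost with U extra OOD prompts:
    O(KEB(C+U)d) / O(KEBCd) = (C + U) / C, since K, E, B, d cancel."""
    return (C + U) / C

# e.g., C = 100 ID classes with a hypothetical bank of U = 10 OOD prompts
overhead = prompt_cost_ratio(100, 10) - 1  # ~10% more prompt computations
```

This matches the rebuttal's point that the OOD-prompt overhead is controlled by the user-chosen $U$; the measured wall-clock increases are smaller still because feature encoding dominates.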
**6 We will carefully refine our presentation and correct typos.**
---
Rebuttal Comment 1.1:
Comment: The authors provided more detailed explanations regarding the presentation, data usage, and time complexity. These clarifications are reasonable and address most of my concerns, so I am raising my rating. | Summary: Federated Prompt Learning (FPL) allows models to adapt across clients while maintaining data privacy. However, current methods face challenges balancing performance and robustness, especially when encountering out-of-distribution (OOD) data shifts, limiting their real-world reliability. This is mainly due to data heterogeneity among different clients. To overcome these challenges, FOCoOp introduces a strategy utilizing three types of prompts to create clear separations at both class and distribution levels. Experiments demonstrate that FOCoOp effectively improves robustness and adaptability to OOD scenarios in heterogeneous federated learning settings.
Claims And Evidence: 1. In lines 68-72, the authors mention that "The crucial reason is that each client maintains local OOD robustness on heterogeneous data distribution, which is inconsistent among FPL." It is unclear why the maintenance of local OOD robustness on heterogeneous data distributions would be considered inconsistent in FPL, especially given that prompts are able to handle other downstream tasks, like different data distributions. Could the authors elaborate on this inconsistency with a more detailed explanation or provide empirical results that clarify this aspect?
Methods And Evaluation Criteria: 1. In lines 234-235, the authors mention that "The OOD prompts in client match the unseen data from other clients with high similarity scores, hurting the generalization in global view." Could the authors clarify why these OOD prompts would match unseen data from other clients? Is this a general case or a specific situation? A more detailed explanation would help in understanding the reasoning behind this claim.
2. In lines 250-254, the authors mention that "The OOD prompts capture the misalignment between local ID data and prompt context, mistakenly identifying the ID data from other clients as outlier." Could the authors clarify why the OOD prompts would capture this misalignment between local ID data and the prompt context? A more detailed explanation or simple results to support this claim would be appreciated.
Theoretical Claims: Theorems 3.1 and 3.2.
Experimental Designs Or Analyses: 1. Could the authors provide any experimental results demonstrating that FOCoOp is effective in exploring OOD data?
2. Are there any experiments that show how the proposed three types of prompts create clear separations at both the class and distribution levels?
3. Do the proposed methods work with ResNet CLIP models? Given that ViT-B/16 may be difficult for clients to deploy, it would be helpful to understand if the methods are also effective with more accessible models like ResNet-50.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper proposes FOCoOp, a Federated OOD-aware Context Optimization framework to enhance robustness and performance in Federated Prompt Learning (FPL) for VLMs.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. What are the privacy considerations regarding the client prompts, particularly when they are transmitted to the server? Could the authors address how this aspect is handled or discuss it with any related works?
Other Comments Or Suggestions: A typo: in line 81-82 in Section 2.1, two CLIP2FL methods.
Questions For Authors: Please refer to the above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1 Inconsistency in maintaining local OOD robustness in FPL.**
**FOCoOp aims to improve the OOD robustness of FPL concerning the global distribution, which covers all clients' training data.** The consistency we focus on lies in detecting semantic shifts beyond all client data and ensuring generalization under covariate shift and client heterogeneity within the global distribution.
**However, this consistency cannot be directly achieved by prompt tuning for local robustness.**
While prompts can transfer the knowledge of foundation models, they are adapted to heterogeneous local data during training, **leading to inconsistent matching patterns across clients.**
This makes prompts wrongly identify any data different from the local distribution as OUT, degrading both generalization and detection.
**Results in Tab. 1-3 reflect this limitation.**
PromptFL and FedLoCoOp lack consistency mechanisms and underperform in all metrics.
Similarly, FPL methods like FedOTP and PromptFolio aim to balance generalization and personalization, yet still fail in detection.
**2 Reason for OOD prompts in client match unseen data from other clients.**
**This is a general case caused by heterogeneous data distributions across clients.** In federated settings, clients often have imbalanced or missing classes, e.g., client 1 may only have data of dog and cat classes, while client 2 has dog, cat, and fish.
During training, OOD prompts are optimized to avoid resembling locally seen data. However, lacking the reference of the global distribution, OOD prompts may mistakenly match samples that are locally unseen but present in other clients. For instance, in client 1, OOD prompts may falsely flag "fish" images as OUT, even though fish is a valid ID class in client 2.
**3 We realize the goal of OOD prompts, i.e., capturing misalignment between local ID data and the OOD prompt context, by explicitly optimizing Eq. (5).** Specifically, we penalize higher similarities between OOD prompts and local ID data(Eq. 3), and ensure the total similarity of OOD prompts is lower than that of ID prompts(Eq. 4). **This design notably improves OOD detection across all evaluated tasks in Sec 4.2.** For example, it consistently improves the detection of different semantic shift datasets in Fig.4.
However, since each client only observes a subset of the global distribution, the learned OOD prompts may mistakenly match ID samples unseen in this client but valid from other clients as OUT. Therefore, the GOC module is used to enhance consistent OOD robustness across clients.
**4 Experiments on exploring OOD data.**
We clarify that **only in-distribution (ID) data is used during training**, which aligns with federated settings where OOD data is typically unavailable. **Our method does not explicitly explore OOD data, instead, it uses a DRO-based objective to explore prompt space**, allowing global (OOD) prompts to better match (mismatch) ID samples.
**All OOD data (both ID-C and OUT) are used only for evaluation(main paper Table 5), not for training prompts.**
Next, we evaluate OOD robustness on various OOD datasets, i.e., (1) Tables 1–2 show FOCoOp’s strong generalization to ID-C datasets (e.g., CIFAR100-C, TinyImageNet-C) and its improved detection of OUT data (e.g., Texture), (2) Extra evaluation of various OUT datasets also verified that FOCoOp can consistently enhance OOD robustness. These confirm the effects of exploring prompts in FOCoOp.
**5 We can verify the clear separations for both class and distribution levels by similarity matrix.** In **[link](https://anonymous.4open.science/r/FOCoOp/prompt_separation.png)**, we model FOCoOp on Cifar10 (10 ID prompts and OOD prompts), and sample 100 images per class to compute the average of similarities between images and prompts. The diagonal of the ID prompt matrix shows the highest similarities, suggesting intra-class alignment and clear class separation. Meanwhile, the similarities of OOD prompts are notably lower than those of ID prompts, further indicating clear distribution separation.
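The separation check described above can be mimicked with a toy sketch (random stand-in features rather than CLIP embeddings; all names are hypothetical): build an image-by-prompt cosine-similarity matrix and verify that the diagonal dominates for ID prompts while OOD-prompt similarities stay uniformly lower:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim(a, b):
    # Row-wise cosine similarity between feature matrices a (n, d) and b (m, d).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Stand-ins for embeddings: C ID class prompts, one noisy image feature per class
# (the rebuttal averages 100 images per class), and OOD prompts pushed away from ID data.
d, C = 64, 10
id_prompts = rng.normal(size=(C, d))
images = np.stack([id_prompts[c] + 0.1 * rng.normal(size=d) for c in range(C)])
ood_prompts = -id_prompts + 0.1 * rng.normal(size=(C, d))

S_id = cosine_sim(images, id_prompts)    # class-level: diagonal should dominate each row
S_ood = cosine_sim(images, ood_prompts)  # distribution-level: uniformly lower than S_id

diag_dominates = bool(np.all(S_id.argmax(axis=1) == np.arange(C)))
ood_lower = bool(S_ood.mean() < S_id.mean())
```

In this synthetic setup, a dominant diagonal in `S_id` plays the role of intra-class alignment, and the lower OOD similarities mirror the distribution separation reported at the anonymous link.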
**6 We extend our evaluation to ResNet50 on CIFAR100 and TinyImageNet. The results validate that FOCoOp maintains strong generalization and detection capabilities even with smaller models.**
Pathological Non-overlap (10 clients, K=10)
|Dataset|CIFAR100||||TinyImageNet||||
|-|-|-|-|-|-|-|-|-|-|
|Method(%)|ACC|CACC|FPR95|AUCROC|ACC|CACC|FPR95|AUCROC|
|PromptFolio|51.96|47.47|50.14|86.45|44.70|30.64|57.57|83.03|
|FedOTP|55.34|50.98|61.38|74.92|43.61|28.98|72.67|73.86|
|FedLoCoOp|17.03|12.09|93.36|52.66|9.44|4.96|93.63|56.40|
|FOCoOp|60.94|55.92|28.83|95.54|49.96|35.89|53.51|87.75|
**7 Privacy analysis. Prompts encode only class-level statistics rather than instance-specific information, making it non-trivial to infer original individual image data stored on clients.** The privacy of class-level statistics can be strengthened by applying differential privacy on prompts before sending to server.
**8 We will carefully fix the typos mentioned.**
## update after rebuttal
I thank the authors for providing sufficient justification and explanation addressing my questions. I encourage the authors to reflect this rebuttal in their revised version and make it easy for the readers to comprehend the method. Given the current state of the paper and based on the author response, I am inclined to accept the paper.
Claims And Evidence: The claims on improving the performance with bi-level distribution robust optimization of both global and OOD prompts and also with global-view OOD consistency across clients are sufficiently backed up by the empirical evidence shown in Tables 1, 2 and 3.
Methods And Evaluation Criteria: Evalution Criteria: To my knowledge, authors made an exhaustive evaluation with various benchmark datasets like CIFAR-100, TinyImageNet, Food101, DTD, Caltech101, Flowers102 and OxfordPets, DomainNet and Office.
Methods: Both the bi-level distribution robust optimization and global-view OOD consistency are heavily developed based on the optimal transport. Despite the empirical improvements, it is not clear to realize the why optimal transport is right fit in this setting. In particular, authors intend to create worst case global and OOD prompts via bi-level distribution robust optimization. The rationale to create worst case prompts and the choice to make use of optimal transport to achieve it is not clear and justified. The authors connect it to wider distribution exploration, but it is hard to realize how such exploration is possible with the proposed setup. Moreover, in equation (7), the \hat{t}^g gets sampled from \hat{P} and \hat{t}^o gets sampled from \hat{Q}. How do these \hat{P} and \hat{Q} are defined, as these are not clear even after looking into Algorithm 2 in appendix. Main paper should contain insights about them. There is a function that operates on (\hat{t}^g, t^g), there is no definition given to this function. Similarly, the equation 8 related to global-view OOD consistency, there is no clear explanation on computing or obtaining seemly OOD prompts and strict OOD prompts using optimal transport. Overall, the current explanation of the method needs a good refinement. A detailed explanation supporting the equations would provide a good understanding to the readers.
Theoretical Claims: There are two theorems presented in the paper. I have not verified their correctness.
Experimental Designs Or Analyses: The authors follow the existing literature experimental protocol setting to conduct their experiments.
Supplementary Material: I reviewed the Related work, Algorithms 1, 2 and 3, and experimental results provided in the Supplementary material.
Relation To Broader Scientific Literature: There is a list of literature focusing on federated prompt learning methods using CLIP model. This work complements this literature by introducing OOD detection into the framework that could be leveraged by all the participating clients.
Essential References Not Discussed: To my knowledge, I see that authors have cited and discussed the relevant literature.
Other Strengths And Weaknesses: Strengths:
1. The paper shows significant improvements in OOD detection while also improving accuracy in the federated prompt learning setting across multiple benchmarks.
2. Ablation studies provide better understanding of the method.
3. I also appreciate the authors for providing the Algorithms and details on their experimental setup.
Weakness:
1. In Figure 2, the server block is hard to interpret from the illustration alone; an explanation in the caption would help.
2. The method section lacks clarity and justification, and requires refinement to improve readability and the understanding of the concepts it presents. Please see my comments in the methods block above.
Other Comments Or Suggestions: None
Questions For Authors: What is the rationale to create worst case prompts?
What makes the optimal transport the right choice to make use of for the problem?
How are \hat{P} and \hat{Q} defined?
What is the definition of the function that operates on (\hat{t}^g, t^g) in equation (7)?
Based on my comments on the method section, please incorporate the suggestions and improve the readability. The results shown in the paper are promising, but it should be complemented with a well explained method section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1 Reasons for using optimal transport (OT).**
**OT is a powerful tool for comparing distributions, and is used for different goals in BOS and GOC.**
**In BOS, OT is used to constrain the uncertainty set in distributionally robust optimization (DRO).** Unlike KL-divergence, which compares categorical distributions without considering the geometry of the feature space, OT divergence preserves the geometry of latent feature spaces, which is vital to text-image feature matching in VLMs. As noted in lines 211.left-188.right, global prompts enhance robustness on ID data in all clients and on covariate-shift data near ID data, making OT a suitable constraint. However, we adopt unbalanced OT (UOT) to constrain the uncertainty set of OOD prompts, because it combines the strengths of OT and KL-divergence, capturing both geometric and non-geometric semantic shifts.
**In GOC, we use semi-UOT to improve discrimination between global prompts and OOD prompts.** By coupling global prompts with all OOD prompts, we can select seemly OOD prompts based on their close distance, and distribute the mostly distant (strict) OOD prompts to all clients.
**2 The reason for creating worst-case prompts and how to realize wider distribution exploration.**
We obtain the worst-case prompts via DRO, as defined in Eq. (6). Specifically, we construct an uncertainty set $P$ for global prompts, which is centered around the original distribution of global prompts $P_0$, and bounded by an OT divergence threshold $\eta_1$. Within this set, we identify and optimize the perturbed prompts that yield the worst performance, as formulated in Eq. (6). The distribution of worst-case prompts is denoted as $\hat{P}$.
The same procedure with UOT-divergence is applied to OOD prompts, where the original distribution $Q_0$, the uncertainty set $Q$, and the worst-case prompts $\hat{\boldsymbol{t}}^o \sim \hat{Q}$ are the corresponding notations.
Since we match images with textual prompts that yield the worst performance, the matching process is no longer point-to-point. Instead, it becomes a point-to-uncertainty-set matching, enabling more robust alignment in a wider semantic space.
**3 In Eq. (7), $\mathfrak{c}(\hat{t}^g,t^g)$ is the cost function of optimal transport; we implement it as the L2-norm distance.**
**4 Explanation on computing Eq.(8) to get seemly and strict OOD prompts.**
As in line 270.left, we compute all costs between the aggregated global prompts and the OOD prompts from all clients; we regularize the assignments for OOD prompts with soft KL regularization, while the assignments for global prompts use balanced regularization. We then solve for the mapping $\pi$ via Theorem 3.2, where we use ${\pi^*}^\top 1_{\text{KU}}$ as the distance. Based on it, we can rank the OOD prompts: the seemly OOD prompts are close to the global prompts, while the strict ones are mostly distant. The whole procedure can be found in lines 266.left-265.right.
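As a generic illustration of the OT couplings used throughout, the sketch below solves a plain entropic (balanced) OT problem with Sinkhorn iterations on toy prompt embeddings and reads off the mass each OOD prompt receives, analogous to the ${\pi^*}^\top 1$ quantity above. This is an assumption-laden simplification: the paper's semi-UOT replaces one marginal constraint with a soft KL penalty, which is not implemented here, and the embeddings and sizes are made up.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iters=500):
    """Entropic-regularized balanced OT: find a coupling pi with row
    marginal a and column marginal b that (approximately) minimizes
    <pi, C>, via alternating Sinkhorn scaling updates."""
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)               # scale columns toward marginal b
        u = a / (K @ v)                 # scale rows toward marginal a
    return u[:, None] * K * v[None, :]  # pi = diag(u) K diag(v)

# Toy "prompt embeddings": 3 aggregated global prompts vs 4 OOD prompts
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 8))
O = rng.normal(size=(4, 8))
C = np.linalg.norm(G[:, None, :] - O[None, :, :], axis=-1)  # L2 cost
a = np.full(3, 1 / 3)
b = np.full(4, 1 / 4)

pi = sinkhorn(a, b, C)
# Mass each OOD prompt receives from the global prompts; in the paper's
# semi-UOT this column sum is what ranks "seemly" vs "strict" OOD prompts.
mass_per_ood = pi.sum(axis=0)
```

In the balanced case shown here the column sums are pinned to `b`, so the ranking is only informative under the unbalanced variant; the sketch is meant to show the mechanics of the coupling, not the paper's exact solver.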
**5 We will add explanations for the server block in Fig. 2.** Specifically, in global-view OOD consistency, we use semi-UOT to couple the aggregated global prompts and all client OOD prompts. We then take ${\pi^*}^\top 1_{\text{KU}}$ as a probability/distance to select seemly OOD prompts and strict OOD prompts for the next communication round. Based on the mapping, we can re-assign global prompts with seemly OOD prompts via Eq. (10), and retain strict OOD prompts via Eq. (11). The server then sends the final global prompts and OOD prompts to all clients for consistency.
**6 We sincerely appreciate the reviewer’s constructive suggestions, and will improve the clarity of our revised version.** | null | null | null | null | null | null |
Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding | Accept (poster) | Summary: This paper tackles the problem of knowledge fusion across language models while aiming to balance effectiveness and efficiency. The paper proposes CoSD that integrates speculative decoding to accelerate generation and employs probability-based classification for token selection. CoSD provides a flexible and adaptive approach to multi-LLM fusion. The empirical results demonstrate its ability to effectively merge knowledge from multiple models. Compared to baselines, CoSD offers advantages in efficiency and fusion performance.
## update after rebuttal
The authors address most of my concerns. So I keep my positive score.
Claims And Evidence: The paper claims that CoSD improves knowledge fusion performance, enhances efficiency through speculative decoding, and provides flexibility by eliminating explicit model selection. Empirical results basically support these claims, showing multi-LLM integration and faster inference.
Methods And Evaluation Criteria: The method involves running a draft model to generate candidate tokens and verifying them with assistant models before finalizing outputs. Evaluation is conducted on standard LLM benchmarks, measuring fusion performance via accuracy and perplexity while assessing efficiency through decoding speed. Comparisons are made against individual LLM baselines and naive ensemble methods.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiment part focuses on setting up several model pairs to mimic the real-world scenarios when we need to fuse LLM knowledge. CoSD is tested on 5 benchmarks. The experiment settings are valid. However, some questions remain in the hyperparameter part. What is the hyperparameter for the decision tree? Do we need to adjust the parameters during the inference stage, and how can we do this?
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper is related to the LLM knowledge fusion field and the efficient inference field. It is an application and augmentation of both fields.
Essential References Not Discussed: An early work on model knowledge fusion (not LLMs):
Dong, X. L., & Srivastava, D. (2015, May). Knowledge curation and knowledge fusion: challenges, models and applications. In Proceedings of the 2015 acm sigmod international conference on management of data (pp. 2063-2066).
Other Strengths And Weaknesses: N/A.
Other Comments Or Suggestions: I suggest to show more interesting samples like Table 4 in the appendix.
Questions For Authors: What is the token latency for only use one LLM in Table 6?
It seems that the CoSD does not directly modify the answers in Table 4. So how does CoSD improve the score on GSM8K? More explanations are needed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s recognition of the strengths of our work. Regarding the weaknesses and questions raised, we address all concerns in detail below.
> However, some questions remain in the hyperparameter part. What is the hyperparameter for the decision tree? Do we need to adjust the parameters during the inference stage, and how can we do this?
We perform the decision tree training with max depth=10. The results of using different max depths can be found here:
| Max Depth | MMLU | GSM8K | HumanEval |
|-----------|--------|--------|-----------|
| 3 | 61.56 | 34.72 | 21.61 |
| 5 | 63.00 | 36.52 | 22.34 |
| 10 | 60.88 | 37.17 | 23.04 |
| 20 | 60.88 | 37.17 | 23.04 |
Since the performance is mainly influenced by the training dataset rather than by the hyperparameters of the decision tree, it is not necessary to tune the hyperparameters repeatedly; setting a reasonable value is sufficient.
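The setup described above can be sketched with a shallow scikit-learn tree. The feature layout is an assumption on our part: each training point is taken to be a `(p_draft, p_assist)` probability pair for one token position, with label 1 meaning the assistant's token should be chosen; the paper's actual features and labeling may differ.

```python
from sklearn.tree import DecisionTreeClassifier

# Assumed features: one (p_draft, p_assist) pair per token position;
# label 1 = take the assistant model's token, 0 = keep the draft token.
X = [[0.9, 0.1], [0.8, 0.2], [0.6, 0.4],   # draft is confident -> keep draft
     [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]   # assistant is confident -> replace
y = [0, 0, 0, 1, 1, 1]

# max_depth is the hyperparameter discussed above; on separable toy data
# even a small depth suffices, consistent with the table's insensitivity.
tree = DecisionTreeClassifier(max_depth=10, random_state=0).fit(X, y)
preds = tree.predict([[0.95, 0.05], [0.05, 0.95]])  # -> [0 1]
```

A few benchmark samples, each contributing dozens of token positions, would populate `X` and `y` in practice, which is why three samples already yield over a hundred training points.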
>Essential References Not Discussed
We will add the references to the revised paper. Thanks for pointing out valuable papers to cite and discuss.
>I suggest to show more interesting samples like Table 4 in the appendix.
We will add more samples, especially for the coding samples (i.e., HumanEval test samples) to the appendix and further discuss them in the revised paper.
>What is the token latency for only use one LLM in Table 6?
The token latency of using only one LLM is around 30-50 ms in the setting of Table 6. However, CoSD is designed to combine complementary knowledge from two models rather than to mimic one with the other. Therefore, the appropriate baseline is not a single model but a naive two-model decoding setup where both models generate token-by-token and decide on the final output via a selection mechanism (e.g., our baseline Avg Decoding). In that case, the token latency is more than 1.5x higher than CoSD:
| Method | CoSD-Rule | CoSD-Tree | Avg Decoding | Co-LLM |
|----------------|-----------|-----------|---------------|--------|
| Token-Wise Latency | 132.31 | 135.82 | 212.73 | 254.16 |
>It seems that the CoSD does not directly modify the answers in Table 4. So how does CoSD improve the score on GSM8K? More explanations are needed.
We discuss how CoSD improves the score in the Case Study section. In CoSD-Rule, in the fifth line, the assistant model rejects the draft model’s incorrect computation of 20% of 20 = 10 and instead uses the correct calculation 20 * 0.2 = 4, successfully avoiding the error in the draft model’s tax calculation. In the sixth line, the draft model correctly generates the subtotal of $24, so in the final step CoSD-Rule computes the simpler 24 + 5 instead of the more complicated 15 + 3 + 2 + 5, arriving at the correct answer.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses my concerns and I keep my positive score. | Summary: This paper introduces Collaborative Speculative Decoding (CoSD), a new inference-time algorithm designed to fuse complementary knowledge from multiple LLMs without additional model training or fine-tuning. CoSD leverages a draft model to autoregressively generate initial tokens, which an assistant model then verifies in parallel. A simple, interpretable rule-based or decision-tree-based strategy then determines whether draft tokens should be retained or replaced by tokens proposed by the assistant model, thus effectively integrating diverse knowledge sources.
Claims And Evidence: NA
Methods And Evaluation Criteria: NA
Theoretical Claims: This paper does not provide theoretical justification or analysis to support the claim that training the decision tree with very limited data (e.g., only three samples) is sufficient for effective generalization across different tasks or domains. Additionally, while the proposed rule-based mechanism is intuitive, the paper does not include adequate theoretical or statistical analyses examining how well model confidence aligns with correctness in varying scenarios.
Experimental Designs Or Analyses: The selection of diverse models (e.g., Mistral, Llama, TinyLlama, WizardMath, DeepSeek Coder) effectively demonstrates the generalizability and practical applicability of the proposed method. The authors also used several popular benchmarks such as MMLU, GSM8K, HumanEval, Hellaswag, and TruthfulQA, ensuring comprehensive evaluation.
Supplementary Material: No.
Relation To Broader Scientific Literature: Extend the speculative decoding research.
Essential References Not Discussed: I'm not familiar with related works in this field.
Other Strengths And Weaknesses: **Strengths**
1. This paper clearly defines scenarios where its method performs well and where it may struggle.
2. Transparent and interpretable methods (rule-based and tree-based verification) enhance the trustworthiness in practice.
**Weaknesses**
1. The training set for the decision tree method is extremely limited (e.g., only 3 samples in some scenarios), raising concerns about robustness and generalization.
2. This paper lacks in-depth statistical significance testing.
Other Comments Or Suggestions: 1. It would be valuable to include a detailed error analysis to identify exactly when and why the decision rules or decision tree sometimes select incorrect tokens.
2. It would be better to analyze the complexity or depth of the decision tree to achieve optimal performance.
Questions For Authors: 1. Can you clarify why you chose only three samples for training the decision tree? How sensitive is CoSD's performance to increasing the number or diversity of training samples?
2. Have you considered incorporating more advanced uncertainty estimation methods or ensemble learning approaches (such as Bayesian ensemble methods)? Would this significantly alter CoSD's complexity or performance?
3. Could you provide additional details or analyses on cases where the confidence-based verification fails? What percentage of incorrect token replacements are caused by the assistant model's overconfidence?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s recognition of the strengths of our work. Regarding the weaknesses and questions raised, we address all concerns in detail below.
> The number of samples for the decision tree. The hyperparameters of the decision tree.
We would like to clarify that using 3 dataset samples (e.g., 3 samples from GSM8K) does not mean the decision tree is trained on only 3 tokens. Each GSM8K sample typically contains more than 50 tokens and can have up to hundreds of tokens. Therefore, 3 samples from GSM8K can provide over 150 training points for the decision tree, which is sufficient given the data sample size (1×2). The sensitivity of CoSD-Tree to the training dataset is reported in Table 5. The relationship between the number of training samples and CoSD-Tree performance is shown below:
| # of Samples | MMLU | GSM8K | HumanEval |
|-----------------------|--------|--------|-----------|
| 1 | 62.68 | 35.84 | 22.19 |
| 3 | 60.88 | 37.17 | 23.04 |
| 5 | 62.34 | 37.62 | 20.94 |
| 10 | 61.01 | 36.46 | 20.04 |
We also conducted experiments varying the max depth of the decision tree:
| Max Depth | MMLU | GSM8K | HumanEval |
|-----------|--------|--------|-----------|
| 3 | 61.56 | 34.72 | 21.61 |
| 5 | 63.00 | 36.52 | 22.34 |
| 10 | 60.88 | 37.17 | 23.04 |
| 20 | 60.88 | 37.17 | 23.04 |
(Max depths 10 and 20 yield the same decision tree.)
These results show that CoSD-Tree is not sensitive to the number of training samples or the tree’s hyperparameters. However, it is more sensitive to the type of dataset, as illustrated in Table 5.
> Statistical significance testing
We will report mean ± standard deviation over 5 runs in our experiments in the revised paper.
> It would be valuable to include a detailed error analysis to identify exactly when and why the decision rules or decision tree sometimes select incorrect tokens.
We include illustrative examples in Table 4 and Table 9. For MMLU samples, where a single token determines whether an answer is correct, the decision rule or tree may choose incorrect tokens when the model assigns higher confidence to an incorrect answer. This typically occurs due to hallucination by the assistant model.
For benchmarks requiring chain-of-thought outputs (e.g., GSM8K in Table 4), the replaced tokens are often not directly tied to the final answer (e.g., replacing “we” with “the” in CoSD-Rule). Thus, it can be challenging to pinpoint exactly when and why the decision rule/tree fails. A more informative analysis is to compare CoSD outputs with the outputs of the individual draft and assistant models.
As shown in Table 4, in the fifth line, the assistant model corrects a draft model’s incorrect calculation of 20% of 20 = 10 by replacing it with the correct calculation 20 * 0.2 = 4, avoiding an error in tax computation. In the sixth line, the draft model correctly computes the subtotal $24, leading CoSD-Rule to choose the simpler 24 + 5 over the more verbose 15 + 3 + 2 + 5, producing the correct final answer.
For the tinyMMLU dataset, our experiments show that the wrong replacement rate is around 2%–3% across different model pairs. For Capacity Imbalance tasks (i.e., pair 4), this rate is 0%.
> Have you considered incorporating more advanced uncertainty estimation methods or ensemble learning approaches (such as Bayesian ensemble methods)? Would this significantly alter CoSD’s complexity or performance?
Thank you for the thoughtful suggestion.
We agree that advanced uncertainty estimation techniques and ensemble learning methods, such as Bayesian ensemble models, are valuable and may be worth exploring in future work. However, in this paper, our primary goal is to develop a simple, lightweight, and efficient collaborative decoding framework. To this end, we intentionally adopt minimal decision strategies — a rule-based filter and a shallow decision tree — which are fast, interpretable, and require minimal supervision.
While more advanced methods like Bayesian ensembles may provide stronger theoretical guarantees, we believe they are unlikely to yield significant performance improvements over CoSD in practice, especially considering the strong results we already observe across benchmarks. Moreover, such methods typically involve additional complexity, including performing posterior inference, and evaluating model-specific performance. These extra steps would compromise one of CoSD’s key advantages — being plug-and-play, without requiring per-model evaluation on multiple benchmarks.
We will clarify this design choice and trade-off more explicitly in the revised version. We sincerely thank the reviewer for bringing up this important point. | Summary: This paper addresses the challenge of language model knowledge fusion, aiming to effectively integrate complementary knowledge from multiple LLMs while maintaining efficiency. The authors propose CoSD, a method that classifies output tokens based on their probabilities to achieve fusion and leverages a speculative decoding framework to enhance generation speed. Experimental results demonstrate that CoSD successfully merges knowledge from two or three LLMs while maintaining flexibility, eliminating the need for explicit model selection. The approach is straightforward and empirically effective, offering a practical solution for multi-LLM integration. A deeper analysis of its impact on generation quality and potential trade-offs in diverse scenarios would further strengthen the contribution.
Claims And Evidence: The claims about the paper’s main contribution include the knowledge fusing performance and the effficiency. The paper provides algorithm and evaluations for both points. The evidences are clear to support the claims.
Methods And Evaluation Criteria: This paper evaluates CoSD on 6 LLM pairs and 5 common benchmarks. The experiments cover different scenarios such as complementary knowledge fusion and catastrophic forgetting healing, and the results deliver the basic ideas. However, the following experiments are not included:
1) What will happen if we swap the draft model and the assistant model? Since the authors claim that users do not need to choose between LLMs, they may also not want to decide which LLM should be the draft/assistant model. Therefore, the question is whether the performance will drop significantly if we swap the two models.
2) I am also very curious about real samples from the HumanEval dataset. Since the authors already showed real samples for MMLU (QA) and GSM8K (math), it would be interesting to see what a sample looks like in code generation. Will the assistant model help repair bugs in the draft generations? A table similar to Table 4 would be helpful.
Theoretical Claims: The paper has no strict theoretical proofs. I have checked the algorithms and the equations, and there are no significant flaws.
Experimental Designs Or Analyses: I have checked the following experimental parts:
(1) The scenarios defined by the paper, including knowledge fusing, catastrophic forgetting healing, capacity imbalance, and different tokenizers. The settings are overall valid, but the knowledge fusing and catastrophic forgetting healing scenarios seem similar and might need some explanation.
(2) The chosen baselines are valid. The difference between CoSD and the baselines is clear.
(3) The hyperparameters seem to be determined by Figure 2. Do other benchmarks follow the same pattern? This needs to be explained.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key contributions are related to LLM knowledge fusion and speculative decoding. LLM knowledge fusion aims to fuse the knowledge of different LLMs with complementary knowledge. Speculative decoding focuses on speeding up inference through collaboration between large and small LMs. CoSD in this paper seems to combine the advantages of the two fields via a specifically designed algorithm.
Essential References Not Discussed: Some new papers in this field:
[1]Wan, F., Zhong, L., Yang, Z., Chen, R., & Quan, X. (2024). Fusechat: Knowledge fusion of chat models. arXiv preprint arXiv:2408.07990.
[2]Liu, L., Zhang, D., Li, S., Zhou, G., & Cambria, E. (2024, October). Two Heads are Better than One: Zero-shot Cognitive Reasoning via Multi-LLM Knowledge Fusion. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 1462-1472).
Other Strengths And Weaknesses: N/A.
Other Comments Or Suggestions:
The algorithm part might be too big (Can it be put in one column?).
The ‘texts’ in Eq. (3) should be inside a bracket. E.g., [x.....x].
Questions For Authors:
1)What will happen if we swap the two LLMs?
2)If the two models are in the same size, can CoSD still speed up the inference?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of the strengths of our work. Regarding the weaknesses and questions raised, we address all concerns in detail below.
>What will happen if we swap the draft model and the assistant model? Since the authors claim that users don’t need to choose between LLMs, they may also not want to determine which LLM to be the draft/assistant model. Therefore, the question is will the performance drop significantly if we swap the two models? What will happen if we swap the two LLMs?
We performed experiments on swapping the draft model and the assistant model. Here are the results of pair 2:
| Benchmarks | Draft | Assist. | Spc. Dec. | CoLLM | CoSD-R | CoSD-T |
|------------|-------|---------|-----------|--------|--------|--------|
| MMLU | 52.02 | 54.81 | 54.23 | 53.92 | 54.17 | 56.18 |
| GSM8K | 51.02 | 39.79 | 41.28 | 45.75 | 49.52 | 48.88 |
| HumanEval | 43.90 | 21.34 | 25.17 | 36.90 | 42.31 | 43.62 |
| **Avg** | 48.98 | 38.65 | 40.23 | 45.52 | 48.67 | **49.56** |
The results show that CoSD still outperforms all the baselines. We will add these results to the revised paper.
>I’m also very curious about the real samples of the HumanEval dataset. Since the authors already showed the real samples of MMLU (QA) and GSM8K (Math), what the sample will look like in code generation will be interesting. Will the assistant model help repair bugs in the draft generations? A table similar to Table 4 can be helpful.
Thanks for the valuable suggestion. We will add a HumanEval real sample to the revised paper. Our sample shows that the assistant model can rewrite the draft code in another style (e.g., with more functions), which sometimes improves accuracy and reduces the number of bugs.
>The knowledge fusing and catastrophic forgetting healing seem similar, might need some explanations.
The knowledge fusing task aims to fuse the knowledge of two models with complementary knowledge. The catastrophic forgetting healing task typically fuses the knowledge of one LLM that is good at only one specific task (through fine-tuning) with another model that is good at all other tasks.
>The hyperparameters seem to be determined by Figure 2. Do other benchmarks follow the same pattern? Need to be explained.
Yes, all the CoSD-Rule experiments use the same hyperparameters. We found that the optimal $\alpha$ and $\beta$ values are transferable across models and tasks.
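To make the role of the $\alpha$ and $\beta$ thresholds concrete, here is a minimal sketch of a confidence-gated verification rule over one drafted block. The exact CoSD-Rule criterion and rollback behavior are assumptions on our part (the paper's rule may differ); the token strings echo the GSM8K case study only as an illustration.

```python
def verify_block(draft, assist, alpha=0.5, beta=0.5):
    """Sketch of a CoSD-style rule: scan a drafted block left to right and
    replace the first low-confidence draft token on which the assistant is
    confident and disagrees, then stop so decoding resumes after the
    replacement. Each element is a (token, probability) pair; alpha/beta
    are the thresholds tuned in the paper's Figure 2 (rule form assumed)."""
    accepted = []
    for (d_tok, d_p), (a_tok, a_p) in zip(draft, assist):
        if d_p < alpha and a_p > beta and a_tok != d_tok:
            accepted.append(a_tok)  # take the assistant's token ...
            return accepted         # ... and roll back the rest of the draft
        accepted.append(d_tok)
    return accepted                 # whole block accepted

draft  = [("20%", 0.9), ("of", 0.8), ("20", 0.3), ("=", 0.9), ("10", 0.2)]
assist = [("20%", 0.7), ("of", 0.9), ("20", 0.8), ("=", 0.9), ("4", 0.9)]
print(verify_block(draft, assist))  # -> ['20%', 'of', '20', '=', '4']
```

Note that the third position is kept even though the draft is unconfident, because both models agree on the token; only the genuinely conflicting final token is replaced.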
>Comments and Suggestions
Thanks for the suggestions of some details in the paper. We will further polish the paper according to the suggestions.
>If the two models are in the same size, can CoSD still speed up the inference?
Yes. Although speculative decoding cannot speed up inference when the two models are the same size, CoSD is used for a different task. CoSD is designed to combine complementary knowledge from two models rather than to mimic one with the other. Therefore, the appropriate baseline is not a single model but a naive two-model decoding setup where both models generate token-by-token and decide on the final output via a selection mechanism (e.g., our baseline Avg Decoding). In this case, the token latency is more than 1.5x higher than CoSD:
| Method | CoSD-Rule | CoSD-Tree | Avg Decoding | Co-LLM |
|----------------|-----------|-----------|---------------|--------|
| Token-Wise Latency | 132.31 | 135.82 | 212.73 | 254.16 |
Therefore, CoSD still benefits from the speculative decoding algorithm in this task.
---
Rebuttal Comment 1.1:
Comment: I have read the author's rebuttal and the review of other reviewer. I'd love to maintain my accept score. | Summary: The paper introduces an algorithm called Collaborative Speculative Decoding (CoSD) that is designed to efficiently fuse the knowledge of multiple Large Language Models (LLMs) together at inference time, without requiring any additional model training. The key idea is to leverage the same inference paradigm followed in standard speculative decoding, except that a general decision rule (instead of the typical rejection sampling) is used to reconcile differences in token predictions (and as such proceed efficiently in the cases where there is no discrepancy). Experimental results support the effectiveness of this model-merging approach.
Claims And Evidence: The claims are supported. For the most part, the results of CoSD-Tree are somewhat disappointing relative to the naive CoSD-rule approach. However, the results of CoSD-rule are encouraging relative to the evaluated baselines.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. However, I was surprised that no evaluation of speed/efficiency was done, as this seems to be a critical claimed strength over other work that does not take advantage of blockwise parallel decoding. Furthermore, this claim *should* be empirically supported, since the typical assumption in speculative decoding is that the drafter model is much more efficient to run sequentially than the verifier model. In this setting, however, the "drafter" and the "assistant" model are not necessarily that different computationally, so decoding from each model in parallel (vs. one first, and then the other blockwise in parallel) might not have as much benefit?
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The experiments are sound.
Supplementary Material: I reviewed the appendix.
Relation To Broader Scientific Literature: The paper introduces a simple method for merging token predictions from different models, that is made efficient by leveraging the same blockwise parallel decoding strategy of speculative decoding. The approach makes sense and is well presented, albeit with fairly limited novelty.
Essential References Not Discussed: The related work is adequately discussed.
Other Strengths And Weaknesses: One weakness which sticks out to me is that this method is unlikely to scale well to multiple collaborating LLMs. With multiple models, the likelihood that a discrepancy arises within just one or a few steps will quickly grow, and speculative decoding will cease to be an efficient strategy (e.g., vs. a strategy such as the one in Shen et al., 2024).
Other Comments Or Suggestions: - From just looking at Fig. 1 it is unclear what the main differences are between CoSD and standard speculative decoding, apart from generalizing the acceptance / rejection mechanism for draft tokens to incorporate an arbitrary rule or decision tree. I think what you want to emphasize more clearly throughout the paper is that the primary goal is to be able to efficiently merge the predictions of two LLMs that are on equal footing (i.e., vs standard speculative decoding where the verifier is considered to be the better, but slower model). Rather, if I understand correctly, here the speculative decoding aspect is mainly used as a trick to speed up inference and merging of the two models.
- L138: "generated autoregressively and produced sequentially" is redundant.
- The results should have error bars to reflect the effects of randomness in the generations. For example, in theory, Speculative Decoding should have the same average performance as the assistant model, but exhibits a substantial amount of deviation in the table.
Questions For Authors: The abstract says "CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts." However, from the method description it seems that the assistant model is always "invoked", but just via the speculative decoding procedure. So I am a bit unsure what this is referring to.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of the strengths of our work. Regarding the weaknesses and questions raised, we address all concerns in detail below.
> However, I was surprised that no evaluation of speed/efficiency was done, as this seems to be a critical claimed strength over other work that does not take advantage of blockwise parallel decoding.
We would like to clarify that our method has indeed been evaluated in terms of speed and efficiency. Specifically, **Table 6** in our paper reports latency results, showing that our method achieves nearly the same runtime performance as standard speculative decoding. This confirms that CoSD maintains comparable decoding efficiency. However, we agree that additional efficiency experiments would further highlight our contributions, and we will include them in the revised version of the paper.
> Both the "drafter" and the "assistant" model are not necessarily computationally that much different, so decoding from each model in parallel (vs. one first, and then the other blockwise parallel) might not have as much benefit?
This is an excellent point — In our case, the drafter and assistant models have similar capacities, so such gains might not be expected. However, **our task is fundamentally different**. While speculative decoding aims to approximate the behavior of a large model efficiently, **CoSD is designed to combine complementary knowledge from two models**, rather than mimicking one with the other. Therefore, the appropriate baseline is not a single model, but a **naive two-model decoding** setup where both models generate token-by-token and decide on the final output via a selection mechanism (e.g., our baseline *Avg Decoding*).
For clarity, we provide the comparison below:
| Method | CoSD-Rule | CoSD-Tree | Avg Decoding | Co-LLM |
|------|-----|-----|-----|-----|
| Token-Wise Latency | 132.31 | 135.82 | 212.73 | 254.16 |
Note that *Avg Decoding* could also be implemented using a speculative-style process, but we adopt standard two-model decoding (i.e., two LLMs generating autoregressively) to emphasize the efficiency gain introduced by speculative decoding in CoSD. In this setting, CoSD is at least **1.5× faster** than baselines that do not use speculative decoding. We will include these results in Table 6 of the revised paper.
> One weakness which sticks out to me is that this method is unlikely to scale well to multiple collaborating LLMs?
We agree that collaborative generation involving more models may reduce the acceptance rate. However, our experiments suggest that the drop is not as drastic as one might expect. Our key observation is that for many common, non-domain-specific token sequences (e.g., “I am”, “This is”), well-trained models tend to produce highly consistent predictions. As shown below, the acceptance rate remains relatively high even as more models are added:
| # of LLMs | Acceptance Rate (%) |
|----|----|
| 2 | 81 |
| 3 | 79 |
| 4 | 76 |
| 5 | 77 |
The experiment settings follow Table 6.
Additionally, we can also generate fewer tokens per step (i.e., a lower **K** in Algorithm 1) to drop fewer tokens, thus preserving efficiency when the acceptance rate is lower. We will clarify and expand upon this discussion in the revised version of the paper.
>From just looking at Fig. 1 it is unclear what the main differences are between CoSD and standard speculative decoding.
We agree with the reviewer that Fig 1 focuses more on the speculative decoding part than the knowledge fusion part. We will modify the figure in the revised paper and emphasize the knowledge fusion ability of the “CoSD Verification” part.
>L138: "generated autoregressively and produced sequentially" is redundant.
Thanks for pointing out the redundant part. We will modify it in the revised paper.
>The results should have error bars... Speculative Decoding should have the same average performance as the assistant model, but exhibits a substantial amount of deviation in the table.
We will add the error bars in the revised paper. The speculative decoding algorithm we use (Miao et al., 2023) has a soft verification strategy, which uses a random number as a threshold to decide whether to accept the draft token for efficiency. In this case, the average performance of speculative decoding will not be exactly the same as that of the assistant model.
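The soft verification idea can be sketched in a few lines. This is a hedged illustration in the spirit of speculative sampling, not the paper's actual code; the function name and setup are assumptions:

```python
# Hedged sketch of a soft verification rule in speculative decoding:
# accept a draft token with probability min(1, p_assist / p_draft),
# using a random number as the threshold. Names are illustrative.
import random

def soft_verify(p_draft: float, p_assist: float) -> bool:
    """p_draft / p_assist: probabilities each model assigns to the draft token."""
    accept_prob = min(1.0, p_assist / p_draft)
    return random.random() < accept_prob

random.seed(0)
# A token the assistant likes at least as much is always accepted:
assert soft_verify(0.2, 0.4)
# Otherwise acceptance is random, which is why the average behavior
# does not exactly match the assistant model:
rate = sum(soft_verify(0.5, 0.25) for _ in range(10_000)) / 10_000
assert 0.45 < rate < 0.55  # empirically close to p_assist / p_draft = 0.5
```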
>The assistant model is always "invoked" question.
You're absolutely right that in our current method, the assistant model is always invoked via speculative decoding. What we intended to convey in the abstract is that token replacement is invoked when the assistant token differs from the draft model's token and passes the decision rule.
We agree that the original wording may misleadingly suggest that the assistant model is not used at all unless triggered by a separate process. We will revise this in the final version for clarity, to better reflect the actual mechanism.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies to my questions.
> On inference speed comparisons
Table 6 compares latency between speculative decoding methods only, so this isn't really relevant. Within the framework of speculative decoding, the efficiency will mainly hinge on (a) the acceptance rate and (b) the blockwise parallel benefits of running the verifier, which mainly scales with verifier size.
I do not think that naive two-model decoding is the right comparison, since simply producing next token distributions from two independent model can be done in parallel. With some minor synchronization overhead, I would expect that the right baseline is approximately max(token-wise latency model A, token-wise latency model B), not their sum.
---
Reply to Comment 1.1.1:
Comment: Thanks for the replies and suggestions.
>I do not think that naive two-model decoding is the right comparison, since simply producing next token distributions from two independent model can be done in parallel. With some minor synchronization overhead, I would expect that the right baseline is approximately max(token-wise latency model A, token-wise latency model B), not their sum.
Thanks for raising this important point. We would like to clarify that the running time for two-model decoding (even with parallelization) significantly exceeds the maximum of token-wise latency for either model A or model B individually. The main reason is that, during joint token prediction by two models, we cannot effectively utilize inference-time optimizations like KV cache. Specifically, token-wise merging means that the initial predictions from one model might differ from the final chosen token. Consequently, the stored KV cache can become invalid due to incorrect previous tokens, necessitating either frequent recomputation of the entire input or additional mechanisms to detect when KV cache can be reliably used. Our experiments demonstrate that this limitation leads to a substantial increase in running time, often exceeding twice the latency compared to scenarios where KV cache can be smoothly implemented.
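The KV-cache argument above can be made concrete with a small sketch (all names are hypothetical): after token-wise merging, a model's cache is valid only up to the longest prefix that matches the tokens actually kept.

```python
# Hedged illustration of KV-cache invalidation under token-wise merging:
# cached key/value states are reusable only for the prefix that matches
# the finally chosen tokens; everything beyond it must be recomputed.

def reusable_prefix(cached_tokens, chosen_tokens):
    """Length of the cached prefix that remains valid after merging."""
    n = 0
    for c, f in zip(cached_tokens, chosen_tokens):
        if c != f:
            break
        n += 1
    return n

# Model A cached states for its own predictions, but merging picked the
# other model's token at position 2, so only 2 cached positions survive:
assert reusable_prefix([1, 5, 7, 9], [1, 5, 8, 9]) == 2
```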
For the experiments we added during the rebuttal, we did make the two models generate in parallel, but the token-wise latency is still more than 3x longer than one model that can utilize KV cache. | null | null | null | null | null | null |
DeepLayout: Learning Neural Representations of Circuit Placement Layout | Accept (poster) | Summary: This paper proposes a framework to implement pre-training on circuit netlist and physical layout. Two loss functions are designed to realize the pre-training task. And the pre-training network can be applied to downstream tasks, such as wirelength and congestion prediction.
## update after rebuttal
After carefully reviewing the rebuttals and comments, I would like to maintain my current score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. There are no issues about the correctness of any proofs for theoretical claims.
Experimental Designs Or Analyses: Yes. There are no issues about the soundness/validity of any experimental designs or analyses.
Supplementary Material: The authors did not upload any supplementary material.
Relation To Broader Scientific Literature: The pre-trained network could be integrated into mature commercial EDA tools to accelerate the design process.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
1. The presentation of the paper is good, making the approach easy to follow and understand.
2. The idea of masking grid in spatial modality and back to cells in graph modality is novel, which is effectively to capture the spatial and graphic (functional) information.
3. A new supervision signal, the RPA, is proposed to guide the pre-training.
4. Performance on multiple downstream tasks are better than previous baselines, including the congestion and posted-routed wirelength prediction, making the approach a more universal representation learner.
**Weaknesses**
1. The presentation of the pre-training loss should be polished to make it more formal and clearer.
2. Ablation studies about the pre-training loss and the proposed heterogeneous GNN architecture + MSGA are not taken.
Other Comments Or Suggestions: **Typos**
1. $x \in W, y \in H$ below the summation symbol in Eq. 5 is incorrect. Maybe $W$ should be modified to $\text{range}(W)$ or other format?
**Suggestions**
1. Ablation studies of the effectiveness of each pre-training loss and model architecture are not presented. Analyzing their separate effectiveness is encouraged.
Questions For Authors: I have some questions about the calculation of the pre-training loss.
1. For the pre-training task 1 Cell Coordinate Reconstruction, what is the meaning of K in line 266?
2. If there is an one-to-one mapping between the ground-truth coordinate of cell and the predicted coordinate of the cell, why the operation `min' is applied in Eq. 3?
3. For the pre-training task 2 Routing Process Prediction, the definition of $\text{RPA}(x,y)$ in Eq. 4 is not clear. What is the meaning of $x,y$ in the left hand side (LHS), and which $u$ is involved in the calculation of this RPA?
4. In my personal understanding, assume the number of masked grids is $N_m$, the number of nets is $n_u$, and the chip size $W,H$, the RPA for all cases should be a 4D tensor with shape $(N_m \times n_u \times W \times H)$, and $\text{RPA}(i,j,w,h)$ is the RPA: given a specific net $j$, the masked grid $i$ occupies the region $[x_i, y_i] \times [x_i + d_x, y_i + d_y]$, and for each $(w,h) \in [x_i, y_i] \times [x_i + d_x, y_i + d_y]$, the $\text{RPA}(i,j,w,h)$ can be calculated with Eq. 4. If I misunderstanding the calculation, please give a more clearer and formal definition of the RPA.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Q1
1.Ablation studies of ... is encouraged.
2.Ablation studies ... taken.
A1
Ablation study of encoder modules
Congestion prediction
| Method | 5 | 5 | 5 | 5 | 10 | 10 | 10 | 10 | 20 | 20 | 20 | 20 |
|---|---|---|---|--|--|--|--|--|---|--|--|--|
| | Pearsonr | MAE | RMSE | SSIM | Pearsonr | MAE | RMSE | SSIM | Pearsonr | MAE | RMSE | SSIM |
| train from scratch | 0.1154 | 0.0508 | 0.0874 | 0.1428 | 0.3047 | 0.0289 | 0.0423 | 0.2560 | 0.3398 | 0.0225 | 0.0386 | 0.4513 |
| wo/ MSGA | 0.4142 | 0.0288 | 0.0684 | 0.7152 | 0.4271 | 0.0135 | 0.0363 | 0.7755 | 0.4391 | 0.0132 | 0.0356 | 0.7777 |
| wo/ HGNN | 0.4138 | 0.0143 | 0.0376 | 0.7670 | 0.4298 | 0.0324 | 0.0728 | 0.7048 | 0.4240 | 0.0335 | 0.0756 | 0.6996 |
| DeepLayout | 0.4270 | 0.0146 | 0.0379 | 0.7718 | 0.4383 | 0.0130 | 0.0360 | 0.7820 | 0.4418 | 0.0121 | 0.0349 | 0.7909 |
Post-routing wire length prediction
|Method|5|5|5|10|10|10|20|20|20|
|------|---------|---------|---------|----------|----------|----------|----------|----------|----------|
| |Pearsonr|MAE |RMSE |Pearsonr |MAE |RMSE |Pearsonr |MAE |RMSE |
|train from scratch|0.3342|0.1332|0.1737|0.3048|0.1330|0.1765|0.3233|0.1313|0.1764|
|wo/MSGA|0.3634|0.1279|0.1712|0.3743|0.1270|0.1706|0.3848|0.1290|0.1684|
|wo/HGNN|0.3593|0.1311|0.1705|0.3667|0.1326|0.1701|0.3694|0.1303|0.1698|
|DeepLayout|0.3704|0.1305|0.1695|0.3806|0.1290|0.1689|0.3961|0.1270|0.1682|
Ablation study of pretrain loss:
Congestion prediction
| Model | 5 | 5 | 5 | 5 | 10 | 10 | 10 | 10 | 20 | 20 | 20 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Pearsonr | MAE | RMSE | SSIM | Pearsonr | MAE | RMSE | SSIM | Pearsonr | MAE | RMSE | SSIM |
| train from scratch | 0.1154 | 0.0508 | 0.0874 | 0.1428 | 0.3047 | 0.0289 | 0.0423 | 0.2560 | 0.3398 | 0.0225 | 0.0386 | 0.4513 |
| task #1 loss| 0.1507 | 0.0454 | 0.0954 | 0.2020 | 0.3203 | 0.0293 | 0.0440 | 0.2534 | 0.3667 | 0.0235 | 0.0387 | 0.3875 |
| task #2 loss | 0.4050 | 0.0135 | 0.0368 | 0.7760 | 0.3785 | 0.0183 | 0.0455 | 0.7408 | 0.3755 | 0.0141 | 0.0381 | 0.7673 |
| DeepLayout | 0.4270 | 0.0146 | 0.0379 | 0.7718 | 0.4383 | 0.0130 | 0.0360 | 0.7820 | 0.4418 | 0.0121 | 0.0349 | 0.7909 |
Post-routing wire length prediction
| Model | 5 | 5 | 5 | 10 | 10 | 10 | 20 | 20 | 20 |
|---|---|---|---|---|---|---|---|---|---|
| | Pearsonr | MAE | RMSE | Pearsonr | MAE | RMSE | Pearsonr | MAE | RMSE |
| train from scratch | 0.3342 | 0.1332 | 0.1737 | 0.3048 | 0.1330 | 0.1765 | 0.3233 | 0.1313 | 0.1764 |
| task #1 loss | 0.2737 | 0.1391 | 0.1837 | 0.2878 | 0.1393 | 0.1841 | 0.3064 | 0.1343 | 0.1808 |
| task #2 loss | 0.3336 | 0.1373 | 0.1761 | 0.3301 | 0.1365 | 0.1779 | 0.3522 | 0.1321 | 0.1737 |
| DeepLayout | 0.3704 | 0.1305 | 0.1695 | 0.3806 | 0.1290 | 0.1689 | 0.3961 | 0.1270 | 0.1682 |
Q2
x∈W,y∈H ... format?
A2
We will update the notation to explicitly use range(W) and range(H).
Q3
For the pre-training task 1... K in line 266?
A3
K denotes the total number of masked cells, where |Iₘ| = K (Algorithm 1, line 6).
Q4
If there is an one-to-one ... Eq. 3?
A4
'min' indicates the optimization objective to minimize the loss. We will remove this notation from Eq 3 in our paper update.
Q5
For the pre-training task ... RPA?
A5
As shown in Figure 5 (bottom left), RPA is computed for each net u covering area (xul,xuh,yul,yuh) using Eq 4 at every point (x,y) within this rectangle. The final layout RPA is obtained by aggregating values from all nets. We will enhance this explanation with additional visual aids in our final paper to improve clarity.
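As a rough illustration of this aggregation (not the paper's exact Eq. 4; the per-net density below is a uniform, RUDY-style placeholder assumption), each net contributes within its covering rectangle and contributions are summed over all nets:

```python
# Hedged sketch: aggregate per-net routing probabilities into a 2D RPA map.
# Each net u covers an integer bounding box (xul, xuh, yul, yuh); the exact
# per-point formula of Eq. 4 is not reproduced here, so a uniform density
# over the box stands in for it.
import numpy as np

def rpa_map(nets, W, H):
    """nets: list of (xul, xuh, yul, yuh) bounding boxes; returns a (W, H) map."""
    rpa = np.zeros((W, H))
    for (xul, xuh, yul, yuh) in nets:
        area = (xuh - xul + 1) * (yuh - yul + 1)
        rpa[xul:xuh + 1, yul:yuh + 1] += 1.0 / area  # sum over all nets
    return rpa

m = rpa_map([(0, 1, 0, 1), (1, 2, 1, 2)], W=4, H=4)
assert m[1, 1] > m[0, 0]  # the overlap point accumulates from both nets
```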
Q6
In my personal ... RPA.
A6
Our explanation is divided into two parts to address your concerns.
1. RPA Feature Computation Without Masking:
Initially, we compute the RPA feature without considering any masking. Utilizing Eqs. 4 and 5, we perform a global routing probability overlay for all nets across the layout. This process generates a two-dimensional (2D) RPA feature map (W × H), which represents the global routing probability distribution across the entire layout. This feature map encapsulates the inherent routing characteristics of the layout, serving as a foundational element for subsequent processing.
2. Self-Supervised Masked Reconstruction:
Our objective is to enable the network to reconstruct the routing probabilities of masked grid nodes to their original states before masking. To achieve this, we employ a self-supervised reconstruction loss based on Equation 6. Specifically, our loss function calculates the difference between the predicted routing probabilities of the masked grids and their original routing probabilities prior to masking. This approach aligns with the implementation details of Masked Autoencoders (MAE), ensuring consistency and effectiveness in the reconstruction process. | Summary: This paper presents a method to learn the layout representation of circuits for downstream tasks. The method is based on a graph neural network (GNN) and a mask training strategy.
## update after rebuttal
I keep my rating since the authors have answered my question, and I still slightly lean toward accept.
Claims And Evidence: The reported experiment results support the claim of contribution.
Methods And Evaluation Criteria: The overall network and loss seems reasonable to me but not novel in my opinion.
It seems to borrow some existing methods and make them work for circuit design data.
Theoretical Claims: There are no theoretical claims that require proofs. Or at least, I believe there is no way to prove the effectiveness or performance of this kind of neural network work.
Experimental Designs Or Analyses: The experiment seems overall reasonable, and the authors try to perform thorough comparisons.
Supplementary Material: There is no supplemental material provided.
Relation To Broader Scientific Literature: To be honest, I am not familiar with this community. So even though I think this work will be useful and impactful for the large circuit design community, I am not sure whether this method will be useful or not.
Essential References Not Discussed: Not that I know.
Other Strengths And Weaknesses: From an outsider (from the circuit design communitiy) point of view:
Strength:
- if the claim of authors is true, the representation learning will be useful for other downstream tasks just like similar thing happen in other field.
- the evaluation results suggest that the proposed method outperform existing works.
Weakness:
- most of the design in the proposed network seems not novel and exists in networks/methods for other domains. I personally think this is not critical as long as the method works well for this specific domain.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Q1
“most of the design in the proposed network seems not novel and exists in network/method for other domains. I personally think this is not critical as long as the method is working well for this specific domain..”
A1
We appreciate your thoughtful response and the opportunity to address your concerns more comprehensively. As you have rightly pointed out, we have drawn inspiration from network architectures and self-supervised learning methods in the field of artificial intelligence in designing DeepLayout. However, our direct adaptation of these methodologies to circuit layout representation learning revealed three critical technical challenges:
1) Distinct Graph Structures of Circuit Layouts
An integrated circuit's graph structure, derived from Verilog netlists, differs from traditional graph learning domains. It incorporates gate-level physical properties during technology mapping (e.g., TSMC N3), layout positions, and routing interconnect details—unlike point clouds or social networks, as shown in the table below:
| | Social Network Graph | Point Cloud | Circuit Graph Structure |
|----------------|----------------------|-------------|-------------------------|
| **Node Attributes** | Yes | No | Yes (height, width, power) |
| **Geometric Information** | No | Yes | Yes (x, y coordinates) |
| **Edges** | Yes | No | Yes (length, resistance, capacitance) |
To address this issue, we propose a novel encoder architecture that jointly learns the topological structure and geometric information of the circuit graph through the combination of Multi-Scale Graph Attention (MSGA) and Heterogeneous Graph Neural Networks (HGNN). By fully extracting these two types of features, DeepLayout generates layout representations applicable to routing prediction, a computationally intensive and critical phase in circuit design.
2) Limitations of Self-Supervised Learning in Circuit Layouts
Self-supervised learning in computer vision and graph learning relies on inherent labels for restoration tasks, but this approach has limitations in circuit layout learning. First, circuit graphs have low redundancy, making it hard to restore masked node attributes without extra guidance. Second, layout representation learning aims to predict routing-phase quality, but traditional node-masking methods fail to capture routing characteristics.
| | MAE | GraphMAE2 | DeepLayout |
|---------------------|-----------|-----------|-----------------------------------------|
| **Input** | Image | Graph | Heterogeneous Graph |
| **Masking Unit** | Patch | Independent Nodes | Set of Nodes in the Grid |
| **Masking Ratio** | 75% | 50% | 50% |
| **Is there guidance?** | No | No | Yes (Topological edges on the graph serve as guidance) |
| **Number of Masks** | Fixed | Varies with the number of nodes on the graph | Varies with the layout size |
| **Supervisory Signal** | Masked Pixels | Node Attributes | Node Geometric Coordinates + Routing Probability Algorithm (RPA) |
To overcome these challenges, we propose a self-supervised learning approach tailored for circuit layouts. Our key design points include: 1) introducing a grid-based masking strategy, 2) masking only the geometric information of nodes while retaining edge information as guidance for restoration, and 3) designing two distinct supervised tasks. Notably, supervised task #2 utilizes the easily computable Routing Probability Algorithm (RPA) feature as a supervisory signal to capture essential aspects of the routing process.
3) Downstream Tasks in Circuit Layout Representation Learning
In graph representation learning, downstream tasks typically involve learning global graph attributes or predicting node classifications, without the need for a specialized decoder. However, downstream tasks in circuit layout learning span multiple levels, from individual nodes to 2D representations.
| **Hierarchy** | **Downstream Tasks in Graph Learning** | **Downstream Tasks in Circuit Graph Learning** |
|---------------|----------------------------------------|----------------------------------------------------|
| **Node** | Node Classification | Line length of each net after routing |
| **Edge** | Whether an edge exists or not | Timing prediction after routing |
| **Whole** | Graph Classification | Overall power consumption prediction |
| **2D** | None | Layout congestion prediction, DRC violation prediction | | Summary: This paper proposes a representation learning framework for backend circuit design by integrating GNNs and spatial transformers to capture both topological connectivity and geometric distribution of circuits. Also the authors propose a self-supervised learning method based on mask-based autoencoder for layout representation to improve sample efficiency. The results show that the proposed method outperformed SOTA methods in congestion and post-routing wire length estimation.
Claims And Evidence: The proposed Deeplayout provides a general representation learning framework for backend circuit design that captures key circuit attributes without requiring task-specific engineering. The proposed masking strategy preserves key geometric and topological circuit characteristics so that it effectively reconstructs the routing prediction. Tables 2 and 3 provide quantitative comparisons showing Deeplayout outperforming SOTA models in congestion and post-routing wirelength estimation.
Methods And Evaluation Criteria: The authors evaluated Deeplayout using two downstream tasks, congestion prediction and post-routing wirelength estimation. Also the performance of Deeplayout was compared to four other existing methods.
Theoretical Claims: The paper modeled the circuits as heterogeneous graphs for general circuit representation adaptable to diverse physical design downstream tasks. The author claims that a masked autoencoder can effectively pre-train circuit layouts, reducing the need for labeled data. The paper also suggests masking strategy and shows that a 50% masking ratio achieves the best trade-off between feature learning and model generalization.
Experimental Designs Or Analyses: The study uses a large-scale public database of IC designs for real-world industrial applications. The authors carried out performance comparison across baselines, ablation study on mask ratio, and provided both statistical and visualization results.
Supplementary Material: There’s no supplementary material.
Relation To Broader Scientific Literature: Application-wise, the paper extends prior work on front-end circuit representation learning to backend design. Methodology-wise, it leverages a masked autoencoder, contrastive learning, and graph-based techniques to enhance layout representation.
Essential References Not Discussed: It could mention other graph-based learning techniques in EDA beyond CircuitGNN and discuss RL-based methods for backend placement to strengthen the background.
Other Strengths And Weaknesses: Strengths:
-The paper effectively acknowledges the multi-level dependencies in EDA workflows, considering both upstream and downstream tasks.
-General-purpose representation learning eliminates the reliance on task-specific models, improving flexibility.
-Strong empirical results demonstrate that the proposed method outperforms existing SOTA approaches.
-Well-structured experimental design, with comprehensive benchmarking against multiple baselines.
Weakness:
-Scalability concerns are not addressed.
Other Comments Or Suggestions: Swap Figure 1 and Figure 2 for better readability.
Typo in Page 2, line 93: “Therefor” → “Therefore”.
Text and details in Figure 5 are too small to read clearly.
Visualizations of congestion predictions in Figure 7 are not clear. Please consider improving contrast and resolution.
Questions For Authors: How does DeepLayout perform on larger circuits with millions of components?
What are the computational resource requirements for pretraining DeepLayout?
Can DeepLayout generalize to other layout tasks (e.g., DRC prediction, power estimation)?
Do the authors plan to extend this to full-chip routing prediction?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Q1
It could mention other graph-based learning techniques in EDA beyond CircuitGNN and discuss RL-based methods for backend placement to strengthen the background.
A1
We sincerely appreciate your valuable suggestions. In the revised manuscript, we will augment the related work section with a discussion of graph-based learning methods in EDA and reinforcement learning techniques for physical backend design. If we have missed any related papers, please feel free to point them out; we will incorporate them in the final version.
Q2
Swap Figure 1 and Figure 2 for better readability.
Typo in Page 2, line 93: “Therefor” → “Therefore”.
Text and details in Figure 5 are too small to read clearly.
Visualizations of congestion predictions in Figure 7 are not clear. Please consider improving contrast and resolution.
A2
We sincerely appreciate your valuable suggestions. We will incorporate these improvements in the final manuscript, including: 1) making the specified revisions to the content; 2) enhancing the clarity of all visualizations.
Q3
"How does DeepLayout perform on larger circuits with millions of components?"
A3
We sincerely appreciate your valuable suggestion. However, preparing larger circuit design data is extremely time-consuming. As a result, we have not been able to complete all the experiments related to large circuit designs at present. We will make every effort to update the experimental data related to large designs in the following days and discuss them with you.
Q4
"What are the computational resource requirements for pretraining DeepLayout?"
A4
The pre-training phase of DeepLayout is performed on an 8 A800 machine, achieving a per-epoch training time of 16 hours. These implementation specifics will be incorporated into the methodology section of our paper.
Q5
"Can DeepLayout generalize to other layout tasks (e.g., DRC prediction, power estimation)?"
A5
Thank you for your valuable suggestions. We have additionally designed post-routing timing prediction task that is directly related to evaluating post-routing layout quality.
Post-Routing Timing Prediction:
Timing is directly correlated with the chip's performance. The encoder of DeepLayout is initialized with pre-trained weights, while the decoder employs a specially designed graph neural network. We fine-tune the network using regression loss and perform timing prediction after few-shot fine-tuning. Following common practice in related literature, we report widely adopted metrics including Pearsonr, R², and MAE. The results are presented below.
| Model | 5 | 5 | 5 | 10 | 10 | 10 | 20 | 20 | 20 |
|---|---|---|---|---|---|---|---|---|---|
| | Pearsonr | R2 | MAE | Pearsonr | R2 | MAE | Pearsonr | R2 | MAE |
| timergcn | 0.4269 | 0.1434 | 0.1391 | 0.4432 | 0.1647 | 0.1382 | 0.5421 | 0.2598 | 0.1285 |
| DeepLayout| 0.7644 | 0.5046 | 0.0970 | 0.7974 | 0.5893 | 0.0877 | 0.8101 | 0.5933 | 0.0833 |
These experimental results further demonstrate that our proposed pre-training methodology and network architecture can effectively support downstream tasks for characterizing post-routing layout performance.
Due to the time-consuming nature of data preparation and network training, we were unable to complete all the experiments related to DRC prediction within the limited number of days. We will make every effort to update the experimental data related to large-scale DRC in the following days and discuss it with you.
Concerning the power prediction task: The current public datasets we employ lack power-related labels and critical information such as input vectors or toggle rates. We therefore designate power prediction as a primary downstream task for our future work.
Q6
"Do the authors plan to extend this to full-chip routing prediction?"
A6
We acknowledge the reviewer’s question regarding whether our prediction method operates on full-chip designs. Our approach indeed performs post-routing performance prediction on complete circuit chips. Specifically:
1.Full-Chip Input Representation: Throughout all stages—pre-training, fine-tuning, and testing—DeepLayout processes complete circuit graphs that represent the entire chip design.
2.Post-routing prediction target: In DeepLayout, the labels for all downstream tasks are extracted after the routing process. In other words, the current objective of DeepLayout is to make routing predictions.
This distinguishes our work from partial or tile-based prediction approaches, ensuring relevance for real-world chip design optimization. | Summary: This paper proposed a mask-based approach for circuit layout representation learning. Specifically, a grid-based partition is dedicated to dealing with the mask operation on layout. Two tasks are utilized to illustrate the potential, including wirelength estimation and congestion prediction. The results outperform previous approaches.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims are involved.
Experimental Designs Or Analyses: Yes. The experiments are conducted on public benchmarks and compared with previous methods. Meaningful data is reported.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: Not applicable.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Congestion and wirelength are typical proxies for ultimate metrics of a chip, such as power, performance, and area. Conducting experiments targeting these ultimate metrics may further strengthen the paper.
Other Comments Or Suggestions: 1. Line 92, "Therefore"
2. Algorithm 1, line 13, should the index i be involved?
Questions For Authors: 1. A 50% mask ratio leads to the best performance, and not the higher the better. Any analysis or insights for this?
2. I'm wondering if the performance difference has any potential correlation to the backbone networks used.
3. Robustness. For a specific mask ratio, if multiple experiments are conducted, what's the variation in the performance?
4. Will the code be publicly available for reproduction?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: R1
Q1:
"Congestion and wirelength ... strength the paper."
A1
We have additionally designed post-routing timing prediction task that is directly related to evaluating post-routing layout quality.
Timing is directly correlated with the chip's performance. The results are presented below.
| Model | 5 | 5 | 5 | 10 | 10 | 10 | 20 | 20 | 20 |
|---|---|---|---|---|---|---|---|---|---|
| | Pearsonr | R2 | MAE | Pearsonr | R2 | MAE | Pearsonr | R2 | MAE |
| timergcn | 0.4269 | 0.1434 | 0.1391 | 0.4432 | 0.1647 | 0.1382 | 0.5421 | 0.2598 | 0.1285 |
| DeepLayout| 0.7644 | 0.5046 | 0.0970 | 0.7974 | 0.5893 | 0.0877 | 0.8101 | 0.5933 | 0.0833 |
Area prediction is unnecessary since post-routing layout area remains nearly unchanged from placement, allowing direct calculation.
The current public datasets we employ lack power-related labels and critical information such as input vectors or toggle rates. We therefore designate power prediction as a primary downstream task for our future work.
Q2:
Line 92, "Therefore"
Algorithm 1, line 13, should the index i be involved?
A2:
We appreciate your identification of this typo in the manuscript. Additionally, in Line 12, "i" should be corrected to "vi," and in Line 13, "vk" should be updated to "vi." These corrections will be reflected in the final version of the paper.
Q3
"A 50% mask ratio ... for this?"
A3:
We hypothesize that this difference arises due to both the nature of circuit data and our specialized masking strategy:
From the data aspect, images tend to exhibit a high level of information redundancy, while circuits contain rich information among sub-circuits that is not easy to restore from neighbouring regions.
From the pre-training method aspect, DeepLayout masks nodes within selected grid cells, where each grid cell is significantly smaller than the image patches (e.g., 16×16 in MAE). Meanwhile, each grid cell contains multiple graph nodes, different from the individual-node masking style of graph SSL.
Thus, the optimal 50% mask ratio emerges as an intermediate value between these two modalities. Due to character limitations, we had to shorten the content. If you have other questions, we can discuss them below.
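The cell-level masking described in A2's answer above (hide all nodes inside randomly chosen grid cells, rather than masking individual nodes) could be sketched as follows. This is a hypothetical illustration, not the authors' implementation; `grid_mask`, its arguments, and the cell-indexing scheme are our own assumptions:

```python
import numpy as np

def grid_mask(node_xy, grid, ratio, rng):
    """Mask whole grid cells of layout nodes (hypothetical sketch).

    node_xy: (n, 2) node positions normalized to [0, 1)
    grid:    number of cells per axis
    ratio:   fraction of occupied cells to mask (e.g. 0.5)
    Returns a boolean array, True where the node is masked.
    """
    cell = np.minimum((node_xy * grid).astype(int), grid - 1)  # cell index per node
    cell_id = cell[:, 0] * grid + cell[:, 1]                   # flatten (row, col) to one id
    occupied = np.unique(cell_id)                              # only cells that contain nodes
    k = int(round(len(occupied) * ratio))
    masked_cells = rng.choice(occupied, size=k, replace=False) # pick cells, not nodes
    return np.isin(cell_id, masked_cells)
```

Because the unit of masking is a cell, every node inside a chosen cell is hidden together, removing local sub-circuit context at once; this differs from the per-node masking of standard graph SSL, which may explain the intermediate optimal ratio.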
Q4
"I'm wondering ... networks used."
A4
Ablation study of encoder modules
Congestion prediction
| Method | Pearsonr (5) | MAE (5) | RMSE (5) | SSIM (5) | Pearsonr (10) | MAE (10) | RMSE (10) | SSIM (10) | Pearsonr (20) | MAE (20) | RMSE (20) | SSIM (20) |
|--|--|--|--|--|--|--|--|--|--|--|--|--|
| train from scratch | 0.1154 | 0.0508 | 0.0874 | 0.1428 | 0.3047 | 0.0289 | 0.0423 | 0.2560 | 0.3398 | 0.0225 | 0.0386 | 0.4513 |
| wo/ MSGA | 0.4142 | 0.0288 | 0.0684 | 0.7152 | 0.4271 | 0.0135 | 0.0363 | 0.7755 | 0.4391 | 0.0132 | 0.0356 | 0.7777 |
| wo/ HGNN | 0.4138 | 0.0143 | 0.0376 | 0.7670 | 0.4298 | 0.0324 | 0.0728 | 0.7048| 0.4240 | 0.0335 | 0.0756 | 0.6996 |
| DeepLayout | 0.4270 | 0.0146 | 0.0379 | 0.7718 | 0.4383 | 0.0130 | 0.0360 | 0.7820 | 0.4418 | 0.0121 | 0.0349 | 0.7909 |
Post-routing net length prediction
| Method | Pearsonr (5) | MAE (5) | RMSE (5) | Pearsonr (10) | MAE (10) | RMSE (10) | Pearsonr (20) | MAE (20) | RMSE (20) |
|--|--|--|--|--|--|--|--|--|--|
|train from scratch|0.3342|0.1332|0.1737|0.3048|0.1330|0.1765|0.3233|0.1313|0.1764|
|wo/MSGA|0.3634|0.1279|0.1712|0.3743|0.1270|0.1706|0.3848|0.1290|0.1684|
|wo/HGNN|0.3593|0.1311|0.1705|0.3667|0.1326|0.1701|0.3694|0.1303|0.1698|
|DeepLayout|0.3704|0.1305|0.1695|0.3806|0.1290|0.1689|0.3961|0.1270|0.1682|
Q5
"Robustness. ... performance?"
A5
In response, we adopted a 50% mask ratio and conducted three repeated pre-training runs, followed by evaluations on two downstream tasks.
Congestion prediction:
| Samples | Pearsonr (Mean ± Std) | MAE_1D (Mean ± Std) | RMSE_1D (Mean ± Std) | SSIM (Mean ± Std) |
|----|--|---|--|---|
| 5 samples | 0.4270 ± 0.0002 | 0.0138 ± 0.0000 | 0.0371 ± 0.0001 | 0.7743 ± 0.0005 |
| 10 samples| 0.4393 ± 0.0015 | 0.0128 ± 0.0002 | 0.0356 ± 0.0002 | 0.7828 ± 0.0020 |
| 20 samples| 0.4412 ± 0.0011 | 0.0123 ± 0.0001 | 0.0351 ± 0.0001 | 0.7890 ± 0.0013 |
Post-routing wire length prediction
| Samples | Pearsonr (Mean ± Std) | MAE_1D (Mean ± Std) | RMSE_1D (Mean ± Std) |
|--|---|--|---|
| 5 samples | 0.3688 ± 0.0039 | 0.1331 ± 0.0009 | 0.1698 ± 0.0004 |
| 10 samples | 0.3830 ± 0.0027 | 0.1316 ± 0.0020 | 0.1690 ± 0.0002 |
| 20 samples | 0.3949 ± 0.0016 | 0.1313 ± 0.0012 | 0.1681 ± 0.0004 |
Q6
Will the code ... reproduction?
A6
We promise to open-source the DeepLayout code upon paper acceptance. | null | null | null | null | null | null |
Shielded Diffusion: Generating Novel and Diverse Images using Sparse Repellency | Accept (poster) | Summary: The paper addresses the issues of limited diversity and replication of training images in text-to-image diffusion models by introducing a method to ensure that generated images are novel and diverse using sparse repellency.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Yes, the proposed method is validated through comprehensive experiments.
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: The concept of repellency mechanisms has been explored in other areas of machine learning, such as anomaly detection and outlier detection.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper introduces the SPELL method, a novel approach to improve the diversity of generated images in text-to-image diffusion models by using sparse repellency terms.
2. The proposed SPELL significantly enhances the diversity of generated images without substantially impacting image quality.
3. The paper provides extensive experiment evaluations.
Weaknesses:
1. The SPELL method introduces additional computational overhead due to the repellency terms, potentially making it less efficient than simpler diffusion models.
2. The authors should compare the performance of the proposed SPELL method with varying text prompts, as the diversity of generated images can be significantly influenced by changes in the prompts.
3. The paper lacks organization, making it difficult to follow and identify the key points.
Other Comments Or Suggestions: Some visualization results should be moved to the main text.
Questions For Authors: I noticed artifacts in Figures 25 and 26, suggesting that the introduced SPELL method may negatively impact the accuracy of the generated images.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for your assessment that our experimental evaluation is extensive. To add to this, we add the additional experiments you have requested below, namely a runtime analysis and SPELL’s performance under text prompts of varying length. We also provide further results, including qualitative examples, in the responses to the other reviewers.
### The SPELL method introduces additional computational overhead due to the repellency terms, potentially making it less efficient than simpler diffusion models.
A runtime analysis can be found in Table 5 in the appendix; we also post it here for your convenience. It shows that **the generation time with and without SPELL is equal in the diversity setup**. This is because SPELL only adds one matrix operation to calculate distances and one to add the repellency terms, which is small compared to the computational cost of the diffusion backend.
| Model | Generation time per image (seconds) |
|--------------------------------|-------------------------------------|
| Baseline (Simple Diffusion) | 2.93 ± 0.12 |
| Simple Diffusion + SPELL | 2.94 ± 0.13 |
### The authors should compare the performance of the proposed SPELL method with varying text prompts
Thank you for the suggestion, **we have added your suggested experiment here** https://imgur.com/a/T6KdRAR . We sliced the CC12M dataset into groups of prompts with increasing length. As can be seen in the plot, SPELL consistently increases the diversity of both short and long prompts compared to the baseline model, without losing on the CLIP-Score, which measures prompt adherence.
### The paper lacks organization, making it difficult to follow and identify the key points.
Thank you for the feedback, we will add intuitions with the additional space of the camera-ready version of the paper.
**Thank you again for your feedback.** We would be happy to learn if the additional experiments and explanations resolve your concerns and update your evaluation of our paper. | Summary: This paper introduces an application of negative guidance on reference datasets (e.g., {training, validation} datasets) to enhance diversity and generate novel samples that differ from reference images. The paper leverages geometric steering to guide samples away from the reference dataset. To minimize the performance degradation caused by the proposed method, the smallest perturbation is applied to pull samples outside a ball centered at the datapoint in the reference dataset closest to the current sample. This approach enables the use of diverse images across various forms of latent diffusion, such as text-to-image and image generation. The experimental results demonstrate that the proposed method maintains generation performance while producing more diverse images.
## **Update after rebuttal**
I appreciate the authors' graphical explanation of how SPELL performs unconditional generation. While I would have liked to zoom in on the limitations and complete failure cases of this paper by varying the hyperparameters and reference datasets, the authors focused on presenting favorable scenarios where SPELL operates smoothly.
Overall, I am satisfied with the current state of this paper because the authors addressed effective negative guidance, which leads to diverse samples while also generating specific samples away from the reference images. Taking all aspects of the manuscript and the response into account, I maintain my original score.
Claims And Evidence: The paper’s claim is clear and easy to understand. This paper introduces a novel negative guidance approach with a geometric interpretation. This geometric approach seems clearer and more straightforward than particle guidance and CADS. Empirical evidence shows its superiority over previous methods
However, its limitations lie in its applications to latent diffusion models rather than pixel-space diffusion. This method heavily relies on the hyperparameter $r$, which acts as a reference data shield. However, the suggested value and its analysis are only based on latent diffusion. I believe it needs to validate the proposed method with pixel-space diffusion to enhance its visibility.
Methods And Evaluation Criteria: This paper compares the proposed model to baselines using appropriate benchmarks and metrics. To assess generating performance, they not only measure {FID}, but also demonstrate {Recall, Coverage, Density} to evaluate the diversity of the generated images by the proposed method.
Theoretical Claims: This paper doesn’t rigorously offer a theoretical framework for negative guidance using diffusion models. However, their conditions and the magnitude of negative guidance are based on a geometric interpretation. Their messages are understandable in terms of measuring distances and determining how close the current sample is to reference data points.
Experimental Designs Or Analyses: Its experimental design evaluates the performance of the model on diversity of generated samples. In particular, the hyperparameter $r$ plays a crucial role in determining both diversity and quality of the generated samples. However, as shown in Table 1, this method does not effectively enhance diversity while preserving image quality. Notably, the FID score is consistently sacrificed in favor of improving recall.
Supplementary Material: I reviewed the supplementary material, including additional experiments, and also saw the derivation of the smallest perturbation that steers the current sample out of the ball centered at a specific data point.
Relation To Broader Scientific Literature: This paper suggests that negative guidance can be interpreted geometrically.
Essential References Not Discussed: It would be beneficial to review the paper [1], which discusses negative guidance aiming to mitigate memorization. Furthermore, [2] also addresses dynamic negative guidance while preserving generation performance and effectively negating a portion of the data distribution.
[1] Chen, Chen, Daochang Liu, and Chang Xu. "Towards memorization-free diffusion models." CVPR2025.
[2] Koulischer, Felix, et al. "Dynamic Negative Guidance of Diffusion Models." ICLR2025
Other Strengths And Weaknesses: ### **Weakness**
- This paper demonstrates that the proposed method consistently improves recall metrics, but the FID score is not maintained. However, the paper does not provide an explanation for this issue. While the FID score also considers the diversity of generated samples, the numerical evidence presented in Table 1 suggests that the generated images lack plausibility. I believe that this phenomenon calls for further investigation.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for your interest in the theory behind SPELL that enables its outperformance. We delve into the theoretical connections between SPELL and other methods below, as well as the experimental results for pixel-space diffusion. We also provide further results in the responses to the other reviewers.
### I believe the paper needs to validate the proposed method with pixel-space diffusion to enhance its visibility.
We already validate on a pixel-space diffusion model, Simple Diffusion, as noted in lines 239 and 292. Please refer to Table 1 for its results. In general, **SPELL works independently of the space the diffusion model diffuses in**, whether it is a latent VAE space or a direct RGB diffusion, and whether it is an unguided, classifier-guided, or classifier-free guided model. This sets it apart from alternative approaches like CADS and IG.
### This paper doesn’t rigorously offer a theoretical framework for negative guidance using diffusion models. However, their conditions and the magnitude of negative guidance are based on a geometric interpretation.
SPELL’s repellency directions and magnitudes are based on a theoretical framework, which is explained in appendices A to C. The general problem we tackle is that we want to bring the probability / density that a diffusion model generates (an r-Ball around) a point $z_k$ to exactly zero, i.e. $P_0(B_r(z_k)) = 0$. The interesting part about this proof is that we need to choose the direction and magnitude of the intervention despite not knowing the density at $P_0(z_k)$ of the original diffusion model, since diffusion models do not provide density estimates. In Appendix B **we use Tweedie’s formula for a theoretical derivation of the exact direction and magnitude required to achieve this, leading to Equation (5)**. While in the main paper, we start with a geometric interpretation of Equation (5), we also give a conservative-field interpretation in Appendix C to connect SPELL to other negative-guidance based frameworks, and show SPELL’s theoretical relation to Diffusion Posterior Sampling in L165-219. We hope that these rigorous theoretical analyses help future researchers connect and develop SPELL as well as negative guidance.
### The proposed method consistently improves recall metrics, but the FID score is not maintained. However, the paper does not provide an explanation for this issue.
We attribute the decrease in FID to the fact that it is a reference-based metric that compares distributional similarity with a reference dataset. As SPELL increases the diversity and explores more of the image manifold, its output distribution is intentionally broader than the reference dataset that the diffusion models without SPELL are trained to match as closely as possible. This issue occurs with all reference-based metrics (precision, coverage, recall, density), which is why we report reference-free metrics like the Vendi and CLIP score. **SPELL’s trade-off is, however, better than for other diversity-inducing metrics** (CADS, IG, PG in Fig. 3), even for reference-based metrics. __To demonstrate the preservation of image quality, we’ve generated additional high-resolution SD3 + SPELL images here__ https://imgur.com/a/Dsh7gxb https://imgur.com/a/Uz0LhOz https://imgur.com/a/VGcvqNE https://imgur.com/a/zrZRuV2 https://imgur.com/a/cuSAADZ .
### It would be beneficial to review the paper [1] that discusses negative guidance aiming for mitigation of memorization. Furthermore, [2] also addresses dynamic negative guidance while preserving generation performance and effectively negating a portion of the data distribution.
Thank you for those interesting pointers! We will discuss them in the camera-ready version of the paper (this year's reviewing form does not allow to upload revised pdfs).
**Thank you again for your time during the busy rebuttal period.** We would be happy to learn if the additional experiments and explanations resolve your concerns and update your evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying my initial review. However, I have a question that remains unclear after the author’s rebuttals.
### Experiments on primitive diffusion models without conditional information
All experimental situations strongly favor conditional information, such as “text prompts” and “class labels.” I kindly request that “SPELL” be integrated with simple generation models, either “EDMv1” or “Improved DDPM,” without conditional information. Alternatively, simple examples (two-moons, star, etc.) would demonstrate this approach effectively; the modality is not a concern (the authors don’t need to validate this experiment on high-resolution images).
In addition to scientific writing, presenting limitations on the proposed method is crucial for contributing to academic fields. Based on my experience, conditional generation can guide a sampling trajectory more easily than vanilla diffusion models.
I believe that demonstration in various forms of diffusions models and presentation of failure cases or limitations strengthen this paper effectively.
---
Reply to Comment 1.1.1:
Comment: Thank you for your fast response. We are happy to provide the additional experiment below.
### Two-moons experiment for unconditional generation
Sure, SPELL can also be used on unconditional diffusion models, because it does not rely on the conditioning signal (unlike CADS and IG). **We provide your requested two-moons example here** https://imgur.com/a/JlRi8uR . The two sets of samples are generated with the exact same noise seeds, once without and once with SPELL. It shows that SPELL has a blue-noise like pattern that covers the distribution better than non-diversified sampling. We will add this toy experiment to the camera-ready version to better demonstrate how SPELL behaves. The reason we have used text-to-image and class-to-image models in the paper so far is that they are the most popular ones and present an interesting challenge where we need to become diverse without losing prompt/class adherence (hence the CLIP score tradeoff curves).
### Limitations section
Thank you for the suggestion. We agree that openly discussing limitations strengthens the contribution, which is why we discuss limitations, e.g., in L39-40 (abstract), L308-312 (left), L406-411 (right), 427-433 (left), L425-439 (right). With the additional page of the camera-ready version, we will introduce a dedicated limitations section to bundle these and motivate future research avenues.
**Thank you again for your time and fast interaction.** We would be happy to learn if these and the previous additions update your evaluation of our paper. | Summary: This paper introduces Shielded Diffusion, which aims to generate images outside of protected sets. These protected sets may include protected images, other data in the current batch, or data used during training. The authors determine whether the diffusion trajectory is expected to fall into protected sets during the sampling process, dynamically triggering a repellency term to ensure the sampling endpoint stays away from protected sets. The proposed method is training-free, capable of enhancing generation diversity while reducing the risk of model infringement (preventing the generation of training data).
## update after rebuttal
Although the author's reply resolved my concerns to some extent, I still believe that there is some room for improvement in the writing of this paper, as reviewer jFPF also mentioned: "The paper lacks organization, making it difficult to follow and identify the key points."
In addition, in the author's reply, I found that FLUX+SPELL (https://imgur.com/a/FwJvkhG) seems not to work very well: the difference before and after is relatively small, and it affects the image quality (it seems to have affected the image harmony, such as color temperature and saturation).
Claims And Evidence: This paper claims that the proposed technique enhances the generation diversity of diffusion models while only slightly disturbing the FID. It compares text-to-image and class-conditional diffusion models in Table 1, compares with other diversity-inducing methods in Figure 3, and conducts further Sparsity Analysis and qualitative result comparisons, which to some extent validate the paper's claim.
Methods And Evaluation Criteria: The method proposed in this paper helps with copyright protection and improves generation diversity, which is meaningful in practical applications.
Theoretical Claims: This paper's geometric explanation for SPELL is reasonable, as it aims to add disturbance to push $\hat{x}_0$ away when it potentially falls within radius r (equations 4 and 5).
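A minimal numerical sketch of this geometric rule may make it concrete: when the predicted clean sample falls inside a shield of radius $r$, the smallest perturbation pushing it out is a radial move to the ball's boundary, and shields farther than $r$ contribute nothing (the ReLU-style sparsity). This is a hypothetical illustration assuming Euclidean distance in the prediction space; `spell_step` and its signature are our own construction, not the paper's code:

```python
import numpy as np

def spell_step(x0_hat, protected, r):
    """Sparse repellency on a predicted clean sample (hypothetical sketch).

    x0_hat:    predicted clean sample, shape (d,)
    protected: shielded reference points, shape (n, d)
    r:         shield radius around each reference point
    """
    diffs = x0_hat - protected                                 # directions away from shield centers
    dists = np.maximum(np.linalg.norm(diffs, axis=1), 1e-12)   # guard against zero distance
    # ReLU weighting: shields farther than r are inactive, so the correction is
    # sparse; each active shield pushes the prediction exactly to its boundary.
    weights = np.maximum(0.0, r - dists)
    active = weights > 0
    if not active.any():
        return x0_hat                                          # most steps: no intervention
    push = (weights[active][:, None] * diffs[active] / dists[active][:, None]).sum(axis=0)
    return x0_hat + push
```

With a single shield, a prediction inside the ball lands exactly on the boundary (the minimal perturbation), while predictions outside are returned unchanged.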
Experimental Designs Or Analyses: This paper conducts experiments on class-to-image and text-to-image diffusion models, measuring multiple metrics including Recall, Vendi, FID, and Coverage. Additionally, it performs Sparsity Analysis, with experimental design and analysis that are reasonable to some extent.
However, I believe it could provide examples and analysis under the SD3 model with higher resolution images and more complex prompts, as practical usage tends to favor the generation of complex scenes and high-resolution images.
Supplementary Material: This paper does not have supplementary materials, but I have reviewed the analysis, pseudocode, and other content in the appendix.
Relation To Broader Scientific Literature: Sparsity is the biggest difference between this paper and other methods [1][2][3] controlling similar diversity-precision trade-offs. Other methods apply disturbances to change diffusion trajectories throughout the entire sampling time steps, while the method proposed in this paper uses ReLU weighting for dynamic activation, adding no disturbance when trajectories are moving toward more diverse directions. This helps ensure quality while improving diversity.
[1] Kynkäänniemi T, Aittala M, Karras T, et al. Applying guidance in a limited interval improves sample and distribution quality in diffusion models[J]. arXiv preprint arXiv:2404.07724, 2024.
[2] Sadat S, Buhmann J, Bradley D, et al. CADS: Unleashing the diversity of diffusion models through condition-annealed sampling[J]. arXiv preprint arXiv:2310.17347, 2023.
[3] Corso G, Xu Y, De Bortoli V, et al. Particle guidance: non-iid diverse sampling with diffusion models[J]. arXiv preprint arXiv:2310.13102, 2023.
Essential References Not Discussed: Based on my understanding of this field, the authors appear to have included most of the relevant literature discussions.
Other Strengths And Weaknesses: Strengths: The method proposed in this paper helps with copyright protection, which is meaningful for mitigating risks in AI-generated content; the proposed method is training-free and does not introduce additional computational overhead.
Weaknesses: This paper is obscure and difficult to understand, especially for non-domain experts. The reading experience is particularly poor in terms of logical flow. I believe the authors should express their content using more accessible, standardized, and logically structured narratives. For example, Sparse Repellency is a core concept of this paper, and the authors should explain this term first.
Other Comments Or Suggestions: In the introduction, the authors mention "This phenomenon is illustrated in Figure 1 for three popular diffusion models, Stable Diffusion 3 (Esser et al., 2024), Simple Diffusion (Hoogeboom et al., 2023) and MDTv2 (Gao et al., 2023)." However, Figure 1 only shows results for Simple Diffusion and MDTv2, with no results for Stable Diffusion 3.
Questions For Authors: 1. Could the authors explain why they did not conduct some experiments on larger resolution images (greater than 256*256)?
2. Could the authors explain why they did not test more complex scenarios beyond "a dog plays with a ball," and whether more diverse generation could be achieved in such cases?
3. Is there additional computational overhead introduced, and could they provide an analysis of this overhead?
4. Could the authors distill the paper's greatest contribution, rather than listing 5 lengthy points as in the Introduction?
I will maintain a rejection stance; further explanation from the authors would help me improve my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review and for acknowledging that SPELL tackles meaningful practical applications while remaining training-free. We are happy to provide the experimental results for high-resolution images, complex prompts, and compute overhead that you have requested below.
### Examples and analysis under the SD3 model with higher resolution images and more complex prompts
**We have added 1024x1024 images with prompts of varying length, generated both via SD3 and FLUX**, under the following links: https://imgur.com/a/Dsh7gxb https://imgur.com/a/Uz0LhOz https://imgur.com/a/VGcvqNE https://imgur.com/a/zrZRuV2 https://imgur.com/a/cuSAADZ https://imgur.com/a/oHouASK https://imgur.com/a/3mI2ynU https://imgur.com/a/knqPFhc https://imgur.com/a/FwJvkhG https://imgur.com/a/qigDza4 . We’ve generated the images one-by-one from the same seeds like in Figure 5. The results show that while the first samples of the unconditional model are already diverse enough (top rows) and SPELL does not act due to its sparsity, it starts diversifying the images in the later generations (bottom rows). This holds both for simple and complex prompts.
The experiments in our paper also already use complex prompts, with the CC12M prompts ranging from 15 to 491 characters. **We have added an ablation on SPELL’s diversity for different prompt lengths** here https://imgur.com/a/T6KdRAR . It shows that SPELL increases the diversity of the generated images even in the longest 90-100% percentiles of the prompts compared to the baseline model, without compromising on the prompt adherence measured by the CLIP score.
### I believe the authors should express their content using more accessible, standardized, and logically structured narratives.
Thank you for the feedback! We will highlight the narrative in the camera-ready version (this year's reviewing form does not allow to upload revised pdfs).
### Figure 1 only shows results for Simple Diffusion and MDTv2, with no results for Stable Diffusion 3.
We apologize for this editing issue. **We provide a Stable Diffusion 3 example here**, https://imgur.com/a/qNYX5tX , on top of the other SD3 examples linked above.
### Experiments on larger resolution images (greater than 256*256)
We have added results for 1024x1024 images, see the links above.
### More complex scenarios beyond "a dog plays with a ball,". Can more diverse generations could be achieved in such cases?
The “a dog plays with a ball” prompt was only used in Figure 5 to give a qualitative example. **Our quantitative experiments already use CC12M prompts with 15-491 characters, and we have added qualitative results for longer prompts in the links above**, following your recommendation. We will outline this in the camera-ready version.
### Is there additional computational overhead introduced, and could they provide an analysis of this overhead?
We provide an analysis of computational overhead in Table 5 in the Appendix for the diversity use-case, and also reposted here. As can be seen, **SPELL does not increase the generation time during diverse generation**, since it adds only one matrix operation to calculate distances and one to calculate the repellency direction (and in many timesteps does not even do this thanks to its sparsity (see Appendix H)) which is a small computation compared to the diffusion backbone.
| Model | Generation time per image (seconds) |
|--------------------------------|-------------------------------------|
| Baseline (Simple Diffusion) | 2.93 ± 0.12 |
| Simple Diffusion + SPELL | 2.94 ± 0.13 |
### Could the authors distill the paper's greatest contribution
We provide SPELL, a sparse and training-free mechanism to push diffusion trajectories away from already-generated or protected images. We will make this clearer in the abstract and introduction, thank you for your feedback!
**Thank you again for your reviewing time.** We would be happy to learn if the additional experiments and explanations resolve your concerns and update your evaluation of our paper. | Summary: This paper proposes sparse repellency (SPELL) to prevent diffusion models from generating images in a set of L2 balls. SPELL can be used to (1) protect diffusion models from generating training images; (2) encourage diversity between multiple generations. Extensive experiments shows the superiority of SPELL.
Claims And Evidence: Claim #1: SPELL is a new effective method for preventing diffusion models from generating samples inside shields (which are essentially L2 ball neighborhoods of points in a reference set).
Evidence: The methodology and discussion in Section 3 explains the SPELL method and its relationship to related works very well.
Claim #2: SPELL is empirically effective in terms of shielded generation of diffusion models.
Evidence: The experimental results in Section 4 show that SPELL can improve diversity with a little drop in image quality. In Section 4.3, the authors compare SPELL with other methods and present a better Pareto front. In Section 4.6, SPELL decreases the proportion of samples inside shields for EDMv2 on ImageNet-1k, demonstrating its potential in large-scale settings and data privacy protection.
Methods And Evaluation Criteria: The method and evaluation criteria looks reasonable to me.
Theoretical Claims: I didn't redo the proof by myself, but the connection between SPELL and DPG/PG makes sense.
Experimental Designs Or Analyses: 1. Why does SD3 + SPELL (rows 2 and 5 in Table 1) have significant degradation in FID and FD_DINO? Does this imply a drop in image quality?
2. Can image protection with SPELL be applied to text-to-image models? Compared with the method in Section 4.6, T2I may need retrieval augmented generation to apply SPELL. Could the authors comment on this?
Supplementary Material: I went through the entire appendix.
Relation To Broader Scientific Literature: The paper proposes a new method for training image protection and diversity enhancement during diffusion model inference. I regard these as two important problems for diffusion models. The proposed method, SPELL, although simple, is a sensible approach to both problems that may enlighten future research in this direction.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Overall, I think the submission proposes a simple, reasonable method that is suitable for two important problems in sampling diffusion models, training image protection and diversity enhancement. The authors also did extensive experiments to support their method.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Which model does Figure 3 compare? There are 3 text-to-image models listed in the setting part: SD3, Latent Diffusion, and Simple Diffusion.
2. Can the SPELL method be applied to rectified flow-based models? For example, FLUX.
3. The assumption of $B_k$ being disjoint in L#165-183, page 4, limits the range of $r$ (if $r \rightarrow \infty$, the $B_k$ will inevitably overlap). Can the authors provide a principled way of choosing $r$ for a specific reference set?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for recognizing that SPELL is a simple way to tackle two current problems in diffusion models at once. We are happy to share explanations and your requested FLUX experiments below.
### Can image protection with SPELL be applied to text-to-image models?
Yes, there is no difference in applying SPELL for image protection on class-to-image, noise-to-image, or text-to-image diffusion models. Unlike approaches like Interval Guidance and CADS, *SPELL is generalized away from the conditioning signal*, i.e., it does not rely on classifier- or classifier-free guidance, but acts on the diffusion trajectory and the output space of any diffusion model itself, even if the conditioning signal tries to push the diffusion trajectory close to a protected image. **SPELL can thus protect any-to-image diffusion models**. The reason why we demonstrate this on the class-to-image model MDTv2 in Section 4.6 as opposed to a text-to-image model is simply that MDTv2’s exact training dataset and preprocessing are public (ImageNet-1k) so that we can protect the exact training data, whereas text-to-image models like SD3 are trained on proprietary dataset splits.
### Why does SD3 + SPELL have significant degradation in FID and FD_DINO? Does this imply a drop in image quality?
We attribute the decrease in FID to the fact that it is a reference-based metric that compares distributional similarity with a reference dataset. As SPELL increases the diversity and explores more of the image manifold, its output distribution is intentionally broader than the reference dataset that SD3 without SPELL is trained to match as closely as possible. This issue occurs with all reference-based metrics (precision, coverage, recall, density), which is why we report reference-free metrics like the Vendi and CLIP score. **SPELL’s trade-off is, however, better than that of other diversity-inducing methods** (CADS, IG, PG in Fig. 3), even in reference-based metrics. **We’ve also generated additional high-resolution SD3 + SPELL images**, generated one-by-one with the same seeds as in Figure 5, to show the image quality: https://imgur.com/a/Dsh7gxb https://imgur.com/a/Uz0LhOz https://imgur.com/a/VGcvqNE https://imgur.com/a/zrZRuV2 https://imgur.com/a/cuSAADZ
### Can the SPELL method be applied to rectified flow-based models? For example, FLUX.
Yes, SPELL can be applied to any diffusion model. **We have added examples for FLUX1.schnell** in 1024x1024: https://imgur.com/a/oHouASK https://imgur.com/a/3mI2ynU https://imgur.com/a/knqPFhc https://imgur.com/a/FwJvkhG https://imgur.com/a/qigDza4 Since FLUX1 already generates more diverse images by itself compared to SD3, SPELL’s sparsity means that it is activated on fewer pictures, and in more details, like background objects or colors. If you are interested in one-step diffusion models, we note that SPELL should be used with a number of diffusion steps greater than 1, because it is applied after each diffusion step. If there is only one step, it will be applied effectively after the generation has already ended, which could introduce artifacts. We will add this discussion to the camera-ready version of the paper to outline SPELL’s applicability.
### Which model does Figure 3 compare?
Figure 3 shows results for Latent Diffusion, as denoted in the caption.
### The assumption of $B_k$ being disjoint in L165-183, page 4, limits the range of $r$ (if $r \rightarrow \infty$, $B_k$ will inevitably overlap). Can the authors provide a principled way for choosing $r$?
In the protection case, the question of how big a shield a user wants to create around a protected image is ultimately a user choice. Should they use a radius so large that shields start overlapping, and should a trajectory point exactly to the middle of two shields (and inside the radius of both shields), their terms cancel each other out, as we openly address in L179-180. In this case, the overlapping shields $z_1$ and $z_2$ can be merged with a combined radius of $r_\text{merged} = r + d(z_1, z_2)$ and $\frac{z_1+ z_2}{2}$ as shield center to retain the protection guarantee, as mentioned in L673 in the appendix. However, in practice we did not observe this to be necessary – even in the protection experiment on all 1.2M ImageNet-1k images, we did not have issues with overlapping shields. Due to the high-dimensional space, it is unlikely that a trajectory points exactly to the middle of two shields, where their terms would cancel each other out exactly. Rather, their terms point in slightly different directions, so that SPELL guides the trajectory points away from both shields throughout the diffusion. So, **a user is able to choose a shield radius as large as suits their needs**. As a side note, for the diversity use case, we provide a principled way of choosing $r$ in L779 in the appendix.
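For concreteness, the merging rule quoted above can be sketched as follows. This is a toy illustration with hypothetical function names, not the authors' implementation; it only checks that a merged ball with radius $r + d(z_1, z_2)$ centered at the midpoint does contain both original shields:

```python
import numpy as np

def merge_shields(z1, z2, r):
    # Merged shield per the rebuttal's rule: midpoint center,
    # radius r_merged = r + d(z1, z2).
    d = np.linalg.norm(z1 - z2)
    return (z1 + z2) / 2.0, r + d

def ball_contains_ball(center, R, z, r):
    # B(center, R) contains B(z, r) iff ||center - z|| + r <= R.
    return np.linalg.norm(center - z) + r <= R + 1e-12

# Two overlapping shields: d(z1, z2) = 1 < 2r = 1.6.
z1, z2, r = np.array([0.0, 0.0]), np.array([1.0, 0.0]), 0.8
center, R = merge_shields(z1, z2, r)
assert ball_contains_ball(center, R, z1, r) and ball_contains_ball(center, R, z2, r)
```

Note that this radius is conservative: it over-covers the union of the two balls, which is consistent with retaining the protection guarantee.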
**Thank you again.** We would be happy to learn if the additional experiments and explanations update your evaluation of our paper. | null | null | null | null | null | null |
BAME: Block-Aware Mask Evolution for Efficient N:M Sparse Training | Accept (poster) | Summary: This paper introduces a novel method, BAME, which maintains consistent sparsity throughout the N:M sparse training process. Specifically, BAME ensures sparse forward and backward propagation while iteratively performing Loss-aware Mask Adaptation (LMA) and Oscillation-aware Block Freezing (OBF) to adapt the mask. This approach not only optimizes the N:M mask but also ensures training efficiency. The authors have conducted extensive experiments to validate the effectiveness of BAME.
## update after rebuttal
I think my concerns are addressed and I am happy to keep my initial positive rating.
Claims And Evidence: Yes, the authors provide clear experiment results to support the theoretical foundations of LMA and OBF in BAME.
Methods And Evaluation Criteria: 1. The proposed method effectively addresses the current inefficiency of N:M sparsity methods.
2. The evaluation criteria align well with the image classification benchmarks, including CIFAR-10 and ImageNet.
Theoretical Claims: I have reviewed all the theoretical analyses and found them to be sound and correct.
Experimental Designs Or Analyses: The overall experimental design and analyses are well-structured and reasonable.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: This paper effectively sets the stage for the problem of N:M sparsity and the efficiency limitations of current approaches, leading logically into the proposed solution with BAME.
Essential References Not Discussed: From the best of my knowledge, there is no essential reference that needs further discussion.
Other Strengths And Weaknesses: Strengths:
1. The motivation behind BAME, maintaining consistent sparsity during both forward and backward propagation, is very attractive. This contrasts sharply with existing methods that rely on dense backward propagation for weight updates, which can be computationally expensive. This is a necessary prerequisite for future work on efficient N:M training.
2. Extensive empirical evidence supports the effectiveness of the BAME method, particularly in keeping state-of-the-art performance while drastically reducing the training FLOPs.
3. The mathematical analysis behind Loss-aware Mask Adaptation is solid. The entire paper is well-written and easy to understand.
Weaknesses:
1. The proposed metric in Loss-aware Mask Adaptation is somewhat common in traditional sparse methods, which limits the novelty of the paper to some extent. Nevertheless, it still makes a clear contribution to the N:M sparsity literature.
2. The analysis of SAD is too coarse-grained in the method part. The authors did not explore the mask variations across different layers in detail, which could further optimize the OBF mechanism.
Other Comments Or Suggestions: The authors only conducted experiments on image classification tasks. It would be beneficial to perform additional experiments on tasks such as object detection to validate the effectiveness of BAME in other domains.
Questions For Authors: As previously mentioned, does the variation in masks differ across different layers of the network? If so, could designing different OBF parameters for each layer based on these variations lead to better performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your careful review and constructive comments. Please kindly see our responses to your questions below.
**Q1**: The proposed metric in Loss-aware Mask Adaptation is somewhat common in traditional sparse methods, which limits the novelty of the paper to some extent.
**A1**: Thanks for this very insightful comment and we would like to share some explanations here. While we acknowledge that pruning-and-reviving is indeed a well-established technique in sparse training literature, BAME introduces two key innovations that differentiate it from prior work: (1) block-wise loss-aware mask adaptation (LMA) for N:M sparsity, and (2) oscillation-aware block freezing (OBF) to stabilize frequently-oscillating N:M blocks. Together, these novel components enable BAME to significantly reduce training overhead while achieving state-of-the-art performance in N:M sparse network training. We appreciate your comments and hope the above discussion clears up any misconceptions regarding our contribution in introducing BAME.
**Q2**: The analysis of SAD is too coarse-grained in the method part.
**A2**: We sincerely appreciate this insightful suggestion. Following your valuable comment, we conducted additional analysis by tracking the average SAD variation per layer throughout training, with the results for sparsifying ResNet-50 at the 1:16 pattern listed below (due to the length limit, we randomly select some layers to show).
| Layer | 1 | 7 | 18 | 22 | 39 | 42 | 45 | 46 |
| ----- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| SAD | 23.1 | 22.9 | 18.5 | 15.3 | 12.5 | 5.7 | 3.8 | 3.9 |
The above results reveal that deeper layers indeed exhibit significantly higher SAD values compared to shallower ones. Inspired by your suggestion, we further implemented a preliminary improvement: we linearly scaled the OBF parameter from 0 to 0.5 across network layers (shallow to deep). This modification demonstrates promising results on ImageNet with ResNet-50 under 1:16 sparsity, as shown in the following table:
| Method | N:M Pattern | Top-1 Accuracy |
| ------------- | ----------- | -------------- |
| OBF | 1:16 | 72.0 |
| Layerwise OBF | 1:16 | 72.3 |
While such a heuristic modification shows potential, a more adaptive and elegant parameter allocation scheme could yield further performance gains, which we leave as a promising future work. We sincerely appreciate your expert suggestion in highlighting this valuable research avenue.
**Q3**: It would be beneficial to perform additional experiments on tasks such as object detection to validate the effectiveness of BAME in other domains.
**A3**: Following your professional suggestion, we further exploit the generalization ability of BAME on the object detection and instance segmentation tasks of the COCO benchmark[2]. The results are delineated as follows:
- Results on object detection tasks with Faster-RCNN[3] as the backbone.
| Model | Method | N:M | mAP |
| ------ | -------- | ---- | ---- |
| F-RCNN | Baseline | - | 37.4 |
| F-RCNN | SR-STE | 2:4 | 38.2 |
| F-RCNN | BAME | 2:4 | 38.5 |
- Results on instance segmentation tasks with Mask-RCNN[4] as the backbone.
| Model | Method | N:M | Box mAP | Mask mAP |
| ------ | -------- | ---- | ------- | -------- |
| M-RCNN | Baseline | - | 38.2 | 34.7 |
| M-RCNN | SR-STE | 2:4 | 39.0 | 35.3 |
| M-RCNN | BAME | 2:4 | 39.2 | 35.4 |
We will incorporate these results into our final version. Thanks for the valuable suggestion again.
[2] Microsoft COCO: Common objects in context. In ECCV, 2014.
[3] Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015.
[4] Mask R-CNN. In ICCV, 2017.
Your time and effort in reviewing our paper are genuinely appreciated. If there are any additional questions or points that require clarification, we would be more than delighted to engage in further discussions. | Summary: This paper presents a novel approach for preserving sparsity in DNNs during training, with a focus on N:M sparsity. The authors introduce BAME (Block-Aware Mask Evolution), a technique that ensures both forward and backward propagation remain sparse while iteratively pruning and regrowing weights within predefined blocks. Unlike traditional methods that rely on dense backward propagation—often computationally costly—BAME offers a more efficient alternative. Experimental results on CIFAR and ImageNet show that BAME achieves performance comparable to or better than state-of-the-art methods while significantly reducing training FLOPs.
## update after rebuttal
My concerns are addressed. I will keep my rating.
Claims And Evidence: The claims are well-supported by both theoretical analysis and empirical results. The authors provide a detailed theoretical proof for the efficacy of LMA, along with extensive experiment to demonstrate the performance of BAME compared with other methods.
Methods And Evaluation Criteria: Yes, the proposed BAME method is well-suited for the problem of N:M sparse training in DNNs. The use of block-level mask evolution is innovative and addresses the efficiency limitations of traditional methods. The evaluation criteria, including benchmark datasets like CIFAR and ImageNet, are appropriate and widely accepted in the field, ensuring the results are meaningful and comparable to prior work.
Theoretical Claims: Yes, the theoretical claims regarding the LMA method appear to be correct. The proofs are well-structured and logically sound. No issues were identified in the theoretical analysis.
Experimental Designs Or Analyses: The experimental designs are sound and well-executed. The authors conduct extensive experiments across multiple datasets (CIFAR, ImageNet) and network architectures to validate the effectiveness of BAME.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key contributions of this paper build on prior work in N:M sparse training. Specifically, the authors address the efficiency limitations of existing methods that rely on dense backward propagation, which is computationally expensive. By introducing block-level mask evolution, BAME offers a more efficient alternative that aligns with recent trends in reducing the computational cost of training DNNs. The paper effectively situates itself within the broader literature by comparing BAME to state-of-the-art methods and demonstrating its advantages.
Essential References Not Discussed: No, the paper adequately covers the relevant literature and provides sufficient comparisons with other methods.
Other Strengths And Weaknesses: **Strengths:**
1. The paper is well-structured and clearly written, making it easy to follow.
2. **Innovative Methodology**: The paper introduces a fresh perspective on N:M sparse training by focusing on block-level sparsity evolution. This approach effectively balances efficiency and performance, showcasing significant potential for reducing the computational costs associated with dense training.
3. **Theoretical Rigor**: The theoretical proofs are thorough and provide a solid foundation for understanding the design principles behind the LMA metric. Additionally, the visualization of mask oscillations effectively validates the design rationale of OBF.
4. **Empirical Evidence**: The experimental results are extensive and demonstrate the efficacy of BAME across various network architectures and datasets. The results show that BAME can achieve substantial reductions in training FLOPs without compromising model accuracy.
**Weaknesses:**
1. While BAME demonstrates lower training overhead compared to MaxQ, its performance appears to be slightly inferior to MaxQ in certain cases.
2. Another minor limitation of the proposed method is its reliance on hyperparameters. Future work could explore ways to automate N:M sparse training to reduce the need for such specific hyperparameter tuning.
Other Comments Or Suggestions: No
Questions For Authors: I keep up with the literature in this area.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive and motivating comments. Please kindly see our response to your comment below.
**Q1**: While BAME demonstrates lower training overhead compared to MaxQ, its performance appears to be slightly inferior to MaxQ in certain cases.
**A1**: We appreciate this insightful comment. We acknowledge that our method may exhibit marginally inferior performance to MaxQ in some scenarios. However, we would like to highlight that our approach achieves a significant reduction in training FLOPs—from 0.91× to 0.59× for training 2:4 sparse ResNet-50 compared to dense training. Thus, while maintaining comparable performance to MaxQ, our method offers substantial advantages in training efficiency.
**Q2**: Another minor limitation of the proposed method is its reliance on hyperparameters.
**A2**: We appreciate this constructive feedback and acknowledge that BAME does involve several hyperparameters. However, we would like to clarify that most of these hyperparameters follow established conventions from prior sparse training methods [1,2]. As such, they are considered well-established defaults in the field and typically do not require extensive tuning—for instance, the update interval ΔT is fixed at 100, following standard practice. Despite this, we recognize the potential effectiveness of automatically performing N:M sparse training without choosing hyperparameters, and we earmark this aspect for future work.
[1] Rigging the lottery: Making all tickets winners. In ICML, 2020
[2] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. In NeurIPS, 2021.
We sincerely appreciate the time and diligence you’ve taken to participate in the review of our paper. If you have further questions, we are more than glad to discuss with you.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns are addressed.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your strong support and great interest in our work. We are also delighted that our rebuttal has effectively addressed your questions. | Summary: The paper introduces a novel approach called BAME (Block-Aware Mask Evolution) for training N:M sparse networks in an efficient manner. The authors argue that prior works often rely on dense gradient updates, which leads to considerable overhead. Instead, BAME keeps the network consistently N:M sparse throughout both the forward and backward passes. The core idea comprises two components: Loss-Aware Mask Adaption (LMA): and Oscillation-aware Block Freezing (OBF). Experimental results on CIFAR-10 and ImageNet show that BAME achieves accuracy comparable to or surpassing prior N:M sparsity methods, while consuming significantly fewer training FLOPs
Claims And Evidence: The paper’s main claim is that BAME’s consistent N:M sparsity (applied throughout forward and backward passes) can significantly reduce training overhead without sacrificing accuracy. The authors present empirical results on CIFAR-10 and ImageNet using models such as ResNet, MobileNet, and DeiT, demonstrating that BAME achieves on-par or better accuracy compared to prior N:M sparsity methods at substantially lower training FLOPs.
One limitation, however, is that these claims rely on theoretical FLOPs counts. The authors do not provide actual training or inference speed measurements, such as latency, throughput, or wall-clock time, which would give more direct evidence of real-world performance gains.
Methods And Evaluation Criteria: The proposed method applies periodic “pruning-and-regrowth” within each N:M block, guided by LMA and OBF that freezes blocks prone to mask instability.
- Top-1 Accuracy
- Training/Test FLOPs
- N:M sparsity
While these metrics are typical and demonstrate clear benefits (especially in theoretical FLOPs), the paper omits direct measurements of actual training/inference speed (e.g., wall-clock time, throughput, latency on real hardware). Including such real-world measurements would strengthen the argument that the reduced FLOPs indeed translate to practical speedups and resource savings.
Theoretical Claims: The submission does not present extensive theoretical proofs beyond first-order Taylor approximations for analyzing loss impact.
Experimental Designs Or Analyses: - The experiments are well-structured: multiple N:M patterns (2:4, 1:4, 1:16) are tested on CIFAR-10 and ImageNet with widely used architectures. The ablation studies around hyperparameters (α, β, ΔT) and scheduling (when to start and stop mask adaptation) support the validity of the method.
- The results consistently show superior or on-par accuracy with lower training FLOPs. The authors also compare with relevant baselines (SR-STE, ASP, LBC, Bi-Mask, MaxQ).
- The experiments seem sound overall, but additional real-world speedup measurements (wall-clock time) on hardware supporting N:M sparsity (e.g., A100 GPU) might further demonstrate the actual training speed benefits.
Supplementary Material: There is no supplementary material
Relation To Broader Scientific Literature: - BAME aligns with the growing literature on sparse training (RigL, SET, SNIP) and N:M sparsity (ASP, SR-STE, LBC, Bi-Mask).
- The core novelty is combining a loss-aware local pruning/regrowth scheme with an oscillation detection approach specifically tailored for N:M blocks.
Essential References Not Discussed: I did not identify any critical missing references that would drastically affect the paper’s context.
Other Strengths And Weaknesses: Strengths:
- Presents a well-motivated approach for block-level mask evolution.
- Simple idea but demonstrates strong empirical results across multiple networks/datasets.
- Reduces training FLOPs while retaining or improving accuracy compared to prior methods.
Weaknesses:
- More real-world training speed tests (beyond theoretical FLOPs) would be helpful to confirm the practical efficiency on hardware that supports N:M acceleration.
- The choice and tuning of certain hyperparameters (α,β,ΔT) might require domain knowledge, although the paper provides some guidance through ablation.
Other Comments Or Suggestions: I would recommend clarifying the initialization and final freeze phases in even more detail: how exactly are the pruned weights re-initialized? Are they set to zero or restored from historical values, etc.?
Questions For Authors: - For newly “revived” weights, are they always re-initialized to zero or do they retain their old (pre-pruned) value [1]? If always zero-initialized, does that hamper potential recovery for large updates? If not, how do you track historical states?
- Have you measured actual training speed on an NVIDIA A100 or similar GPU to confirm that the 0.29× or 0.39× training FLOPs factor translates into a similar wall-clock reduction? If so, please provide more context.
- You mention that OBF (Oscillation-aware Block Freezing) identifies blocks with high-frequency mask changes and freezes them to improve training stability. However, one might wonder if these frequent updates are actually a signal that those blocks are “sensitive” or “important.” In other words, rather than straightforwardly freezing such blocks, could there be a more nuanced approach—one that better leverages this apparent sensitivity while still mitigating the risk of oscillation?
[1] CHEX: CHannel EXploration for CNN Model Compression. CVPR 2022
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive and motivating comments. Please kindly see our response to your concerns below.
**Q1**: Actual training or inference speed measurement of BAME.
**A1**: We appreciate this constructive feedback. We first clarify that **reporting N:M sparsity patterns and theoretical FLOPs (rather than empirical acceleration ratios) is a standard practice in the N:M sparsity community** [1–3]. This is because N:M sparsity is inherently a hardware-software co-design problem: algorithmic works in this domain typically focus on ensuring that weight matrices satisfy N:M sparsity constraints during forward/backward propagation, while retaining the performance. Critically, once N:M sparsity is achieved in either phase (forward or backward), the acceleration effect becomes deterministic given certain hardware support[4,5]. For instance, [5] reports a 2.17× speedup for 2:8 sparse training when both forward and backward passes are sparse (equivalent to our BAME, T-mask[2], Bi-mask[3]), while methods with dense backward passes (e.g., SR-STE[1]) achieve only 1.33× speedup. Currently, the only open-source hardware framework is the NVIDIA Ampere Sparse Tensor Core[4], which supports 2:4 inference. Below are our measured inference latency results on an NVIDIA A100 GPU (batch size = 512):
| Method | N:M | Latency per Batch (ms) | Top-1 Acc. |
| - | - | - | - |
| ResNet-50 | - | 59.88 | 77.4 |
| BAME | 2:4 | 37.19 | 77.4 |
This demonstrates practical inference acceleration (1.61×) for BAME. Regarding training latency, existing N:M training frameworks are neither open-source nor easily deployable without specialized hardware expertise. Due to limited resources, **we prioritized algorithmic innovation (aligning with community norms) and report only theoretical FLOPs**. While FLOPs remain a standard metric for cross-method comparison, we will add the inference latency results and clarify this point in our final version to avoid any misunderstanding.
**Q2**: Theoretical proofs beyond first-order Taylor approximations for analyzing loss impact.
**A2**: We fully agree that more rigorous theoretical derivations exist—such as second-order Hessian-based analysis. However, we employ first-order approximation due to its efficiency, as the derived metric w*g_w is readily available during training. This ensures training efficiency, and first-order approximations have also been widely validated as effective in reflecting loss impact[8], though exploring beyond them remains a promising future direction. Thanks very much for this insight.
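As a concrete illustration of why the first-order metric is cheap, the sketch below (hypothetical names; a simplification, not the authors' exact LMA rule) scores each weight by the first-order Taylor estimate $|w \cdot g_w|$, using quantities already available during training, and keeps the top-$N$ per block of $M$ weights:

```python
import numpy as np

def nm_mask_first_order(w, g, n=2, m=4):
    """Keep the n weights with the largest first-order saliency |w * g|
    inside each consecutive block of m weights (simplified sketch)."""
    saliency = np.abs(w * g).reshape(-1, m)
    keep = np.argsort(saliency, axis=1)[:, -n:]   # top-n indices per block
    mask = np.zeros_like(saliency)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return mask.reshape(w.shape)

w = np.array([1.0, 0.1, -2.0, 0.05, 0.3, 3.0, -0.2, 0.01])
g = np.ones_like(w)
print(nm_mask_first_order(w, g))  # [1. 0. 1. 0. 1. 1. 0. 0.]
```

Each block of 4 retains exactly 2 nonzeros, i.e., the tensor stays 2:4 sparse, which is the constraint hardware like the Ampere Sparse Tensor Core accelerates.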
**Q3**: Hyperparameters might require domain knowledge.
**A3**: We acknowledge that BAME does involve several hyperparameters, yet most follow established sparse training recipes[8] and require minimal tuning. Despite this, we recognize the potential of automatically performing N:M training without choosing hyperparameters, and we earmark this aspect for future work.
**Q4**: Clarify the initialization and final freeze phases in even more detail.
**A4**: We re-initialize the pruned weights to their historical values rather than setting them to zero. This is directly motivated by Equation 6 of our manuscript, which evaluates the impact of restoring a pruned weight on the loss by leveraging its historical state. To store the historical states, we directly store and freeze the values of pruned weights. We will add the above clarification to our final version.
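The restore-from-history behaviour described in A4 can be sketched as follows. This is a toy NumPy illustration with hypothetical names, not the authors' training code: a copy of each weight is frozen at prune time, and a revived weight is restored from that copy instead of being zero-initialized.

```python
import numpy as np

class MaskedWeights:
    """Toy sketch: pruned weights are frozen (a copy is kept), and revived
    weights are restored from that copy instead of being zero-initialized."""
    def __init__(self, w):
        self.w = w.astype(float).copy()
        self.history = self.w.copy()       # frozen values of pruned weights
        self.mask = np.ones_like(self.w)

    def update_mask(self, new_mask):
        pruned  = (self.mask == 1) & (new_mask == 0)
        revived = (self.mask == 0) & (new_mask == 1)
        self.history[pruned] = self.w[pruned]    # freeze value at prune time
        self.w[pruned] = 0.0
        self.w[revived] = self.history[revived]  # restore historical value
        self.mask = new_mask.astype(float).copy()

mw = MaskedWeights(np.array([1.0, 2.0, 3.0, 4.0]))
mw.update_mask(np.array([1, 0, 1, 1]))   # prune index 1 (its value 2.0 is frozen)
mw.update_mask(np.array([1, 1, 0, 1]))   # revive index 1, prune index 2
print(mw.w)  # [1. 2. 0. 4.] -- index 1 came back as 2.0, not 0.0
```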
**Q5**: The relationship between frequent updates and block importance.
**A5**: Thank you for sharing this insightful comment. Indeed, mask oscillation has been a persistent research topic in the community. Zhou et al. [1] showed that mask fluctuations correlate with loss reduction, where excessive variations yield suboptimal results. Their SR-STE solution increases weight decay on pruned weights to stabilize masks. The point you raised presents an intriguing research direction, as both our OBF and SR-STE are essentially heuristics to avoid mask oscillation. Perhaps these sensitive blocks require specialized metrics to assess weight significance that avoid fluctuation without compromising performance. Addressing this challenge demands substantial algorithmic innovation, which we reserve for future work.
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. Should you have any further inquiries, please let us know and we would be more than delighted to engage in further discussion with you.
[1] Learning N:M fine-grained structured sparse neural networks from scratch. In ICLR, 2021
[2] A Provable and Efficient Method to Find N:M Transposable Masks. In NeurIPS, 2021
[3] Bi-directional Masks for Efficient N:M Sparse Training. In ICML, 2022.
[4] Nvidia a100 tensor core gpu architecture, 2020
[5] Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design. In IEEE TCAD, 2023.
[6] Rigging the lottery: Making all tickets winners. In ICML, 2020. | null | null | null | null | null | null | null | null |
Non-Asymptotic and Non-Lipschitzian Bounds on Optimal Values in Stochastic Optimization Under Heavy Tails | Accept (poster) | Summary: This paper aims at proving confidence bounds over the minimum of a stochastic optimisation problem, ie, find a high probability confidence interval for the value of $F(x^\star) = \min E[f(x,\xi)]$, given sampled datapoints $(\xi_1,\dots,\xi_N)$. Thus, this task typically boils down to proving a lower bound and an upper bound. While significant work has been done on this topic, existing work often either have a problematic dependence in the Lipschitz constant of $f$ or the metric entropy of the domain (which might explode in practice) or require light-tail assumptions and / or convexity.
The present work addresses these issues by proving three new non-asymptotic confidence bounds (NCBs): (i) an NCB for non-Lipschitz convex problems under heavy-tailed assumptions, (ii) for the first time, a non-Lipschitz and non-convex NCB under light-tail (sub-exponential) assumptions, and (iii) an NCB in a non-convex heavy-tailed setting, at the cost of additional assumptions.
The proof techniques rely on two existing techniques for constructing confidence bounds, namely sample average approximation (SAA) and diametrical empirical risk minimisation (DRM), which are extended beyond their classical assumptions.
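To make the SAA mechanism concrete, here is a toy numerical illustration (my own example, not from the paper): for $f(x,\xi) = (x-\xi)^2$ with $\xi \sim \mathcal{N}(0,1)$, the true optimal value is $\min_x E[(x-\xi)^2] = 1$, while the SAA optimal value is the biased sample variance, whose expectation $(N-1)/N$ falls below it; this downward bias of the SAA value is what SAA-based lower confidence bounds exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_value(xi):
    # Optimal value of min_x (1/N) sum_i (x - xi_i)^2: the minimizer is the
    # sample mean, and the optimal value is the (biased) sample variance.
    return np.mean((xi - xi.mean()) ** 2)

N, reps = 20, 2000
mean_saa = np.mean([saa_value(rng.standard_normal(N)) for _ in range(reps)])
print(mean_saa)  # close to (N-1)/N = 0.95, below the true optimal value 1.0
```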
Claims And Evidence: The three main theoretical claims (ie, the three new confidence bounds mentioned in the summary above) are clearly presented. Full proofs are provided in the appendix (see the theoretical claims section).
To improve clarity, it could have been helpful to quickly discuss in the main text the origin of the constants appearing in the bound, especially the ones coming from the Marcinkiewicz-Zygmund inequality, used in the appendix, and from the moments of the Gaussian distribution.
Methods And Evaluation Criteria: To construct their new non-asymptotic confidence bounds, the authors build on the existing SAA and DRM techniques; the approach is well-motivated and makes sense in this context.
The main assumptions are made on the tails of the distributions of $f$ and its gradient (plus a convexity assumption for Theorem 4.1), and they are pertinent to the problem.
Assumption 4.8, on which the last result relies, seems significantly stronger. Even though the authors quickly argue that it can be satisfied in machine learning settings, it would probably require more discussion and/or examples.
Theoretical Claims: I did review most of the theoretical claims apart from the details of the proof of Theorem 4.9. I did not notice any obvious mistake.
I have only one remark: in the introduction, when discussing main result 2, you mention that the growth of $\Delta_{p,q}$ leads to a $\mathcal{O}(\log d)$ dependence in the bound. This is correct, but I did not see a proof; it could be worth adding it to the discussion.
Experimental Designs Or Analyses: N/A
Supplementary Material: I reviewed the proofs of the main results in the supplementary material.
Relation To Broader Scientific Literature: The paper is related to existing literature on confidence bounds, sample average approximation and diametrical empirical risk minimisation, which seems to be well-discussed by the authors, even though it is not my main domain of expertise.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: In general, I think that the limitations (finite diameter, Assumption 4.8, ...) could be discussed at greater length, especially since one extra page is allowed.
Other Comments Or Suggestions: Were you aware that you were allowed 8 pages of main text?
Besides that, here are a few minor remarks:
- $\sigma_g$ is mentioned in the introduction, but the notation for the gradients has not yet been introduced.
- Line 184 second column: the $\forall x \in \mathcal{X}$ does not mean anything, as there is already $\inf_{x \in \mathcal{X}}$.
Questions For Authors: 1. Your proofs strongly rely on the assumption that the domain $\mathcal{X}$ has bounded diameter; is it possible to relax this assumption? Are there negative results suggesting that this assumption cannot be relaxed? In general, I think this assumption should be discussed a bit more, as it is not clear that it can be satisfied in practical settings.
2. Regarding remark 4.3, does this mean that your results imply the light-tailed results? Is it true under the same assumptions as the existing light-tailed result? Are the constants worse?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's very careful and insightful evaluation.
1. Re: "Other Comments": We will make sure to make full use of the 8 page limit. To that end, we will present part of our additional numerical results and add more discussions in response to the comments of this reviewer. We will also fix the minor remarks in the full paper.
2. We performed numerical experiments on two sets of problems: (i) for convex problems, we considered a stochastic linear program; (ii) for nonconvex problems, we considered a neural network for regression. Please see more detail about our partial results on the convex case in the response to reviewer [ptuU]. Here, we only present some partial results for the nonconvex case. More specifically, we considered the problem of training a two-layer neural network with a data generating process $Y=g(X)+\epsilon$, where $g$ is some unknown function to be reconstructed through the observations of $N$-many sample points of $(X,Y)$. For the purpose of simulation, we specified $g$ to be a two-layer neural network with LeakyReLU activation function and randomly simulated weights. Here, $\epsilon$ is assumed to follow a Student's $t$-distribution to simulate heavy-tailed underlying randomness. We tested the SAA and DRM problems corresponding to training a neural network to minimize mean squared error with different combinations of the sample size $N$ and problem dimensionality $d$. We then compared the width of the region between confidence bounds (CB) under significance level $\alpha=0.1$ with the benchmark (referred to as "Benchmark") by Oliveira and Thompson (2023). Table 3 below presents the average ratio of the widths between our proposed CB and the benchmark CB, calculated as $(U_{proposed}-L_{proposed})/(U_{Benchmark}-L_{Benchmark})$, averaged over 10 independent random replications. Here $U$ and $L$ denote the upper confidence bound and lower confidence bound, respective to their subscripts.
**Table 3**: Average ratio of the length of proposed CB and benchmark CB. (A number smaller than 1 means that the proposed CB has a smaller width and thus is better).
|Sample Size \(N\) | dim\(d\)=441 | Sample Size \(N\) | dim\(d\)=961 | Sample Size \(N\)| dim\(d\)=1681 |
|------------|--------|---------|---------|---------|---------|
| N = 300 | 0.0296 | N = 500 | 0.0230 | N = 500 | 0.0093 |
| N = 340 | 0.0182 | N = 600 | 0.0116 | N = 600 | 0.0087 |
| N = 380 | 0.0176 | N = 700 | 0.0119 | N = 700 | 0.0089 |
| N = 420 | 0.0166 | N = 800 | 0.0112 | N = 800 | 0.0084 |
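The data-generating process and the width-ratio metric described above could be sketched as follows (an illustrative reconstruction, not the authors' code: the network sizes, weight scales, and LeakyReLU slope are assumptions):

```python
import numpy as np

def simulate_regression_data(N, d_in, hidden, rng, df=3.0):
    """Simulate Y = g(X) + eps, where g is a two-layer network with
    LeakyReLU activation and random weights, and eps follows a
    Student's t-distribution (heavy-tailed noise), as in the rebuttal's
    setup. Scales and the 0.01 slope are illustrative choices."""
    W1 = rng.standard_normal((d_in, hidden))
    w2 = rng.standard_normal(hidden)
    X = rng.standard_normal((N, d_in))
    h = X @ W1
    h = np.where(h > 0, h, 0.01 * h)  # LeakyReLU (assumed slope 0.01)
    y = h @ w2 + rng.standard_t(df, size=N)
    return X, y

def width_ratio(L_prop, U_prop, L_bench, U_bench):
    """Ratio (U_proposed - L_proposed) / (U_Benchmark - L_Benchmark)
    reported (averaged over replications) in Table 3."""
    return (U_prop - L_prop) / (U_bench - L_bench)
```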
3. Re: "Methods And Evaluation Criteria": We will add more thorough discussions on this assumption. One illustration to include is that this assumption is directly related to the assumption of a positive margin separating the data population in classification applications; e.g., Assumption 3.3 in Yuan and Gu (2020), "Generalization error bounds of gradient descent for learning over-parameterized deep ReLU networks" (AAAI). In this example, the assumed constant margin can imply the said perturbed settings.
4. Re: "Claims & Evidence": We will add discussions in the full paper to illustrate the origin of the constants. In particular, "2.74" originates from Marcinkiewicz's inequality.
5. Re: "Theoretical": We had a typo in the mentioned statement. We meant to claim that the quantity grows at the rate of $O(\ln d)$ when the true solution lies within a $1$-norm ball with dimension-independent radius. This is the common assumption of weak sparsity in statistics and applied optimization. Under this assumption, the dependence on $d^{2/q}$ can be bounded by a constant, as $q$ can be specified as $\ln d$; in the same case, $\Delta_{p,q}$ would only grow at the rate of $O(\ln d)$. We are sorry about this oversight and will carefully discuss it in the revision.
6. Re: Q1: Benchmark results (e.g., Guigues et al., 2017, among others) all assume a bounded feasible region. In most practical problems, there exists at least one finite optimal solution $\mathbf x^*$. When this happens, one may use the distance $\mathcal D$ from $\mathbf x^*$ to a known feasible solution $\mathbf x_0$ as the diameter of the feasible region (since the problem remains equivalent if one were to impose the additional constraint $\Vert \mathbf x-\frac{\mathbf x_0+\mathbf x^*}{2}\Vert_q\leq \mathcal D$). The question then boils down to how to estimate $\mathcal D$. To that end, one often may resort to cross-validation. The boundedness assumption is also practical in training neural networks, as one could expect weight normalization.
7. Re: Q2: Our results imply the same rate as the light-tailed results. Yet, to accommodate heavy tails, the constants are relatively elevated; there is an additional constant $K\leq 2.74$, which results from Marcinkiewicz's inequality. This quantity can hardly be improved without a fundamental improvement on Marcinkiewicz's inequality.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for taking the time to answer my questions and performing additional experiments.
I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the careful evaluation, insightful comments, and encouraging remarks. | Summary: The paper studies confidence bounds for the optimal value of a stochastic optimization problem. For the convex case, the paper derives bounds for a set of heavy-tailed assumptions, and the bounds do not depend on the Lipschitz constant of the function. For the non-convex case, the paper provides bounds for both light tailed and heavy-tailed assumptions, from a solution obtained via a DRM formulation.
Claims And Evidence: - The claims in the paper seem all sound.
Methods And Evaluation Criteria: N/A. The paper is theoretical and doesn't require empirical evaluation.
Theoretical Claims: I checked some of the proofs in the appendix (Thm 4.1 and 4.5) and didn't see any problems, although some of the details should be mentioned more clearly. For example, the $\epsilon$-net argument in Theorem 4.5 should be explained in more detail.
Experimental Designs Or Analyses: N/A. The paper is theoretical and doesn't require empirical evaluation.
Supplementary Material: Yes. I checked the proofs.
Relation To Broader Scientific Literature: I find it hard to appreciate the results in the paper. More precisely, I don't really understand the challenge to derive bounds in this paper. Once having assumptions 3.1 and 3.2, and the assumption that we have the optimal solution on the samples, it's conceivable that using concentration inequalities related to $p$-moments (if we care about heavy tails) should be sufficient (here it's Marcinkiewicz's inequality in appendix D).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: It's also hard to evaluate the dependence of the bounds on some of the parameters, such as the dimension and the $\Delta_{p,q}$ term. Are there any lower bounds indicating the best CB one can achieve?
Other Comments Or Suggestions: N/A
Questions For Authors: My general question about this paper is: instead of a black-box solution of the problem on the samples as in Algorithm 1, can there be any improvements to the bounds if we also know the stochastic optimization algorithm, such as SGD? SGD doesn't really fit in the framework of Algorithm 1, but we also have bounds in high probability, even in the heavy-tailed case.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the in-depth evaluation and insightful comments. Citations below follow the same reference list as in our paper.
1. To answer the comment re: "[...]I don't see the challenge to derive bounds[...]", we would like to point out that we made conscious and non-trivial effort to simplify the proof so as to make it more verifiable and readable. Bounding optimal values has been a research question pursued by much of the literature, and heavy tails are a traditionally known challenge for related problems. For instance, earlier work on asymptotic results dates back to, e.g., King & Rockafellar (1993), among many others, and more recent work on non-asymptotic results includes that presented by Guigues et al. (2017) and Oliveira & Thompson (2023). We hope these references serve as evidence that our research question (even in the case of convex problems) is not a trivial one. Indeed, to our knowledge, whether non-Lipschitzian and heavy-tailed bounds for optimal values are admissible even in the convex case has not been well understood thus far. After we connected the dots between the research question and some known mathematical tools (Marcinkiewicz's inequality), it was also to our surprise that our proposed answer and its proof would appear straightforward.
2. We are not aware of a reference that explicitly shows lower bounds on the major quantities in our setting (e.g., non-asymptotic, non-Lipschitz, and heavy-tailed). In the convex case, our rate is almost exactly the same as the benchmark bounds provided by Guigues et al. (2017) --- the only difference is that the dependence on the significance level $\alpha$ is $\ln(1/\alpha)$ in the results of Guigues et al. (2017), while, in contrast, our results grow at the rate of $O(\alpha^{-2/p})$. This change is unavoidable since Guigues et al. (2017) deal with light-tailed problems whose tail is characterized by an exponential decay. Meanwhile, our focus is on heavy-tailed settings where central moments exist up to the $p$-th order. In the nonconvex case, our results do not depend on the Lipschitz constant and thus can often provide sharper (narrower) confidence intervals. (In this regard, please see Tables 1, 2, and 3 for some numerical comparisons provided in our responses to Reviewer 1/ptuU and Reviewer 4/sS3B.)
3. $\Delta_{p,q}$ shows up only in our results for the nonconvex case. An explication of this quantity is provided in Remark 4.6. As long as we can (coarsely) estimate (i) the order of some existent central moment of the underlying randomness, and (ii) the diameter of the feasible region, then $\Delta_{p,q}$ can be estimated. The quantities to be estimated are no more than those in the benchmark results such as Guigues et al. (2017) and Oliveira & Thompson (2023).
4. While many results on SGD have focused on asymptotics, SGD does provide non-asymptotic bounds for optimal values, as studied by Lan et al. (2012). To be consistent with Lan et al. (2012), we reviewed those results under the (alternative) name of mirror descent stochastic approximation (MDSA) in Section 2 of our manuscript. Our SAA-based bounds improve over MDSA in terms of the dependence on the Lipschitz constant. Furthermore, confidence bounds provided by Lan et al. (2012) apply only to the two ends of a distribution spectrum --- namely, (i) the case where only the second moment of the underlying randomness is bounded; and (ii) the case where the underlying randomness is light-tailed (sub-Gaussian). Our findings additionally provide results for cases where the underlying randomness has existent central moments up to the $p$-th order for $p\geq 2$. One may also observe that our bounds present identical rates as MDSA in terms of the dependence on some critical quantities (other than Lipschitz constants), such as sample size and dimensionality. So, perhaps a shorter answer would be that SGD is not known to improve the non-asymptotic confidence bounds from the literature; instead, the existing results seem to only show that SGD-based non-asymptotic confidence bounds would be comparably less appealing. | Summary: The paper presents non-asymptotic bounds on the minimal value of a stochastic optimization problem. The novel contributions are that the resulting bounds do not depend on global Lipschitz constants of the integrand function or the objective and operate for heavy-tailed function value and gradient distributions.
Claims And Evidence: The paper's claims are theoretical and they are proven soundly.
Methods And Evaluation Criteria: They do make "sense", although the main issue I find is that there is no discussion of estimating the problem constants that appear in the confidence bounds, namely $\sigma_f$, $\sigma_g$, $\mathcal{D}_q$, $\sigma_\psi$, etc.
Theoretical Claims: I read through the proofs briefly, and paid closer attention to Appendix C due to the interest of the bounded surrogate argument for extending light-tailed analyses to possibly heavy-tailed distributions.
Experimental Designs Or Analyses: There are no experiments in the paper.
Supplementary Material: Yes, as the supplement mainly contained the proofs.
Relation To Broader Scientific Literature: Yes, in that the bounds avoid metric entropy and global Lipschitz factors. However, I find these claims slightly overstated. I believe the non-Lipschitzian aspect can be framed as having bounds that depend on the average gradient norms (or their deviations) as in Assumption 3.1, as opposed to the maximum gradient norms, which more clearly highlights the technical differences. Furthermore, the part that I find more strange is the discussion of metric entropy as an undesirable dependence, but the results of Theorem 4.5 and 4.9 include explicit dimension factors which are hidden in the statement of the results. It is not clear that these factors scale any better (or possibly worse) with metric entropies of various feasible sets.
Essential References Not Discussed: I believe at least some of the paper should be related to confidence bounds that arise from min-max methods similar to DRM. A seed reference (and references therein) I would like to see discussed is Duchi et al (2021). While I realize that DRM perturbs the parameters and not the inputs or input distributions, it would be worth it to discuss the technical differences (proof techniques, etc) of these other approaches to confidence bounds.
John C. Duchi , Peter W. Glynn , Hongseok Namkoong (2021) Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach. Mathematics of Operations Research.
Other Strengths And Weaknesses: **Strengths:** The paper is remarkably well-written and logically ordered.
**Weaknesses:**
My main weaknesses are summarized above.
- A confidence bound paper with no experiments and no ability to estimate the relevant constants seems below the bar, and this is my main concern. While there is clearly a theoretical focus, experiments are not a superfluous detail in this setting, which is motivated by assessing algorithm performance. I would increase the score significantly if these two points were addressed.
- If metric entropy is discussed, then I believe that the dimension dependence (i.e.~on $d$) should be shown in the main results.
Other Comments Or Suggestions: This is minor, but the computations on page 12 and 13 can be made a little easier to follow, perhaps with colors or by breaking up the long blocks of display equations.
Questions For Authors: Can you explain why the bounds in Theorem 4.1 (and other results) grow with a factor $\sqrt{p}$? Why do the intervals widen to infinity if the random variables are nearly bounded? I see this stems from D.1, but additional explanation would be helpful.
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful comments.
1. Re: Relation To Literature: The average/expected norm of the gradient is essentially the (non-central) moment, which typically has two components: **(A)** the norm of the gradient of the expected cost function, and **(B)** the central moment of the gradient of the random cost function. The Lipschitz constant of a (population-level formulation of a) stochastic program is associated with **(A)**, instead of **(B)**. Assumption 3.1 imposes an upper bound on component **(B)**. One can easily construct cases where the Lipschitz constant grows while **(B)** remains unchanged.
2. Theorem 4.1 is free from metric entropy; yet, we do NOT claim Theorems 4.5 and 4.9 are free from metric entropy. We will emphasize this more in the revision. We explicate (and do not hide) the dependence on $d$ in all our mentions of Theorems 4.5 and 4.9. Our results do not depend on Lipschitz constants, which are often critical quantities that may grow (rapidly) with $d$.
3. We will add detailed discussions on Duchi et al. (2021) and related references. As mentioned in Section 2, existing confidence bounds are either asymptotic or non-asymptotic; the focus of our paper is on the latter. In contrast, Duchi et al. (2021) provide novel asymptotic results based on DRO. We will also carefully discuss the differences as suggested by the reviewer.
4. Re Weakness on experiments: We performed experiments on two sets of problems: (i) for convex problems, we considered a stochastic linear program; (ii) for nonconvex problems, we considered a neural network for regression, which is known to entail data with potentially heavy-tailed underlying randomness. Please check our responses to reviewer [ptuU] and reviewer [sS3B] for detailed experiment settings. The full paper will also present the results in plots, where one can observe strong scalability as dimensionality increases.
5. Re Weakness on estimating quantities: We would like to argue that the benchmark papers have not provided generic schemes to estimate the quantities either. For instance, in the numerical examples of the benchmark (Guigues et al., 2017), estimation of the tail parameters is made possible in an ad-hoc fashion by exploiting the assumption that the underlying distribution is uniform (and thus has a bounded support set). Nonetheless, we agree that proper estimation of these quantities will make the results significantly more useful. One common (expedient) approach is to use sample average values of quantities evaluated at a computed solution to the SAA formulation. E.g., we may evaluate $\sigma_g$ (and $\sigma_f$) by calculating the sample $p$-norm distance of the gradient (and objective function value, resp.) from their sample mean at a (near-)optimal solution to SAA. This approach has been employed in estimating quantities in linear regression (often, those quantities are special cases of $\sigma_g$ and/or $\sigma_f$).
6. Related to 5, another approach is to use knowledge of the problem structure to (over)estimate the quantities. Indeed, in the experiment presented in response to Reviewer [ptuU] for a stochastic linear program subject to a simplex, one may see that over-estimation of the aforementioned quantities is accessible. More specifically, for $\mathbf{x}\in\mathcal {X}$, $\Vert f(\mathbf x,\xi)-F(\mathbf x)\Vert_{L^{p}}=\Vert\sum_{i=1}^d\kappa_i (\xi_i-\mathbb E[\xi_i])x_i\Vert_{L^{p}}\leq\Vert\kappa\Vert_{\infty}\Vert (\xi-\mathbb E[\xi])^T\mathbf{x}\Vert_{L^{p}}\leq\Vert\kappa\Vert_{\infty}\big\Vert \Vert\xi-\mathbb E[\xi]\Vert_{\infty}\Vert\mathbf{x}\Vert_1\big\Vert_{L^{p}} = \Vert\kappa\Vert_{\infty}\big\Vert\Vert\xi-\mathbb E[\xi]\Vert_{\infty}\big\Vert_{L^p}$ and $\Vert g_f^*(\xi)-g_F^*\Vert_{L^{p}}=\mathbb E[\sum_{i=1}^d\vert\kappa_i(\xi_i-\mathbb E[\xi_i])\vert^p]^{1/p}=\Vert\kappa\Vert_p\mathbb E[\vert\xi_1-\mathbb E[\xi_1]\vert^p]^{1/p}$. The expected values herein can be estimated using Monte Carlo performed on an independent validation set of (no more than) the same number of sample points as in the SAA formulation. $p$ can then be determined as the best-performing choice; since our confidence bounds are effective for all admissible $p$, one may use the value of $p$ that yields the smallest width of the output confidence interval(s).
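A minimal sketch of the Monte Carlo (over)estimation described in items 5 and 6 above (illustrative only; the plug-in form and the choice $p=2$ are assumptions, not the paper's implementation):

```python
import numpy as np

def overestimate_sigma_f(kappa, xi_samples, p=2):
    """Monte Carlo estimate of the structural upper bound
    ||kappa||_inf * || ||xi - E[xi]||_inf ||_{L^p}, computed on an
    independent validation set of xi draws, per the rebuttal's
    stochastic-linear-program example. E[xi] is replaced by the
    sample mean (a plug-in assumption)."""
    xi_samples = np.asarray(xi_samples, dtype=float)
    centered = xi_samples - xi_samples.mean(axis=0)  # xi - (sample) E[xi]
    inf_norms = np.max(np.abs(centered), axis=1)     # ||.||_inf per sample
    return np.max(np.abs(kappa)) * (np.mean(inf_norms ** p)) ** (1.0 / p)
```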
7. Our convex results are metric-entropy-free. A comparable pattern is presented by Guigues et al. (2017). Our nonconvex result explicates the dependence on dimensionality $d$ as $d^{2/q}$.
8. Re Questions for authors: The presence of $\sqrt{p}$ is often due to the use of Marcinkiewicz's inequality. Our results hold for all admissible choices of $p\geq2$; namely, one may choose, among all possible values of $p$, the one that leads to the smallest width of the confidence interval. As in Remark 4.3, under light-tailedness (where all central moments exist), $p$ could be specified as $\ln(6/\alpha)$ to recover the same rate as per Guigues et al. (2017) in the light-tailed setting. | Summary: The paper provides non-asymptotic confidence intervals for the solutions of stochastic optimization problems. Unlike previous work, their approach simultaneously covers non-Lipschitz and heavy-tailed problems. They also include analysis of non-convex and overparametrized cases.
Claims And Evidence: Claims are well supported.
Methods And Evaluation Criteria: N/A
Theoretical Claims: The proofs in the Appendix seem correct.
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes
Relation To Broader Scientific Literature: The paper relaxes the assumptions for the analysis of stochastic optimization problem, which has potential impact in several subfields.
Essential References Not Discussed: I am not very familiar with the literature
Other Strengths And Weaknesses: The paper is clearly written and related work seems properly acknowledged. I believe the contributions are relevant. However, I miss evaluations of the confidence intervals on (at least) some toy example in order to verify their tightness. Are there any difficulties in evaluating the bounds on specific SO problems?
Other Comments Or Suggestions: The summarized title in the header of each page remains unchanged from the template, please correct that.
Questions For Authors: See *Strengths And Weaknesses*
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the great effort by the reviewer in evaluating our manuscript. All papers cited in this response are the same as those included in the original submission.
1. Following the comments, we conducted two experiments, on a convex and a nonconvex problem. For the former, we considered a stochastic linear program; and, for the latter, we considered a neural network for regression. In the revision of the paper, we will formally present these tables and plot out critical trends. To economize on space, we only present our partial results for the convex case here; results on the nonconvex case are in our response to reviewer [sS3B]. We considered a stochastic minimization problem with constraints $\{\mathbf x = (x_1,\cdots,x_d) : \sum_{i=1}^d x_i =1, \mathbf x\geq0\}$ and a stochastic cost function $f(\mathbf x,\xi):= -\sum_{i=1}^d\kappa_i\xi_i x_i$, where we let $\kappa_i =0.08 +\frac{0.04(i-1)}{d}$ for $i = 1,2,\cdots,d$, and each $\xi_i$ is a power-law-distributed random variable with density function $p_{\xi}(x)=\frac{ab^a}{x^{a+1}}$ for $x>b$. Here $a>0$ and $b>0$ are the two parameters of the power law distribution; thus the highest order of existence of moments is $a-1$. In our experiments, we set $a=3.01, b=1$ (and correspondingly $p=2$) and tested for $10,000$ replications under the confidence level of $\alpha=0.1$. We present the empirical coverage probability in Table 1, where the results generated by our approach are referred to as 'Proposed'. The empirical coverage probability is calculated as the proportion of replications in which $F^*$ lies within the confidence bounds, i.e., $L_{N,\alpha}\leq F^*\leq U_{N,\alpha}$.
- We also tested the benchmark scheme (referred to as "benchmark" hereafter) by Oliveira & Thompson (2023) in the same manner as the above.
**Table 1:** Estimated Coverage Probability for $\alpha = 0.01, a = 3.01$ and $p=2$. A number closer to 0.99 is better. Note here, problem quantities in our bound were over-estimated to mimic more realistic applications with limited knowledge of the problem.
| Sample Size \(N\) | Method | dim\(d\)=100 | dim\(d\)=500 | dim\(d\)=1000 | dim\(d\)=2000 | dim\(d\)=4000 |
|-------------------|--------|---------|---------|---------|---------|---------|
| N = 5 | Proposed | 1.0000 | 0.9999 | 0.9998 | 0.9999 | 0.9998 |
| N = 5 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| N = 10 | Proposed | 1.0000 | 0.9999 | 0.9999 | 0.9998 | 0.9997 |
| N = 10 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| N = 50 | Proposed | 0.9999 | 0.9998 | 0.9997 | 0.9999 | 1.0000 |
| N = 50 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| N = 100 | Proposed | 0.9999 | 0.9997 | 0.9997 | 1.0000 | 0.9999 |
| N = 100 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| N = 500 | Proposed | 0.9997 | 0.9999 | 1.0000 | 1.0000 | 0.9999 |
| N = 500 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| N = 1000 | Proposed | 0.9998 | 1.0000 | 1.0000 | 0.9999 | 0.9998 |
| N = 1000 | Benchmark | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
- We also tested the length of the confidence bound (CB) by averaging over 10,000 replications, and we present the ratio of the length of our proposed CB to the Benchmark CB. Please refer to our response to reviewer [sS3B] for the formal definition of the length ratio.
**Table 2:** Average ratio between the width of proposed CB and Benchmark CB. (A number smaller than 1 means that the proposed CB has smaller width and thus is better).
| Sample Size \(N\) | dim\(d\)=100 | dim\(d\)=500 | dim\(d\)=1000 | dim\(d\)=2000 | dim\(d\)=4000 |
|-------------------|--------|---------|---------|---------|---------|
| N = 5 | 0.3938 | 0.2390 | 0.2117 | 0.0991 | 0.0558 |
| N = 10 | 0.3938 | 0.2390 | 0.2117 | 0.0991 | 0.0558 |
| N = 50 | 0.3938 | 0.2390 | 0.2117 | 0.0991 | 0.0558 |
| N = 100 | 0.3938 | 0.2390 | 0.2117 |0.0991 | 0.0558|
| N = 500 | 0.3938 | 0.2390 | 0.2117 | 0.0991| 0.0558 |
| N = 1000 | 0.3938 | 0.2390 | 0.2117 | 0.0991 | 0.0558 |
As we can see, both our proposed CB and the Benchmark CB are accurate. The Benchmark provides a very loose CB, with all tested coverage probabilities equal to $1.000$, and its widths are usually more than 4 times those of our proposed CB. Under the same heavy-tailed assumptions, our proposed approach provides a tighter CB than the Benchmark.
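The sampling and coverage computations described above could be sketched as follows (an illustrative sketch, not the authors' code; function names are ours):

```python
import numpy as np

def sample_power_law(a, b, size, rng):
    """Inverse-CDF sampling from the power-law (Pareto) density
    p(x) = a * b**a / x**(a+1) for x > b, as in the rebuttal's setup:
    the CDF is F(x) = 1 - (b/x)**a, so x = b * (1 - u)**(-1/a)."""
    u = rng.uniform(size=size)
    return b / (1.0 - u) ** (1.0 / a)

def empirical_coverage(lowers, uppers, F_star):
    """Proportion of replications with L <= F* <= U, i.e., the
    'Estimated Coverage Probability' reported in Table 1."""
    lowers, uppers = np.asarray(lowers), np.asarray(uppers)
    return np.mean((lowers <= F_star) & (F_star <= uppers))
```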
2. Response to "Other Strengths And Weaknesses": Please find details in estimation under our reply to reviewer [nMkb]-item 5&6.
3. Response to "Other Comments Or Suggestions": We will be sure to modify this in the full paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the experiments, the tightness of the confidence bounds wrt to the benchmark of Oliveira & Thompson (2023) is a nice touch. I'll increase my score to 4.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for carefully reviewing this paper and for the encouraging evaluation. | null | null | null | null | null | null |
Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG | Accept (oral) | Summary: - The paper introduces a RAG-based approach to handle High-Resolution visual reasoning with MLLMs.
- The authors develop a Retrieval-Augmented Perception (RAP) framework to find the important image crops required for answering a given query based on the query-crop similarity and using those for inference.
RAP is an iterative process with two stages: (i) RE-Search (to find the optimal number of crops, using confidence scores from multiple MLLM feed-forward passes about whether the cropped image suffices to answer the query) and (ii) Spatial-Awareness Layout (to lay out the most similar crops while maintaining their relative positioning from the original image).
- The authors present an extensive analysis of factors affecting performance, including the number of retrieved image crops and their layout.
- The authors test their framework on the MME-Realworld, V*, and HR-Bench benchmarks and show improvements with different base MLLMs from the LLaVA series.
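The query-crop retrieval step summarized above could, in spirit, be sketched as below (a hypothetical illustration, not the paper's implementation: the embeddings, scoring, and layout logic here are assumptions):

```python
import numpy as np

def retrieve_crops(query_emb, crop_embs, crop_positions, k):
    """Illustrative sketch of retrieval-augmented perception: score image
    crops by cosine similarity to the query embedding, keep the top-k,
    and return their indices sorted by original (row, col) position so
    the relative layout from the source image is preserved."""
    q = query_emb / np.linalg.norm(query_emb)
    C = crop_embs / np.linalg.norm(crop_embs, axis=1, keepdims=True)
    scores = C @ q                          # cosine similarity per crop
    top = np.argsort(scores)[::-1][:k]      # indices of top-k crops
    return sorted(top.tolist(), key=lambda i: crop_positions[i])
```

In a real system the embeddings would come from a vision-language encoder (e.g., a CLIP-style model), and the selected crops would then be re-assembled into a single image for the MLLM.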
## update after rebuttal
I am raising my score to accept based on the rebuttal.
Claims And Evidence: - The authors claim RAP improves performance on high-resolution benchmarks over standard MLLMs built with fixed-resolution visual encoders, which are supported by experiments.
- The authors claim RAP is more efficient than other search-based retrieval methods, which is also well supported.
Methods And Evaluation Criteria: - The authors evaluate on high res benchmarks which is aligned with their RAG-based framework design motivation
Theoretical Claims: - There are no theoretical claims made in the paper, IMO.
Experimental Designs Or Analyses: - The authors analyze the different components of their framework:
- Their SL and RE-Search techniques bring on improvements along with the VisRAG method.
- The authors use the LLaVA-1.5-7B/13B and LLaVA-OV-0.5B models to test their method and show improvements. However, the mentioned models are not the best, and the community rarely uses these for their applications. To show the practical effectiveness of the approach, the authors should consider testing it with more powerful MLLMs like LLaVA-OV-7B/72B and Cambrian-8B/34B.
- Related to the above point, Cambrian is a multi-encoder framework that uses spatial compression on CLIP-ConvNeXT features, which are better for high-res images, so this is an important experiment, IMO.
Supplementary Material: - It included the code which looked A-okay.
Relation To Broader Scientific Literature: - High-resolution visual reasoning is an important problem and highly relevant to the current literature as also reflected in the design of benchmarks like V* and HR-Bench.
Essential References Not Discussed: Nope
Other Strengths And Weaknesses: - Did the authors test their framework on other widely used benchmarks like DocVQA, ChartQA, TextVQA, AI2D, or any ocr/document-related tasks?
- The authors should compare their results to a native high-res MLLM like Oryx-MLLM (https://github.com/Oryx-mllm/Oryx).
- The authors show CogVLM has high performance in Fig. 1, but they don't include it in Tab. 2 or share results for RAP with CogVLM as the base MLLM. Why? I believe a more thorough evaluation with different benchmarks/models is important to show the complete effectiveness of the method.
- The authors do not report the throughput for the original LLaVA-1.5 in Tab. 5; they should also add other stronger general MLLMs like LLaVA-ov-72B/CogVLM/Oryx-MLLM to the mix for a better understanding of RAP's practical usefulness.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments and suggestions.
> **Q1:** Experiments on widely used benchmarks.
**R:** We appreciate the reviewer's suggestion. As suggested, we conduct additional experiments on five widely used benchmarks: **DocVQA, ChartQA, TextVQA, AI2D, and MMStar**. As shown in the table below, incorporating *RAP* led to average performance gains of 1.8% and 2.1% on LLaVA-v1.5 7B and 13B, respectively. We also observed that *RAP* brings more notable improvements on higher-resolution images. For example, on DocVQA, which has an average resolution of $1599\times 1241$, *RAP* improved performance by 4.7% and 2.3% for LLaVA-v1.5 7B and 13B, respectively.
| | DocVQA | ChartQA | TextVQA | AI2D | MMStar |
| -- | -- | -- | -- | -- | -- |
| LLaVA-v1.5-7B | 21.5 | 18.2 | 45.8 | 54.9 | 30.3 |
| **LLaVA-v1.5-7B w/ *RAP*** | **26.2** | **18.5** | **46.8** | **55.1** | **33.1** |
| LLaVA-v1.5-13B | 23.7 | 18.5 | 49.0 | 60.2 | 32.8 |
| **LLaVA-v1.5-13B w/ *RAP*** | **26.0** | **23.2** | **50.0** | **60.9** | **34.4** |
> **Q2:** Experiments with more powerful MLLMs.
**R:** We thank the reviewer for the constructive advice. To further demonstrate the effectiveness and generalizability of our *RAP*, we conduct experiments on HR-Bench using several advanced MLLMs, including **Oryx-1.5-7B, CogVLM-LLama3-19B, Cambrian-8B, LLaVA-ov-7B and 72B**. As shown in the table below, our *RAP* consistently boosts performance across all models. These results underscore the robustness of our *RAP* across a wide range of model architectures, highlighting its potential as a universal enhancement for high-resolution image perception. We will include the corresponding results and discussions in the revised version.
| | HR-Bench 4K | HR-Bench 8K |
| -- | -- | -- |
| Oryx-1.5-7B | 56.3 | 49.5 |
| **Oryx-1.5-7B w/ *RAP*** | **66.6** | **61.3** |
| CogVLM-LLama3-19B | 59.1 | 49.1 |
| **CogVLM-LLama3-19B w/ *RAP*** | **65.4** | **60.0** |
| Cambrian-8B | 45.5 | 37.9 |
| **Cambrian-8B w/ *RAP*** | **57.1** | **55.6** |
| LLaVA-ov-7B | 63.0 | 49.3 |
| **LLaVA-ov-7B w/ *RAP*** | **70.3** | **63.8** |
| LLaVA-ov-72B | 67.1 | 62.9 |
| **LLaVA-ov-72B w/ *RAP*** | **75.8** | **71.9** |
> **Q3:** Comparison with native high-res MLLMs.
**R:** We appreciate the reviewer for the comments. We'd like to clarify that due to limitations in visual encoders and LLMs' long-context handling, current MLLMs struggle with high-resolution (HR) image perception. Existing native high-res MLLMs mainly address this through two strategies:
- **Splitting HR images into multiple image crops**
These methods split HR images into smaller crops, encode them separately, and concatenate the features for the LLM. Oryx uses OryxViT to handle arbitrary resolutions, but processing 8K images produces up to 300K visual tokens, **heavily taxing the LLM's long-context capacity**. In practice, this often led to erroneous outputs, so we downsampled to 2K resolution (as recommended), which unfortunately resulted in loss of important visual details.
- **Using HR visual encoder**
Another line of work explores HR visual encoders designed to handle HR images. For instance, ConvNeXt-L supports inputs up to $768\times 768$ resolution. However, such encoders still **downsample 8K images to align with pretraining resolutions, limiting their effectiveness on HR images**. Even Cambrian, a multi-encoder framework, only supports resolutions up to $384\times 384$, necessitating downsampling for 8K inputs and resulting in significant loss of critical visual information.
Inspired by RAG's success in improving long-context capabilities in LLMs, we adapt it to MLLMs for HR image perception. *RAP* retrieves and preserves only the most relevant image crops, reducing input resolution while retaining essential information. Our *RAP* can be integrated into any MLLMs, significantly enhancing their perception of HR images. For example, LLaVA-v1.5 7B with *RAP* reaches 53.8% accuracy on HR-Bench 8K, outperforming Oryx-1.5-7B (49.5%) and Cambrian (37.9%), validating *RAP*'s effectiveness.
> **Q4:** The practical usefulness of our *RAP*.
**R:** We'd like to thank the reviewer for the comments. Here, to robustly demonstrate the practical usefulness of our *RAP*, we evaluate its impact on both throughput and accuracy. As shown in the table below, LLaVA-ov-0.5B equipped with *RAP* achieves performance comparable to the much larger LLaVA-ov-72B, yet operates with **2.3 times higher throughput and requires 22 times fewer parameters**. These results highlight *RAP*'s potential for efficient, high-performance deployment, particularly in resource-constrained environments such as edge devices.
| | Params | Throughput | Accuracy |
| -- | - | -- | -- |
| LLaVA-ov-0.5B | 0.5B | **40.0** | 42.8 |
| **LLaVA-ov-0.5B w/ *RAP (VisRAG)*** | 3.3B | 20.0 | **63.5** |
| LLaVA-ov-72B | 72B | 8.7 | 62.9 | | Summary: This paper proposes Retrieval-Augmented Perception (RAP), a training-free framework that enhances high-resolution image perception in multimodal large language models by leveraging RAG. RAP retrieves and fuses relevant image crops while maintaining their spatial relationships using a Spatial-Awareness Layout and dynamically determines the optimal number of retrieved crops through Retrieved-Exploration Search (RE-Search). Experiments show that RAP brings consistent improvement on HR benchmarks and general MLLM tasks.
## update after rebuttal
Thanks for answering my questions. I will keep my score.
Claims And Evidence: The claims are supported by clear and convincing evidence according to experimental results.
Methods And Evaluation Criteria: The proposed method enhances the perception of high-resolution images by MLLMs through retrieval augmentation, without requiring additional training. As a plug-in, it can theoretically be applied to any MLLMs. This is meaningful for the current MLLM's perception of high-resolution images. However, it may incur additional inference overhead.
Theoretical Claims: The paper does not present particularly complex techniques, with its effectiveness mainly demonstrated through experimental results. The ablation study in Table 4 and Figure 4 effectively emphasizes the validity of the method's design.
Experimental Designs Or Analyses: The experiments are conducted on high-resolution benchmarks and general benchmarks, with LLaVA-v1.5 7B & 13B and LLaVA-ov-0.5B as baselines for comparison. The metrics include accuracy and throughput. The experiments confirm the validity and effectiveness of the proposed method.
Supplementary Material: Yes. It provides the codes.
Relation To Broader Scientific Literature: The method seems to be similar to RAG applied to LLMs [a], enhancing the ability of LLMs to perceive long-context in NLP.
[a] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
Essential References Not Discussed: Although it can be considered as concurrent work, the authors are encouraged to include the latest papers [a, b] from ICLR2025 in the related work section.
[a] SV-RAG: LoRA-Contextualizing Adaptation of MLLMs for Long Document Understanding
[b] MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models
Other Strengths And Weaknesses: Strengths:
+ The paper explores applying RAG to MLLMs for the perception of high-resolution images. Through comprehensive experiments, it investigates the impact of the layout of retrieved image crops and the number of retrieved image crops on the final performance.
+ This paper contributes to the ability of MLLMs to perceive high-resolution images.
+ Experimental results show the effectiveness of the proposed methods.
Weaknesses:
- The method requires the introduction of an additional retriever, which consumes certain computational resources.
- Since the method relies on the model's confidence score regarding whether currently retrieved image crops can answer the question, it cannot be directly applied to closed-source models.
Other Comments Or Suggestions: Please consider to add citations to the papers mentioned above.
Questions For Authors: Although the authors emphasize that RAP is a RAG-based method, it also uses search techniques to enhance MLLMs' perception of high-resolution images. What is the fundamental difference between RAP and search-based methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the thoughtful comments and suggestions, as well as the positive support.
> **Q1:** The concern about the extra retriever increasing computation.
**R:** We appreciate the reviewer's comments. Although our *RAP* includes an additional retriever module, it still delivers a significant performance gain despite the overhead. As shown in the table below, our evaluation on HR-Bench 8K demonstrates that integrating *RAP* with LLaVA-ov-0.5B, using VisRAG as the retriever, results in a total model size of only 3.3 billion parameters. Despite this lightweight configuration, it achieves an impressive 20.7% improvement in accuracy. Notably, **LLaVA-ov-0.5B w/ *RAP* matches the performance of LLaVA-ov-72B while using approximately 22 times fewer parameters**, highlighting *RAP*'s efficiency, scalability, and strong potential for high-resolution visual understanding.
| | Params | Throughput | Accuracy |
| --------------- | ------ | ---------- | -------- |
| LLaVA-ov-0.5B | 0.5B | **40.0** | 42.8 |
| LLaVA-ov-0.5B w/ ***RAP (VisRAG)*** | 3.3B | 20.0 | **63.5** |
| LLaVA-ov-72B | 72B | 8.7 | 62.9 |
> **Q2:** Replace confidence score calculation in *RE-Search*.
**R:** We appreciate the reviewer's point. To clarify, the OpenAI API [R1] currently offers a `logprobs` parameter, which returns the log probabilities of output tokens — an essential feature we leverage in our *RE-Search* module to compute confidence scores.
Beyond this, we also explore an alternative approach using **generation-based confidence scoring**. Specifically, we prompt the MLLM to self-assess whether the image provides sufficient information to answer the question. The constructed prompt is as follows:
```
Question: {}
Could you answer the question based on the available visual information? Return only a JSON object with a numerical confidence score (0~10) of "Yes" like {"Yes": x}.
```
We normalize the final score to the range $ [0,1] $ for consistency. As shown in the table below, incorporating this generation-based score into *RAP* resulted in a **7.4%** performance gain over the baseline, demonstrating its effectiveness.
The relevant results and discussions will be incorporated into the revised version.
| | HR-Bench 4K | HR-Bench 8K |
| ---------------------- | ----------- | ----------- |
| LLaVA-ov-0.5B | 51.5 | 42.8 |
| **LLaVA-ov-0.5B w/ *RAP* (logit-based confidence score)** | **61.3** | **63.5** |
| LLaVA-ov-0.5B w/ *RAP* (generation-based confidence score) | 57.0 | 52.1 |
[R1] OpenAI API, https://platform.openai.com/docs/api-reference/completions/create
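As a rough illustration of the logit-based scoring discussed above, one common way to turn output-token log probabilities (such as those returned via the OpenAI API's `logprobs` parameter) into a score in $[0,1]$ is a geometric mean of token probabilities. The helper below is a hypothetical sketch of this idea, not the authors' exact implementation:

```python
import math

def logit_confidence(token_logprobs):
    """Confidence from output-token log probabilities.

    Takes the list of per-token log probs of the generated answer and
    returns exp(mean log prob), i.e. the geometric mean of the token
    probabilities, which always lies in [0, 1].
    """
    return math.exp(sum(token_logprobs) / len(token_logprobs))
```

A fully certain generation (all log probs 0) scores 1.0, and less confident generations score lower.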
> **Q3:** What is the fundamental difference between *RAP* and search-based methods?
**R:** We thank the reviewer for the constructive comments. We would like to clarify the main differences between our *RAP* and search-based methods as follows:
- **Search-based methods yield high-resolution inputs with multiple crops, while RAP lowers resolution by keeping only key spatially-aware crops**
Search-based methods model high-resolution images using a tree structure and apply search algorithms to identify key image crops. However, when a question requires information from multiple crops, methods like $DC^2$ tend to select the LCA (Lowest Common Ancestor), which is an image containing all relevant areas. This results in extremely high-resolution inputs. In contrast, our ***RAP*** employs a *Spatial-Awareness Layout* to selectively retain only the key image crops, effectively reducing image resolution while maintaining their spatial relationships (see Figure 4 in our paper).
- **Search-based methods typically follow a top-down strategy, while *RAP* starts directly from low-resolution image crops**
Search-based methods often treat the high-resolution image as the root node, splitting it into sub-images (such as a $2\times 2$ grid) to generate child nodes. However, high-resolution root images can mislead the MLLM by providing incorrect search cues at the outset, resulting in inefficient paths and potential errors. Our ***RAP*** mitigates this issue by starting directly with low-resolution image crops, enabling more efficient and accurate retrieval of relevant content (see Table 5 in our paper).
> **Q4:** Essential References Not Discussed.
**R:** Thanks for pointing out the work on SV-RAG and MMed-RAG. SV-RAG applies RAG to multimodal long-document understanding, while MMed-RAG enhances the factuality of Med-MLLMs. In our revised version, we will include these two works in the discussion of Multimodal RAG in the Related Work section. | Summary: This paper introduces a retrieval-augmented perception method for MLLMs, which retrieve and fuses relevant image crops from the full high-resolution image.
Specifically, a spatial-awareness layout is proposed to maintain the relative positional relationships of the image crops.
In addition, a retrieved-exploration search dynamically selects the optimal number of crops based on the text-to-crop similarity and the model's confidence.
In experiments, the proposed method outperforms others by a large margin in both fine-grained single-instance perception and fine-grained cross-instance perception tasks.
Claims And Evidence: In the spatial-awareness layout, the compressed mapping is reversible, so the proposed method can handle the cross-instance perception task.
The paper appears to have successfully addressed the computational complexity limitations of search-based methods for MLLMs' HR image perception by progressively narrowing down the search space with the proposed RE-Search, which is inspired by the A* algorithm rather than a simple divide-and-conquer approach or a complex search algorithm.
Methods And Evaluation Criteria: The proposed method is evaluated on V∗ Bench and HR-Bench, showing improved performance by large margin with efficiency.
Theoretical Claims: The proposed RE-Search consists of two reward functions g and h inspired from A* search seems reasonable.
- g: similarity between current patches and the query.
- h: MLLM's confidence that the model can answer from given patches.
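Under the simplifying assumption that a search state is just "keep the top-k retrieved crops" and that g and h are callables scoring a given k, the A*-style selection described above might be sketched as follows (all names and the stopping rule are hypothetical, not the paper's exact procedure):

```python
import heapq

def re_search_sketch(n_crops, g, h, tau=0.8, max_expansions=16):
    """Best-first search over 'keep top-k crops' states, scored by g + h.

    g(k): similarity reward between the top-k crops and the query.
    h(k): MLLM confidence that the question is answerable from them.
    Stops early once confidence reaches tau; returns the chosen k.
    """
    frontier = [(-(g(1) + h(1)), 1)]          # max-heap via negated scores
    best_k, best_score = 1, g(1) + h(1)
    seen = {1}
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, k = heapq.heappop(frontier)
        if -neg_score > best_score:
            best_k, best_score = k, -neg_score
        if h(k) >= tau:                        # confident enough: stop
            return k
        nxt = k + 1                            # expand: retrieve one more crop
        if nxt <= n_crops and nxt not in seen:
            seen.add(nxt)
            heapq.heappush(frontier, (-(g(nxt) + h(nxt)), nxt))
    return best_k
```

With a toy g that decays as 1/k and an h that grows with k, the search keeps adding crops until the confidence threshold is met, mirroring the adaptive-K behavior the reviewer highlights.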
Experimental Designs Or Analyses: I'm uncertain whether the comparison methods are comprehensive enough, because I am not familiar with HR image perception in MLLMs, but the types and quantities of comparison targets appear to be suitable, according to the `Related Work` section written by the authors.
The experimental results in the appendix effectively complement the experimental results presented in the main text.
However, the effect of crop size is not shown.
Supplementary Material: The appendix provides implementation details, including algorithms, as well as a thorough comparison of the performance of various hyperparameters and model sizes.
For both FSP and FCP tasks, a detailed case study is presented with visual comparisons.
All of these elements provide sufficient information to facilitate a thorough understanding of the proposed method.
Relation To Broader Scientific Literature: I believe this research could provide important contributions not only for HR image processing but also for extending into video perception.
Essential References Not Discussed: .
Other Strengths And Weaknesses: Section3 `Pilot Study` helps readers understand the intuition behind the proposed method more easily.
For example, Table1 shows that preserving relative position information is necessary to improve both FSP and FCP tasks.
Other Comments Or Suggestions: An analysis of failure cases is missing.
It is difficult to pick a specific case, but like all other methods the proposed method has its pros and cons, so it may have some shortcomings compared to existing methods.
Analyzing such cases will help readers gain a deeper understanding of the proposed method.
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We truly appreciate Reviewer tave's constructive comments and positive support.
> **Q1:** The concern of the comparison methods are comprehensive enough.
**R**: We'd like to thank the reviewer for the advice. In fact, building on established works such as $DC^2$ [R1] and ZoomEye [R2], we conduct experiments on $V^*$, HR-Bench, and the widely used benchmark MME-RealWorld. Our study includes a comprehensive comparison with **12 widely adopted MLLMs**, and we further demonstrate the efficiency of our *RAP* method against leading state-of-the-art approaches (see Table 5 in our paper).
To echo the reviewer's concern, here we further strengthened our contributions by including additional experiments:
- We kindly refer the reviewer to our response to `Reviewer h1HT (Q2)` for additional experimental results on **five widely used benchmarks**: DocVQA, ChartQA, TextVQA, AI2D, and MMStar.
- We respectfully refer the reviewer to our response to `Reviewer TfPd (Q2)` for additional evaluations involving **five state-of-the-art MLLMs**, including LLaVA-ov-7B/72B, Oryx-1.5-7B, Cambrian-8B, and CogVLM-Llama3-19B.
[R1] Wang W, Ding L, Zeng M, et al. Divide, conquer and combine: A training-free framework for high-resolution image perception in multimodal large language models[C]. In AAAI 2025.
[R2] Shen H, Zhao K, Zhao T, et al. ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration[J]. arXiv preprint arXiv:2411.16044, 2024.
> **Q2:** The effect of crop size.
**R:** We appreciate the reviewer for the valuable advice. We'd like to clarify that we use the default crop size of $448\times 448$ recommended by VisRAG in our *RAP*. To further investigate the impact of crop size, here we perform additional experiments on HR-Bench 8K using LLaVA-ov-0.5B and show the results below. Our findings reveal that while variations in crop size result in relatively minor differences, all configurations of our *RAP* method yield substantial performance gains over the baseline.
| | HR-Bench 8K |
| --------------------------- | ----------- |
| LLaVA-ov-0.5B | 42.8 |
| LLaVA-ov-0.5B w/ *RAP* ($224\times 224$) | 61.4 |
| **LLaVA-ov-0.5B w/ *RAP* ($448\times448$)** | **63.5** |
| LLaVA-ov-0.5B w/ *RAP* ($896 \times 896$) | 60.0 |
> **Q3:** An analysis of failure cases is missing.
**R:** We'd like to thank the reviewer for pointing out this issue. Admittedly, while *RAP* demonstrates strong capability in accurately retrieving key visual information, its performance can degrade when the input question lacks critical object-specific details. A detailed failure case is available at the following anonymous link:
https://anonymous.4open.science/r/RAP_Case-1D48/Failure_Case_Study.md
In our revised version, we will incorporate a comprehensive analysis of this case to further clarify the limitations and potential directions for improvement.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my questions, I keep my rating as accept. | Summary: The paper works on a key challenge in the area of MLLMs -- the perception of high-resolution (HR) images. Centering on this significant problem, this paper leverages RAG to enhance MLLM’s ability to perceive HR images. The paper first explores the impact of the layout and the number of retrieved image crops on the model’s perception capability and further proposes Retrieval-Augmented Perception (RAP) to improve MLLM’s perception of HR images. RAP retrieves image crops while preserving spatial relationships and dynamically select the optimal number of retrieved image crops via Retrieved-Exploration Search (RE-Search). To the best of my knowledge, this is the first study towards exploiting the benefits of visual RAG for the challenging HR perception tasks. Experiments achieve an improvement of 24% on average on HR benchmarks.
Claims And Evidence: It looks to me that the paper mainly presents the following key claims:
1. This paper finds that applying RAG to MLLMs for HR image perception requires preserving the original spatial information of the retrieved image crops and that the number of retrieved image crops needed varies across different tasks. The specific experimental support is provided in Section 3 named Pilot Study.
2. Based on the above findings, this paper proposes a training-free framework, RAP, which maintains the relative positional relationships between retrieved image crops through Spatial-Awareness Layout and dynamically selects the appropriate top-K image crops using RE-Search. The authors show plenty of results in Tables 2 and 3, which convincingly show that RAP significantly enhances MLLM’s perception of hr images.
Methods And Evaluation Criteria: The proposed method is suitable for the task. The experiments conducted on the HR-Bench and MME-RealWorld are reasonable, as to my knowledge, many SOTA related works in this area also used the same benchmarks to evaluate the effectiveness of their methods.
[1] Wang et al. Divide, conquer and combine: A training-free framework for high-resolution image perception in multimodal large language models. 2024.
[2] Shen et al. ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration. 2024.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments validate the effectiveness of the proposed RAP method on both HR-Bench and MME-RealWorld. Although the authors claim to have evaluated on general benchmarks, the average resolution of MME-RealWorld is 2000×1500, which still falls under high resolution. It remains unclear whether the method is effective on more general MLLM benchmarks (e.g., MMStar).
Supplementary Material: I have reviewed the supplementary materials provided by the authors, which primarily consist of the Python code implementation of the method.
Relation To Broader Scientific Literature: RAP provides a novel and efficient method for the HR image perception of MLLMs. The method leverages the core idea of RAG to enhance the MLLM’s ability to perceive HR images.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper is well-structured and clearly explains the motivation and insights in an intuitive manner, tackling a highly significant problem. The authors start with a pilot study, proposing three questions and challenges, which then motivate the main methodology design. This logic makes it easy to grasp its main idea.
2. The authors introduce RAP, which achieves superior performance on high-resolution benchmarks and is designed with the merits of easy integration into existing open-source MLLMs. I believe that this could potentially be of interest to a broad multimodal audience.
Weaknesses:
1. It looks to me that RAP relies on the model’s confidence scores in RE-Search, making it seemingly unsuitable for closed-source models. Are there other solutions that can replace the calculation of confidence scores in the RE-Search? Any more discussions on the alternative solution would be appreciated.
2. Although the authors claim to have validated their method on general benchmarks for standard MLLM scenarios, they only conducted experiments on the general benchmark MME-RealWorld, which has an average resolution of 2000×1500. In fact, it should be noted that some existing related studies (e.g. (Wang et al., 2024b) in the paper ) not only conduct experiments on HR benchmarks but also evaluate on benchmarks with normal resolution images, thus validating the generalization of the method. There is, however, a lack of experimental evidence demonstrating its applicability to more common resolutions (e.g., MMStar).
Other Comments Or Suggestions: Please address:
1. Alternative computation of the confidence scores in RE-Search?
2. More experiments on more common resolution to show its generalization capability. I may consider raising my score, if the authors could show the effectiveness of their method and provide additional experiments on general benchmarks like MMStar.
Questions For Authors: One more minor question is in Table 1, I wonder why the FCP task still performs worse than the baseline even after preserving the spatial relationships of image crops?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We truly appreciate Reviewer h1HT's insightful comments and suggestions.
> **Q1:** Replace confidence score calculation in RE-Search.
**R:** The reviewer's point is well taken. We clarify that currently, APIs like OpenAI's [R1] provide the `logprobs` parameter, which returns the log probabilities of output tokens and can be used as confidence scores in *RE-Search*.
Admittedly, there are also some closed-source models that do not support `logprobs`. Here, to echo the reviewer's concern, we explore a simple alternative approach using **generation-based confidence scores**. Specifically, we design a scoring prompt that asks the model to evaluate whether the given image contains sufficient information to answer the question. The constructed scoring prompt is as follows:
```
Question: {}
Could you answer the question based on the available visual information? Return only a JSON object with a numerical confidence score (0~10) of "Yes" like {"Yes": x}.
```
In the implementation, we rescale the final score to the range $[0,1]$. We conduct additional experiments on HR-Bench using LLaVA-ov-0.5B. The experimental results are shown in the table below. Compared to the baseline, using *RAP* with generation-based confidence scores achieved an average improvement of 7.4%, demonstrating that generation-based confidence scores through the model can also lead to significant performance gains.
The corresponding results and discussions will be included in the revision.
| | HR-Bench 4K | HR-Bench 8K |
| ------------------------------------------ | ----------- | ----------- |
| LLaVA-ov-0.5B | 51.5 | 42.8 |
| **LLaVA-ov-0.5B w/ *RAP* (logit-based confidence score)** | **61.3** | **63.5** |
| LLaVA-ov-0.5B w/ *RAP* (generation-based confidence score) | 57.0 | 52.1 |
[R1] OpenAI API, https://platform.openai.com/docs/api-reference/completions/create
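A generation-based score as described above could be extracted and rescaled along these lines; the regex, helper name, and fallback behavior are our own illustration under the stated prompt format, not the paper's exact code:

```python
import json
import re

def generation_confidence(model_output, max_score=10):
    """Parse a {"Yes": x} JSON object from the model's reply and
    rescale x from [0, max_score] to [0, 1].

    Returns 0.0 when no parseable JSON object is found.
    """
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if match is None:
        return 0.0
    try:
        score = json.loads(match.group(0)).get("Yes", 0)
    except (json.JSONDecodeError, AttributeError):
        return 0.0
    return max(0.0, min(1.0, float(score) / max_score))
```

This keeps the generation-based score on the same $[0,1]$ scale as the logit-based one, so the two can be swapped inside RE-Search.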
> **Q2:** Experimental results on general benchmarks.
**R:** We appreciate the reviewer's advice. As suggested, we extend our *RAP* to LLaVA-v1.5 7B and 13B, evaluating it on DocVQA, ChartQA, TextVQA, AI2D, and MMStar. These five benchmarks cover resolutions ranging from 390 to 1600. Across both benchmarks and model scales, *RAP* consistently yields notable performance gains, reinforcing the robustness and generalizability of our approach. We will include the corresponding results on these broader benchmarks, along with further analysis, in the revised version.
| | DocVQA | ChartQA | TextVQA | AI2D | MMStar |
| -- | -- | -- | -- | -- | -- |
| LLaVA-v1.5-7B | 21.5 | 18.2 | 45.8 | 54.9 | 30.3 |
| **LLaVA-v1.5-7B w/ *RAP*** | **26.2** | **18.5** | **46.8** | **55.1** | **33.1** |
| LLaVA-v1.5-13B | 23.7 | 18.5 | 49.0 | 60.2 | 32.8 |
| **LLaVA-v1.5-13B w/ *RAP*** | **26.0** | **23.2** | **50.0** | **60.9** | **34.4** |
> **Q3:** Why does the FCP task still underperform despite preserving spatial relationships?
**R:** We thank the reviewer for the comments. We would like to clarify that the experiments in Table 1 were conducted using a fixed value of $K$. Our findings indicate that **the choice of $K$ significantly affects final performance**—using a fixed $K$ leads to suboptimal results on ***FCP*** compared to the baseline. To address this, we propose *RE-Search*, a method for adaptively selecting an appropriate $K$. As shown in Table 4, with *RE-Search* , both ***FSP*** and ***FCP*** outperform the baseline, demonstrating the effectiveness of our approach.
---
Rebuttal Comment 1.1:
Comment: Thank the author for the response. Since the authors have addressed my main concern and, after reviewing the other reviewers’ comments, I am now inclined to increase my score for this paper. However, I still have a few minor concerns regarding the authors’ rebuttal. I believe further clarification on these points is essential to make the paper more solid.
In the rebuttal, the authors proposed a generation-based confidence score, which requires the model to directly output its confidence in its own response. While this is a reasonable design (evaluation, benchmarking, and reflection based on large language models have become common practice), generation-based confidence scores are ultimately just an alternative to logit-based confidence scores. They can be seen as an approximation of logit-based confidence scores. Therefore, before deciding to adopt generation-based confidence scores, it is necessary to analyze whether they can effectively approximate logit-based scores.
1. What is the degree of similarity between logit-based confidence scores and generation-based confidence scores?
2. In addition, it would be helpful to provide concrete examples to intuitively illustrate: (a) what types of answers result in high generation-based confidence scores, (b) what types result in low scores.
3. Generation-based confidence scores are also likely to be affected by prompt engineering. I suggest the authors present more alternatives for constructing such scores and conduct a comprehensive analysis of how different prompt designs impact the results.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate Reviewer h1HT's insightful comments and suggestions, once again.
> **Q1:** What is the degree of similarity between logit-based confidence scores and generation-based confidence scores?
**R:** We analyze the correlation between generation-based and logit-based confidence scores on the HR-Bench 8K using the LLaVA-ov-0.5B. The findings are illustrated in the figure available at the following anonymous link:
https://anonymous.4open.science/r/RAP_Case-1D48/confidence_experiment.png
Our analysis reveals a high cosine similarity score of **0.97** between the two types of confidence scores, indicating a remarkable degree of alignment between their distributions. Interestingly, generation-based confidence scores tend to be consistently higher than their logit-based counterparts. This observation aligns with prior research [R1, R2], which suggests that LLMs, when used as evaluators, may introduce ***systematic biases***. Nonetheless, RAP utilizing generation-based confidence scores continues to deliver substantial improvements.
[R1] Wu M, Aji A F. Style Over Substance: Evaluation Biases for Large Language Models[C] In COLING 2025.
[R2] Wataoka K, Takahashi T, Ri R. Self-preference bias in llm-as-a-judge[J]. arXiv preprint arXiv:2410.21819, 2024.
> **Q2:** It would be helpful to provide concrete examples to intuitively illustrate.
**R:** We provide two examples to demonstrate (a) answers with low generation-based scores, and (b) those with high scores. A detailed figure is available at the following anonymous link:
https://anonymous.4open.science/r/RAP_Case-1D48/confidence_illustrate.png
**R:** Our findings indicate that when an image lacks useful information for answering the question, both generation-based and logit-based confidence scores tend to be relatively low. However, for images containing crucial information, higher-resolution images yield lower logit-based confidence scores, while the generation-based confidence scores remain higher.
> **Q3:** Comprehensive analysis of how different prompt designs impact the results.
**R:** To assess the impact of different prompt designs on generation-based confidence scores, we compare three distinct prompts:
(1) A simple modification of the logit-based confidence score prompt, directly generating the confidence score:
```
Question: {}
Could you answer the question based on the available visual information? Return only a JSON object with a numerical confidence_score (0~10) of "Yes" like {"Yes": x}.
```
(2) Expanding the confidence score range from $[0, 10]$ to $[0, 100]$:
```
Question: {}
Could you answer the question based on the available visual information? Return only a JSON object with a numerical confidence_score (0~100) of "Yes" like {"Yes": x}.
```
(3) Providing more detailed descriptions, including the goal, scoring criteria for different score ranges, constraints on output format, and output examples:
```
# Goal
Given a question: {} about visual content (e.g., images, charts, diagrams, or scenes), determine whether the question can be answered confidently using the available visual information. Return a JSON object with a numerical confidence_score (0-100) reflecting your certainty that the answer is "Yes." The confidence_score should be based on factors such as the clarity, relevance, and completeness of the visual information. For example:
- **0-30**: Low confidence (e.g., visual information is missing, irrelevant, or too ambiguous).
- **31-70**: Moderate confidence (e.g., partial or indirect visual evidence exists but requires assumptions).
- **71-100**: High confidence (e.g., visual information directly and unambiguously answers the question).
Ensure the response contains **only** the JSON object with the key "Yes" and the numerical score. Do not include explanations, markdown, or other text.
# Output Format
1. JSON object with a single key "Yes" and an integer value between 0 and 100.
2. No additional keys or text allowed.
3. Score must reflect confidence in a "Yes" answer, even if the true answer is "No."
# Example
{"Yes": xx}
# Your Answer
```
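For illustration, here is a hedged sketch of how a `{"Yes": x}` reply could be parsed into a normalized confidence score; the regex fallback and the `max_score` normalization are assumptions of this sketch, not part of RAP's published implementation:

```python
import json
import re

def parse_confidence(reply: str, max_score: float = 100.0) -> float:
    """Extract the 'Yes' confidence from a model reply and normalize to [0, 1]."""
    try:
        score = float(json.loads(reply)["Yes"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        # Fallback: grab the first number following "Yes" anywhere in the text.
        m = re.search(r'"?Yes"?\s*[:=]\s*([0-9]+(?:\.[0-9]+)?)', reply)
        if m is None:
            return 0.0  # no parseable score -> treat as minimal confidence
        score = float(m.group(1))
    return min(max(score / max_score, 0.0), 1.0)

print(parse_confidence('{"Yes": 85}'))            # clean JSON reply
print(parse_confidence('Sure! {"Yes": 42} ...'))  # noisy reply, regex fallback
```

The fallback matters in practice because even with "only JSON" instructions, smaller models occasionally wrap the object in extra text.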
As shown in the table below, we observe that the prompt design has minimal impact on the final performance, while *RAP* consistently delivers significant improvements compared to the baseline (i.e., LLaVA-ov-0.5B). These results demonstrate that even the simplest prompt can yield strong performance, highlighting the robustness of our *RAP*.
| | Prompt | HR-Bench 4K | HR-Bench 8K |
| --- | --- | --- | --- |
| LLaVA-ov-0.5B | - | 51.5 | 42.8 |
| LLaVA-ov-0.5B w/ *RAP* (generation-based confidence score) | (1) | **57.0** | **52.1** |
| LLaVA-ov-0.5B w/ *RAP* (generation-based confidence score) | (2) | 56.8 | 51.3 |
| LLaVA-ov-0.5B w/ *RAP* (generation-based confidence score) | (3) | 55.4 | 51.3 |
UniMate: A Unified Model for Mechanical Metamaterial Generation, Property Prediction, and Condition Confirmation | Accept (poster)
Summary: In this work, the authors propose a method that jointly solves topology generation, condition confirmation, and property prediction. Their approach consists of two stages: first, using three sets of encoders and decoders to embed different modality conditions into discrete latents; second, employing a diffusion/score-matching model to generate masked latents from unmasked ones. Experimental results demonstrate significant improvement over the baselines across all three tasks, with ablation studies validating the effectiveness of the proposed modules.
## update after rebuttal
The rebuttal has addressed my minor concerns, and I will maintain my rating.
Claims And Evidence: - This work appears to be the first to propose a unified formulation capable of handling multiple tasks, as shown in Table 1.
- The approach of embedding different modality inputs into a common latent domain for generation is innovative, though similar concepts have been explored recently in image-text applications [1].
- The manuscript's thorough evaluation, benchmarking against existing baselines, and open-source models provide strong evidence for result reproducibility.
[1] Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation
Methods And Evaluation Criteria: Overall, the proposed method has achieved significant improvement over all baselines on different tasks, including topology generation, property prediction, and condition confirmations. The ablation study has shown the effectiveness of joining encoding of latent spaces and alignment operation.
Theoretical Claims: As far as I know, this work does not provide a strong theoretical claim on the proposed framework.
Experimental Designs Or Analyses: The experiment setting looks reasonable to me, and the experiments show improvement on all tasks over existing baselines. However, since I am not a domain expert for mechanical design, I am not sure if some important baselines are missed in this case.
Supplementary Material: I have checked the Tripartite Sinkhorn algorithm, and it looks reasonable to me.
Relation To Broader Scientific Literature: Nil.
Essential References Not Discussed: Nil.
Other Strengths And Weaknesses: Nil.
Other Comments Or Suggestions: Nil.
Questions For Authors: I have one minor question about the inference process, specifically regarding the computation of the transport plan. For masked tokens, is the marginal distribution initialized directly as a Gaussian distribution? This point could use some clarification in future revisions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for the reviewer’s time and constructive comments. We sincerely appreciate the opportunity to address the concerns and clarify our work.
**Q1:** Similar concepts have been explored in an image-text work (i.e., Janus, which is mentioned by the reviewer).
**A1:** We appreciate the reviewer for pointing out the paper Janus, a multimodal understanding and generation model. While we acknowledge that our work and Janus may share high-level conceptual similarities, there are significant distinctions between the two:
* **Task Focus**: Janus is designed for general vision-and-language understanding tasks, whereas our model specifically targets metamaterial design, a domain with unique challenges and goals not addressed in Janus.
* **Methodology**: Janus integrates image information by projecting it into the input space of a large language model (LLM), combining it with textual data for joint understanding. In contrast, our approach projects all three modalities into a shared embedding space that is not biased toward any single modality. This unified representation is better suited to the nature of metamaterial design problems.
**Q2:** The reviewer was not sure whether any important baselines are missed in our work.
**A2:** In our experiments, we have carefully selected representative baselines to provide a fair and comprehensive comparison. To the best of our knowledge, the models we included reflect the current state-of-the-art computer science works in the field of metamaterial design. Still, we are open to including additional baselines should we find any relevant suggestions.
**Q3**: For masked tokens, is the marginal distribution initialized directly as a Gaussian distribution?
**A3:** We thank the reviewer for the question regarding token initialization. In our implementation, we initialize tokens based on the transport plan, assigning higher probabilities to tokens with higher values in the transport plan. In response to the reviewer's question, we explore several alternative initialization strategies: pure Gaussian noise, a mixture of transport plan-based initialization, and Gaussian noise. The results of these variants are presented below.
| | Fqua | Fcond | NRMSE_pp | NRMSE_cc |
|-------------------|--------|--------|----------|----------|
| Rand. Noise Init. | 0.0291 | 0.0795 | 0.0304 | 0.0448 |
| Mixture Init. | 0.0525 | 0.0814 | 0.0313 | 0.0446 |
| Trans. Plan Init. | 0.0274 | 0.0781 | 0.0244 | 0.0443 |
Summary: In this paper, the author proposed UniMate, a unified model that can tackle three tasks simultaneously, namely, the topology generation task, the property prediction task, and the condition confirmation task, by training a shared, aligned latent space using a novel TOT and frozen diffusion. On the three tasks, UniMate shows 80.2%, 5.1%, and 50.2% higher performance, respectively.
Claims And Evidence: Most of the claims made in this paper are supported by clear evidence.
1. The author claims that it is a unified model for different tasks in mechanical Metamaterial generation, property prediction, and condition confirmation, which is proven in Table 2
2. The author claims that the aligned latent space is helpful in handling challenge 1 (mechanical metamaterial design involves three modalities with different formats and distributions.). This is supported in the ablation study case 2
However,
1. I noticed that from line 267 to line 269, the author claims that "it will be simpler to generate proper outputs, which increase the robustness of the generation process," but this is not proven in the later experiments.
2. The author claims that frozen diffusion handles challenge 2 (given the complexity of the data, any part of it can be missing and requires design effort; therefore, there can be various design tasks), but no ablation study supports this.
Methods And Evaluation Criteria: The paper proposed a generalized Sinkhorn algorithm by generalizing the transport plan iteration process for the alignment of more than two latent distributions and also uses partially frozen diffusion to tackle potentially different tasks such as mechanical metamaterial generation, property prediction, and condition confirmation with the unified model. These make sense.
As for the evaluation metric, I have some concerns about the validation of the metamaterial generation. It is unclear why Fqua and Fcond are valid metrics for evaluating generation quality. Maybe a FEM simulation is needed to validate the generated metamaterial.
Theoretical Claims: The main theoretical claims made by the author are
1. The round operation of latent tokens to the codebook, which is shown from equations 1 to 6.
2. the proposed generalized Sinkhorn algorithm by generalizing the transport plan iteration process, which is shown from 7 to 9 and algorithm 1 in the appendix.
3. The partially frozen diffusion, which is shown from equation 10 to 12.
They are all correct.
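For readers unfamiliar with the building block, the classical two-marginal Sinkhorn iteration that the paper's tripartite variant generalizes can be sketched as follows (this is standard entropic optimal transport, not the paper's Algorithm 1):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, n_iter=500):
    """Entropic-regularized OT between marginals a and b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
C = rng.random((4, 5))                   # random cost matrix
a = np.full(4, 1 / 4)                    # uniform source marginal
b = np.full(5, 1 / 5)                    # uniform target marginal
P = sinkhorn(C, a, b)
print(P.sum(axis=1))                     # approximately equals a
print(P.sum(axis=0))                     # approximately equals b
```

The paper's tripartite generalization extends this alternating marginal-scaling idea to three latent distributions at once.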
Experimental Designs Or Analyses: Yes, I review all of the experiments.
The main experiment conducted by the author can be summarized as follows:
1. The main results are shown in Table 2, where different baselines for different tasks are compared with the proposed unified method.
2. The time and memory efficiency comparison between the proposed method and the baseline models. However, it is not clear why the author compares only three baselines instead of all the baselines mentioned in Table 2.
3. The latent dimension and codebook size are chosen based on Fqua, but it is not clear why these two hyperparameters are selected using only one evaluation metric.
4. The ablation study shows the effectiveness of the latent MTR alignment, but an additional ablation study of the partially frozen diffusion is needed.
Supplementary Material: Yes, I review all of the supplementary material, namely, the benchmark details, the metrics and the experiment details.
Relation To Broader Scientific Literature: Typically, the feature alignment is done for two modalities. This paper proposes a method to align features from three modalities and is capable of generalizing to more modalities. This can be very useful in other applications.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. This paper introduces a novel multi-modality alignment method that extends beyond conventional two-modality approaches.
2. It effectively integrates diffusion-based generation with selective token freezing, enabling flexible handling of multiple tasks.
Weakness:
1. For topology generation, it is unclear why Fqua and Fcond are valid metrics for evaluating generation quality. A more detailed explanation is needed.
2. Additionally, the generated topology does not appear to have been validated for effectiveness and manufacturability using numerical simulations. Demonstrating its practical applicability is crucial.
3. Further ablation studies are needed, such as on the codebook size and different soft-round strategies.
Other Comments Or Suggestions: Further ablation studies are needed such as
1. The size of the codebook and different soft-round strategies: I noticed that in the parameter sensitivity analysis, the author provides Fqua vs. the codebook size and the latent dimension, but what about the other evaluation metrics? Why determine these hyperparameters with only a single evaluation metric?
2. The ablation study on partially frozen diffusion is missing.
Questions For Authors: See weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time and constructive feedback. Please find our detailed responses and corresponding revisions below:
**Q1:** One point in the work that "it will be simpler...the generation process" is not proven in later experiments.
**A1:** Thank you for your detailed reviews. To address this issue, i.e., the effectiveness of using a codebook for diffusion starting point initialization, we conduct additional experiments (random noise initialization and mixed initialization), with detailed results reported in reply to Q3 from reviewer 79DH.
**Q2:** Ablation for partially frozen diffusion is needed.
**A2:** We appreciate your constructive suggestion. Accordingly, we compare the use of a partially frozen diffusion with a vanilla diffusion setup (i.e., where variables in all dimensions are denoised at each step). The results below verify the effectiveness of partially frozen diffusion. We will include the results in the final version of the manuscript.
| | Fqua | Fcond | NRMSE_pp | NRMSE_cc |
|----------------------------|--------|--------|----------|----------|
| Vanilla Diffusion | 0.0405 | 0.0813 | 0.0322 | 0.0423 |
| Partially Frozen Diffusion | 0.0274 | 0.0781 | 0.0244 | 0.0443 |
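A minimal sketch of the "partially frozen" idea as I read it: every denoising step updates all dimensions, then re-imposes the known tokens so that only masked dimensions are effectively denoised. The shrinkage "denoiser" here is a stand-in, not the paper's transformer backbone:

```python
import numpy as np

def denoise_step(x, t):
    """Stand-in for a learned denoiser: shrink toward zero (illustrative only)."""
    return 0.9 * x

def partially_frozen_denoise(x_known, known_mask, n_steps=50, seed=0):
    """Denoise only the masked dimensions; known dimensions stay frozen."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(x_known.shape)   # initialize all dims with noise
    for t in range(n_steps):
        x = denoise_step(x, t)               # denoise every dimension...
        x[known_mask] = x_known[known_mask]  # ...then freeze the known ones
    return x

x_known = np.array([1.0, 2.0, 0.0, 0.0])
mask = np.array([True, True, False, False])  # first two tokens are observed
out = partially_frozen_denoise(x_known, mask)
print(out[:2])  # known dims preserved exactly
```

Compared with the vanilla diffusion row above, the point of re-imposing known tokens at every step is that the generated dimensions remain conditioned on exact, uncorrupted context.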
**Q3:** Further ablation studies are needed, such as the size of the code book and different soft-round strategies.
**A3:** Thanks for the valuable suggestion. In addition to the varying-codebook-size experiments conducted in Section 4.4, we provide below more results using a soft-round strategy (midpoint interpolation between the initial value and the target rounding token), which show the effectiveness of codebook rounding.
| | Fqua | Fcond | NRMSE_pp | NRMSE_cc |
|-------------------|--------|--------|----------|----------|
| Soft Round | 0.0421 | 0.0787 | 0.0369 | 0.0449 |
| Codebook Round | 0.0274 | 0.0781 | 0.0244 | 0.0443 |
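The two rounding strategies compared in this table can be sketched as follows; the Euclidean nearest-neighbor rule and the midpoint interpolation are my assumptions based on the rebuttal's description:

```python
import numpy as np

def codebook_round(z, codebook):
    """Replace each latent vector with its nearest codebook entry (hard rounding)."""
    # Pairwise squared distances between latents (n, d) and codebook (k, d).
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d2.argmin(axis=1)]

def soft_round(z, codebook):
    """Midpoint interpolation between each latent and its nearest codebook entry."""
    return 0.5 * (z + codebook_round(z, codebook))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.2], [0.8, 1.3]])
print(codebook_round(z, codebook))
print(soft_round(z, codebook))
```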
**Q4:** It is unclear why Fqua and Fcond are valid metrics for evaluating generation quality; generated topologies are not validated for effectiveness and manufacturability.
**A4:** Thanks for the detailed reviews. Our evaluation metrics are inspired by and adapted from prior literature that adopted similar concepts. Specifically:
* Fqua evaluates the quality of generated topologies by measuring their symmetry degree and periodicity.
* Fcond evaluates how well the generated topologies satisfy the input conditions by comparing them with ground truth topologies that fully match those conditions.
Please also refer to Reviewer 44SV **A1** for more detailed illustrations. We will include these detailed illustrations in our final version.
Furthermore, we have collaborated with a metamaterial lab to physically manufacture the generated topologies and test their properties. We plan to release a demonstration video showcasing this process after the double-blind review period concludes.
**Q5:** Why the author only compares three baselines in time and memory efficiency analysis?
**A5:** We appreciate your careful reviews. In the model efficiency analysis section, we present the results for three representative baselines for clarity and brevity. Below, we provide the complete set of results for all baselines, from which we can reach the same conclusion as stated in our manuscript, and we will include these in the appendix of the revised version.
| Model\Batch Size | 200 | 1000 | 2000 | 3000 | 5000 | 7000 | 10000 |
|------------------|--------|---------|--------|---------|---------|---------|---------|
| CDVAE | 0.2416 | 0.43716 | OOM | OOM | OOM | OOM | OOM |
| VisNet | 0.0285 | 0.0366 | 0.0575 | 0.0772 | 0.1085 | 0.1473 | 0.1931 |
| UniTruss | 0.0108 | 0.01385 | 0.015 | 0.01435 | 0.01406 | 0.01441 | 0.0199 |
**Q6:** In parameter sensitivity, why do the authors only use the metric Fqua?
**A6:** We thank you for the constructive feedback. In the original sensitivity study, we reported only Fqua because we consider the topology generation task to be relatively hard among the three tasks, and Fqua is a typical indicator of generation quality. Accordingly, below we provide the parameter sensitivity analysis with respect to the other evaluation metrics as well. These will be added to the appendix in the final version.
Parameter Sensitivity regarding other metrics (Fcond/NRMSE_pp/NRMSE_cc)
| Lat. Dim\Codebk Size | 32 | 64 | 128 |
|----------------------|----------------------|----------------------|----------------------|
| 32 | 0.0836/0.0324/0.0402 | 0.0769/0.0288/0.0436 | 0.0833/0.0294/0.0419 |
| 64 | 0.0779/0.0309/0.0404 | 0.0813/0.0300/0.0409 | 0.0783/0.0295/0.0442 |
| 128 | 0.0795/0.0286/0.0409 | 0.0798/0.0331/0.0430 | 0.0818/0.0282/0.0437 |
Summary: The paper introduces UNIMATE, a unified model for mechanical metamaterial design that simultaneously addresses three key aspects: 3D topology, density condition, and mechanical property. Unlike previous approaches that typically consider only two modalities, UNIMATE integrates all three through a modality alignment module—which compresses and aligns diverse design information into a shared latent space—and a synergetic diffusion generation module that completes missing design tokens via a score-based diffusion model. Experimental results demonstrate that UNIMATE significantly outperforms existing models in topology generation, property prediction, and condition confirmation tasks, offering promising improvements and establishing a new benchmark for comprehensive metamaterial design.
Claims And Evidence: 1. The integrated approach is supported by experiments showing notable performance gains (in the topology generation task, property prediction task, and condition confirmation task, our model outperforms the second-best model by 80.2%, 5.1%, and 50.2%, respectively), but it relies on a new dataset and custom metrics.
2. The unification and the alignment operation both benefit the model's performance. The unification process boosts performance by 37.5% on average, and the TOT alignment provides a 7.8% boost.
Methods And Evaluation Criteria: Yes, the quantitative metrics and the ablations are convincing.
Theoretical Claims: I believe this is an application paper that employs multimodal alignment and generation for metamaterial synthesis, so it does not propose any specific theoretical novelty.
Experimental Designs Or Analyses: 1. The authors construct a new dataset (based on Lumpe & Stankovic, 2021) and introduce domain specific metrics (F_qua for topology quality and F_cond for topology matching) to evaluate their model. While these enable comprehensive testing, I would recommend that AC consider having domain experts in material design review its validity
2. They compare UNIMATE against multiple models across three tasks (topology generation, property prediction, condition confirmation). For some tasks, especially condition confirmation, they adapt models not originally designed for that purpose (e.g., forcing property prediction models to predict density). This adaptation might introduce bias, so while the comparisons are informative, they might not fully reflect each baseline’s intended performance.
Supplementary Material: Yes, more experimental details about dataset construction and network design might help readers to reproduce the method.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strength:
1. Multimodal Alignment: Uses a multimodal codebook to map heterogeneous data (3D topology, density, and mechanical properties) into a unified latent space.
2. Synergetic Generation: Introduces a partially frozen diffusion process with a transformer backbone that generates missing tokens while preserving the context of known tokens.
3. Flexible and Unified Design: Efficiently bridges diverse modalities, enabling robust metamaterial synthesis even with arbitrary missing inputs.
Weakness:
1. One potential weakness of the paper is the limitation in data scale. The approach relies on a newly constructed dataset, which could restrict the model's generalizability and robustness in real-world scenarios.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thanks for the reviewer's time and inspiring comments. Here, we summarize the major points from the reviewer and our rebuttal as follows:
**Q1:** The proposed model is evaluated on a new dataset and custom metrics.
**A1:** Our work focuses on developing a unified model capable of handling diverse metamaterial design tasks as a whole. This is a relatively underexplored area, and to the best of our knowledge, there has not been a comprehensive effort to address it so far. As a result, no suitable dataset currently exists for this task. Therefore, we propose our own dataset tailored to this problem. Additionally, we introduce custom evaluation metrics to assess the quality of generated structures by referring to previous material works. Specifically, the two metrics—Fqua and Fcond—are inspired by previous works [1–4]. References [1,2] emphasize the importance of symmetry and periodicity in metamaterial design, which directly motivates our Fqua metric that quantifies the degree of symmetry and periodicity in the generated topologies. Similarly, references [3,4] propose comparison techniques for assessing topology similarity and generation coverage. Building on these ideas, our Fcond metric evaluates how well the generated structures match the target conditions by comparing their topological features.
*[1]Abu-Mualla, Mohammad, and Jida Huang. "A Dataset Generation Framework for Symmetry-Induced Mechanical Metamaterials." Journal of Mechanical Design 147.4 (2025).*
*[2]Bitzer, Andreas, et al. "Lattice modes mediate radiative coupling in metamaterial arrays." Optics Express 17.24 (2009): 22108-22113.*
*[3]Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi S. Jaakkola. 2022. Crystal Diffusion Variational Autoencoder for Periodic Material Generation. In ICLR.*
*[4]Nils ER Zimmermann and Anubhav Jain. 2020. Local structure order parameters and site fingerprints for quantification of coordination environment and crystal structure similarity. RSC advances 10, 10 (2020), 6063–6081.*
**Q2:** The baselines are not originally designed for the tasks in our work.
**A2:** As mentioned in A1, this is a relatively new task, making it difficult to find baselines that are directly aligned with our objective. This challenge is further amplified by the fact that metamaterial design remains largely underexplored within the computer science community. Nevertheless, we conducted extensive experiments by tuning the hyperparameters of existing baseline methods to better assess their potential performance on our task. Here, we focus on the second best model, uniTruss, as an example. uniTruss has two main hyperparameters, learning rate (LR) and latent dimension. As shown in the following, we report the results on four LR and five latent dimensions. Results show that for both property prediction and density confirmation, the best LR is 5e-4 and the best latent dimension under this LR is 32.
(Property Prediction Task) Tuning learning rate, **bold** denotes the results reported in original manuscript.
| LR | 1e-3 | **5e-4** | 1e-4 | 5e-5 |
| -------- | -------- | -------- | -------- | -------- |
| NRMSE_pp | 0.0285 | **0.0271** | 0.0420 | 0.0440 |
| NRMSE_cc | 0.0899 | **0.0889** | 0.106 | 0.115 |
(Property Prediction Task) Tuning latent dimension for property prediction
| Latent dim | 16 | 32 | 64 | 128 | 256 |
| ---------- |----|----|----|-----|-----|
| NRMSE_pp | 0.0328 | **0.0271** | 0.0329 | 0.0298 | 0.0472 |
(Density Confirmation Task) Tuning latent dimension for density confirmation
| Latent dim | 16 | 32 | 64 | 128 | 256 |
| ---------- |----|----|----|-----|-----|
| NRMSE_cc | 0.0905 | **0.0889** | 0.0909 | 0.0899 | 0.891 |
**Q3:** More experimental details about dataset construction and network design might help readers to reproduce the method.
**A3:** We appreciate the suggestion to include more details about dataset construction and network design. We will incorporate the following information into the appendix to improve the clarity and completeness of the paper.
* For dataset construction, we use homogenization simulation to calculate the property of topologies. We will add details about how homogenization works. Also, we will add details about density selection, e.g., the range from which the edge radius is selected.
* For network design, we will add more details about each component of our model, including the layer number of the GCN encoder, the MLP encoder, the transformer backbone, the latent dimension of each component, etc. We hope this will help increase the clarity and reproducibility of our work.
When to retrain a machine learning model | Accept (poster)
Summary: This paper presents a novel approach to maintaining the performance of open-world machine learning models operating in continuously changing environments. The primary focus is on addressing a critical challenge in such settings: determining when the model should be retrained. The objective is to balance the trade-off between minimizing retraining costs and maintaining model performance. The paper argues that existing methods become impractical when retraining costs are high. To overcome this, the authors propose a methodology that explicitly accounts for retraining costs while maintaining competitive model performance. The evaluation demonstrates that the proposed approach outperforms conventional methods in scenarios where retraining costs are a significant concern.
Claims And Evidence: The claims made in the paper are clear and convincing
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not check for correctness of any proofs.
Experimental Designs Or Analyses: Experimental Design seems appropriate.
Supplementary Material: I did not review Supplementary Material
Relation To Broader Scientific Literature: Key contribution of the paper related to the machine learning algorithms design for real world scenarios where the data goes through major distribution shifts.
Essential References Not Discussed: References seems adequate.
Other Strengths And Weaknesses: ## Strengths
- The paper is well-written and easy to follow.
- The approach is mathematically well-grounded, providing a clear and formal formulation.
- The paper addresses a practical and significant research problem in open-world machine learning, making a relevant contribution to the field.
## Weaknesses:
- Although the paper provides a mathematical analysis for the proposed online objective, the objective itself appears relatively straightforward, as it primarily combines a standard loss function with retraining costs.
- The performance improvements demonstrated in the evaluation are modest compared to traditional methods that rely on distribution shift signals for retraining, raising questions about the practical advantage of the proposed approach.
Other Comments Or Suggestions: Few sentences in the related works need a bit of clarification, for example
"Since the signal is designed to adapt a model rather than trigger a full retraining, these methods are not appropriate as retraining signals."
I think "as" should be replaced with "for", but I could be wrong.
Questions For Authors: Why the results seems incremental when compare the proposed method with the traditional method even though the paper considers high training cost?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: # Weaknesses
## 1. Although the paper provides a mathematical analysis for the proposed online objective, the objective itself appears relatively straightforward, as it primarily combines a standard loss function with retraining costs.
We disagree with the statement that the objective is straightforward. The objective described by Eqn. 1 is not a standard loss function with retraining costs. The loss function described in Eqn.1 is parameterized by a binary vector $\boldsymbol{\theta}$ that is subject to some constraints.
This doesn't resemble any standard setup. If the reviewer is instead referring to the training loss used to obtain the parameters of the Gaussian r.v. in Eqn. 13, it is true that a standard loss is being used, but this is only a small part of our methodology, and cannot be characterized as the objective of our method. This would overlook the second part of our methodology, described in Section 4.2, which is central to our method and is how we form our prediction $\boldsymbol{\theta}$.
## 2. The performance improvements demonstrated in the evaluation are modest compared to traditional methods that rely on distribution shift signals for retraining, raising questions about the practical advantage of the proposed approach.
We disagree with the statement that the improvements over distribution shift methods are modest. On average, in Table 1, the distribution shift methods are $23\%$ worse than ours (equivalently, ours is $17\%$ better on average). This is a significant improvement, especially considering that they are on average $31\%$ worse than the oracle, and the oracle is $20\%$ better on average. We can include these summary statistics to better highlight the improvement of our method over these baselines.
Moreover, the experimental setting that we chose is beneficial to the distribution shift methods, as we stop the evaluation at a cost-to-performance ratio where the oracle finds that no retraining should be done. Since the distribution shift methods are entirely agnostic to any cost considerations, settings with a very high cost of retraining would be catastrophic to use. This can be easily inferred by extrapolating the results of the distribution shift methods in the top figure of Figure 2 (the green and red lines).
## 3. Few sentences in the related works need a bit of clarification, for example "Since the signal is designed to adapt a model rather than trigger a full retraining, these methods are not appropriate as retraining signals." I think "as" should replaces with "for" but I could be wrong
Yes, thank you for pointing this out. It is phrased a little awkwardly. We will rewrite line 101 from:
*Since the signal is designed to adapt a model rather than trigger a full retraining, these methods are not appropriate as retraining signals.*
to:
*Since the signal for these methods is designed to adapt a model rather than trigger a full retraining, they are not appropriate to be used as full retraining signals*.
## 4. Why do the results seem incremental when comparing the proposed method with the traditional method, even though the paper considers high training cost?
This is related to our previous response regarding the performance of distribution shift methods. We deliberately avoided scenarios with excessively high costs, where the cost-aware methods would easily determine that no retraining is necessary, while the distribution shift methods would perform very poorly.

Summary: One underexplored problem is that of when to retrain a model, assuming a sequence of datasets over time that experience distribution shift. This problem can be formulated as an off-policy reinforcement learning problem, where the goal is to find a policy with minimum cost. Authors define cost as the sum of costs from retraining and also costs from worse model performance from using an older model. Authors propose at each step to use past data to fit a future performance forecaster, which is a Bayesian model for the performance of models trained on data at time i and evaluated on data at time j, using features such as the time between when the evaluation data is collected and when the model training data is collected. Authors then propose an online policy that uses this future performance forecaster to model total cost, and decide whether to retrain based on comparing fixed quantiles of this modeled total cost under training vs not training. The proposed method, called UPF, generally outperforms baselines including CARA.
Claims And Evidence: - Proposition 3.1. This seems only tangentially related to the rest of the paper.
- Proposed method UPF performs well empirically: the experiments seem sensible.
Methods And Evaluation Criteria: Proposed method:
- I think the formulation of the problem is very reasonable (also an improvement on CARA).
- I also think the Bayesian model is largely reasonable, though I am curious about ablations with / without $z_{shift}$.
Evaluations:
- Metrics:
  - AUC for $\hat C_\alpha$, to estimate cost but aggregated over a sequence of values of $\alpha$ (tradeoff between retraining and performance costs)
  - Number of retrains compared to an oracle
- Datasets and models:
  - It is a little strange to use XGBoost as the model for tasks involving text (e.g. epicgames).
  - Why do you choose between 188 models from huggingface for last-layer training for iWildCam? What happens if you don't?
  - There could be more detail in how the datasets were split up into different datasets $D_t$.
- Otherwise it seems reasonable
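The AUC-over-$\alpha$ aggregation mentioned in the metrics above can be sketched with a trapezoidal integration; the grid and cost values below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hedged sketch of the AUC-style aggregation described in the review:
# integrate the estimated cost over a grid of retraining-cost tradeoff
# values alpha using the trapezoidal rule. Values here are illustrative.
alphas = np.linspace(0.0, 1.0, 11)
costs = 1.0 - 0.5 * alphas  # stand-in for the estimated cost at each alpha
auc = float(((costs[:-1] + costs[1:]) / 2 * np.diff(alphas)).sum())
print(round(auc, 10))  # → 0.75 (trapezoid is exact for this linear curve)
```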
Section A.8.1 advertises something different from what it delivers. The section title and beginning sound like it will address the impact of using a normal approximation to fit the Beta parameters in the method. However, what it actually does is compare using a Beta model to a Normal model.
Theoretical Claims: I checked the proof of Prop 3.1. There appear to be errors that may affect the result.
- I think there is double counting in (33). The quantity being minimized includes $pe(t_1,t_2)$ and also $pe(t_2,t_2)$. However, the summand in (33) should not contain more than one term for $pe(\cdot ,t_2)$.
- When going from (42) to (44), it seems (44) is missing a $pe(t_r^*,t_{r+1}^*)$ term.
Experimental Designs Or Analyses: They look mostly reasonable; see Methods And Evaluation Criteria.
Supplementary Material: - I briefly reviewed the proof of Prop 3.1.
- I reviewed ablations.
- I reviewed the dataset section.
- I checked A.8.1: "impact of the normal approximation".
Relation To Broader Scientific Literature: There is a lot of previous work on distribution shift detection and diagnosis but this work addresses cost-aware policies for when to re-train a model.
Essential References Not Discussed: I am not aware of essential references not discussed, but I am also not familiar with other works that address this specific problem.
Other Strengths And Weaknesses: Strengths
- This paper is clearly written.
- The proposed formulation and method are very reasonable (with caveats explained elsewhere).
- The experiments seem reasonable.
Other weakness
- I am struggling to think of a setting where this problem is realistic. My understanding is that in many industry settings, the question of when to retrain a model is not based on cost-aware modeling, but rather based on legitimate and important operational/organizational/procedural reasons that I imagine are costly and/or difficult to try to change.
- A common problem in modeling under distribution shift over time is how much past data to train on. Here, this problem is not part of the formulation (which is fine).
Other Comments Or Suggestions: - Eq 12 has an extra )
- Discussions about scaling laws seem unrelated to the paper, as scaling laws are not about retraining on new data from different/updated distributions.
- Line 232 right side: "setting $\delta = 0.5$ simply selects the decision that minimizes the expected total cost" -- I assume you mean median
- Comments in blue in the appendix proofs are helpful but you may want to distinguish them from the proof in other ways, since they will not be blue when printed in black and white.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:

# Methods
## 1. Bayesian model: ablations with/without feature $z$
Experimentally, we saw that without the shift feature $z$, the results were on average $5-10\%$ worse, with some datasets being more affected than others (the electricity and the synthetic datasets were mostly unaffected). We can include this as an ablation study.
## 2. XGBoost justification - use for tasks involving text
Better suited models could be used for each specific dataset, but we decided to keep a single model in these experiments for simplicity. We chose a tree-based algorithm since they can handle different datatypes.
## 3. Why choose between 188 models for iWildCam? What happens if you don’t?
This experiment was designed to highlight that the setup does not assume any particular model $f$ to be consistently used, and to illustrate a scenario where the training cost is not restricted to the training of a single model. The procedure to obtain $f$ is actually outside of our framework; it is part of what defines the dataset in our setting. We could have also chosen a single model, which would have resulted in lower accuracy values.
## 4. Details on dataset construction
Thank you for pointing this out. We will include additional details in Appendix A.6.
## 5. Section A.8.1 advertises something different from what it delivers
That is true. It is slightly misleading, and we will change the title of the section to *BETA APPROXIMATION VS NORMAL*.
## 6. Theoretical claims - errors in Proof of Prop. 3.1
Thank you very much for taking the time to verify our result. Yes, there is double counting in (33). The summand should be offset by a minus 1 in the first two terms to avoid this. The second mistake you pointed out in (44) is also related to this missing offset. The added terms should start at $s=t^*_{r+1}+1$ and not at $s=t^*_{r+1}$. We will fix these errors. This doesn't affect the final result.
# Weaknesses
## 7. Not a realistic setting.
If a practitioner has no flexibility on when/if to retrain the model due to some operational/organizational/procedural reasons, it is true that our algorithm is not applicable, since there is no retraining problem. However, we believe that this is a rare case and that the problem we are addressing is ubiquitous in industrial settings. Many practitioners do have this flexibility and are actually faced with the problem of deciding when to retrain their model. Indeed, we were motivated to formulate this problem due to collaborations with several industry partners who have live, deployed models, and must make this exact decision.
Many companies collect data in real-time, and their interest is to forecast something based on this most recent data. That is the case for companies that operate recommender systems, perform manufacturing control (e.g., detecting faults), or employ fraud detection systems, for example.
The cost of retraining is indeed usually related to external costs (i.e., not computation), but this is precisely what we are trying to tackle in this work. While quantifying these costs can be challenging, these types of risk assessments and quantification are actually made all the time, and are inherently considered when a decision to retrain is being made. Financial transaction companies running fraud detection systems do estimate the potential financial cost that could be incurred if a newly deployed model performed poorly.
The contribution of our work is to provide a formal framework for practitioners to incorporate cost estimates into retraining decisions. It asks them to provide a **direct specification** of what amount of accuracy gain is worth this holistic cost of retraining at a given time step. Our formulation has the considerable advantage of enabling this explicit quantification, which makes it interpretable and therefore useful for framing the problem and making decisions.
## 8. A common problem in modeling under distribution shift over time is how much past data to train on. Here, this problem is not part of the formulation
This is a valid point and could be an interesting extension to our formulation. Rather than just deciding whether to retrain, we could choose to retrain on varying amounts of samples, adding another dimension to our prediction space. Instead of only the timestep index and model index, we would now have multiple models to choose from.
# Comments
## 9. Discussions about scaling laws seem unrelated to the paper.
The organization of this paragraph made it seem like the scaling law applies to the distribution shift scenario, which, as the reviewer pointed out, is not the case. It belongs to the section where we describe the performance of a given model remaining constant (i.e., no distribution shift).
We will reorganize this paragraph by moving the scaling law discussion before the discussion about distribution shift.
## 10. Other typos
Thank you for bringing these to our attention. We will correct them.

Summary: This paper faces the complex task of understanding when a machine learning model needs to be retrained in the presence of drift. In doing so, it takes into account the problem associated with the trade-off between retraining cost and poor model performance. Their approach is based on forecasting the evolution of the model performance over time and using this information to later decide whether to retrain or not.
The authors report a set of experiments addressing classification tasks showing the performance of the devised approach.
Claims And Evidence: The claims made in the work seem sound to me.
Methods And Evaluation Criteria: I agree with the authors that this is a tough problem since many things need to be taken into account, and it is difficult to define a proper cost-to-benefit ratio or to analyse the type and amount of non-stationarity in the environment.
I believe that the work may benefit from a discussion about the types of non-stationarities that can be handled using this approach. Indeed, since the authors define a forecasting approach for the model performance, I assume that the presented approach is more suited for slowly changing environments rather than abruptly changing ones, which are harder to handle.
Concerning the proposed method, I believe that one of the main weaknesses lies in the forecasting procedure. Some of the concerns are about the choice of the feature vector $r_{i,j}$. It does not seem informative enough, since the only feature containing information about the data is $z_{shift}$, which measures the $L_1$ distance between the mean of the input features $\bar{x}\_t$ and the mean at the previous timestamp $\bar{x}\_{t-1}$. Using only this feature may, for example, prevent the identification of a "real concept drift" (with this expression, I mean the case where the input distribution does not change but the target distribution conditioned on the input features does).
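For concreteness, the shift feature as described here could be computed as follows; this is a minimal sketch, with function names and data invented for illustration rather than taken from the paper's code:

```python
import numpy as np

# Hedged sketch of the z_shift feature described above: the L1 distance
# between the mean input vector at time t and at time t-1.
def z_shift(X_t, X_prev):
    return float(np.abs(X_t.mean(axis=0) - X_prev.mean(axis=0)).sum())

rng = np.random.default_rng(0)
X_prev = rng.normal(0.0, 1.0, size=(500, 3))
X_t = rng.normal(0.5, 1.0, size=(500, 3))  # inputs with a mean shift of 0.5
print(z_shift(X_t, X_prev))  # close to 3 * 0.5 = 1.5 for this shift
```

As the review notes, a pure change in $p(Y|X)$ leaves this feature unchanged, which is exactly the limitation being raised.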
Another concern is that, as far as I understood, since the first feature refers to the model being used, the algorithm's choice of whether or not to retrain the model at time step i+1 influences the target $a_{i,j}$. This means that the prediction of future values also depends on the previous choices of retraining or not retraining the model. I wonder whether this may represent a problem.
Other simplifications lie in the use of a linear model for prediction and in the choice of the beta distribution for the prediction of forecasted values. How can these choices limit the applicability of the approach to more general and complex scenarios?
Concerning the evaluation criteria, the authors make a good work in presenting their results by testing it under different values of the retraining cost $\alpha$.
Theoretical Claims: I did not check the proofs of the theoretical claims.
Experimental Designs Or Analyses: The design of the experiments is done extensively and the presented results seem sound to me.
Supplementary Material: I reviewed Appendix A.7 describing the performance forecaster.
Relation To Broader Scientific Literature: The related works are clearly discussed. With respect to the most related work of Mahadevan & Mathioudakis (2024), this paper removes some strong assumptions concerning the data distribution and the impact on model performance and presents a more general objective that takes into account both the retraining cost and the average performance.
Essential References Not Discussed: The related works are thoroughly discussed.
Other Strengths And Weaknesses: Among the strengths of the work, I would mention the extensive and detailed numerical simulations showing the benefits of using the presented approach.
Concerning the weaknesses, see the Section on "Methods And Evaluation Criteria".
Other Comments Or Suggestions: N/A
Questions For Authors: See Sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:

# Methods
## 1. Types of non-stationarities: approach is more suited for slowly changing environments rather than abruptly changing ones
That is a good point. It is true that the best scenario for our approach is a slowly changing environment, and that abrupt changes would be harder for our model to handle. We will clarify the scope of our paper by adding a comment to the effect that our model is mostly adapted to gradual performance changes in the limitations section and state that abrupt changes in performance remain an open problem.
That being said, the combination of probabilistic forecasting and a risk-averse decision algorithm in our method provides some defence against abrupt changes in the environment. Since our approach does not rely solely on pointwise estimates but also produces probabilistic estimates through mean and variance predictors, a failure in capturing sudden changes will result in high-variance estimates for past models. Due to our risk-averse design, the model would be biased toward frequent retraining to avoid catastrophic performance drops. We believe unexpected performance rises are less likely than sudden drops, which is why our approach is designed this way.
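A minimal sketch of the kind of risk-averse, quantile-based decision rule described above (the quantile level, names, and cost samples are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

# Hedged sketch: compare a delta-quantile of the forecasted total cost with
# and without retraining; delta > 0.5 penalizes the high-variance option,
# biasing the rule toward retraining when estimates are uncertain.
def decide_retrain(cost_keep_samples, cost_retrain_samples, delta=0.8):
    return np.quantile(cost_keep_samples, delta) > np.quantile(cost_retrain_samples, delta)

rng = np.random.default_rng(2)
keep = rng.normal(1.0, 0.5, 1000)     # uncertain cost of keeping the old model
retrain = rng.normal(1.1, 0.1, 1000)  # retraining: higher mean cost, less risk
print(decide_retrain(keep, retrain))  # whether the risk-averse rule retrains
```

With a high-variance "keep" forecast, the upper quantile of its cost exceeds that of retraining, which is the defensive behavior described in the paragraph above.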
## 2. Forecasting procedure: choice of the feature vector
There are certainly various designs of input features that could be more informative. However, we aimed to keep our approach simple, as we are in some ways introducing a new problem setting. Our priority was to ensure broad applicability across settings, rather than focusing on more complex feature combinations that might yield better performance but reduce generalizability.
## 3. Forecasting procedure: inability to detect concept drift
The point about concept drift is an important one. We agree that it is crucial for the model to identify concept drift, and in fact, our model is capable of capturing pure concept drift, where $p_1(Y|X) \neq p_2(Y|X)$ while $p_1(X) = p_2(X)$.
In short, for a model $f_0$ trained on dataset $D_0 = (X_0, Y_0)$, the presence of concept drift will be reflected in the shift between its performance on the dataset $(X_1, Y_1)$ and its performance on the dataset $(X_2, Y_2)$. Even though the dataset shift feature stays constant, since we also feed the time index as input, our model can pick up this trend: the performance will be a decreasing function of $t$, and that can be captured by the model.
This is an important point worth highlighting. We will add a comment in the text about the model's capability to capture concept drift and also include the possibility of designing more informative input features as a future direction.
## 4. Dependence on past predictions: prediction of future values also depends on the previous choices of retraining or not the model
This is an excellent point. It is true that retraining decisions influence the online dataset $\mathcal{I}^{online}$, which in turn affects the prediction algorithm. This effect is, in fact, ignored by our algorithm.
Empirically, we do see that this is not a problem as we observe a good performance. However, it would be a good research direction to investigate strategies to allow for more non-optimal decisions to provide training data for the prediction algorithm in other decision regions.
We will add the following comment to highlight this fact by adding in line 259:
"*As constructed, past decisions influence the dataset $\mathcal{I}^{online}$ available for the next iteration, but this effect is ignored by the algorithm. Empirically, we find that the algorithm performs well despite this. One direction worth investigating is the incorporation of random decisions to allow the predictor to learn over a broader region of actions and responses.*"
## 5. Simplifications: linear model for prediction, beta distributions. Do these limit the applicability to more general and complex scenarios?
While our approach uses simple components, we do not believe this limits the applicability of the approach to complex scenarios. Indeed, the iWildCam experiment represents a reasonably complex setting, with distribution shift and 188 architectures of different natures.
Given the nature of the retraining problem, where data is scarce and potentially noisy, a linear regression model with variance estimation is a suitable choice, especially when only 10-20 samples are available for learning.
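A minimal sketch of such a small-sample forecaster, written here as Bayesian linear regression with a Gaussian (ridge-like) prior; the hyperparameters and names are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hedged sketch: a linear predictor with a predictive variance, suitable for
# the 10-20 sample regime described above.
def fit_predict(X, y, x_star, noise_var=0.1, prior_var=10.0):
    Xb = np.hstack([np.ones((len(X), 1)), X])        # add a bias column
    xs = np.concatenate([[1.0], x_star])
    A = Xb.T @ Xb / noise_var + np.eye(Xb.shape[1]) / prior_var
    cov = np.linalg.inv(A)                           # posterior covariance
    w = cov @ Xb.T @ y / noise_var                   # posterior mean weights
    return xs @ w, noise_var + xs @ cov @ xs         # predictive mean, variance

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(15, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.3, size=15)
mean, var = fit_predict(X, y, np.array([3.0]))
print(mean, var)  # predictive mean and variance at x* = 3
```

The predictive variance is what feeds a risk-averse decision rule: fewer or noisier samples inflate it, which in turn biases the decision.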
Furthermore, since this is a relatively new problem formulation, we found it appropriate to introduce a simple yet robust method that can be applied across various settings.
That being said, the limited modeling complexity of the chosen method may indeed restrict the model's expressiveness, and exploring more complex algorithms in the future would be valuable.
Regarding the Beta assumption, it is a common distribution for modeling continuous random variables over a finite interval. Please see our response to point 1 of Reviewer 1 for more discussion.

Summary: In this paper, the authors proposed a novel strategy to determine when to retrain deployed machine learning models. Specifically, the authors first developed a future performance forecaster for predicting the performance of models built in future time steps. Building on top of it, the authors model the problem of whether to retrain a model or not into a decision-making problem and connect it to a reinforcement learning problem. The empirical studies can demonstrate the effectiveness of the proposed method.
Claims And Evidence: Overall, the claims are quite sound and clear. But there are still some minor issues:
+ First, I feel that more explanations are needed to clearly explain why the random variables $A_{i,j}$ can be modeled as a Beta distribution. I feel that there is a jump in the flow which may confuse readers.
+ Second, I am wondering how the future performance forecaster is related to Section 4.2. Is it true that the future performance forecaster is trained first and then used for the derivation in Section 4.2, right? If this is true, how is it trained in the first place without knowing the ground-truth future performance numbers? If this is not true, how is its training process correlated to the following RL algorithm?
+ Third, in Section 4.2, the authors discussed the rules for determining $\Theta_t$ at each time step $t$. Are these rules derived by the RL algorithm? More clarification of their connection to the RL formulation would be essential.
Methods And Evaluation Criteria: I think the general method looks good to me, and it is fully backed by rigorous theoretical analysis. The evaluation criteria are also appropriate.
Theoretical Claims: Yes, I checked the theoretical claims and proofs, which are all correct.
Experimental Designs Or Analyses: I think the experimental designs and analysis look good to me.
Supplementary Material: Yes, I checked the supplementary material, including the theoretical analysis and parts of the additional experiments. They all look good to me.
Relation To Broader Scientific Literature: This paper is closely related to the literature on distribution shifts.
Essential References Not Discussed: No, everything is cited properly.
Other Strengths And Weaknesses: Strengths:
+ The idea of framing when to determine to retrain a model as a reinforcement learning problem is novel. This is also supported by a rigorous theoretical analysis.
+ Except for some confusing points described above, the overall flow of the paper looks good to me.
+ The authors also performed extensive experiments to demonstrate the effectiveness of the proposed methods.
Weakness:
- I would suggest that the authors add some additional discussion to explain the connection between the retraining problem and the continual learning problem, which I think are pretty close.
- Although the theoretical analysis in the paper is rigorous, the analysis (e.g., the one in Appendix A.11) seems to be unrelated to the distribution differences between $D_t$ and $D_{t+1}$. Does this mean that the theoretical analysis applies to datasets with arbitrary distributions? More discussion on this point would be essential.
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:

# Claims and Evidence
## 1. Beta r.v. motivation:
We agree that there could be a better motivation for the Beta distribution and will add sentences to improve the flow. Our choice to use a Beta distribution was motivated by the facts that 1) Beta distributions are appropriate (and a common choice) for continuous r.v.s defined over a finite interval; and 2) the empirical accuracies can be interpreted as a mean of Bernoulli outcomes, for which the Beta is the conjugate prior. Moreover, the model performs well empirically, and we conducted some experiments with a Gaussian-based alternative, which can be seen in Appendix A.8.1, where the Beta model slightly outperformed. We recognize that this is a modeling choice and the framework could be applied with other distributions.
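To illustrate points 1) and 2), a minimal sketch of the Bernoulli-Beta conjugacy (the uniform Beta(1, 1) prior and the counts are our own illustrative choices, not values from the paper):

```python
# Hedged sketch: if accuracy is the mean of n Bernoulli outcomes with k
# successes, a Beta(a0, b0) prior over the success probability updates in
# closed form to Beta(a0 + k, b0 + n - k).
def beta_posterior(k, n, a0=1.0, b0=1.0):
    a, b = a0 + k, b0 + n - k
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

mean, var = beta_posterior(k=85, n=100)  # 85 correct out of 100
print(round(mean, 3))  # → 0.843
```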
## 2. Future performance forecaster is trained first and then used for derivation in Section 4.2
Yes, we train the forecaster on some available training data collected at some past timesteps (the offline data, denoted $\mathcal{I}^{offline}$ in Section 3.1). We incorporate a modeling assumption that the predictive relationships learned during this offline period also exist in the operational period, allowing us to forecast performance in future timesteps.
## 3. Are rules in Section 4.2 derived from RL algorithm?
No, these rules are not directly derived from the RL formulation. Section A.11 aims to clarify and highlight the connection between our approach and an offline RL formulation, though it is not a perfect match. To make this clearer, we will rename the appendix "Connection to an Offline RL Formulation." Since the problem is closely related to offline RL, we found it important to emphasize these connections. However, the addressed task is not ideal for RL, as discussed in the main text, which is also reflected in the performance of a basic offline RL baseline in Table 7.
# Weaknesses
## 4. Additional discussion to explain the connection to continual learning
There is a connection between the "when to retrain" problem and continual learning and we will add more text to discuss. The main difference lies in how cost is incorporated in the problem formulation and whether the setting allows for future planning. In particular, continual learning approaches do not ever delay updates due to cost considerations. For example, in a when-to-retrain scenario, I might decide to hold off from retraining in period 7, because I only have budget to retrain one more time, and if I retrain in the next window in period 8, it will give me better performance for periods 9-11. In continual learning, the primary goal is typically to maintain performance, sometimes while trying to reduce the number of gradient updates. In contrast, our when-to-retrain formulation is abstracting away the training procedure and focuses more on cost considerations. Here, cost is not limited to gradient updates. It is a parameter set by the practitioner which will influence the algorithm’s behavior, as the algorithm explicitly considers the cost trade-off when deciding whether to retrain.
Moreover, our algorithm does not aim to optimize how models are retrained/updated under distribution shift, which is a core focus of continual learning.
In fact, continual learning techniques could be combined with our algorithm to balance the decision of whether to apply an update. Even if continual learning mechanisms indicate that gradient updates should be applied to adapt to distribution shift, our approach could deem that it would still be better to wait due to other training cost considerations.
As suggested, we will add a more detailed explanation of the connections and differences between our problem and the field of continual learning, incorporating these key points.
## 5. Theoretical analysis: does it apply to datasets with arbitrary distributions
Yes, in theory, the result in Eqn. 7 does not impose any structure on the dataset distributions, so it is the case that it would apply to datasets with arbitrary distributions. It is also the case for Appendix A.11. However, since the result relies on bounding the performance gap of two models trained on subsequent datasets $\mathcal{D_i}$ and $\mathcal{D_{i+1}}$, evaluated on dataset $\mathcal{D_t}$, it will likely be related to a function of the distributions of these datasets. We will add a comment to this effect as suggested. This is exemplified in Appendix A.5, where we present examples illustrating how the performance quantities $pe_{i,t}$ can be characterized for a simple Gaussian model. In this example, the bound is indeed a function of the dataset characteristics, i.e., its size $|\mathcal{D}|$, variance $\sigma$ and dimensionality $d$.
Learning Safe Control via On-the-Fly Bandit Exploration

Accept (poster)

Summary: This paper proposes a safe control method to address the problem of an infeasible safety filter. The authors use Gaussian processes to learn the system dynamics. They use the lower confidence bound of the control barrier function (CBF) to check the feasibility of the safety filter, and, when it is infeasible, sample exploratory controls according to the upper confidence bound of the CBF to reduce the model uncertainty. They provide a guarantee that the safety filter becomes feasible within finitely many online exploration samples, and evaluate the proposed safe control method on numerical simulations of cruise control and quadrotor control.
Claims And Evidence: The authors claim that the proposed safe control method is safe with high probability, and support this via
1. theoretically proving feasibility of the safety filter within finitely many exploratory samples.
2. experimentally demonstrating numerical results on the safe control of two different dynamical systems.
However, I think these evidences may not fully support the claim:
1. The authors did not provide a safety guarantee during the exploratory sampling procedure.
2. The default sampling frequency in the experiments is 1e5 Hz, which may not be practical in real-world experiments.
Methods And Evaluation Criteria: Overall I think the proposed method makes sense and benchmark used for evaluation are common-used embodiments in safe control.
Theoretical Claims: As mentioned above, the absence of a safety analysis during the exploration phase hinders the guarantee of high-probability safety. I think a further analysis of the probability of unsafe behavior when sampling via the UCB should be conducted.
Experimental Designs Or Analyses: During the experiment, the authors track the value and LCB of the CBF function, and demonstrate the effectiveness of the proposed optimistic sampling method. They also show the control trajectories of the quadrotor.
I have following concerns regarding to the experiment part:
1. As mentioned above, the 1e5 Hz sampling frequency may be too high for real-world deployment. Under low sampling frequencies, the proposed method cannot guarantee a high safety probability, as shown in Figure 5.
2. Some implementation details are missing (e.g., the settings of $\beta$, $\alpha$, and $\epsilon$).
Supplementary Material: I went through the appendix, including the benchmark settings, main lemmas and theorems.
Relation To Broader Scientific Literature: Existing safe control methods require that the safety filter is always feasible, or that a backup policy is available to ensure safety during exploration. The authors propose to use the UCB of the CBF to sample optimistically safe controls and learn the CBF, addressing the issue of an infeasible safety filter or an unavailable backup policy. However, I think the provided theoretical and empirical results cannot fully support this claim.
Essential References Not Discussed: The main idea of this paper is to introduce an optimistic exploratory phase when pessimistic safety estimation is infeasible. I think an important work [1] is missing, which also introduces a UCB-based exploration strategy and online learns a backup policy to guarantee high-probability safety. I think the authors should include this paper in discussion.
[1] Sukhija, Bhavya, et al. "Gosafeopt: Scalable safe exploration for global optimization of dynamical systems." Artificial Intelligence 320 (2023): 103922.
Other Strengths And Weaknesses: **Strength**
1. The addressed safe control problem without feasible safe filter is important.
**Weakness**
1. Theoretical and experimental concerns have been mentioned above.
2. The proposed method uses GP to jointly predict over the concatenation of states and controls, which may be difficult to scale to higher-dimensional systems.
Other Comments Or Suggestions: 1. Some typos: line 412: duplicated number. line 414: should be Figure 3?
Questions For Authors: 1. In Figure 5, seems the rate of failure under 10 hz is better than 100 hz and 1000 hz. Can you explain why this happens?
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:

Thank you kindly for your review. We have corrected all typos and addressed all your major comments below. We want to stress at this point that our method does, in fact, guarantee safety, even throughout exploration.
**Safety guarantee during exploration:** Our method does guarantee safety during exploration. This is, in fact, the whole point and one of the main contributions of our method. Our method explores in a way that recovers the feasibility of the CBF safety filter **before** the safe set is exited. In other words, the closed-loop system is always in the safe set during exploration. This is illustrated by Figure 1 and shown theoretically in Theorem 3.2. The key idea in the proof is to show that if we collect data with our exploration strategy quickly enough, we *must* recover feasibility before exiting the safe set. This is achieved by upper-bounding the information that can be gained inside the safe set, then showing that each collected data point contributes sufficiently toward reducing the information within the safe set (Lemma B.9). This is a crucial aspect of our method, and we will make this clearer in the revised manuscript by explicitly stating it in the abstract and introduction.
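A minimal sketch of the confidence bounds at play here, using plain GP regression with an RBF kernel (the kernel, data, and $\beta$ value are our illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

# Hedged sketch: GP posterior over an unknown scalar function (e.g., a CBF
# constraint residual). LCB = mu - sqrt(beta)*sigma certifies feasibility
# pessimistically; UCB = mu + sqrt(beta)*sigma picks optimistic exploratory
# inputs when the pessimistic filter is infeasible.
def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_bounds(X, y, Xq, beta=10.0, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(Xq, X), rbf(Xq, Xq)
    mu = Ks @ np.linalg.solve(K, y)
    var = np.clip(np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T)), 0.0, None)
    sd = np.sqrt(var)
    return mu - np.sqrt(beta) * sd, mu + np.sqrt(beta) * sd

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X[:, 0])
Xq = np.linspace(0.0, 2.0, 21)[:, None]
lcb, ucb = gp_bounds(X, y, Xq)
# If the LCB-based filter is infeasible, sample where the UCB is largest to
# shrink uncertainty before leaving the safe set.
print(float(Xq[np.argmax(ucb), 0]))
```

The gap between the two bounds shrinks as data is collected, which is the mechanism behind the finite-sample feasibility argument described above.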
**Specifications for experiments:** In the experiments, we set $\beta=10$, $\epsilon=0$ and $h$ equal to the identity function. The high choice of $\beta$ aims to avoid underestimating the model error. Note that $\beta=2$ is otherwise frequently chosen in the literature, e.g., in Berkenkamp et al. (2017) and Wachi et al. (2018). The choice of $\epsilon=0$ corresponds to assuming that the CBF constraint can be enforced robustly, whereas $h$ equal to the identity function is for simplicity.
Berkenkamp, F., Turchetta, M., Schoellig, A., & Krause, A. (2017). Safe model-based reinforcement learning with stability guarantees. Advances in neural information processing systems, 30.
Wachi, A., Sui, Y., Yue, Y., & Ono, M. (2018). Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
**Sampling time:** Although the sampling time required to achieve safety consistently in the examples is high, note that they correspond to a setting with no prior knowledge. Hence, it is reasonable to expect a high sampling time, as otherwise, sufficient information cannot be collected before exiting the safe set.
**GoSafeOpt paper:** Thank you for pointing out this paper to us. The paper does indeed employ similar tools and has a similar motivation. Some important differences include:
- On a high level, the goal of GoSafeOpt is to expand a set that is already safe. Although our method can also be used for the same end, its main goal is to render the system safe under high levels of uncertainty. This is particularly useful for settings where other safety certificates can potentially fail.
- A further difference that is related to the one above is that GoSafeOpt requires a set of initially safe control parameters at the beginning of exploration to guarantee safety. In contrast, our approach can guarantee safety purely through exploration, without requiring an initially safe set of control parameters.
- In GoSafeOpt, the goal is to tune a parametric controller, and the GP-based safety conditions depend on the controller's parametric structure. In our setting, the GP model is independent of any underlying control structure.
We will include this comparison in the revised paper.
**Fewer safety violations at 10Hz:** We agree that this is somewhat unexpected since a higher sampling rate should yield a better model faster, which translates to safer control. However, it is important to remember that at lower sampling frequencies, the system can quickly reach the boundary of the safe set before more than a handful of data has been collected. In these settings, the exploratory input is applied for a considerable amount of time before potentially reverting to the safety filter-based control, making its role non-negligible. Hence, we speculate that the exploratory control is particularly beneficial for safety in this particular instance. | Summary: ## update after rebuttal
My evaluation of the paper has not changed after the rebuttal. The technical part of the paper is correct but does not bring much new insight into the topic. The assumption of a known CBF for a system is stronger than the existence of partial models and backup controllers. The experiments do not show particular benefits of using CBFs compared to reasonable designs of other exploration schemes.
The paper described a method for using control barrier functions to guide exploration under model uncertainty. Theoretical analysis on sample complexity is given in the bandit exploration setting. Experiments are shown on quadrotor and cruise control which have dynamics that are well-understood.
Claims And Evidence: The claims on safety-guided bandit exploration are supported, modulo the limitation of experiments only on system that have well-understood dynamics with high controllability and limited uncertainty.
Methods And Evaluation Criteria: The main problem is that although the paper claims to provably achieve safety "in a general setting that does not require any prior model or backup controller", there is the requirement of a fully known control barrier function. This assumption is very much questionable, because the difficulty of coming up with CBFs is well-known even under fully specified dynamics models. The proposed framework can only be useful when the model uncertainty can be dominated by the known CBF and control authority, which is a setting that hardly requires the whole CBF mechanism in the first place. For instance, this assumption is equivalent to requiring a backup controller that is very easy to formulate. It is thus misleading to claim that the method achieves safety in a general setting.
Theoretical Claims: Under the assumptions, the theoretical claims on sample complexity are reasonable.
Experimental Designs Or Analyses: See above.
Supplementary Material: none
Relation To Broader Scientific Literature: The assumptions about the systems make the need of CBF questionable. By only comparing with CBF-based methods, the paper is not well-positioned in the literature.
Essential References Not Discussed: It should compare with methods that do use prior knowledge about the dynamics and backup controllers, because the need for a predefined CBF is implicitly imposing even stronger assumptions.
Other Strengths And Weaknesses: none
Other Comments Or Suggestions: none
Questions For Authors: The main part of the paper says it is ok to assume prior model of the system $\hat f$ and $\hat g$ to be zero. How do you come up with the CBF as input to the system in that case? What happens if the collected data show that the learned dynamics violates the CBF that is assumed given? In the setting of the experiments, how are the assumption different from having a nominal understanding of the system dynamics and backup controller?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you kindly for your review. We have addressed your comments and questions below.
**Knowledge of CBF and difference to nominal understanding of the system dynamics/backup controller:** Although the CBF assumption cannot be ensured in a general setting, it still offers flexibility, and there are ways to relax it in practice. Typically, only a portion of the safe set is visited during control; hence, the CBF function only has to satisfy the corresponding conditions for the corresponding subset of the state space. Furthermore, we can achieve conservatism by initially restricting the safe set to a region that is easy to control and then iteratively expanding it as more data is collected. This potentially allows us to gradually improve the CBF as more system knowledge allows us to expand the safe set. Alternatively, we can include conservatism in the safety requirement by computing the CBF condition from a collection of (potentially valid) CBFs and taking the worst case.
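The worst-case construction over a collection of (potentially valid) CBFs mentioned above could be sketched as follows; this is a hypothetical toy illustration with invented constraint functions, not code from the paper:

```python
def robust_cbf_constraint(candidates, x, u):
    # Worst case over a collection of candidate CBF constraint functions
    # c_i(x, u); the input u is certified at state x only if the most
    # pessimistic candidate still yields a nonnegative constraint value.
    return min(c(x, u) for c in candidates)

# Hypothetical affine candidate constraints for a scalar state and input.
cands = [
    lambda x, u: 1.0 - x + u,        # candidate CBF condition 1
    lambda x, u: 2.0 - x + 0.5 * u,  # candidate CBF condition 2
]
val = robust_cbf_constraint(cands, x=0.5, u=0.0)  # min(0.5, 1.5) = 0.5
safe = val >= 0.0  # True: worst-case candidate is satisfied
```

This treats validity conservatively: safety is only certified when every candidate CBF condition holds.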
Additionally, we note that the setting where a CBF is known but the dynamics are unknown is an open ongoing research question that has garnered increased attention recently:
- End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks (Cheng et al., 2019)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions (Choi et al., 2020)
- Learning for safety-critical control with control barrier functions (Taylor et al., 2020)
- Constraint-Guided Online Data Selection for Scalable Data-Driven Safety Filters in Uncertain Robotic Systems (Choi et al., 2023)
- Safe Barrier-Constrained Control of Uncertain Systems via Event-triggered Learning (Lederer et al., 2024)
- Learning-Based Prescribed-Time Safety for Control of Unknown Systems With Control Barrier Functions (Huang et al., 2024)
Note that these works require additional assumptions beyond a known CBF to achieve safety.
Furthermore, we believe our method still has value even in settings where the CBF is not strictly valid. One of our paper's main contributions is that we use exploration to recover safety if all other safety certificates fail. This strongly contrasts with other approaches, which either assume enough controllability to stay within the safe set at all times or require a safe backup controller. Hence, it provides a principled way to design an exploration-based controller that aims to achieve safety whenever other certificates become invalid. We will include this discussion in the revised paper.
**Comparison with other baselines:** We are currently implementing a comparison with the heuristic proposed by (Choi et al., 2023), which computes control inputs by maximizing the LCB whenever infeasibility occurs instead of exploring the state space. We will also include a comparison with the baseline with a fully known model. We will report the results during the second round of reviews.
**Designing a CBF with a zero model prior:** In general, it is impossible to design a CBF without any prior model knowledge. However, it is still less challenging than deriving an accurate system dynamics model, and this requirement can frequently be relaxed, as discussed above. Moreover, note that allowing a prior model of zero is not strictly an assumption of our method but rather something that it allows for, e.g., if we want to learn an unbiased model purely from data. Our paper highlights this because it contrasts with other methods, which require more assumptions/backup controllers even in settings where a prior model is available.
**CBF design in experiments:** The CBF was taken from the cruise control setting from Ames et al. (2021). It restricts the velocity of the vehicle as it comes closer to the vehicle in front. For the quadrotor, the first CBF (for avoiding ground collision) is similar to the cruise control setting, whereas the second CBF (for avoiding overrotation) corresponds to a quadratic equation that constrains the velocities and the orientation of the quadrotor.
**What happens if the collected data show that the learned dynamics violates the CBF that is assumed given?** The CBF is not necessarily learned from the model. Hence, it can violate the learned dynamics. In fact, it can violate the dynamics under pessimism (this corresponds to the CBF LCB), which is when our method explores.
Choi, J. J., Castaneda, F., Jung, W., Zhang, B., Tomlin, C. J., & Sreenath, K. (2023). Constraint-guided online data selection for scalable data-driven safety filters in uncertain robotic systems. arXiv preprint arXiv:2311.13824.
Ames, A. D., Grizzle, J. W., & Tabuada, P. (2014, December). Control barrier function based quadratic programs with application to adaptive cruise control. In 53rd IEEE conference on decision and control (pp. 6271-6278). IEEE.
---
Rebuttal Comment 1.1:
Comment: The rebuttal fails to address my primary concerns: while the results follow from standard derivations under the restrictive assumptions, the writing of the paper exaggerates their significance, and will very likely mislead practitioners and future research in this important area.
The authors make the following claims in the abstract
>By combining a safety filter with exploration in this manner, our method provably achieves safety in a general setting that does not require any prior model or backup controller, provided that the true system lies in a reproducing kernel Hilbert space.
This very strong claim stands in sharp contrast with the Assumptions 2.1-2.8 that are made later on in the paper. While some of these assumptions are standard, the assumption of having a CBF is in most realistic cases way stronger assuming an approximate model or having a backup controller -- computing CBFs themselves can be extremely challenging and pretty much not doable in high-dimensions under completely precise models. The authors also said in the rebuttal
>In general, it is impossible to design a CBF without any prior model knowledge.
Then why would you make a statement in the paper that clearly is contradictory with this fact, which is well-known to the CBF community while clearly misleading to anyone who's unfamiliar with these techniques?
We all agree that some approximation of the dynamics and CBF conditions can be allowed without hurting safety, both formally and practically. The technical part of the paper is one way of formalizing some aspect that has been folklore for a long time: If the assumptions are strong enough to ensure that the estimated system behaviors concentrate sufficiently under collected data, such that the confidence bounds guarantee safety with enough margin under a precomputed CBF over a nominal dynamics model and controller, then such explorations can be safe, simply following forward invariance. Although the use of GP and RKHS make the conditions on the sampling part more precise, they do not really bring new insights to the techniques or the problems.
I'm happy to go into detailed discussions about the problems of accepting the paper as is.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply.
**Claims:** We understand your concerns regarding our claims and how we phrase them. Naturally, our goal is not to mislead the community or anyone unfamiliar with control barrier functions. We are happy to revise our claims and provide more context on control barrier functions. In the revised paper, we will highlight the limitations of requiring a CBF and how to obtain them (e.g., via low-fidelity reduced order models (Molnar et al., 2024; Cohen et al. 2024) or from the learned model). We will reformulate the abstract and the sentence you pointed out in it as follows:
*In this paper, we assume to know a control barrier function, which we combine with a learned model to specify a robust certificate that ensures safety if feasible [...] We show theoretically that combining a safety filter with bandit exploration in this manner achieves safety, provided the control barrier function is valid and the true system lies in a reproducing kernel Hilbert space.*
Similarly, we will reformulate our claims in the introduction, main body of the paper, and conclusion to stress that a valid control barrier function is necessary to achieve our theoretical results. Furthermore, we will include a paragraph dedicated to our assumption's limitations in the discussion section, which will include the discussion from our previous rebuttal comment.
Please note that our method does, in theory, allow for a zero prior model of the dynamics, which we think is an unusual and interesting result, particularly since it contrasts with existing literature on learning with CBFs, where a strictly non-zero model is required in addition to a CBF (Taylor et al., 2020; Choi et al., 2020; Lederer et al., 2024). However, we agree that, while this is a theoretically interesting feature, in most practical settings where a CBF is known, we will have some understanding of the dynamics, and this theoretical result should be presented in that light. With this in mind, we will align our claims in the abstract and introduction with the more commonly encountered setting of a known CBF with partially/superficially known dynamics. We will present the feature of not needing a prior model of the one-step dynamics from our theoretical result as a remark, accompanied by this discussion.
**Further contributions:** We respectfully disagree with your latter statement:
*The technical part of the paper is one way of formalizing some aspect that has been folklore for a long time: If the assumptions are strong enough to ensure that the estimated system behaviors concentrate sufficiently under collected data, such that the confidence bounds guarantee safety with enough margin under a precomputed CBF over a nominal dynamics model and controller, then such explorations can be safe, simply following forward invariance. Although the use of GP and RKHS make the conditions on the sampling part more precise, they do not really bring new insights to the techniques or the problems.*
Please note that our setting does not assume any additional CBF margin during exploration (i.e., we do not change the size of the safe set). Instead, the tightening of the constraint is used as a trigger such that whenever infeasibilities occur, our method explores to collect data in a way that recovers feasibility of the constraint and maintains safety.
Additionally, while our work presents a valuable theoretical contribution in formalizing this "folklore", we also make an important methodological contribution in our sampling scheme: a significant part of our contribution is how data is collected here. In particular, an arbitrary data collection scheme is insufficient, as it does not guarantee that sufficient information is collected to make the CBF constraint valid again on time. This is where the bandit setup becomes critical, as it provides a framework to analyze which control inputs are most informative and will contribute most strongly towards maximizing the CBF time derivative. These techniques have not been used in the CBF literature and provide a novel way of addressing infeasibility of the CBF constraints.
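The bandit-style selection of informative inputs described above can be sketched with a plain GP posterior and an upper confidence bound. This is a hypothetical toy example (the confidence scaling `beta = 10` follows the value stated in the rebuttal, but the kernel, data, and candidate set are invented for illustration), not the authors' implementation:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between row-stacked inputs A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior mean and standard deviation at Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(rbf(Xs, Xs)) - (v**2).sum(0), 0.0, None)
    return mu, np.sqrt(var)

# Toy data: noisy observations of an (unknown) quantity such as the CBF
# time derivative at visited state-input pairs; candidates are inputs
# that could be applied next.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
candidates = rng.uniform(-1, 1, size=(100, 2))

beta = 10.0  # confidence scaling, as in the authors' experiments
mu, sigma = gp_posterior(X, y, candidates)
ucb = mu + beta * sigma
u_explore = candidates[np.argmax(ucb)]  # optimistically most informative input
```

The point of the sketch is the selection rule: exploration picks the input whose optimistic (UCB) estimate of the relevant quantity is largest, rather than an arbitrary data point.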
**Additional experiments:**
As you suggested, we included a baseline with a backup controller (Choi et al., 2023). We tested their approach with a varying number of prior data. You can find the mean rate of success for their method below.
| **Number of data** | 0 | 5 | 10 | 20 | 50 | 200 | 500 | 1000 |
|----|----|---|---|----|---|----|----|---|
| **Success (safety) rate** | 0.0 | 0.0 | 0.0 | 0.3 | 0.4 | 0.5 | 0.9 | 0.9 |
Cohen, M. H., Molnar, T. G., & Ames, A. D. (2024). Safety-critical control for autonomous systems: Control barrier functions via reduced-order models. Annual Reviews in Control, 57, 100947.
Molnar, T. G., Cosner, R. K., Singletary, A. W., Ubellacker, W., & Ames, A. D. (2021). Model-free safety-critical control for robotic systems. IEEE robotics and automation letters, 7(2), 944-951. | Summary: The paper proposed an online safe control algorithm with Gaussian process models of the dynamics and bandit-type exploration to learn the dynamics. Then, the learned dynamics are combined with a control barrier function to ensure online safety. The control signal is solved with a safety filter with a lower confidential bound for robustness. The data collection requires a high frequency to prevent safety violations and infeasibility of the safety filter optimization.
## Update after rebuttal
My major concern is the connection to previous theoretical papers about safe exploration, which is resolved by the rebuttal.
Claims And Evidence: I examined all the claims in the paper, and all of them are supported by clean and convincing evidence.
Methods And Evaluation Criteria: The proposed method makes sense. The assumptions might be too strong, but it is a general problem of control barrier function-based methods, not the problem of authors.
The evaluation criteria might miss some baselines; currently, it only includes random exploration. I suggest the authors include [1] and its follow-up work, such as [2], to compare performance.
[1] Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." International conference on machine learning. PMLR, 2015.
[2] Wachi, A., Sui, Y., Yue, Y., & Ono, M. (2018). Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Theoretical Claims: I briefly checked the proof of Theorem 3.2; it looks good and correct to me. I didn't check the correctness line by line.
Experimental Designs Or Analyses: As I said in Methods And Evaluation Criteria, the evaluation might miss some baselines; currently, it only includes random exploration. I suggest the authors include [1] and its follow-up work, such as [2], to compare performance.
[1] Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." International conference on machine learning. PMLR, 2015.
[2] Wachi, A., Sui, Y., Yue, Y., & Ono, M. (2018). Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Supplementary Material: I briefly reviewed appendix B.
Relation To Broader Scientific Literature: The results are novel in the sense of combining control barrier functions, but similar ideas have been explored in other safe exploration literature like
[1] Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." International conference on machine learning. PMLR, 2015.
[2] Wachi, A., Sui, Y., Yue, Y., & Ono, M. (2018). Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
[3] Prajapat, Manish, et al. "Near-optimal multi-agent learning for safe coverage control." Advances in Neural Information Processing Systems 35 (2022): 14998-15012.
Essential References Not Discussed: The results are a bandit-like exploration algorithm to ensure safe control with control barrier functions, but similar ideas have been explored in other safe exploration literature (without using control barrier functions), such as the references listed above.
Other Strengths And Weaknesses: # Strength
1. The authors explain a lot about the solution existence, problem difficulties, and limitations. I appreciate them very much.
2. The experimental results show the effectiveness of the proposed algorithm.
# Weakness
1. The required sampling time might not be realistic in the real world.
2. As the data accumulates, the dimension of the safety filter optimization problem will increase.
Other Comments Or Suggestions: N/A. Good paper.
Questions For Authors: 1. Can you report the computation time in the experimental section?
2. Can you explain the difference between this paper and previous safe exploration with GP and bandit style in the following references?
If the differences are properly justified, I will further improve my score. Overall, it is a good paper.
[1] Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." International conference on machine learning. PMLR, 2015.
[2] Wachi, A., Sui, Y., Yue, Y., & Ono, M. (2018). Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
[3] Prajapat, Manish, et al. "Near-optimal multi-agent learning for safe coverage control." Advances in Neural Information Processing Systems 35 (2022): 14998-15012.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you kindly for reviewing our paper. Please find our answers to your questions and comments below.
**Comparison with other references and baselines:** Thank you for suggesting the additional references. Although the methods of Sui et al. (2015), Wachi et al. (2018) and Prajapat et al. (2022) employ similar tools (the most notable being a combination of GP-based LCBs and UCBs for safety and optimistic exploration), there exist some fundamental differences between their methods and ours. Arguably the biggest difference on a conceptual level is that exploration in their approaches is driven by return maximization. In contrast, exploration in our approach is driven by the need to ensure safety. Moreover, their approaches do not allow the (pessimistically) unsafe states to be visited, whereas ours does, using exploration to reduce uncertainty while in those states. Other differences are as follows:
- Their methods require a tabular MDP setting, which scales poorly with the number of state space dimensions and actions.
- They are designed for a discrete-time setting, whereas our approach considers a continuous-time setting.
- Their methods require the transition dynamics to be known perfectly, whereas our approach does not.
- A further difference tied to the previous point is how optimistic exploration is used. The methods of Sui et al. (2015) and Wachi et al. (2018) utilize exploration to expand the safe set and maximize the returns under optimism. However, the safe set is expanded only by visiting safe states, i.e., by incrementally moving toward the unsafe states. Note that this is typically only achievable with perfectly known dynamics. In contrast, our approach expands the safe set by directly exploring the states that are unsafe under pessimism.
After carefully analyzing the method and code of Wachi et al. (2018), we have concluded that to implement their method on one of our baselines, we would require significant changes to their method:
- First, we would need to discretize the dynamical systems. While this is possible for the cruise control example, it is practically infeasible for the quadrotor. This is because the state space is 10-dimensional, and the nonlinear dynamics are strongly state-dependent, meaning that a dense graph is required. However, this implies a graph with 10^20 nodes for just 20 support points per dimension.
- Secondly, we would need to address the fact that the dynamics are unknown in our setting. To this end, we could either use the GP mean to estimate the state-transition dynamics or a pessimistic estimate of the transition dynamics under model uncertainty and the pessimistic safety constraint. Moreover, the state transition graph would have to be updated after every time step, which was not intended under the original algorithm.
Overall, we feel that directly implementing their code on our setting would entail design decisions that would significantly alter the original method, making a fair comparison difficult. However, we will thoroughly discuss these papers in the revised manuscript. Moreover, we will include a comparison with the heuristic proposed by Choi et al. (2023), which corresponds to an equation that attempts to maximize the LCB whenever infeasibility occurs.
**Regarding computation time:** Our method mainly requires computation time for GP predictions and to solve the optimization problems involving the LCB (exploration) and UCB (safe control) of the CBF. Please note that, due to the nature of the CBF optimization problem, we only require a single GP pass to formulate the required vectors and matrices, after which we only have to solve a convex optimization problem. Hence, the GP computation scales with the data, whereas the CBF optimization does not. However, sparse inference tools can be easily applied to improve scalability. Below, we report the average computation time for each operation as a function of the data using vanilla GPs (we use the GPyTorch toolbox without GPU acceleration):
| | GP | Optim. |
| ---- | ------ | ---- |
| n=0 | 0.0399 | 0.0346 |
| n=10| 0.0407 | 0.233 |
| n=50| 0.0410 | 0.0174|
| n=200| 0.0448 | 0.0194 |
| n=500| 0.0654 | 0.0182 |
The GP column corresponds to the average time of GP computations per iteration, which scales with the data, as expected. The Optim column corresponds to the average time taken by the optimization problem, which stays approximately constant with the amount of data.
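For intuition on why the CBF optimization stays cheap and roughly constant in the data size, here is a minimal sketch of a safety filter for a scalar control input, where the quadratic program admits a closed-form solution. The names and the affine constraint form `a + b*u >= 0` are assumptions for illustration, not the paper's actual optimization problem:

```python
def cbf_safety_filter(u_nom, a, b):
    """Closed-form solution of the scalar CBF-QP

        min_u (u - u_nom)^2   s.t.   a + b * u >= 0,

    where a and b play the roles of the (lower-confidence-bound) drift
    and input-gain terms of the CBF condition. Assumes b > 0, so the
    constraint is feasible.
    """
    if a + b * u_nom >= 0:
        return u_nom   # nominal input is already certified safe
    return -a / b      # otherwise, project onto the constraint boundary

# Example: the nominal input violates the constraint 1 + u >= 0,
# so the filter projects it to the boundary.
u = cbf_safety_filter(u_nom=-2.0, a=1.0, b=1.0)  # -> -1.0
```

Once the GP pass has produced `a` and `b`, solving the filter no longer depends on the dataset size, which matches the roughly constant "Optim." column above.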
---
Rebuttal Comment 1.1:
Comment: Apologies for the late reply. Thanks for the clarifications; the novelty is now clearer to me. I will update my score.
It will be helpful to include the discussion briefly in the revision. | Summary: The work presents an approach to design certifiably safe feedback controllers for a priori unknown systems. The authors propose using gaussian processes to approximate the learned model (or its error with respect to the true model). Then, they leverage bandit theory to propose upper and lower bounds to the model estimates, using the lower bounds for ensuring safety through a control barrier function and deciding when infeasibility occurs. Under infeasible conditions (when the controller is not able to ensure safety given the current model bound), the method uses the upper confidence bound to guide exploration and collect data to improve the model estimates. They provide theoretical results showing that under assumptions, the method ensures safety up to a given probability bound. They demonstrate the effectiveness of the work in several control problems.
Claims And Evidence: The claims proposed are well supported, and the theoretical results and experiments justify the contributions proposed.
Methods And Evaluation Criteria: The theoretical results are sensible, and the proposed experiments seem adequate for the method proposed.
Theoretical Claims: I checked the presented proofs for the main theorem (and necessary lemmas presented in the appendix) to the best of my capacity (given the reviewing load). The results seem correct and the proofs were well structured. I did not find any issues.
Experimental Designs Or Analyses: The experiment results are thorough and showcase the effectiveness of the work. I do have a question regarding the statement in the introduction on the given examples not being solvable by the current state of the art. The experiments are interesting, but I do not really understand how these are not solvable by current methods, given that they seem fairly standard control problems.
Supplementary Material: I reviewed the additional technical results and proofs.
Relation To Broader Scientific Literature: The authors do a good job at relating the work to existing safe control literature. I do not have any comments on this.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
- The paper has solid theoretical results, does not overstate the claims and showcases the method effectively.
- The idea of leveraging bandit bounds for model uncertainty decision making is interesting.
Weaknesses:
- Some (critical) statements are made and quickly dismissed, see below.
- The method still depends on critical knowledge or careful heuristics when picking the kernels, and requires having valid control barrier functions.
Other Comments Or Suggestions: See above and below. Overall, I find this a solid paper.
Questions For Authors: - The main question I have is regarding the assumption that we do not know the system dynamics and we need to learn them, and yet we do know a control barrier function for the system. Is this realistic for most applications?
- The choice of the kernel (although it is discussed) is not obvious. Is there any disadvantage to choosing "universal kernels"? And what are their limitations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you kindly for your review. Below, you will find our answers to your comments.
**Regarding the claim of solvability of the benchmarks:** This claim is tied to the controller having a prior mean model of zero, no measurement data, and only a CBF to guide it. Although existing methods can effectively control these systems, the control design in existing works is informed by a prior (non-zero) model of the dynamics. Alternatively, a backup controller is required. Our approach only requires the CBF and a well-specified GP to inform the control design. We will reformulate the sentence and include this discussion in the revised paper.
**The choice of kernel is not obvious:** We agree with this statement. Choosing appropriate kernels for modeling dynamical systems is generally a nontrivial task, and there exists an extensive body of literature on how to do so while mitigating model misspecification; see, e.g., Berkenkamp et al. (2019), Fiedler et al., (2021), Capone et al., (2022). Crucially, although many kernels can satisfy the requirements of our approach, some kernels can generalize far better than others, significantly reducing the amount of data required to achieve safety. For example, if the system has a linear component, it is typically beneficial to include a linear component in the kernel, and sometimes, a linear kernel can even outperform a universal kernel in this sense. We will include this discussion in the paper.
**Knowing a CBF without knowing the dynamics:** While it is generally impossible to formulate a valid CBF without any form of model knowledge, doing so is easier than obtaining an actual system model, since the existence of a CBF typically implies the existence of (potentially infinitely) many others. Furthermore, the assumption that the CBF is valid can be relaxed in many ways: the CBF only needs to be valid in parts of the state space that are visited by the closed-loop system. Moreover, our approach can gradually expand the safe set, enabling incremental improvements to the CBF. We also note that the assumption that a CBF is known despite not having an accurate model is common in the literature; see, e.g., Cheng et al. (2019), Choi et al., (2020), Taylor et al., (2020).
Berkenkamp, F., Schoellig, A. P., & Krause, A. (2019). No-regret Bayesian optimization with unknown hyperparameters. Journal of Machine Learning Research, 20(50), 1-24.
Capone, A., Lederer, A., & Hirche, S. (2022, June). Gaussian process uniform error bounds with unknown hyperparameters for safety-critical applications. In International Conference on Machine Learning (pp. 2609-2624). PMLR.
Cheng, R., Orosz, G., Murray, R. M., & Burdick, J. W. (2019). End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 3387-3395).
Choi, J., Castañeda, F., Tomlin, C. J., & Sreenath, K. (2020, July). Reinforcement learning for safety-critical control under model uncertainty, using control Lyapunov functions and control barrier functions. In Robotics: Science and Systems (RSS).
Fiedler, C., Scherer, C. W., & Trimpe, S. (2021, May). Practical and rigorous uncertainty bounds for Gaussian process regression. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 8, pp. 7439-7447).
Taylor, A., Singletary, A., Yue, Y., & Ames, A. (2020, July). Learning for safety-critical control with control barrier functions. In Learning for dynamics and control (pp. 708-717). PMLR. | null | null | null | null | null | null |
Feature Shift Localization Network | Accept (poster) | Summary: This paper proposes a feature shift localization network that can localize feature shifts between newly given two datasets by learning how to localize feature shifts using multiple datasets with various synthetic feature shifts. The experiments show that the proposed method can accurately and efficiently detect feature shifts compared to existing methods.
## Update after rebuttal
Thanks to your response.
My concerns have been addressed.
As I no longer find any significant weaknesses, I have decided to increase my score by one.
Claims And Evidence: As claimed, the effectiveness and efficiency of the proposed method were validated in the experiments.
Methods And Evaluation Criteria: The proposed framework is interesting and reasonable to learn how to localize feature shifts.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The method has been experimented on multiple datasets, which gives it a certain level of soundness/validity. One point of interest is, how robust is the proposed method to differences in data distributions during training and testing?
Supplementary Material: I have reviewed Section A.
Relation To Broader Scientific Literature: Yes, feature shifts are important in practice.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Please see comments on other questions.
Other Comments Or Suggestions: - Including a more detailed explanation of the comparison methods would make the paper easier to understand.
Questions For Authors: 1. How robustly does the proposed method perform to differences in data distributions during training and testing?
2. The proposed method has the advantage of allowing data comparison without retraining once it has been learned. However, there is little discussion regarding the network's training time. It is expected that the training time will be significant (maybe more than existing methods), but to what extent?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *1. Including a more detailed explanation of the comparison methods would make the paper easier to understand.*
**We agree that the paper will benefit from a more detailed description of the competing methods. Therefore, we are adding a new section titled “Benchmarking Methods” in the Appendix that provides a detailed description of the methods for the camera-ready version of the paper.**
*2. The method has been experimented on multiple datasets, which gives it a certain level of soundness/validity. One point of interest is, how robust is the proposed method to differences in data distributions during training and testing?*
**The datasets and manipulation types used during network training are different from the ones used during evaluation, showing that the network generalizes well to previously unseen data and manipulation types and making it applicable out of the box without the need to perform any re-training or fine-tuning.**
*3. The proposed method has the advantage of allowing data comparison without retraining once it has been learned. However, there is little discussion regarding the network's training time. It is expected that the training time will be significant (maybe more than existing methods), but to what extent?*
**Table 3 includes the training time of different configurations and section “D.1. Training Setup and Hardware Specifications” in the Appendix provides additional information regarding the training time and hardware used. Furthermore, as shown in the paper, the network performs excellently in unseen data and manipulations, so we expect that most of the users will make use of the pre-trained network without the need for any fine-tuning or training.**
***
**Thank you for your constructive review. We have updated the main text to better describe the novelty and impact of our work, and to emphasize the benefits of our significant speed improvements over the previous state of the art. Furthermore, we have enriched the manuscript with: (a) two new benchmark datasets, CIFAR10 and COIL-100 (https://tinyurl.com/3sx9ppwh); (b) a new baseline in the ablation study (using only statistical measures without neural networks) (https://tinyurl.com/4zkm9psn); (c) a more detailed speedup analysis (https://tinyurl.com/3bh642v5); (d) an improved related work section; and (e) an additional section in the Appendix with a detailed description of the benchmarking methods. We believe that with these additions, the manuscript has improved significantly, and given that our proposed technique is novel and obtains state-of-the-art performance while being orders of magnitude faster than previous works, we kindly suggest increasing the review score.** | Summary: This paper presents FSL-Net, a neural network designed to effectively localize feature shifts through its architectural design. More specifically, FSL-Net is built in two stages, a statistical descriptor network which is proposed to extract underlying distributional information from the inputs, and a prediction network which leverages these signals to localize feature shifts. This work attempts to build upon prior work through significant improvements in scalability. Additionally, the authors provide extensive experimental evaluation across multiple benchmarks.
Claims And Evidence: The authors provide compelling claims of FSL-Net's superiority in these areas:
- It is clear that FSL-Net outperforms prior methods in scalability and speed.
- Similarly, it is also clear to the reviewer how FSL-Net can handle high-dimensional features without retraining.
The areas that I would like to see some additional clarifications on:
- It is not clear to the reviewer how applicable feature shift localization would be in the real world, given that there could be cases where all or none of the features present any discernible feature shift.
Methods And Evaluation Criteria: The reviewer found the proposed evaluations extensive and the proposed benchmarks consistent with the prior works cited by the authors.
Theoretical Claims: The reviewer found no direct rigorous mathematical proofs for their claims.
Experimental Designs Or Analyses: The reviewer found the experimental design of the paper to be sound, with benchmarks chosen in line with prior works. However, the reviewer would like to see additional non-tabular datasets included in the experimental analysis.
Supplementary Material: No additional supplementary materials were given beyond the additional details presented in the appendix.
Relation To Broader Scientific Literature: To the reviewer, the primary contribution FSL-Net provides is the scalability and ease of use in real-world applications. With relation to prior works, it does appear that this is an important overlooked aspect in feature shift localization.
Essential References Not Discussed: The reviewer is unaware of any necessary references not included.
Other Strengths And Weaknesses: Please see claims & evidence and experimental designs sections.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *1. It is not clear to the reviewer how applicable feature shift localization would be in the real-world, given that there could be cases where all or none of the feature presents any discernable feature shift.*
**We consider cases where either no features or only a subset exhibit a shift. If every feature experienced a shift, it would be impossible to determine whether the differences arise naturally between the reference and the query. Thus, we base our approach on the assumption that the true, unmodified query comes from the same distribution as the reference. We have revised the text to make this clearer.**
*2. However, the reviewer would like to see additional non-tabular datasets included in the experimental analysis.*
**We acknowledge the reviewer’s request for additional non-tabular datasets. Our study includes two genomics datasets (Founders and Canine), and we have added two image datasets (CIFAR-10 and COIL-100). Performance results for these new datasets can be found in (https://tinyurl.com/3sx9ppwh). On average, FSL-Net outperforms DataFix while offering significant speed improvements.**
***
**Thank you for your constructive review. We have updated the main text to better describe the novelty and impact of our work, and to emphasize the benefits of our significant speed improvements over the previous state of the art. Furthermore, we have enriched the manuscript with: (a) two new benchmark datasets, CIFAR10 and COIL-100 (https://tinyurl.com/3sx9ppwh); (b) a new baseline in the ablation study (using only statistical measures without neural networks) (https://tinyurl.com/4zkm9psn); (c) a more detailed speedup analysis (https://tinyurl.com/3bh642v5); (d) an improved related work section; and (e) an additional section in the Appendix with a detailed description of the benchmarking methods. We believe that with these additions, the manuscript has improved significantly, and given that our proposed technique is novel and obtains state-of-the-art performance while being orders of magnitude faster than previous works, we kindly suggest increasing the review score.** | Summary: The authors propose a feature shift detector: the goal is to identify a maximum-size set of features with zero distance for the corresponding marginal distributions. The overall architecture uses three parts for different types of inputs: basic aggregated statistics, MMD-like features generated in linear (moment extraction) and non-linear (neural embedding) ways. The experiments follow the framework introduced in a previous relevant work [Barrabes et al., 2023] by taking both datasets and competitors from that work and showing marginal improvement over the work by Barrabes while the proposed method is more efficient.
## update after rebuttal
I raised my score to weak accept.
Claims And Evidence: The overall problem statement seems a bit strange to me. Why would one solve this specific problem instead of a general feature selection? While some existing papers have been published in this area, can you expand the experiments to include more traditional problem statements?
However, let’s take a further look at a proposed solution. The author combines three types of features, ranging from simple aggregates to learnable nonlinear transformations. The combination of all these features does provide empirical improvement over the existing approaches. Thus, the overall strategy is justified.
However, given the ablation in Table 3, the main takeaway from this study is that we can achieve an F1 score of 0.71 that surpasses all of the rival methods using just mean, std, …, and histogram features—and then applying the squared differences between these features for the two datasets! Another minor comment here is that DataFix’s F1 in the original paper exceeds 0.9, thus suggesting that it works better than FSL-Net (I would conclude that both these approaches show similar empirical performance).
Methods And Evaluation Criteria: As has been already said, the authors took their protocol as well as used datasets from [Barrabes et al., 2023]. It is OK, but I would be happy to see additional experiments based on the protocol from [Rabanser, 2019] with higher dimensional settings. Another concern is that a similar solution can be provided with modern approaches to measure the distance between distributions like Neural Wasserstein [1] or regularisation provided by Sinkhorn divergences [2]. They should capture the main characteristics of the distribution as well.
[1] Korotin, Alexander, et al. "Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark." Advances in neural information processing systems 34 (2021): 14593-14605.
[2] Feydy, Jean, et al. "Interpolating between optimal transport and mmd using sinkhorn divergences." The 22nd international conference on artificial intelligence and statistics. PMLR, 2019.
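For context, the MMD-style distribution comparison mentioned above can be illustrated with a minimal RBF-kernel MMD² estimate between two one-dimensional samples. This is a toy sketch for illustration only, not the estimator used in the paper or in [1, 2]; the kernel bandwidth and sample sizes are arbitrary choices:

```python
import numpy as np

def mmd2_rbf(x, y, gamma=1.0):
    """Biased MMD^2 estimate between 1-D samples x and y with an RBF kernel."""
    def k(a, b):
        # Pairwise squared distances, then RBF kernel values.
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. one with a mean shift.
same = mmd2_rbf(rng.normal(0, 1, 200), rng.normal(0, 1, 200))
shifted = mmd2_rbf(rng.normal(0, 1, 200), rng.normal(2, 1, 200))
```

A shifted pair yields a clearly larger MMD² than a matched pair, which is the signal such features carry about a distribution change.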
Theoretical Claims: This paper does not make any theoretical claims. The only thing I don’t understand is why the authors decided to use a strict definition for the mapping F, requiring exactly zero divergence instead of an upper bound of epsilon from [Barrabes et al., 2023]. Another thing that requires clarification is why you use convolutional neural networks instead of attention-based networks for tabular data—do you assume ordering of features?
Experimental Designs Or Analyses: Apart from the protocol focused on tabular datasets, I’m fine with the experimental design.
Supplementary Material: I consulted Appendix during the examination of the paper, while without careful study.
Relation To Broader Scientific Literature: My main worry about this paper is that it has too narrow focus, while potentially, a bigger impact should be possible for the proposed architecture.
Essential References Not Discussed: It seems that all necessary related works were mentioned.
Other Strengths And Weaknesses: This paper is promising, as the idea seems interesting, and the method seems to work.
Other Comments Or Suggestions: -
Questions For Authors: Crucial:
1. The overall problem seems a bit strange to me. I believe a broader adoption of the proposed network would justify its introduction.
Major:
2. Your results indicate that simple aggregates (mean, standard deviation, histogram-based features) alone can yield an F1 of 0.71, surpassing specific competitor methods. Could you comment on why this more straightforward configuration performs so strongly and whether you see a trade-off between simplicity and performance?
3. Have you compared your feature shift detector to modern distribution distance techniques (e.g., Neural Optimal Transport [Korotin et al., 2021] or Sinkhorn divergences [Feydy et al., 2019]) to see if they capture distribution characteristics more effectively?
4. Would you consider testing with the higher-dimensional protocol from [Rabanser, 2019] to provide a broader picture of your method’s performance?
Minor:
5. Motivation for Zero-Divergence Criterion: Why did you opt for identifying a set of features with exactly zero marginal distribution distance rather than allowing a small ϵ-threshold (as in [Barrabes et al., 2023]) or a more general feature-selection-based approach?
6. In the original DataFix paper, the reported F1 surpasses 0.9, yet it does not outperform FSL-Net in your comparisons. Could you clarify these discrepancies and explain whether the experimental setups or evaluation metrics differ?
7. You use convolutional neural networks (CNNs) for tabular data. Is there an assumption that the features have an inherent “ordering”? Can you explain why attention-based architectures (common for tabular data) were not explored?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *1. Why would one solve this problem instead of feature selection? Can you expand the experiments to include more traditional problem statements?*
**Feature shift localization appears in many industries. In healthcare, integrating multiple data sources leads to “batch effects” caused by differing data standardization, collection, and processing. E-commerce and industrial applications need to localize when features have a change in their distribution with respect to a reference. While feature selection aims to find a subset good enough for a predictor, shift localization predicts for each feature if it is inducing a distribution shift. While techniques for feature shift localization can be applied to feature selection problems, and vice versa, each has differing goals.**
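As a toy illustration of the task itself (not of FSL-Net or of the rebuttal's examples), shifted features can be localized by computing a per-feature two-sample statistic between a reference and a query and thresholding it. Here a Kolmogorov–Smirnov statistic is used, with all data and the threshold invented for illustration:

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    pooled = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), pooled, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), pooled, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 5))
query = rng.normal(0.0, 1.0, size=(500, 5))
query[:, 2] += 1.5  # inject a mean shift into feature 2 only

# Localize: score every feature, flag those above an illustrative threshold.
stats = np.array([ks_stat(reference[:, j], query[:, j]) for j in range(5)])
flagged = np.where(stats > 0.15)[0]
```

The flagged set recovers the manipulated feature; the learned approaches under review replace the hand-picked statistic and threshold with trained components that generalize across datasets and shift types.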
*2. Simple aggregates (mean,std…) have F1=0.71 surpassing competitors. Could you comment on why this performs so strongly and a trade-off between simplicity and performance?*
**In Table 3, all configurations include the “Prediction Network” (PN). Thus, the “SM” configuration incorporates statistical measures (mean, std, etc) which are fed into the Prediction network, obtaining an F-1 score of 0.7. To avoid confusion, we have added “+PN” to all entries. Moreover, we have introduced an SM-only configuration, where no prediction network is used and the statistical measures are thresholded. This SM-only approach sees a drop in performance to an F-1 score of 0.3, showcasing the importance of the PN. See updated Table in (https://tinyurl.com/4zkm9psn).**
*3. Have you compared your feature shift detector to Neural Optimal Transport [Korotin et al., 2021] or Sinkhorn divergences [Feydy et al., 2019] to see if they capture distribution characteristics more effectively?*
**While optimal transport techniques can characterize divergences, they do not provide a direct methodology to localize the divergent features, which is the main focus of this paper, and would require modifications in order to be applied for the localization task. However, we agree that such literature should be mentioned in the related work, therefore, we are updating the manuscript to cite optimal transport techniques.**
*4. Would you consider testing with the higher-dimensional protocol [Rabanser, 2019]?*
**Note that our study already includes two very high-dimensional datasets – Founders with 10k features and Canine with 198k features (see Appendix B.1). We have added an extra evaluation using CIFAR10 and COIL-100 from [Rabanser, 2019], comparing FSL-Net and DataFix, where FSL-Net performs better on average while providing significant speed improvements (https://tinyurl.com/3sx9ppwh).**
*5. Why did you opt for identifying a set of features with exactly zero marginal distribution distance rather than allowing a small epsilon [Barrabes et al., 2023]?*
**While [Barrabes et al., 2023] introduces an epsilon, it is not applied in their experiments or evaluation at any point. We opted to remove it for simplicity. As it is expected to be small, FSL-Net should remain applicable without issues.**
*6. In the DataFix paper, the F1>0.9, yet it does not outperform FSL-Net in your comparisons. Could you clarify these discrepancies and explain whether the experimental setups or evaluation metrics differ?*
**The DataFix paper reports the median across datasets and manipulations, which we believe can be misleading and not truly reflect the performance of the methods. In FSL-Net, we instead report the mean across datasets and manipulations, which explains the small inconsistencies between numbers. We are adding median-based results in the Appendix, showing that FSL-Net provides the best performance in most datasets and manipulations.**
*7. You use convolutional neural networks. Is there an assumption that the features have an inherent “ordering”? Can you explain why attention-based architectures (common for tabular data) were not explored?*
**We use convolutions to scale across high-dimensional datasets. In Sec. 3: “Sample-wise Invariance, Feature-wise Equivariance, and Locality”, we describe that we obtain (approximate) feature ordering equivariance by shuffling the features at each batch during training, so that the network does not rely on feature ordering. We also explored attention without positional encoding, which has built-in feature ordering equivariance, but as shown in section C (Appendix), it underperformed.**
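The per-batch feature shuffling described in the reply can be sketched as follows (a hypothetical minimal fragment; names and shapes are illustrative, not FSL-Net's actual training loop):

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_batch(reference, query):
    """Draw one random feature permutation and apply it to both matrices,
    so downstream layers cannot rely on a fixed feature ordering."""
    perm = rng.permutation(reference.shape[1])
    return reference[:, perm], query[:, perm], perm

ref = np.arange(12).reshape(3, 4)   # 3 samples, 4 features
qry = ref + 100                     # query kept feature-aligned with reference
ref_s, qry_s, perm = shuffled_batch(ref, qry)
```

Because the same permutation is applied to both inputs, reference and query stay feature-aligned while the ordering seen by the network varies from batch to batch, yielding approximate ordering equivariance.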
***
**Thank you for the constructive review. We have included: (a) CIFAR10 and COIL-100 datasets; (b) a new baseline using statistical measures without neural networks; (c) a new speedup analysis (https://tinyurl.com/3bh642v5); (d) an improved related work; and (e) an additional description of the benchmark methods. We believe that with these additions, the manuscript has improved significantly, and given that our proposed technique is novel, accurate, and orders of magnitude faster than previous works, we kindly suggest increasing the review score.**
---
Rebuttal Comment 1.1:
Comment: Thank you a lot for your answers! You clarified numerous issues related to the experiments of the paper and now I believe that you provide a faster and slightly better performing method compared to DataFix.
On the other hand, I am still skeptical about the problem statement, that was my main concern in the initial review.
Given these concerns, I would change my scores accordingly in the main review. | Summary: This work presents the FSL-Net, a neural network designed to quickly and accurately identify feature shifts in large, high-dimensional datasets, overcoming challenges faced by existing methods. Trained on diverse datasets, FSL-Net can localize shifts in unseen data without requiring retraining. The method looks reasonable to me. However, since I am not familiar with the literature, I am not able to access the novelty of this work.
Claims And Evidence: The derivation, algorithm implementation, as well as the experimental results, look good.
Methods And Evaluation Criteria: Yes
Theoretical Claims: There is no proofs
Experimental Designs Or Analyses: The experiments look good to me. The proposed method achieves a better tradeoff between running time and the F1 score, though the F1 score is only slightly better than DataFix.
Since I am not familiar with the literature, I am not sure how important the running time is to the task. Specifically, the average running time of DataFix is very close to that of the proposed method. The only significant difference is the max running time. I am not sure how significant the max running time is for this specific task.
Supplementary Material: NA
Relation To Broader Scientific Literature: I am unfamiliar with the literature.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: See the above discussion.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *[1] This work presents the FSL-Net, a neural network designed to quickly and accurately identify feature shifts in large, high-dimensional datasets, overcoming challenges faced by existing methods. Trained on diverse datasets, FSL-Net can localize shifts in unseen data without requiring retraining. The method looks reasonable to me. However, since I am not familiar with the literature, I am not able to access the novelty of this work.*
**To the best of our knowledge, FSL-Net is the first network that provides strong pre-training for feature shift localization, allowing it to generalize to both unseen datasets and shift types. This makes it possible to localize shifts with state-of-the-art accuracy by performing a single forward pass, without the need for re-training the network or expensive computations as required in previous works. Note that feature shift localization is an important task in several areas, including healthcare, e-commerce, and industrial applications.**
*[2] Since I am not familiar with the literature, so I am not sure how important does the running time means to the task. Specifically, the average running time of DataFix is very close to the proposed method. The only signifcant difference is the max running time. I am not sure how signficant does the max running time mean to this specific task.*
**FSL-Net provides state-of-the-art performance while being orders of magnitude faster than DataFix (up to 135x) across all data sizes. Figure 2d shows the computational time of FSL-Net vs DataFix as the dataset size (num features x num samples) grows, and an additional Figure (https://tinyurl.com/3bh642v5) showing relative speed improvements will be included in the Appendix. This has significant consequences: it allows its application for processing high-dimensional large databases present in biomedicine and e-commerce. For example, the “Phenotypes” dataset from the “UK Biobank”, a major healthcare dataset, requires over 12h of compute time with DataFix, whereas FSL-Net completes the same task in just a few minutes, while also providing improved F-1 score (see Figure 3 in Appendix E). While DataFix provides good localization accuracy, its high computational requirement makes it impractical for real-world large datasets (e.g. UK Biobank with 300k samples and 1M dimensions), making FSL-Net a very valuable alternative by offering excellent localization accuracy alongside efficient scalability. We are updating the main text to clearly highlight these significant speed improvements and its impact in downstream applications.**
***
**Thank you for your constructive review. We have updated the main text to better describe the novelty and impact of our work, and to emphasize the benefits of our significant speed improvements over the previous state of the art. Furthermore, we have enriched the manuscript with: (a) two new benchmark datasets, CIFAR10 and COIL-100 (https://tinyurl.com/3sx9ppwh); (b) a new baseline in the ablation study (using only statistical measures without neural networks) (https://tinyurl.com/4zkm9psn); (c) a more detailed speedup analysis (https://tinyurl.com/3bh642v5); (d) an improved related work section; and (e) an additional section in the Appendix with a detailed description of the benchmarking methods. We believe that with these additions, the manuscript has improved significantly, and given that our proposed technique is novel and obtains state-of-the-art performance while being orders of magnitude faster than previous works, we kindly suggest increasing the review score.** | null | null | null | null | null | null |
Diving into Self-Evolving Training for Multimodal Reasoning | Accept (poster) | Summary: This paper investigates self-evolving training for multimodal reasoning through the lens of reinforcement learning, identifying three key factors: Training Method, Reward Model, and Prompt Variation. The authors propose a continuous self-evolving training scheme that inherits optimizer states between iterations, develop a multimodal Process Reward Model (PRM) for reranking responses, and analyze the impact of unlabeled data. They also examine training dynamics, identifying performance saturation as a key challenge, and propose an automatic balancing mechanism to adjust sampling temperature. These components are combined into a framework called M-STAR (Multimodal Self-evolving Training for Reasoning) which is evaluated across multiple multimodal reasoning benchmarks using different model sizes. The authors report consistent performance improvements, particularly on MathVista where their approach achieves a 6.7% absolute improvement over the pre-evolved model.
## update after rebuttal
The authors address most of my questions, e.g. PRM as effective reranker, and statistical significances of results. I still have concerns about the effectiveness of the self-improving methods as the improvements are small in most benchmarks. I am still on the fence, slightly more positive. So I increase my rating. AC, please note that, I think the paper is borderline.
Claims And Evidence: The paper makes several claims that are not fully supported by convincing evidence:
1. The authors claim their PRM is effective as a reranker despite not good as a verifier. While they show some analysis in Figure 2, the explanation lacks depth and rigor. The paper notes that PRM-selected responses have fewer reasoning steps and are more relevant to queries, but doesn't provide a compelling theoretical or empirical explanation for why this makes PRM effective in the self-evolving training context.
2. The improvements reported on benchmarks are modest (mostly 1-6% absolute gains), and it's not clear how significant these improvements are statistically.
3. The paper claims that continuous self-evolving training with proper intervals is better than traditional approaches, but the performance differences shown in Table 1 are small (less than 3% in many cases, e.g. 57.2% vs 55.1%), raising questions about the significance of this contribution.
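The reranker-versus-verifier distinction raised in point 1 can be made concrete with a toy sketch. All responses, scores, thresholds, and the min-aggregation below are invented for illustration; the paper's PRM scoring is more involved:

```python
# Toy contrast between using step-level process-reward scores as a
# verifier (accept/reject via a threshold) and as a reranker (pick the best).
responses = ["A", "B", "C"]
step_scores = {          # hypothetical per-step PRM scores
    "A": [0.9, 0.8, 0.7],
    "B": [0.6, 0.5, 0.4],
    "C": [0.95, 0.2, 0.9],
}

def aggregate(scores):
    return min(scores)   # illustrative choice: the worst step dominates

# Verifier use: accept responses whose aggregate clears an absolute threshold.
accepted = [r for r in responses if aggregate(step_scores[r]) > 0.75]

# Reranker use: always keep the single highest-scoring response,
# regardless of absolute score calibration.
best = max(responses, key=lambda r: aggregate(step_scores[r]))
```

With imperfect calibration the verifier may reject every candidate, while the reranker still returns the relatively best one, which is one plausible reason the two uses can behave differently in self-evolving training.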
Methods And Evaluation Criteria: The methods and evaluation criteria are generally appropriate for the problem at hand. The authors use established multimodal reasoning benchmarks and conduct controlled experiments to isolate the impact of different components. However, there are some issues:
1. The authors use GPT4-o to measure "response relativity," but don't provide sufficient details about the reliability of this metric or potential biases in this evaluation approach.
2. The paper makes a case for PRM. This seems to be opposite to the findings in the DeepSeek-R1 paper (though that paper is on text-based reasoning tasks). Therefore, this deserves more in-depth evaluation.
Theoretical Claims: The paper makes limited theoretical claims, primarily framing self-evolving training as a reinforcement learning problem. The formulation appears sound, but the paper doesn't develop this theoretical foundation into novel insights that significantly advance our understanding of self-evolving training.
Experimental Designs Or Analyses: The experimental design has several issues:
1. The analysis of why PRM works as a reranker despite being ineffective as a verifier is insufficient. This is a critical insight that could be valuable to the field, but the paper doesn't explore it deeply enough.
2. The experiments on training dynamics are interesting but preliminary.
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: The paper positions itself within the self-evolving training literature but doesn't clearly differentiate its contributions from prior work. The authors mention related approaches like STaR, ReST, and ReST^EM, but don't provide a detailed comparison of how their approach differs technically or conceptually.
The paper also doesn't adequately address recent findings that question the effectiveness of PRMs for reasoning tasks, such as those mentioned in the DeepSeek-R1 paper, which found that PRMs have limitations in guiding reasoning tasks due to challenges in defining fine-grained steps and assessing their correctness.
Essential References Not Discussed: ReFT: Reasoning with Reinforced Fine-Tuning
Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li
Other Strengths And Weaknesses: Strengths:
- The paper provides a systematic exploration of different components of self-evolving training for multimodal reasoning.
- The introduction of continuous self-evolving training with inherited optimizer states is a potentially useful contribution.
- The analysis of training dynamics and the proposed automatic balancing mechanism addresses an important challenge in self-evolving training.
Weaknesses:
- The paper lacks novelty in its core components. Most of the techniques have been explored in prior work, and the paper doesn't clearly articulate what is fundamentally new.
- The improvements over baselines are modest, raising questions about the practical significance of the approach.
- The most intriguing finding—that PRM can be effective as a reranker despite being ineffective as a verifier—is not explored deeply enough to provide meaningful insights.
- The paper doesn't adequately address the limitations of PRMs for reasoning tasks that have been identified in recent literature.
Other Comments Or Suggestions: - The paper would benefit from a clearer articulation of its novel contributions relative to prior work.
- A more rigorous analysis of why PRM works as a reranker despite being ineffective as a verifier would significantly strengthen the paper.
- The paper should address recent findings that question the effectiveness of PRMs for reasoning tasks and explain why their approach might overcome these limitations.
Questions For Authors: 1. Your paper shows that your PRM is ineffective as a verifier but effective as a reranker. This is an intriguing finding that contradicts conventional wisdom about PRMs. Can you provide a more rigorous analysis or theoretical explanation for why this is the case? How does this relate to recent findings in papers like DeepSeek-R1 that question the effectiveness of PRMs for reasoning tasks?
2. The improvements reported on benchmarks are relatively modest (mostly 1-6%). Have you conducted statistical significance tests to ensure these improvements are meaningful? How do these improvements compare to the variance observed across different training runs?
3. How does your approach specifically differ from prior work like STaR, ReST, and ReST^EM beyond the application to multimodal reasoning? What are the novel technical contributions that distinguish your work?
I am open to upgrading my rating if the important questions are addressed or clarified satisfactorily.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your review.
# Q1
> 1. ...shows that your PRM is…
> 2. How... relate to recent findings in papers like DS-R1…
## Q1.1 PRM Analysis
First, we want to clarify an important distinction between BoN selection and reranking using PRM. Due to the space limit, please refer to **Q1 in Response to Reviewer gU79** for the details of this **clarification and the analyses** below.
To better investigate and validate the behaviour of the PRM, we conducted several analyses, which help better understand the PRM and our findings.
**Value Function Analysis**:
PRM serves as a value estimator trained via Monte Carlo rollouts. We show that our PRM has a lower MSE for correct responses (0.081) than for incorrect ones (0.124), supporting its strength in reranking valid reasoning paths over verifying incorrect ones.
**Human Evaluation**:
We validate GPT-4o’s relativity score (Figure 2) via human annotation across 50 examples, showing ~84% agreement and confirming its reliability as an alignment measure.
**Readability Analysis**:
Using the Flesch Reading Ease score, we find that top-2 reranked responses are more readable and less erratic, indicating PRM’s benefit in producing clearer, more coherent outputs.
**Qualitative Case Study**:
Even when final answers are correct, PRM prefers responses with more coherent reasoning, illustrating its advantage in identifying high-quality rationales beyond binary correctness.
## Q1.2
Thanks for the insightful question!
1. As discussed in **Q1.1**, our use of PRM plays a distinct role in our framework. First, our reward strategy is also rule-based—we filter out responses with incorrect answers before using PRM. In this way, PRM is used more like a reranker, helping us select trajectories with the highest-quality reasoning steps that are also consistent with the original question.
2. Second, the training setup differs significantly from R1. DeepSeek-R1’s discussion applies policy gradients at every step using a PRM (as in GRPO), which can introduce reward hacking issues; our method is STaR-like and avoids this risk by not using step-wise policy gradients.
3. While PRMs have higher ceilings, rule-based rewards are more robust and stable—one reason for their success in large-scale setups like R1. Still, we believe exploring PRMs further is valuable for pushing performance even higher.
# Q2
> The improvements reported…. Have you conducted statistical significance…? How do these improvements…?
## Q2.1
In the context of self-evolving or RL-based methods on reasoning tasks, 3–6% absolute gains are substantial. Prior work such as RestEM reports similar improvements in the 1–6% range. Importantly, our benchmarks span diverse subtasks, and we observe much larger improvements on some individual benchmarks, for instance, a notable 12% absolute gain on geometry problem solving. These improvements are consistent across different model sizes and benchmarks.
To validate the statistical significance of our results, we perform a t-test on the predictions to assess whether the hypothesis that “MiniCPMV with M-STaR in Table 4 is better than Cont. Self-Evolving + PRM-Rerank on MathVista” holds. Our results demonstrate that it is statistically significant, with a **p-value < 0.042**.
Regarding stability, we observe a variance of around **0.04** across three independent runs of our static optimal strategy, which is acceptable given the computational constraints. Hence, we do not perform extensive repeated runs in later experiments due to expensive computation requirements.
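A minimal sketch of such a per-prediction significance test, assuming paired 0/1 correctness marks on the same benchmark questions (the arrays and the `paired_t_statistic` helper below are illustrative placeholders, not our actual predictions):

```python
# Stdlib-only paired t-test sketch: compare two systems' correctness on the
# same items. The 0/1 arrays are hypothetical placeholders.
import math
import statistics

mstar_correct    = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
baseline_correct = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1]

def paired_t_statistic(a, b):
    """t statistic for the paired test of H0: mean(a - b) = 0."""
    diffs = [x - y for x, y in zip(a, b)]
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))

t_stat = paired_t_statistic(mstar_correct, baseline_correct)
print(f"t = {t_stat:.2f}")  # compare against a t distribution with n-1 dof for the p-value
```

In practice a library routine such as `scipy.stats.ttest_rel` would give the p-value directly; the sketch only shows the statistic.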
## Q2.2
> The paper claims … in Table 1 are small (less than 3% in many cases, e.g. 57.2% vs 55.1%),...
As mentioned in **Q2.1**, continuous self-evolving training is one part of our overall framework and not expected to drive all performance gains alone. Similar to prior work (e.g. Tulu-2.5), changes to the training algorithm typically yield modest yet meaningful improvements in reasoning tasks.
Nonetheless, optimising each component is crucial to fully realising the potential of the complete M-STaR framework. We conduct a significance t-test specifically for continuous self-evolving training and Iterative RFT in Table 1 on MathVista, which yields a **p-value < 0.02**—confirming the significance.
The improvement is also consistent across different models, as shown in Table 4.
# Q3
> How does your approach differ …? …novel technical…?
We will clarify the distinctions between our approach and prior STaR-like methods in the related work section. Briefly, each component of our framework introduces a new design aimed at improving and advancing beyond traditional STaR-like approaches—such as continuous self-evolving training, using PRM as a reranker, dynamic self-evolution, and their integration into a unified framework for multimodal reasoning.
These components did not exist in the mentioned previous works, and they lead to significant gains as we reported in the submission. | Summary: This paper identifies three key components in multimodal reasoning models that require further exploration. It systematically analyzes and unveils the critical aspects of training methods, reward models, and prompt design. Additionally, it proposes the use of appropriate temperature adjustment to balance exploration and exploitation. The final approach is validated through extensive experiments.
## update after rebuttal
Thank the authors for the clarifications. I would like to keep my current rating.
Claims And Evidence: Most claims made in the submission are supported by experimental evidence.
Line 175-178 “...switching over the Improve and Generate steps too frequently makes the learning process unstable, leading to a lower score, especially on the in-domain test set.” lacks evidence.
Methods And Evaluation Criteria: I believe that the methods and corresponding evaluation metrics in this paper are well-aligned and detailed.
Theoretical Claims: N/A
Experimental Designs Or Analyses: I did check all the experimental designs and analyses.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper provides practical insights for the multimodal reasoning community, particularly in the design of training methods and process reward models.
Essential References Not Discussed: The paper is self-contained.
Other Strengths And Weaknesses: - The paper is well-written and easy to follow.
- This paper systematically presents the key factors for improving model performance.
Other Comments Or Suggestions: 1. Please re-organize the first paragraph in Section 3.2 to make sure that “iteration interval” is defined before being used, to avoid unnecessary confusion.
2. “…we fix training methods as continuous self-evolving with 45k interval” Please specify which iteration interval the 45k interval refers to. [6.25%, 12.5%, 25%, 50%, 100%]
Questions For Authors: 1. In Table 6, continuous evolving performs worst on the MiniCPMV2.5 FQA task; can the authors explain this result?
2. Also in Table 6, for MiniCPMV2.5, continuous evolving with PRM + rerank has a strongly negative influence on the TQA task. Meanwhile, for Phi3.5, continuous evolving performs very poorly on the TQA task, but PRM + rerank has a positive influence on it. What is the dynamic behind this observation?
3. Can the authors explain the motivation to focus on the short iteration interval, as mentioned in lines 146-149 in Section 3.2?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your review, and we appreciate your recognition of our work. We address your concerns one by one:
# Q1
> Line 175-178 “...switching over the Improve and Generate steps too frequently makes the learning process unstable, leading to a lower score, especially on the in-domain test set.” lacks evidence.
Thanks for your question. In our Table 1, a smaller interval means switching over the Improve and Generate steps more frequently, and we can see from the results that too small an interval hurts performance on the in-domain test set (the MathV360K test set).
# Q2
> Please re-organize the first paragraph in Section 3.2 to make sure that “iteration interval” is defined first before being used to avoid unnecessary confusion.
Thanks for your suggestion. We would follow your suggestion to define the “iteration interval” first and further clarify it.
# Q3
> “…we fix training methods as continuous self-evolving with 45k interval” Please specify which iteration interval the 45k interval refers to. [6.25%,12.5%,25%,50%,100%]
We are sorry for the confusion. Since the total amount of training data is 180K (line 124), [6.25%,12.5%,25%,50%,100%] correspond to 11K, 22K, 45K, 90K, 180K respectively.
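Spelled out, the mapping is plain percentage arithmetic over the 180K training set (the 11K and 22K figures quoted above are rounded):

```python
# Map each iteration-interval percentage to its size in training examples.
total = 180_000
intervals = {pct: int(total * pct / 100) for pct in (6.25, 12.5, 25, 50, 100)}
print(intervals)  # {6.25: 11250, 12.5: 22500, 25: 45000, 50: 90000, 100: 180000}
```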
# Q4
> In Table 6, continuous evolving performs worst in MiniCPMV2.5 FQA task, can the authors explain this result?
> Also in Table 6, for MiniCPMV2.5, continuous evolving with PRM + rerank … on TQA task. Meanwhile, for Phi3.5, continuous evolving performs really badly on the TQA task but PRM + rerank has a positive influence on this task. What’s the dynamic behind this observation?
Thank you for your thoughtful observations regarding the performance dynamics in Table 6. We address both points below in a unified explanation.
- Overall, **M-STaR** consistently outperforms other variants, and significance testing (p < 0.046) supports the robustness of this improvement.
- Compared to GPS and MWP, which demand more complex multimodal reasoning, tasks like TQA and FQA rely more heavily on perception abilities, such as visual understanding or interpreting structured text layouts. While reasoning is still required, the relative emphasis shifts more toward perception in these tasks.
- Without dynamic monitoring, the self-evolving training process may saturate, over-optimizing for reasoning ability and neglecting perception skills. This leads to unstable or suboptimal performance on tasks like TQA and FQA.
- As shown in Table 6, M-STaR effectively mitigates these issues. It adaptively adjusts the evolution process by introducing dynamic control to avoid training saturation. The improvements are especially significant when both the LLM and vision encoder have sufficient capacity, confirming M-STaR’s effectiveness across diverse subtasks.
# Q5
> Can the author explain the motivation to focus on the short iteration interval as mentioned in line 146-149 in section 3.2?
Thank you for your question.
We focus on short iteration intervals because our study explores optimal training design through the lens of RL. In the context of LLM training, several prior works [1–3] have shown that online training methods with an appropriate iteration interval outperform offline ones. In our self-evolving training framework, a shorter iteration interval corresponds to a more online training regime, where the model can adapt more quickly to newly generated samples. Our findings are consistent with existing RL literature, which emphasizes that iteration intervals should be appropriately short: not too large (to avoid stale or offline updates), and not too small (to prevent excessive variance across iterations).
[1] Direct Language Model Alignment from Online AI Feedback ICML2024
[2] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
[3] SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild
---
Rebuttal Comment 1.1:
Comment: Thank the authors for clarifications. I would like to keep my current rating. | Summary: The paper introduces M-STAR—a framework that reframes self-evolving training for multimodal reasoning as a reinforcement learning (RL) problem. It identifies three critical factors (training method, reward model, and prompt variation) and proposes a continuous self-evolving training variant. A novel Process Reward Model (PRM) is designed to assess intermediate reasoning quality, and an adaptive exploration strategy (via automatic temperature tuning) is introduced to mitigate performance saturation. Extensive experiments across multiple benchmarks (e.g., MathV360K, MathVista) and model sizes (MiniCPM-V-2.5, Phi-3.5-Vision, InternVL2-2B) demonstrate consistent gains.
Claims And Evidence: Claims:
(i) continuous self-evolving training outperforms conventional iterative methods
(ii) integrating a multimodal PRM enhances candidate selection
(iii) adaptive temperature tuning effectively balances exploration and exploitation
Evidence:
The authors support these claims with controlled ablation studies (e.g., Table 1 and Table 2) and analysis of metrics such as Reward-Pass@2 and Pass@K. However, the effectiveness of PRM as a reranker—despite underperformance on standard verification metrics—needs further clarification.
Methods And Evaluation Criteria: The proposed method is well aligned with the multimodal reasoning challenge, specifically addressing the scarcity of high-quality chain-of-thought annotations. The evaluation covers multiple baselines, model sizes, and both in-domain and out-of-domain benchmarks.
Theoretical Claims: The reformulation of self-evolving training as an RL objective is sound, and the derivations (e.g., Eq. 1 and Eq. 2) appear correct. While the continuous optimization approach is promising, further details on convergence guarantees and stability—especially during adaptive temperature tuning—would enhance the theoretical contribution.
Experimental Designs Or Analyses: The experimental design is comprehensive, with clear comparisons between various training strategies (iterative, continuous, PRM-enhanced). The analysis of exploration–exploitation dynamics via Reward-Pass@2 is particularly insightful. However, the potential impact of noisy unlabeled data in the PRM setup warrants deeper discussion.
Supplementary Material: The supplementary sections provide details on model architectures, training hyperparameters (Appendices A–C), additional experimental results (Appendices D–H), and further analysis of the adaptive exploration strategy.
Relation To Broader Scientific Literature: The paper is well-situated within recent work on self-training, multimodal reasoning, and RL-based training methods (e.g., references to Singh et al. (2023)[1], Zelikman et al. (2022)[2], Hosseini, Ali, et al.[3] and related chain-of-thought literature).
[1] Singh, Avi, John D. Co-Reyes, and Rishabh Agarwal. "Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models." ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models
[2] Zelikman, Eric, et al. "Star: Bootstrapping reasoning with reasoning." Advances in Neural Information Processing Systems 35 (2022): 15476-15488.
[3] Hosseini, Arian, et al. "V-STaR: Training Verifiers for Self-Taught Reasoners." CoRR (2024).
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
- Novel reformulation of self-evolving training as an RL problem.
- Introduction of continuous training and adaptive exploration, supported by thorough empirical evaluations.
- Detailed analysis of reward model dynamics and exploration–exploitation trade-offs.
Weaknesses:
- Some ambiguity remains regarding the PRM’s role as a reranker given its lower performance on standard verification metrics
- limited diversity in benchmark tasks may restrict claims about generalizability.
Other Comments Or Suggestions: Minor typos (e.g., “mutlimodal” instead of “multimodal” in Line 013 col 2 word 6) should be corrected. Though not critical, some explanations—especially around the adaptive temperature mechanism—could be clarified for enhanced readability.
Questions For Authors: 1. Can you clarify the apparent discrepancy between the PRM’s verification metrics (e.g., BoN, weighted voting) and its effectiveness in reranking responses?
2. How robust is the continuous self-evolving training process to variations in iteration interval and temperature adjustment parameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Q1
> However, the effectiveness of PRM as a reranker—despite underperformance on standard verification metrics—needs further clarification
Thank you for your question.
First, we would like to clarify an important distinction between Best-of-N (BoN) selection and reranking using PRM. When using PRM for BoN, no ground-truth answers are provided—meaning the PRM must identify both correct and incorrect responses independently. In contrast, during our training (excluding the unlabeled prompt setting), the responses are already **filtered using ground-truth answers**, allowing the PRM to focus on reranking correct responses. We then select the top-2 responses as high-quality samples. This is why the BoN results do not align with our usage of PRM during training.
To better investigate and validate the behaviour of the PRM, we conducted four analyses:
- **Value Function Perspective**:
Our PRM is trained via MC-rollouts [1, 2], acting as a value function estimating the expected reward (i.e., answer correctness) from an intermediate reasoning step. To assess accuracy, we compare the predicted value score (the PRM score at the step with the lowest value) against an empirical value score (the proportion of 16 rollouts from that step leading to a correct answer).
We then compute the mean squared error (MSE) between the PRM-predicted value and this empirical value across the dataset, grouped by whether the original response was ultimately correct or incorrect.
| | Correct | Wrong |
|-----|-----|-----|
| MSE | 0.081 | 0.124 |
This result indicates two things.
1. It highlights why our PRM here is more suitable as a reranker rather than a verifier—while PRM effectively distinguishes good reasoning trajectories for correct responses, its predictions are less reliable for incorrect responses, where it can be more easily confused.
2. Responses filtered by answers with higher PRM scores are with higher quality, making them better samples to learn.
This suggests future evaluations could separate reranking performance (when answers are known) from value estimation (on unlabeled or incorrect responses).
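A hedged sketch of this grouped-MSE computation (the records below are made-up illustrations, not the paper’s data):

```python
# Compare the PRM's predicted value at the lowest-scored step with the
# empirical success rate of 16 rollouts from that step, then average the
# squared errors separately for originally-correct and incorrect responses.
records = [
    # (prm_predicted_value, empirical_rollout_success_rate, response_correct)
    (0.80, 15 / 16, True),
    (0.70, 13 / 16, True),
    (0.40, 2 / 16, False),
    (0.60, 4 / 16, False),
]

def grouped_mse(records, correct):
    errs = [(pred - emp) ** 2 for pred, emp, c in records if c is correct]
    return sum(errs) / len(errs)

mse_correct = grouped_mse(records, True)
mse_wrong = grouped_mse(records, False)
print(f"MSE (correct) = {mse_correct:.3f}, MSE (wrong) = {mse_wrong:.3f}")
```

A lower MSE on the correct group, as here, is the pattern reported in the table above.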
- **Human Evaluation**:
To validate the reliability of the relativity score presented in Figure 2 in our paper and assess whether GPT-4o is unbiased, we conducted a human evaluation to measure the alignment between questions and responses. Specifically, we invited two human annotators to label 50 responses. To facilitate more consistent judgments, we categorized the relativity score into three levels: not relevant (corresponding to original scores 1–4), somewhat relevant (scores 4–7), and very relevant (scores 8–10). We then computed the agreement between the human annotations and GPT-4o's automatic scores. The average agreement exceeded 84%, which, according to prior research on automatic evaluation [3], indicates a strong level of consistency. This result supports the reliability of both our proposed relativity score and the GPT-4o annotations shown in Figure 2.
- **Readability Analysis**:
Using the Flesch Reading Ease (FRE) metric, we found that top-2 responses are more consistently readable than other samples, with a slightly higher mean (64.48 vs. 64.25) and much smaller variance (28.48 vs. 55.97).
This suggests PRM reranking helps reduce incoherent or erratic language, yielding clearer responses overall.
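A rough sketch of this readability comparison, assuming the standard Flesch Reading Ease formula with a crude vowel-group syllable counter (a real evaluation would typically use a dedicated readability library; the sample sentences are invented):

```python
import re

def flesch_reading_ease(text):
    # FRE = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

top2_sample = "The image is bright. This points to a T1 scan."
other_sample = "The given scan is indicated as a diffusion weighted analysis of the region."
print(flesch_reading_ease(top2_sample), flesch_reading_ease(other_sample))
```

Higher FRE means easier-to-read text; over many samples, a smaller variance of this score indicates less erratic language.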
[1] Math-Shepherd: ... ACL2024
[2] ProcessBench
- **Qualitative Case Study**:
The stepwise reward provides a finer signal of reasoning quality. Even when answers are correct, reasoning may be flawed. For example:
```
Question: What type of MRI is shown in the image?\nChoices:\n(A) T2-weighted \n(B) Diffusion-weighted \n(C) FLAIR \n(D) T1-weighted.
Top-2 Response: The choice is D, because the image is described as having a \"very bright signal\", ….\n\nDark areas on T1-weighted MRI typically …\n\n# Answer\n\nD
Other Response: The given MRI is indicated as a diffusion-weighted analysis.\n\nTherefore, the correct answer to the problem is: \n\n# Answer\n\nD
```
Although both predict the right answer, the top-2 response has clearer and more consistent reasoning, while the other includes misleading claims—underscoring the value of stepwise evaluation.
# Q2
> limited diversity in benchmark tasks
As shown in Tables 5 and 6, the five benchmarks we used include many diverse subtasks, covering not only the math domain but also tasks such as visual QA, figure QA, logic QA, scientific reasoning, and spatial reasoning, which are very diverse and challenging. They are also common practice for evaluating multimodal reasoning (e.g. math-llava).
# Q3
> How robust is the continuous self-evolving..
For the robustness of hyperparameters, in table 4, we follow the same parameters as Table 1,2 to train Phi-3 and InternVL models. Considering they are two models with different series and sizes, but the results remain consistent with MiniCPMV, we believe it proves hyperparameters in MSTaR are robust. | Summary: The authors reframe self-evolving training for multimodal reasoning through the lens of RL and indentify three factors: training method, the use of reward model, and prompt variation. They train the first multimodal, process-based reward model for multimodal reasoning and
demonstrate its usefulness in further enhancing performance; and find that adding more unlabeled queries helps only when having perfect reward signals (e.g., the oracle groundtruth answers), and it hurts the performance if the reward model does not generalize well on unseen data.
Claims And Evidence: The authors reframe self-evolving training for multimodal reasoning through the lens of RL and identify three factors: training method, the use of reward model, and prompt variation. But the experiments do not deeply study how these three factors affect the final results (i.e., the ablation studies are not sufficient; the settings focus primarily on minor hyperparameter adjustments, leading to conclusions that align with conventional expectations).
Methods And Evaluation Criteria: The technical contribution is limited; it seems the authors apply some existing LLM techniques to MLLMs. However, what are the new technical challenges induced by MLLMs, and how do the authors solve them? The paper lacks a hypothesis-driven structure that ties the findings to the central research question, making it appear more like a tech report than a research paper.
Theoretical Claims: Not available.
Experimental Designs Or Analyses: 1. Why do the authors choose Minicpm, internvl, and phi models as base models? Can the proposed method be used with LLaVA and QwenVL? I think these two models are more common in MLLM research.
2. The experiments are not solid enough; the proposed method should be evaluated on more multimodal reasoning benchmarks.
3. The compared baselines are not sufficient. For instance, the authors should compare their method with some existing self-evolution methods for LLM and MLLM reasoning.
4. Can the proposed method further improve the existing MLLM reasoning methods? Just using a base model (like Minicpm) is not solid.
5. The ablation studies should delve deeper into algorithmic comparisons. For instance, contrasting with techniques like RLHF, DPO, GRPO, as well as exploring different training methodologies (e.g., multi-training stages), the sequence of training stages.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: The paper lacks a hypothesis-driven structure that ties the findings to the central research question, which appears more like a tech report rather than a research paper.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: 1. The paper is well-organized and easy to follow.
2. The research topic: enhancing reasoning in large multimodal models without external annotated data is a good topic.
Other Comments Or Suggestions: The related work on multimodal reasoning is not sufficient; the authors should consider more recent papers.
Questions For Authors: How do you train the PRM? Can you give a clearer explanation of the data, training method, etc.?
What is an unlabeled prompt? In line 126, I find the answer is missing. Is it just a math question?
Besides, why do you use the word 'prompt' to describe this answer-missing setting? I think it is a little weird. Why not use 'label'?
In Table 5, what is the training data? Can just the math data bring the improvement?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper.
# Q1
> limited technical novelty
Overall we have contributed:
- A pilot study to enhance multimodal (MM) reasoning validated by comprehensive studies,
- The first self-evolving training recipe that blended online training and PRMs in MM reasoning
- The new method, monitoring training dynamics, and the experiments showing its effectiveness.
- Significant improvements on various multimodal MM benchmarks
They span three underexplored areas:
- A comprehensive RL-based analysis of self-evolving training;
- A new training framework and PRM tailored to multimodal reasoning;
- A methodology for tracking and interpreting training dynamics, offering insights into model evolution.
# Q2
> lack of a hypothesis-driven structure
Our central research question is outlined in the earlier sections of the paper (Abstract, Sec 1 Lines 12-26, 43-48).
While writing structures vary across papers, we adopt a component-wise, empirical exploration to derive insights—a format used in prior works like [1, 2]. We believe this structure does not diminish the rigour and clarity of our contributions.
[1] Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback, NeurIPS 2024
[2] What Matters When Building Vision-Language Models? NeurIPS 2024
# Q3
> Why do the authors choose Minicpm, internvl, and phi models ...
We selected them as they were among the strongest open-source LMMs of their sizes (2B, 4B, 7B) at the time of our experiments, offering solid reasoning abilities necessary for self-evolving training.
We prioritized:
- Models with solid baseline reasoning, critical for RL-based methods
- Stronger open-source models, as improving them is more meaningful than weaker ones.
Although LLaVA and Qwen-VL are widely used, they underperformed during our study period. For instance, LLaVA-1.6 achieved only ~20–30 on MathVista, and Qwen-VL-Chat lagged behind MiniCPM-V 7B (~40 vs. 52.8).
Our method is model-agnostic and can be extended to these models in future work.
# Q4
> Evaluation on more MM reasoning benchmarks
In Tables 5 and 6, the five benchmarks we use include many diverse subtasks, covering not only math but also tasks such as visual QA, figure QA, logic QA, scientific reasoning, and spatial reasoning, which are very diverse and challenging. Previous works [1, 2] often evaluate on just 1–2 benchmarks.
[1] Bootstrapping Mathematical Reasoning for Multimodal Large Language Models EMNLP2024
[2] Mathematical Visual Instruction Tuning with an Automatic Data Engine ICLR2025
# Q5
> baseline comparisons are insufficient
Our work is the first to systematically apply self-evolving training to MM reasoning. In Table 1, we also include the most widely used self-evolution baselines in text-only settings: Iterative RFT (aligned with STaR [1] and ReST [2]) and RestEM—three of the most established methods at the time of submission.
[1] STaR: Bootstrapping Reasoning With Reasoning Neurips2022
[2] Reinforced Self-Training (ReST) for Language Modeling
# Q6
> Can the proposed method further improve the existing MLLM reasoning methods? Just using a base model (like Minicpm) is not solid.
As shown in Table 4, we have validated our method on three models of varying sizes, and the conclusions remain consistent. Also, as in many MLLM/LLM reasoning works (e.g. ReST, V-STAR), it's common to mainly conduct experiments on top of general models rather than stacking different reasoning methods.
# Q7
> The ablation studies should delve deeper into algorithmic comparisons…
At the time of submission, STaR-like methods were among the most effective and compute-efficient for reasoning tasks, especially in MM contexts, as deployed to develop e.g. Llama3, ReST-MCTS* etc. Other RL paradigms like GRPO were less explored in this setting and are beyond our scope.
# Q8
> Related Work
Related work is in **Appendix I**; we will move it to the main body and add more recent papers in the next revision.
# Q9
> PRM Training Details
We have provided details in **Appendix D**, but briefly: the PRM is trained via MC-Rollout, where 50K questions are sampled and each is completed with up to 16 responses using a converged model checkpoint. Stepwise annotations are generated based on completion correctness, and the PRM is trained with a token-level MSE loss. The dataset is balanced across correct/incorrect responses and question types.
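A hedged sketch of this MC-Rollout labeling and loss (the rollout outcomes and helper names below are illustrative, not the actual training data or implementation):

```python
# Each reasoning step's target value is the fraction of rollout completions
# from that step that reach a correct final answer; the PRM regresses these
# targets with an MSE loss.
def step_targets(rollout_correctness_per_step):
    return [sum(step) / len(step) for step in rollout_correctness_per_step]

def mse_loss(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

# 3 reasoning steps, 4 rollouts each (1 = completion was correct)
targets = step_targets([[1, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
print(targets)  # [0.75, 0.25, 0.0]
print(mse_loss([0.8, 0.3, 0.1], targets))
```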
# Q10
> Unlabeled Prompts
Unlabeled prompts simulate real-world scenarios where collecting answers is difficult. Since PRM is trained on diverse reasoning steps, we test whether it can generalize to unlabeled data—enabling broader scalability beyond labeled datasets.
# Q11
> Training Data
In line 121 of our paper, we use MathV360K, which includes not only multimodal math problems but also a diverse range of tasks such as function QA and figure-based QA, covering a broad spectrum of multimodal reasoning scenarios.
Measuring In-Context Computation Complexity via Hidden State Prediction | Accept (poster) | Summary: The paper introduces the Prediction of Hidden States (PHi) loss to measure the complexity of computation in neural sequence models. The authors argue that traditional next-token prediction loss does not adequately capture the task complexity. To fix this, they propose evaluating the model’s ability to predict its own future hidden states. The PHi layer is introduced to measure the unpredictability of hidden states, which correlates with task complexity. The method is tested across different tasks and architectures, demonstrating that PHi loss distinguishes between complex and simple computations.
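As a toy illustration of the idea summarized above (a deliberately simplified 1-D stand-in, not the paper's actual PHi layer), hidden-state predictability can be scored by fitting the best linear next-step predictor and measuring the residual error:

```python
import random

def next_step_residual(states):
    # Fit h_{t+1} ~ a * h_t in closed form and return the mean squared residual:
    # low residual = predictable dynamics, high residual = complex in-context computation.
    pairs = list(zip(states, states[1:]))
    a = sum(h * h_next for h, h_next in pairs) / sum(h * h for h, _ in pairs)
    return sum((h_next - a * h) ** 2 for h, h_next in pairs) / len(pairs)

random.seed(0)
decay = [0.9 ** t for t in range(50)]            # perfectly linear dynamics
noise = [random.gauss(0, 1) for _ in range(50)]  # unpredictable dynamics
print(next_step_residual(decay), next_step_residual(noise))
```

The exactly linear sequence has near-zero residual while the random one does not, mirroring the paper's point that unpredictable hidden states signal more complex computation.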
Claims And Evidence: * Next-token prediction loss is an unreliable indicator of computational complexity.
- Evidence: Tasks involving random token sequences yield high next-token prediction loss but have no meaningful computation.
* Hidden-state predictability (PHi loss) is a better metric for measuring in-context computation complexity.
- Evidence: The authors show that PHi loss increases for complex reasoning tasks, in-context language learning, and step-by-step mathematical reasoning.
* PHi loss correlates with the description length of formal languages learned in-context.
- Evidence: The study demonstrates that the PHi loss reflects the complexity of probabilistic finite automata (PFA)-based tasks.
Methods And Evaluation Criteria: See the claims and evidence above.
Theoretical Claims: No theoretical claim was made in this work.
Experimental Designs Or Analyses: * Insert the PHi layer into different models to evaluate.
* train-from-scratch models: Transformers and LSTMs
* Pre-trained LLMs: a frozen Llama 3B model
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: No direct relation to broader scientific literature.
Essential References Not Discussed: The paper extensively cites prior work in in-context learning, VAEs, information bottlenecks, and neural sequence models.
Other Strengths And Weaknesses: **Strengths**
* PHi loss gives a new way to evaluate the task complexity.
* The method is tested on multiple architectures and datasets.
**Weaknesses**
* The PHi layer's effectiveness depends on where it is inserted in the model.
* Although promising for small models, computing the PHi loss requires adding a PHi layer and training the model. This could make it difficult to apply on larger-scale LLMs.
Other Comments Or Suggestions: * Testing PHi loss on encoder-only models (e.g., BERT) and encoder-decoder models (e.g., T5) could provide further insights.
Questions For Authors: * How sensitive is PHi loss to model size and dataset scale?
* Is it possible to combine the PHi loss with other model analysis tool to pinpoint where in-context learning occurs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Document with additional figures: https://tinyurl.com/yp4ucedn
## 1. Location of the PHi Layer in the Model
> The PHi layer's effectiveness depends on where it is inserted in the model.
We want to emphasize that in a fully trained model—such as the transformer in Section 3.1—the results are very robust across different placements of the PHi layer. This is supported by new experiments (Figure 5 in the linked document).
With regard to pre-trained LLMs, the reviewer is correct that the effectiveness of the PHi layer in measuring in-context computation complexity depends on roughly where in the model it is placed. Based on known findings from LLM interpretability research and other empirical considerations, it is not surprising that PHi layers placed about two-thirds of the way through the model tend to perform best. Below we detail our reasoning for placing the PHi layer after layers 18, 20, 22, and 24 of the Llama 3B model in the subsequent experiments.
- https://arxiv.org/abs/2407.09298 shows that the early and final layers in an LLM serve distinct purposes, making them unsuitable for measuring in-context computation.
- https://arxiv.org/abs/2410.10912 demonstrates that early layers in LLMs are more brittle and harder to prune—making them more vulnerable to the noise introduced by the PHi layer’s information bottleneck.
- PHi layers placed early in the model show clear signs of posterior collapse and thus give no clear results.
- We selected positions based on where the *variance* of the PHi loss across tasks is highest—not where interesting tasks produce the largest losses (although these two happen to coincide).
These choices are independent of any specific findings about which tasks are associated with high or low PHi loss. We therefore do not believe our selection introduces bias in the experimental outcomes. We will elaborate on this reasoning in the updated paper.
A practical benefit of later PHi layer placement is that it reduces computational cost, since we backpropagate through fewer layers of the pre-trained LLM.
## 2. Further Points
> Although promising for small models, computing the PHi loss requires adding a PHi layer and training the model. This could make it difficult to apply on larger-scale LLMs.
In pre-trained LLMs, only the PHi layer is trained—this requires very modest compute. Training the PHi layer for 10,000 steps in the Llama 3B model, as done for all experiments in Section 3.2, takes between 2.5 and 6 hours on a single consumer-grade GPU (NVIDIA RTX 3090), depending on the PHi layer’s position. We see no reason this approach couldn't scale to larger models.
> Testing PHi loss on encoder-only models (e.g., BERT) and encoder-decoder models (e.g., T5) could provide further insights.
We agree—measuring in-context computation complexity in these models would be interesting. However, it likely requires conceptual changes to the PHi layer. In decoder-only models, we have a clear compression scheme for the token and latent sequences. In encoder-only models, hidden states can incorporate information from both past and future tokens, complicating the setup.
> How sensitive is PHi loss to model size and dataset scale?
There *is* a dependency between PHi loss and model size: the KL divergence between two diagonal Gaussians is a sum over dimensions, so the PHi loss scales with the dimensionality of the hidden states. For this reason, PHi loss (in its current form) should only be used to compare tasks within the same model, not across different models. We never claim to compare models in the paper. We also do not interpret the absolute magnitude of the PHi loss. Addressing this limitation is a goal for future work—for example, through the use of quantized latent states.
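To illustrate the dimensionality dependence mentioned above: the closed-form KL between two diagonal Gaussians is a sum of per-dimension terms, so with the per-dimension gap held fixed, the total grows linearly with the latent size. A minimal NumPy sketch (all numbers here are made up for illustration):

```python
import numpy as np

# Closed-form KL(q || p) between diagonal Gaussians: a sum of per-dimension terms.
def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

kls = {}
for d in (64, 512, 4096):                      # hypothetical hidden sizes
    mu_q = np.full(d, 0.1)                     # same per-dimension offset from the prior
    kls[d] = kl_diag_gaussians(mu_q, np.full(d, 0.5), np.zeros(d), np.ones(d))
print(kls)                                     # total KL grows linearly with d
```

This is why, as the rebuttal notes, PHi losses are only comparable across tasks within the same model, not across models with different hidden sizes.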
As for the PHi layer’s capacity and training data needs, our method is flexible. In the Llama 3B experiments, we trained the PHi layer on roughly 40 million tokens from a standard natural language dataset. Figure 7 in the linked document shows the training curves. The PHi layer's architecture is designed to match the scale of a single LLM layer. If the reviewer finds it helpful, we are happy to provide additional experiments comparing different PHi layer capacities and training data volumes in a future reply.
> Is it possible to combine the PHi loss with other model analysis tool to pinpoint where in-context learning occurs?
**Where in the sequence or dataset?** Although we focus on the aggregated PHi loss in the paper, the method supports token-wise PHi loss measurement. This allows us to pinpoint where in a sequence non-trivial in-context computation occurs (e.g., see Figure 13 in the Appendix).
**Where in the model?** Figures 6&7 in the paper offer some preliminary insights into where interesting computation may be happening within a pre-trained LLM. However, further research is needed to confirm and extend these findings. | Summary: This paper proposes a novel method for probing the hidden representations of neural sequence models: the prediction of the hidden states (PHi) layer. This layer combines an encoder that generates latent variables from the hidden states and an autoregressive LMs that generates latent variables from previous latent variables. The corresponding parameters are trained to minimise the KL divergence between encoder and autoregressive LMs, together with the standard autoregressive objective on the output of the whole model. The authors test their new measure on several tasks, including both cases where the neural network is trained together with the PHi layer from scratch and cases where the PHi layer is inserted into a pre-trained LLM. The tests show how the PHi loss correlates with the computational complexity of the task while the standard next-token prediction loss does not.
## update after rebuttal
The rebuttal cleared the confusion expressed in the "Other Strengths and Weaknesses" section, and I have increased the score accordingly.
Claims And Evidence: It is far from obvious why the proposed procedure should measure "a model's ability to predict its own hidden states", as stated in the abstract. Nevertheless, experimental results agree with the trends claimed by the authors at the end of section 1. The statistical significance of these trends remains open for debate, especially for experiments with LLMs (figures 6 and 7).
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: Experiments seem sound although there are aspects of the PHi layer design that I do not understand:
1: is the autoregressive part of the PHi layer used to compute the output, or does it just enter to train the encoder?
2: is eq. (4) implying that the first part of the networks does not receive gradients from the negative log-likelihood loss?
3: how is the encoder incentivised to have the latents depend on the previous latents, given that the autoregressive component of PHi is also initialised randomly and trained from scratch? (see also Strengths and Weaknesses section)
Supplementary Material: Only part of Appendix A.
Relation To Broader Scientific Literature: This paper could have a significant impact on the literature devoted to understanding the operations performed by LLMs, including mechanistic interpretability and other studies of hidden representations.
Essential References Not Discussed: I am not aware of any.
Other Strengths And Weaknesses: The problem considered by the paper is timely and relevant and the solution based on the idea of measuring "how well the model can predict its own future hidden states" is both interesting and original. However, I cannot completely follow the rationale behind the design of the PHi layer. I would understand the comment after eq. (5), "the model is incentivized to maintain ...", if the prior were some fixed distribution. However, the fact that the prior is trained from scratch via Eq. (2) confuses me, as the loss only forces it to agree with the encoder distribution and not with the distribution of post-activations from the training data. For instance, why not train the prior with a standard next-token prediction objective but in the latent (instead of token) space?
Other Comments Or Suggestions: Typo in section 2.3, third row ("we need to ensure that ensures").
Typo in the caption of Figure 5 ("because of a posterior collapse, most likely due to posterior collapse").
Questions For Authors: I have no further questions. My `weak accept' evaluation stems from the confusion expressed in the "strengths and weaknesses" section: if cleared, I would raise the evaluation. --- updated after response.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Document with additional figures: https://tinyurl.com/yp4ucedn
## 1. Questions about the PHi Layer
> 1: is the autoregressive part of the PHi layer used to compute the output, or does it just enter to train the encoder?
The purpose of the autoregressive part (i.e., the causal self-attention layer) is to make the prior $p_\chi$ more powerful. It plays no role in computing the next token prediction of the model. More details below.
> 2: is eq. (4) implying that the first part of the networks does not receive gradients from the negative log-likelihood loss?
No—the gradients flow back through the reconstructed $h'$, to the encoder $q_\psi$, the original hidden states $h$ and to the bottom layers $B_\beta$. No gradients are blocked in the architecture. This ensures that when trained end-to-end, as in Section 3.1, the model is encouraged not only to *predict* its own next hidden states, but also to make those next hidden states more *predictable*.
> 3: how is the encoder incentivised to have the latents depend on the previous latents, given that the autoregressive component of PHi is also initialised randomly and trained from scratch? (see also Strengths and Weaknesses section)
The latents $z$ are sampled from the posterior $q_\psi(\cdot | h_t) = q_\psi(\cdot | x_1, \dots, x_t)$. They have to contain the information necessary to predict the next token $x_{t+1}$, otherwise the NLL loss increases. The prior has to predict the posterior distribution but does not have access to the most recent input token $x_t$. However, we make it autoregressive and give it access to the previous latents, since their information content has already been accounted for by previous PHi losses.
In short: the autoregressive prior $p_\chi$ is trained to predict the posterior. The posterior $q_\psi$ is trained to allow accurate next token prediction from the reconstructed state $h'$, while staying as close as possible to what the prior predicts.
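A minimal sketch of this posterior/prior interplay, assuming diagonal-Gaussian distributions and substituting toy linear maps for the actual encoder and attention-based autoregressive prior (all shapes, parameters, and the one-step prior are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, Z = 6, 16, 8                        # toy sequence length, hidden size, latent size

W_post = rng.normal(0, 0.1, (H, 2 * Z))   # stand-in encoder q_psi: hidden state -> (mu, log-var)
W_prior = rng.normal(0, 0.1, (Z, 2 * Z))  # stand-in prior p_chi: previous latent -> (mu, log-var)

def kl(mu_q, lv_q, mu_p, lv_p):
    # KL( N(mu_q, e^lv_q) || N(mu_p, e^lv_p) ) for diagonal Gaussians
    return 0.5 * np.sum(lv_p - lv_q + (np.exp(lv_q) + (mu_q - mu_p) ** 2) / np.exp(lv_p) - 1.0)

h = rng.normal(0, 1, (T, H))              # stand-in for the model's hidden states
z_prev = np.zeros(Z)                      # start latent (zeros for simplicity)
phi_loss = 0.0
for t in range(T):
    mu_q, lv_q = np.split(h[t] @ W_post, 2)     # posterior sees h_t, i.e. x_1..x_t
    mu_p, lv_p = np.split(z_prev @ W_prior, 2)  # prior sees only earlier latents
    phi_loss += kl(mu_q, lv_q, mu_p, lv_p)      # per-step PHi loss
    z_prev = mu_q + np.exp(0.5 * lv_q) * rng.normal(0, 1, Z)  # reparameterized sample
print(phi_loss)
```

The key structural point the rebuttal makes is visible here: the posterior conditions on the current hidden state, while the prior only sees past latents, so the KL is generally nonzero.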
## 2. Two Simpler Approaches and Why they Do not Work
> ...if the prior were some fixed distribution.
> ...why not train the prior with a standard next-token prediction objective but in the latent (instead of token) space?
The reviewer raises two interesting questions which we also explored during the development of our method:
- Can we simply use an information bottleneck with a fixed, uninformative prior and no hidden state prediction (similar to a vanilla VAE, no autoregressive prior)?
- Can we use a simpler next hidden state prediction objective? Since the latent or hidden state is continuous, it cannot be the usual categorical cross-entropy, but maybe mean squared error (MSE)?
Each of these simpler approaches has shortcomings that our method avoids:
If we use a fixed prior in a pre-trained LLM, it tends to overestimate the information in the hidden state sequence. For example, consider a sequence where the hidden states all lie in the same small subspace. With a fixed prior, we would have to pay the same large price in terms of KL-divergence for every single one of the hidden states. An autoregressive prior, in contrast, quickly adapts to the subspace, reducing the KL for subsequent steps. Our new results include an ablation showing that an information bottleneck without hidden state prediction gives no meaningful results in practice (see Figure 6 in the linked document).
As for using a straightforward next hidden state prediction loss such as MSE: this encourages the model to scale down the norm of the hidden states to near-zero, potentially leading to machine precision issues. Even if we fix the norm, the model could encode information in tiny perturbations from a default vector, which lead to low MSE but can carry arbitrary information. Without a noisy information bottleneck, the approach lacks a clear information-theoretic interpretation.
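The norm-shrinking failure mode can be shown in a few lines. This toy NumPy snippet (our own construction, not the paper's code) shows a naive next-state MSE collapsing quadratically as hidden states are scaled down, even though a readout rescaled by the inverse factor would recover identical predictions, so no information is lost:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(0.0, 1.0, (100, 32))          # stand-in hidden-state sequence
mse = lambda a, b: float(np.mean((a - b) ** 2))

# "Predict" each next state with the previous one; uniformly shrinking all
# states by c shrinks this MSE by c^2 without changing their information content.
errs = {c: mse(c * h[1:], c * h[:-1]) for c in (1.0, 0.1, 0.001)}
print(errs)
```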
Our PHi layer addresses both issues:
- The autoregressive prior ensures we pay only for genuinely *new* information not already contained in previous tokens.
- The noisy information bottleneck prevents hidden state collapse and provides a clear, principled measure of the information in the latent states.
## 3. Statistical Significance of the Results
We emphasize that we report 95% confidence intervals or p-values for all experiments, and find all reported results to be statistically highly significant. See Figure 2 in the linked document for additional confidence intervals for the partial correlation in the MATH rationale experiment. These were missing in the submitted paper.
For a discussion of how we selected the PHi layer location, please refer to our response to reviewer Uw5E, paragraph 1.
If there are any remaining questions, we are very happy to answer them in a future response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification: the response cleared my confusion, thus I raised my mark to 4. In particular, It could be useful to include part of the response to 1. in the revised manuscript, e.g. in the discussion immediately after Eq. (5).
---
Reply to Comment 1.1.1:
Comment: Thank you for the suggestions, we will incorporate them into the updated paper. We would also like to take this opportunity to thank all reviewers for their comments and for reconsidering their initial scores in light of our responses. The reviews have led to valuable clarifications in the updated version of the paper. | Summary: I think this paper introduces the PHi (Prediction of Hidden States) layer as a novel way to measure the complexity of computation performed by neural sequence models by examining how predictable their hidden states are. The authors show that this metric correlates better with intuitively "interesting" computation than next-token prediction loss.
Claims And Evidence: the evidence provided through experiments on both smaller models and LLMs is adequate but not completely convincing, as the connection to formal notions of complexity remains mostly theoretical.
Methods And Evaluation Criteria: I think the proposed PHi layer is an elegant approach, but its placement in pre-trained models seems somewhat arbitrary, raising questions about the robustness of the method.
Theoretical Claims: n/a
Experimental Designs Or Analyses: I think the experiments cover a good range of scenarios, but the evaluation of "interestingness" remains subjective, and the baseline comparisons are limited.
Supplementary Material: yes I check.
Relation To Broader Scientific Literature: The paper makes interesting connections to information theory and mechanistic interpretability, but could better situate itself within the growing body of work on understanding LLM computation
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: I think the proposed metric provides a novel lens for quantifying when models perform meaningful computation. The correlations between PHi loss and task complexity across different domains are intriguing. The application to correctness prediction in self-generated reasoning chains demonstrates practical utility.
However, the paper lacks rigorous justification for where to place the PHi layer in pre-trained models, with results varying significantly based on layer position. The evaluation relies heavily on intuitive notions of complexity rather than formal definitions. The method seems sensitive to hyperparameters and architectural choices, raising questions about generalization.
The major concern is that it is only tested on small models; can we test this on Llama 7/8B+ models, etc.?
Other Comments Or Suggestions: n/a
Questions For Authors: 1) How sensitive are the results to the specific architectural choices in the PHi layer?
2) Is there a principled way to determine optimal layer placement in pre-trained models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Document with additional figures: https://tinyurl.com/yp4ucedn
## 1. Robustness towards Hyperparameters and PHi Layer Placement
>...the paper lacks rigorous justification for where to place the PHi layer in pre-trained models
While it is true that the properties of the PHi layer vary depending on its placement in a pre-trained LLM, our choices are not arbitrary. In our response to Uw5E, paragraph 1, we give several principled reasons—independent of any specific results on interestingness or computational complexity—why the chosen positions are sensible for the PHi layer. We highlight that we selected layers that maximize the variance of PHi loss across different tasks. A more detailed account of these considerations will be included in the updated paper.
> How sensitive are the results to the specific architectural choices in the PHi layer?
In general, our method is quite robust to the choice of hyperparameters and the exact architecture of the PHi layer components. For example, the sizes of the MLPs and the attention module are chosen simply to align with the rest of the model architecture. We show in the paper that our method works with transformers, LSTMs, and pre-trained LLMs.
We did not perform extensive hyperparameter tuning, but selected reasonable and stable settings and used them consistently. In response to this review, we tested varying the weight of the PHi loss (relative to NLL loss) from 1 to 10, and found that higher weights actually perform better than the one reported in the paper, see Figure 5 in the linked document. If the reviewer is interested, we would be happy to provide a more detailed robustness analysis for specific hyperparameters in a future response.
## 2. Subjectivity of Interestingness
> The evaluation relies heavily on intuitive notions of complexity rather than formal definitions.
The reviewer raises a valid point: Notions of complexity and interestingness are inherently difficult to formalize. Many definitions—such as Kolmogorov Complexity and Sophistication—are non-computable and cannot be directly used in empirical studies. Some appeal to intuition is therefore difficult to avoid.
However, we have attempted to ground our evaluation in formal and objective measures wherever possible. For the PFA-based tasks, we use the description length of the automaton $C(A)$, as described in section 3.1, to quantify complexity. For the complexity of the non-ICLL tasks, we can also give a concrete description length: the complexity of memorized sequences is $\log N$, where $N$ is the number of memorized sequences. For memorized programs, it is $\log M$, where $M$ is the number of memorized programs. For random sequences, the complexity is 0, as no inference is possible.
We agree that the natural language tasks in Section 3.2.1 are categorized in a less formal way, but believe our reasoning is principled and goes beyond intuition. In the MATH dataset experiments, we rely on dataset-provided difficulty levels, which range from simple to difficult. In this context, we take “difficult” to be synonymous with “complex.”
We also want to emphasize that we never claim to measure the *absolute* or objective interestingness of a sequence—only the *relative* differences between sequences or tasks for a given model.
We will further clarify these points in the updated paper.
## 3. Misunderstanding about Experiments with Large Models
> The major concern is that it is only tested on small model, can we test this on a BERT/GPT2 or even bigger model such as Llama1B etc,
The paper includes extensive experiments with large models—specifically, the Llama 3.2 3B model with 3 billion parameters, which is significantly larger than BERT, GPT-2, or LLaMA-1B. Please see Sections 3.2.1, 3.2.2, and 3.2.3.
Our method is also fairly scalable. Training the PHi layer in the pre-trained LLaMA 3B model takes approximately 3 hours on a single consumer-grade GPU (NVIDIA RTX 3090). Upon acceptance, we will release our code, which requires only minor modifications to work with other open-source LLMs.
## 4. Additional Points
> ...baseline comparisons are limited.
We are not aware of other methods that directly measure the complexity of in-context computation in sequence models. Does the reviewer have any specific baselines in mind? The two alternative approaches discussed in our response to rCDj, paragraph 2, may be of interest.
> The application to correctness prediction in self-generated reasoning chains demonstrates practical utility.
We are glad the reviewer appreciates this. The updated paper will include an expanded experiment that confirms and extends this result on the MATH dataset (see response to 6ZwR, paragraph 1, and Figures 3&4 in the linked document).
> ...could better situate itself within the growing body of work on understanding LLM computation
We have aimed to highlight the major relevant work and would be grateful for any suggestions of related literature we may have missed. | Summary: The paper proposes a "prediction of hidden states" (PHi) layer, which can be used to quantify the complexity of the computation being performed in a neural model. The layer exists between the activations of a sequence model such as a Transformer. It maps the activations to latent variables. It then computes the KL between the encoding of the next symbol and learned "prior" that attempts to predict this encoding given the encodings of previous symbols. This KL divergence therefore quantifies the new information in the next symbol that is not predicted by the prior.
The experiments show that this KL divergence metric, termed the "PHi loss", correlates with various other measures of interest to a greater degree than the overall model's negative log likelihood (NLL) of the next token. For example, while the NLL is high for predicting random strings, "PHi loss" is relatively low. Intuitively, this is because the "PHi loss" is computed over the latent representations, which are implicitly encouraged to only contain information relevant towards predicting future symbols. Random sequences have high NLL because they have high conditional entropy. Experiments show that "PHi loss" correlates with the complexity of in-context learning tasks. Finally, experiments show that "PHi loss" over chain of thoughts correlates with accuracy of the final answer for GSM-8k, suggesting this is a desirable property for answer rationales.
## Update after rebuttal
Thank you for your response and clarifying the relationship between the information that the posterior and prior condition on. I would recommend seeking to clarify the presentation and intuition around this if possible, in the revised version. I also appreciate further exploring the application of the method to select for rationales leading to correct answers. I will raise my score from 2 to 3.
Claims And Evidence: TBD (see questions)
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The experimental settings and analysis seemed reasonable.
Supplementary Material: No
Relation To Broader Scientific Literature: To the best of my knowledge, the proposed metric is novel. The paper discusses relevant prior work in Section 4.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: Strengths
* The proposed measure offers a new perspective on quantifying the complexity of the behavior of a neural model such as a Transformer.
* The proposed measure correlates with description length of tasks in an intuitive way.
* Perhaps most intriguingly, the proposed measure when applied to rationales for mathematical reasoning appears to correlate with answer accuracy.
Weaknesses
* I may be confused, but I was unclear regarding some aspects of the learned prior. Presumably, if the model used to represent the learned prior was sufficiently expressive, would it be possible for "PHi loss" to go to zero during training? Even at inference time, could the prior have equivalent complexity to the overall model, and therefore we would expect low "PHi loss" even on complex tasks, since both the overall model predicting the next token and the prior predicting the next hidden state can capture all of the same information (up to the limit of the mutual information between previous tokens and the next token)? In other words, in theory, does the method implicitly require some capacity constraint on the prior in order to be non-vacuous?
* I worry that some of the results may be contingent on the specific decisions and hyperparameters used for implementing the encoder, prior, etc. As there is no "train and test split" for most of the experiments, it is difficult to assess the degree to which some conclusions may be contingent on these choices.
* It would be helpful to have a bit more clarity on what precisely the authors mean by quantifying "an upper bound on the complexity of this implicit model... that is generated in-context to predict next tokens". The connection to work on description lengths, inspired by the MDL principle and Kolmogorov complexity, seemed potentially helpful but I didn't fully understand the connection. The role of the prior, the complexity of which is not accounted for in "PHi loss", seems to make the connection unclear. It seems like clarifying this would be quite helpful for the paper and its motivation. It could also help identify relevant methods from prior work to compare the proposed method against.
I will be open to revisiting my judgements if the authors can help clarify these points for me.
Other Comments Or Suggestions: Just some minor nits:
* Section 2.3 typo - "to ensure that ensures that"
* Section 6 typo - "a powerful objective for in applications"
Questions For Authors: See "Weaknesses" above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Document with additional figures: https://tinyurl.com/yp4ucedn
## 1. Rationales for Mathematical Reasoning
> Perhaps most intriguingly, the proposed measure when applied to rationales for mathematical reasoning appears to correlate with answer accuracy.
We agree that this is an intriguing finding. Since the submission, we were able to further confirm and extend it, please see Figures 3&4 in the linked document. We repeated the experiment described in Section 3.2.1 on the MATH dataset and observed very similar results: Here, too—across all tested layers—correct rationales are associated with high PHi loss, both across all question pairs and within the subset of counterintuitive pairs.
The MATH dataset further allows us to break down the questions by difficulty. This reveals an interesting pattern within the counterintuitive subset: for easy questions, high PHi loss does not correlate with correct rationales, whereas for difficult questions, there is a strong positive correlation between high PHi loss and correctness. We will include these results in the updated version of the paper.
## 2. Learned Prior and Capacity Constraints
The autoregressive prior lacks access to the most recent input token $x_t$. Hence, unless $x_t$ is completely uninformative, the PHi loss will not go to zero. We can see this by re-writing Equation 2 in the paper. Recall that $h_t$ is a function of $x_1, \dots, x_t$ and $z_t$ is a (stochastic) function of $x_1, \dots, x_t$. Plugging this in, we can express the PHi loss at time step $t$ as:

$$
L_{\text{PHi}}(t)=D_{\mathrm{KL}}\Bigl( q_\psi(\cdot \mid x_1, \dots, x_t)\,\big\|\,p_\chi(\cdot \mid x_1,\dots,x_{t-1})\Bigr).
$$
This makes it clear that, in general, prior and posterior are not the same, even if the prior is highly expressive.
That said, the reviewer is right that in practice, the prior module $p_\chi$ not only lacks access to the most recent token, but also has less capacity than the part $B_\beta$ of the model that computes the target hidden state (the posterior distribution). In this sense, we are measuring the information gain resulting both from the new token *and* from the 'irreducible' computation the model performs in response to previous tokens. Disentangling these two components is an interesting challenge for future work.
## 3. Train/Test Split and Methodology
> As there is no "train and test split" for most of the experiments...
It is important to clarify that there *is* a strict split between training and test/evaluation data for all experiments in the paper. The data used to train the models and PHi layers is distinct from the data used in the evaluations (with the exception of the memorization tasks, of course). We will emphasize this more clearly in the paper and provide additional information about train/test split in the appendix. As we describe in responses to other reviews, our method is fairly robust towards different hyperparameters (see response to u77y, paragraph 1 and Figure 5 in the linked document).
We agree with the reviewer that the results of Section 3.2 are somewhat contingent on the placement of the PHi layer in the Llama 3B model. However, we did our best to be transparent and methodically sound: Experiment 3.2.1 serves as an exploration and allows us to select sensible placements of the PHi layer. In response to Uw5E, paragraph 1, we detail the practical and theoretical considerations of this selection, which we cannot repeat here due to lack of space. All subsequent experiments are conditional only on this one selection process. We will make this clearer in the revised paper.
## 4. PHi Loss and the Minimum Description Length Principle
Unfortunately, we do not have the space in this reply for a full explanation, but we will clarify this connection in the updated paper. Here, for brevity, we adopt the notation of https://arxiv.org/abs/2410.14086, where $p_\theta$ is the in-context learned model of some data $D$. The MDL principle says we should minimize $K(p_\theta) + K(D \mid p_\theta)$.
The synthesized model $p_\theta$ has to pass through the information bottleneck in order to be effective. The cumulative PHi loss quantifies the amount of information crossing that bottleneck (i.e., the information contained in the latent sequence), and is thus an upper bound of $K(p_\theta)$. Our comment above about the prior, in addition to Section 2.4 in the paper, which shows the existence of a compression scheme for the latent sequence, should make it clear that the autoregressive prior *is* accounted for in the PHi loss. Meanwhile, the cumulative NLL loss is an upper bound of $K(p_\theta) + K(D \mid p_\theta),$ and therefore also an upper bound of $K(p_\theta)$. However—as we argue throughout the paper—it is a significantly weaker bound than the PHi loss.
If the reviewer would like further clarification, we are very happy to give more details and answer all remaining questions in a future response. | null | null | null | null | null | null |
Faster and Stronger: When ANN-SNN Conversion Meets Parallel Spiking Calculation | Accept (poster) | Summary: This paper innovatively combines parallel spiking calculation with ANN-SNN Conversion to propose a high-performance and ultra-low-latency parallel conversion framework, which can also be applied in more general conversion scenarios (e.g. ReLU, QCFS with different quantization level). Experimental results have demonstrated that the superiority of the proposed method in terms of inference speed and performance.
Claims And Evidence: The theoretical claim (e.g. Theorem 4.1) made in the submission is supported by convincing experimental validation.
Methods And Evaluation Criteria: The derivation of the step-wise optimal shift term and the layer-wise error calibration based on DA-QCFS function ensure the fidelity of the conversion process.
Theoretical Claims: I have checked the correctness of the proof for theoretical claims in this paper.
Experimental Designs Or Analyses: According to Table 3-4 and Figure 2, compared to traditional conversion methods, parallel conversion achieves more superior performance within the same time latency.
Supplementary Material: I have reviewed the code implementation in the supplementary material and the additional content in Appendix.
Relation To Broader Scientific Literature: This paper provides a new perspective for further exploring the supervised learning schemes of SNNs (ANN-SNN Conversion, STBP Training, etc).
Essential References Not Discussed: At present, no previous work has been found to be omitted or improperly cited in this paper.
Other Strengths And Weaknesses: Strengths:
This paper explores the parallel spiking computation model from the perspective of conversion learning, establishing an equivalent mathematical mapping relationship between each time-step and the corresponding cumulative spike firing rate, which can also be considered as revealing the performance upper-bound of the parallel spiking model from another perspective. In addition, the authors further analyze the optimal value of the shift term and recognize the potential layer-by-layer distribution problem of the spike sequence, thus extending the proposed method to a more general conversion framework.
Weaknesses:
In Section 4.3, the authors mention that the computational cost of parallel conversion can be further optimized. I suggest that the authors can compare the specific computational overhead of vanilla IF model and parallel model in more detail from the perspective of operands (ADD, MUL), so as to make the parallel conversion scheme more convincing.
Other Comments Or Suggestions: The parallel conversion framework explores the performance upper-bound of parallel spiking computing (both precision and speed) from a new perspective.
Questions For Authors: SNNC-LP(AP) has also utilized an error calibration scheme before. What are the differences between the DA-QCFS based calibration method proposed in this work and SNNC-LP(AP)?
Li,Y., Deng,S., Dong,X., Gong,R., and Gu,S. A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration. In International Conference on Machine Learning, 2021.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## To Reviewer mMJc
We are pleased that you recognize the relevant content of this work in terms of theoretical claims and experimental validation, as well as pointing out that our method provides a new perspective for SNN supervised learning. We will elaborate on your questions and comments.
> The authors can compare the specific computational overhead of vanilla IF model and parallel model in more detail from the perspective of operands (ADD, MUL)
**A1:** Thanks for the question. For the vanilla IF model, the inter-layer computation (assuming the synaptic layer is a fully-connected layer $[C,C]$) involves $O(T_\text{IF} NC^2)$ multiplication/addition operations, where $N,T_\text{IF}$ respectively denote the total number of tokens and time-steps. The intra-layer calculation involves charging, firing and resetting, with a total of approximately $2T_\text{IF}$ addition and $T_\text{IF}$ comparison operations per neuron.
For the parallel spiking model, the inter-layer computation is consistent with that of the IF model, while the intra-layer calculation can be combined with the various optimization techniques proposed in Section 4.3. Specifically, when we convert $\mathbf{\Lambda}^l_\text{PC} \mathbf{I}^l$ to $[\frac{1}{T_\text{PC}},...,\frac{1}{T_\text{PC}-x+1},...,1]\odot\sum_t\mathbf{W}^l\mathbf{s}^{(l-1)}$, only $T_\text{PC}$ addition operations are involved, where $[\frac{1}{T_\text{PC}},...,\frac{1}{T_\text{PC}-x+1},...,1]$ can be further fused with $\theta^{l,t}$ at each time-step. This step involves $T_\text{PC}$ comparison operations. However, due to the sorting property, the number of comparisons can actually be further reduced to $O(\log T_\text{PC})$. Overall, the parallel conversion framework saves approximately $2T_\text{IF}-T_\text{PC}$ additions and $T_\text{IF}-O(\log T_\text{PC})$ comparisons per neuron layer, as well as $O(T_\text{IF} NC^2)-O(T_\text{PC} NC^2)$ multiplication/addition operations per synaptic layer, while achieving superior inference performance and speed. Generally speaking, when traditional ANN-SNN Conversion reaches the same performance level as Parallel Conversion, one finds that $T_\text{IF}\gg T_\text{PC}$.
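The counts above can be tabulated with a small helper. This is our own rough bookkeeping of the per-neuron intra-layer figures quoted in this reply, not code from the paper:

```python
import math

def per_neuron_intra_ops(T_if, T_pc):
    """Approximate per-neuron intra-layer operation counts from the reply:
    vanilla IF needs ~2*T_IF additions (charging + resetting) and T_IF
    threshold comparisons; the parallel model needs ~T_PC additions and,
    thanks to the sorting property, only O(log T_PC) comparisons."""
    vanilla = {"add": 2 * T_if, "cmp": T_if}
    parallel = {"add": T_pc, "cmp": max(1, math.ceil(math.log2(T_pc)))}
    return vanilla, parallel
```

For example, $T_\text{IF}=128$ vs $T_\text{PC}=4$ gives 256 vs 4 additions and 128 vs 2 comparisons per neuron.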
> What are the differences between the DA-QCFS based calibration method proposed in this work and SNNC-LP(AP)?
**A2:** Thanks for the comment. SNNC-LP(AP) is committed to extremely low-cost error calibration from pre-trained ANNs to converted SNNs, where the average spike firing rate used for calibration is based on the assumption of a uniform input current distribution. Because SNNC-LP(AP) still uses the vanilla IF model in the inference stage, a gap remains between the estimated firing rate after calibration and the actual layer-by-layer firing rate.
In comparison, the motivation of our calibration method is to achieve efficient inference of SNN under any time latency. Since parallel spiking calculation can satisfy the assumption of input current distribution, the process of replacing the rectified DA-QCFS module with parallel spiking neurons is lossless. The goal of calibration is to regulate the distribution of input current within a specified number of time-steps, so that the estimated firing rate of each layer is aligned with the highest learning accuracy as much as possible, thereby preparing for the parallel conversion process in advance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. My concerns have been addressed.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. Appendix A.1
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: [1] Scaling spike-driven transformer with efficient spike firing approximation training. IEEE T-PAMI 2025.
Other Strengths And Weaknesses: Strengths :
1. This paper proposes a low-time-step train-free ANN2SNN method to reduce the SNN training burden and inference cost.
2. Well-written and organized.
3. This work establishes a mathematical mapping between parallel spiking neurons and cumulative spike firing rates, theoretically validates the lossless and sorting properties of the conversion process, and derives the optimal shifting distance for each step.
Weaknesses:
1. The proposed parallel transformation is limited to the convolutional architecture, and it remains an open question whether it can be applied to the Transformer architecture.
2. The proposed parallel transformation can only convert fixed ANN architectures, and SNN characteristics such as the spike-driven property may not be guaranteed during conversion.
Other Comments Or Suggestions: 1. Please discuss in detail the difference between the conversion method in this paper and I-LIF [1], and whether the spike trains in this paper can be adjusted at will.
2. The proposed parallel transformation is limited to the convolutional architecture, and it remains an open question whether it can be applied to the Transformer architecture.
[1] Scaling spike-driven transformer with efficient spike firing approximation training. IEEE T-PAMI 2025.
Questions For Authors: 1. Please discuss in detail the difference between the conversion method in this paper and I-LIF [1], and whether the spike trains in this paper can be adjusted at will.
2. The proposed parallel transformation is limited to the convolutional architecture, and it remains an open question whether it can be applied to the Transformer architecture.
[1] Scaling spike-driven transformer with efficient spike firing approximation training. IEEE T-PAMI 2025.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## To Reviewer nBwe
We are delighted that you think that our method is well written and organized, as well as being validated in both theoretical and experimental dimensions. We will discuss your questions in detail in the following content.
> The difference between the conversion method in this paper and I-LIF.
**A1:** Thanks for the question. From the perspective of conversion error, I-LIF transmits integer spikes during the training stage and can switch to vanilla LIF neurons during the inference stage. However, this can only ensure that I-LIF satisfies the input current assumption consistent with the QCFS function in the sub-slices composed of every $D$ time-steps ($D$ denotes the maximum integer spike value), and cannot fully achieve lossless conversion. In addition, I-LIF is commonly used in the field of STBP training. In comparison, parallel conversion is a learning framework specifically designed for ANN-SNN Conversion and has the property of lossless conversion.
From the perspective of calculation mechanism, I-LIF, as an enhanced version of the vanilla LIF model, still consists of three processes: charging, firing and resetting. The overall computing process is serial and has a temporal direction. In comparison, our scheme adopts parallel spiking calculation, saving the processes of charging and resetting, and enabling more efficient SNN inference.
> Whether the spike trains in this paper can be adjusted at will?
**A2:** For ANN-SNN Conversion, the key to learning lies in the average spike firing rate. As shown in Eq.8, our $\Lambda_\text{PC}^l$ is formed by the fusion of $\Lambda_\text{PRE}^l$ and $\Lambda_\text{POST}^l$, thus possessing the ability to homogenize the distribution of input current. In other words, theoretically, arbitrarily shuffling the order of the output spike sequence $\mathbf{s}^{(l-1)}$ will not affect the prediction of the average spike firing rate $\mathbf{r}^{l,T}$ in the next layer.
From the perspective of adjusting the length of the spike sequence and the number of firing spikes, by combining the calibration scheme discussed in Sec 4.2 and Eq.9, we can arbitrarily adjust the length of the spike sequence and ensure that the converted SNN model has advanced performance within any time period.
> It remains an open question whether the proposed parallel transformation can be applied to the Transformer architecture.
**A3:** Thanks for the comment. Assuming that the input currents through $\mathbf{Q}^l,\mathbf{K}^l,\mathbf{V}^l$ are respectively $\mathbf{I}_Q^l,\mathbf{I}_K^l,\mathbf{I}_V^l\in\mathbb{R}^T$. If the input current is directly passed through the corresponding parallel spiking neurons $\text{SN}(\cdot)$ and the attention score is calculated, it will introduce potential computational complexity of $O(T^2)$. Therefore, one viable solution is to pre-calculate the average input current $\mathbf{I}_K^{l,\text{avg}},\mathbf{I}_V^{l,\text{avg}}\in\mathbb{R}$, then complete the calculation for $\left(\text{SN}(\mathbf{I}_Q^l){\mathbf{I}_K^{l,\text{avg}}}^{\top}\right)\mathbf{I}_V^{l,\text{avg}}$.
At this point, one can note that the computational complexity of each step is at the level of $O(T)$ and always maintains $\mathbf{r}^{l,T}=\sum_t\mathbf{s}^{l,t}$. When $\mathbf{r}^{(l-1),T}=\sum_t\mathbf{s}^{(l-1),t}$, $\mathbf{I}^{l,x}=\Lambda_\text{PRE}^l\mathbf{W}^l\mathbf{s}^{(l-1)}=\mathbf{W}^l\mathbf{r}^{(l-1),T}, \forall x\in[1,T]$ holds, ensuring that the input current entering parallel neurons satisfies the distribution assumption and guarantees the precision of predicting the average spike firing rate layer by layer during the conversion process.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response and extra experiments. Most of the concerns have been addressed. | Summary: This paper propose a parallel ANN-SNN conversion framework. The author firstly categorizes and summarizes various conversion paradigms in the field of ANN-SNN conversion learning, then proposes an efficient conversion method based on parallel spiking computing, which relate each time-step to the cumulative spike firing rate. Experimental results show that the proposed methods can achieve SOTA performance on several benchmark datasets.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The theoretical claims and proofs have been validated.
Experimental Designs Or Analyses: The experimental comparison with previous SOTA methods has been checked.
Supplementary Material: Yes, I have checked the supplementary material.
Relation To Broader Scientific Literature: The academic theme of this work is related to the ANN-SNN Conversion learning with high efficiency, the author respectively considers solutions for the non-uniform problem of the spike sequence and the simulated average firing rate.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
1. The theoretical analysis of the lossless property is solid.
2. The authors point out the non-uniform problem for both $[\mathbf{s}^{l,1},\dots,\mathbf{s}^{l,T}]$ and $\mathbf{r}^{l,T}, \mathbf{r}^{l,\tilde{T}}, T \neq \tilde{T}$, which is the foundation for establishing the universal parallel conversion framework.
3. Compared to previous works, the performance advantage of parallel conversion is remarkable.
Weaknesses
1. The authors mainly focus on the classification performance of SNNs. Can this method be generalized to other tasks?
2. The authors have verified the effectiveness of Parallel Conversion on CNN backbones. However, it lacks tests on Transformer-based SNNs. I suggest adding this analysis to make the contribution of this work more comprehensive.
3. The proposed method needs to train an ANN with a quantized activation function, which brings training costs.
Other Comments Or Suggestions: For more comments or suggestions, please refer to the “Other Strengths And Weaknesses” section.
Questions For Authors: As shown in Tab.1, the author further divides the concept of conversion learning into ANN-SNN Conversion (ReLU or QCFS), Conversion Rectification and Parallel Conversion. What are the specific differences between Conversion Rectification and Parallel Conversion, for example from the perspectives of performance and computational overhead?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## To Reviewer HMDR
We would like to thank for your acknowledgement about our approach in terms of theoretical analysis and performance advantages, we will provide further answers and clarifications for your questions and concerns.
> Can this method be generalized to other tasks?
**A1:** Thanks for this comment. To validate the generalization ability of our method on visual tasks, we further conduct experimental verification on semantic segmentation tasks. We attempt training-free parallel conversion based on DA-QCFS for the Pascal VOC dataset [1] and ResNet50-FCN/DeepLabv3 structures [2, 3], as shown in Tab. R1. Experimental results indicate that our scheme can also achieve approximately lossless parallel conversion in segmentation tasks.
**Table R1:** Experimental results of training-free conversion on Pascal VOC dataset.
| Arch. | Metric (%) | ANN | T=8 | T=16 | T=32 |
| :----------------: | :--------: | :---: | :---: | :---: | :---: |
| ResNet50-FCN | Pixel Acc. | 88.76 | 86.99 | 88.12 | 88.60 |
| ResNet50-FCN | mIOU | 51.93 | 46.71 | 49.70 | 51.25 |
| ResNet50-DeepLabv3 | Pixel Acc. | 91.46 | 89.87 | 91.21 | 91.38 |
| ResNet50-DeepLabv3 | mIOU | 63.03 | 58.91 | 62.48 | 62.72 |
> The analysis of Parallel Conversion for Transformer-based SNNs?
**A2:** Thanks for the comment. The most critical difference between Transformer and CNN is the matrix multiplication between multi-branch inputs (e.g. $\mathbf{Q}^l{\mathbf{K}^l}^{\top}, \text{Attn}^l\mathbf{V}^l$). If we directly insert QCFS functions after $\mathbf{Q}^l$ and $\mathbf{K}^l$ weight layers in the pre-training stage of ANN, and then replace QCFS modules with parallel spiking neurons $\text{SN}(\cdot)$ in the SNN inference stage, this may introduce $O(T^2)$ computational complexity within the self-attention modules. Therefore, we can consider pre-calculating the average input current for one of the branches (e.g. $\mathbf{I}^{l,\text{avg}}_Q$) and then calculating the attention score $\mathbf{I}^{l,\text{avg}}_Q \text{SN}({\mathbf{I}^l_K}^{\top})$ to maintain the computational complexity at $O(T)$. It is worth noting that at this point, parallel conversion will still maintain its unique lossless and sorting properties on Transformer-based SNNs.
> The proposed method needs to train an ANN with quantization activation function, which brings training costs.
**A3:** For parallel conversion based on QCFS ANN, we usually need to complete the pre-training process of the quantized ANN model. However, our method also validate its effectiveness in training-free conversion cases. We can directly obtain the corresponding converted SNN model through utilizing quantization, error calibration, and parallel neuron replacement on an open-source and training-free ANN checkpoint.
> The specific differences between Conversion Rectification and Parallel Conversion from the perspectives of performance and calculation overhead?
**A4:** Thanks for the question. The concept of Conversion Rectification usually refers to schemes for secondary optimization of the converted SNN model in the inference stage, aiming to reduce the representation gap between pre-trained ANNs and converted SNNs. Previous related works were generally based on the IF neuron and its variants, and most cannot theoretically guarantee the complete elimination of conversion errors. Compared to these methods, our parallel conversion offers faster inference and the property of lossless conversion. Additionally, due to its small number of time-steps, it always keeps the computational overhead within a reasonable range.
[1] The PASCAL Visual Object Classes (VOC) Challenge, IJCV, 2010.
[2] Fully convolutional networks for semantic segmentation. CVPR, 2015.
[3] Rethinking Atrous Convolution for Semantic Image Segmentation, 2017. | Summary: This work presents a novel route for SNN supervised learning by jointly adopting ANN-SNN Conversion and parallel calculation. The main contributions of the paper include the proof of optimal shifting distance, further promotion of parallel conversion framework based on QCFS, and experimental demonstration on various benchmarks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have checked the relevant proof
Experimental Designs Or Analyses: I have checked the experimental results about the Accuracy and Acceleration Ratio of this method.
Supplementary Material: The authors have included the code in the supplementary material.
Relation To Broader Scientific Literature: This work investigates a new conversion method that combines the performance advantages of ANN-SNN conversion with the low-latency advantage of STBP Training in the inference stage. It is a new learning route in SNN field.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
1. The narrative logic of this work is clear. Experiments have shown that parallel conversion has significant advantages in accuracy and acceleration ratio compared to traditional conversion methods.
Weaknesses:
1. The discussion on threshold recording and error calibration techniques is not sufficient. For example, as shown in Table 2, the authors need to further explain why some cases require the two techniques above while others do not.
2. In Section 4.3, the author needs to clarify more clearly which calculation step does the “sorting property” save in terms of computational cost?
Other Comments Or Suggestions: The authors' proofs of the theorems in the Appendix are somewhat concise; additional explanation could be introduced to enhance readability.
Questions For Authors: 1: In Section 5.3 (Figure 2) and Appendix A.2 (Figure S1), how are the corresponding throughput rates calculated for serial inference based on IF model and parallel inference based on parallel spiking model? Please clarify it.
2: For ResNet-50 and ResNet-101 (Figure 2.c-2.d), it seems that their corresponding Acceleration Ratios do not further increase like the other cases when the number of time-steps is large ($T\geq 32$). What is the specific reason for this? Does it mean the proposed method works only for low-latency SNNs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## To Reviewer 72NY
We sincerely appreciate your recognition for the novelty and experimental effectiveness of this work. We will strive to address your concerns in detail in the following section:
> The discussion on threshold recording and error calibration techniques is not sufficient.
**A1:** Thanks for the question. In this work, threshold recording is used to confirm the initial maximum threshold and effectively utilize the distribution of the input current. Since QCFS explicitly provides learnable thresholds during the ANN pre-training stage, threshold recording is only needed in training-free conversion cases. The motivation for error calibration lies in the fact that QCFS can only eliminate conversion errors ($T\neq\tilde{T}$) when the input current follows a uniform distribution, as shown in Theorem 4.1(ii). However, since this assumption may not be fully satisfied in practice, additional error calibration can further enhance the performance of converted SNNs under low time latency (where ClipReLU can also be considered a case of $\tilde{T}\gg T$).
> Which calculation step does the “sorting property” save in terms of computational cost?
**A2:** The sorting property effectively saves multiplication operations in $\Lambda_\text{PC}^l\mathbf{I}^l+\mathbf{b}^l$ and comparison operations between $\Lambda_\text{PC}^l\mathbf{I}^l+\mathbf{b}^l$ and $\theta^l$. Specifically, when we utilize the sorting property, only the calculation related to $\Lambda_\text{PC}^{l,\text{idx}}\mathbf{I}^l+\mathbf{b}^{l,\text{idx}}\geq\theta^{l,\text{idx}}$ is involved, where $\text{idx}$ is the index set of time-steps selected by the sorting property, with $\text{len(idx)}=O(\log T)$.
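As an illustration (our own sketch, not the authors' implementation), if the per-step terms are sorted in ascending order, the firing cutoff can be located with binary search, so only $O(\log T)$ comparisons against the threshold are needed:

```python
import bisect

def firing_pattern_sorted(step_terms, theta):
    """step_terms: ascending per-time-step values of Lambda_PC^l I^l + b^l;
    theta: firing threshold. Binary search finds the first step whose term
    reaches theta, using O(log T) comparisons instead of T."""
    idx = bisect.bisect_left(step_terms, theta)
    # steps before idx do not fire; steps from idx onward do
    return [0] * idx + [1] * (len(step_terms) - idx)
```

Once the cutoff index is known, the whole spike train is determined without checking each time-step individually.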
> How are the corresponding throughput rates calculated?
**A3:** We randomly select a subset of data (1,000 images) and measure the average inference time for the two schemes. We use standard profiling tools to measure time precisely, and the specific procedure has been included in the supplementary material. For serial inference, we feed data into the SNN backbone at each time-step, while parallel inference packs $T$ consecutive time-steps into one batch and feeds it into the network at once.
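A toy version of this timing protocol (our sketch; `fake_model`/`model_fn` stands in for the actual SNN forward pass) looks like:

```python
import time

def avg_inference_time(model_fn, images, T, parallel):
    """Serial inference calls the network once per time-step per image;
    parallel inference packs all T steps of an image into a single batch."""
    start = time.perf_counter()
    for x in images:
        if parallel:
            model_fn([x] * T)        # one call on a T-step batch
        else:
            for _ in range(T):       # T separate calls, one per time-step
                model_fn([x])
    return (time.perf_counter() - start) / len(images)
```

The acceleration ratio reported in the figures would then be the serial average time divided by the parallel average time.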
> For ResNet-50 and ResNet-101, it seems that their Acceleration Ratio do not further increase like other cases?
**A4:** Thanks for this insightful comment. Because $T$ time-steps are passed into the corresponding network backbone at once during parallel inference, for experimental cases with a large number of model parameters or time-steps the required hardware memory is large and utilization tends to saturate. Therefore, in an environment with limited hardware support, the corresponding acceleration ratio may not increase further. Even so, the advantage of parallel conversion remains very clear when $T\geq 32$ (e.g., the acceleration ratio exceeds $17.5\times$ in Fig 2.c-d). In addition, due to the theoretically lossless property of our method, in practical scenarios we usually only need a small number of time-steps for the converted SNN to reach the same level of performance as the pre-trained ANN.
> The authors' proof of the theorem in Appendix is somewhat concise.
**A5:** Thanks for the suggestion. We will further improve the relevant proof process in the final submission. | null | null | null | null | null | null |
Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream | Accept (spotlight poster) | Summary: This paper explores how scaling model size, dataset size affects the alignment of artificial neural networks with primate visual ventral stream behaviors and neural responses. The scaling law is investigated over diverse models on benchmarks including v1, v2, v4, IT and behavior data.
The authors offer interesting findings w.r.t. neural alignment and behavioral alignment.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The scaling laws of large models have been extensively studied and applied. This paper explores another scaling law of models from the perspective of brain alignment. The authors offer an interesting perspective on how mainstream models align with human core object recognition behavior, and the impact of this biological alignment.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
- The paper is well-written, well-organized and easy to follow.
- The paper provides extensive experiments across various model architectures and well-illustrated visualizations.
- The authors provide several interesting findings, including (1) scaling is particularly beneficial for higher-level visual areas, (2) neural alignment saturates in most conditions whereas behavioral alignment continuously improves when scaling.
Weaknesses
- The practical implications of the authors' conclusions remain unclear. For instance, while the authors highlight that models with strong architectural inductive biases, such as convolution-based ResNets, show better neural alignment trends than transformer-based ViTs, ViTs are still more widely adopted in the community due to their superior overall performance compared to convolution nets.
- The authors reveal the scaling laws between neural alignment, behavioral alignment, and data size, model size through a series of impressive experiments. However, what is the relationship between improving neural alignment and enhancing downstream performance? What insights can these scaling laws provide for improving model design?
Other Comments Or Suggestions: None
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review — we’re glad you found the paper well-written and organized, appreciated the breadth of our experiments and visualizations, and found the findings insightful. We respond to each of your comments below point-by-point.
> The practical implications of the authors' conclusions remain unclear. For instance, while the authors highlight that models with strong architectural inductive biases, such as convolution-based ResNets, show better neural alignment trends than transformer-based ViTs, ViTs are still more widely adopted in the community due to their superior overall performance compared to convolution nets.
Since our primary goal is modeling brain information processing (i.e., neuroscience), we consider task performance mainly as a proxy rather than the ultimate target. Indeed, our experiments show ViTs excel at behavioral alignment (Figs. 5a, 6b) but underperform relative to convolutional models in neural alignment benchmarks (Figs. 5a, 6a). This also holds for very large models trained on massive datasets (Fig S2, ConvNeXt vs ViT). Practically, this suggests a promising path forward could involve hybrid architectures that combine biological inductive biases (e.g., convolutional or hierarchical connectivity) with transformer-based scalability and flexibility such as a VOneBlock (Dapello et al. 2020) combined with ViTs (Dosovitskiy et al. 2020), thus balancing task performance with improved neural alignment. Specifically, one viable strategy could be distilling representations from high-performing transformer models into architectures with hierarchical inductive biases resembling biological circuitry.
> The authors reveal the scaling laws between neural alignment, behavioral alignment, and data size, model size through a series of impressive experiments. However, what is the relationship between improving neural alignment and enhancing downstream performance? What insights can these scaling laws provide for improving model design?
In this study, we primarily focus on building better models of the human brain, rather than developing better ML models for various tasks. In this context, our findings show a clear correlation between behavioral alignment and task performance (Figs. 1, 6b), consistent with the widely accepted notion in the ML community that increased compute leads to better-performing models. In contrast, neural alignment shows positive but diminishing returns with improved object recognition performance (Fig. 6a), indicating that improving neural alignment may not always directly translate into better downstream task performance.
Practically, our scaling laws offer concrete guidance for future model design targeting neural alignment. First, our results strongly suggest prioritizing larger, richer, and more diverse datasets, as they consistently yield greater neural alignment improvements compared to merely scaling model complexity (Fig. 3a). Second, our findings emphasize the crucial role of biologically inspired inductive biases, particularly in scenarios constrained by limited compute or data, as these priors substantially enhance alignment efficiency (Figs. 2, 3b). Lastly, the graded scaling effect across the cortical hierarchy (Fig. 5) indicates that early visual regions (V1/V2) are the most challenging to align through scale alone, suggesting that architectures incorporating stronger biologically-informed priors (e.g., VOneNet-style architectures) may be necessary to achieve further improvements.
Thanks again for your review. We ask the reviewer to consider raising their score in light of this paper’s placement in computational neuroscience and a focus on modeling the brain.
---
Rebuttal Comment 1.1:
Comment: The authors' response has addressed all my concerns. Thus I decide to raise my rating. | Summary: This paper seeks to measure scaling laws for task-optimized models of the primate visual ventral stream. Several models from multiple families were trained with different amounts of compute and training data. They were then compared on their alignment to different areas of the visual cortex (V1, V2, V4 and IT), as well as on behavior data from non-human primates. Scaling laws were then fit to relate how neural and behavioral alignment scale with flops and data. The top result is that neural alignment asymptotes with scaling. The paper then breaks down these results in terms of areas, architectures, dataset dependence, etc.
Claims And Evidence: This is a very thorough investigation of scaling laws in visual cortex, fitting hundreds of models on a broad range of downstream tasks. The authors' top-line result, that alignment saturates, is surprising (though in line with prior literature); it's very well-supported; and it is a significant conceptual advance for the field.
Methods And Evaluation Criteria: The authors fit a broad range of models: ResNet, AlexNet, EfficientNet, ViTs, ConvNeXt, CORnet. They selected two different core datasets: ImageNet and EcoSet; and alternatives including iNaturalist, infiMNIST, Places365, etc. They also try out alternative training scenarios beyond supervised learning, including SimCLR, DINO and adversarial fine-tuning. This is a very broad range of models that supports their main conclusions.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The methods are all standard and well-accepted; where the paper innovates is in the thoroughness of its evaluation. I have no qualms.
Supplementary Material: I briefly perused the supplementary. There's interesting information here, including the correlation of the private and public benchmarks, their cross-checking against existing pre-trained models, and the evolution of alignment over training. There's a sufficient amount of material here that this short paper could be a journal submission.
Relation To Broader Scientific Literature: Much has been written about the fact that Brain-Scores and similar measures are saturated, and that multiple models converge on the same (wrong) representations; examples include Conwell et al. (2024) and Linsley et al. (2023). However, this submission stands out for its completeness, the depth of its analysis, and its focus on primate vision.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and thoughtful comments. We are particularly encouraged that the reviewer highlighted the key strengths that we were indeed most excited about in our paper:
- Our systematic, extensive evaluation of scaling laws across hundreds of models and diverse benchmarks, providing a new insight into the role of architectural bias and choice of training, and a recipe for optimal compute allocation.
- Thorough comparisons across various architectures (ResNet, EfficientNet, ViT, ConvNeXt, CORnet), multiple datasets (ImageNet, EcoSet, iNaturalist, infiMNIST, Places365), and alternative training methods (SimCLR, DINO, adversarial training).
- Clear evidence showing neural alignment saturates with increased model and dataset scaling, providing a significant conceptual step forward to encourage the exploration of new modeling ideas.
- The depth and rigor of our analyses, including validation with held-out benchmarks and alignment dynamics during training, showing for instance that under controlled model training, task performance and brain alignment remain weakly correlated.
- Comprehensive results specifically targeted at the primate visual cortex, offering deeper insights beyond prior work, such as the ordered effect of scale on alignment and the role of inductive biases.
We greatly appreciate the reviewer's supportive assessment and enthusiasm for our study. | Summary: In the paper "Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream", the authors investigate scaling laws for alignment of machine learning models to the primate visual processing. They assess neural alignment as well as behavioral alignment, and find that there is a saturation point in neural alignment, but not in behavioral alignment. They further evaluate the properties of models with strong inductive bias versus weak inductive bias, and find that models with stronger inductive bias are more sample- and compute-efficient. Further analyses include the individual assessment of areas along the visual hierarchy and the comparison of different training strategies.
Claims And Evidence: The authors claim that there is a saturation point in neural alignment, but not in behavioral alignment. They also claim that increasing both parameter count and training dataset size improves alignment, with data providing more gains over model scaling. Architectures with stronger inductive bias and datasets with higher-quality images are more sample- and compute-efficient. Finally, they claim that model alignment with higher-level brain regions and especially behavior benefits the most from scaling. They support these claims with a wide range of experiments and fits of scaling laws to the data. Given the range of models and analyses that they carried out, the evidence for their claims looks solid to me.
Methods And Evaluation Criteria: The authors use benchmarks from Brain-Score, which includes comparisons of model outputs to recordings from the primate visual ventral system, as well as a comparison of model predictions with behavioral data. They use a range of model architectures, including AlexNet, ResNets, CORnet-S, EfficientNets, Vision Transformers, and ConvNeXt, and fit various functional forms of the scaling laws to account for inductive biases in the model architectures and their data requirements. Models were assessed on a range of image datasets, including ImageNet and EcoSet, as well as a range of other image datasets. They also assessed alternative training strategies, including SimCLR, adversarial training, supervised learning, and self-supervised learning. Bootstrapping was used to quantify uncertainty. The selection of models includes state-of-the-art models and a wide range of architectures, which is a strength of the paper. The authors also provide a detailed analysis of the training strategies, which is important for understanding the results. The evaluation criteria are well-defined and the methods are appropriate for the research questions.
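The bootstrap procedure mentioned above can be illustrated with a minimal, self-contained sketch (not the paper's code: the per-item scores, sample sizes, and function names below are invented for the example). It computes a percentile bootstrap confidence interval for a mean benchmark score:

```python
# Hedged illustration (not the paper's code): a percentile bootstrap for the
# uncertainty of a benchmark score, here the mean of synthetic per-item
# alignment values. All data and sizes are made up for the example.
import random
import statistics

random.seed(0)
scores = [random.gauss(0.55, 0.10) for _ in range(500)]  # fake per-item scores

def bootstrap_ci(data, stat=statistics.fmean, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for `stat` of `data`."""
    reps = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(scores)
print(f"mean={statistics.fmean(scores):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

Resampling items with replacement and re-computing the statistic gives an uncertainty estimate without distributional assumptions, which is why it is a common choice for benchmark scores.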
Theoretical Claims: The paper does not make theoretical claims.
Experimental Designs Or Analyses: There are experiments assessing model size and dataset training size, as well as the effect of inductive bias on alignment. The authors also assess the alignment of models with different brain areas and behavior. Then there are experiments using alternative training strategies. The breadth of the experiments is a strength of the paper, and the authors provide a detailed description of the methods used.
Supplementary Material: The supplementary material includes brief implementation details, descriptions of the image datasets, and additional results. There is a validation on private data, a pretraining analysis, an assessment of training evolution, and additional results on the effects of the training strategy. These are all relevant to the main text and provide additional insights.
Relation To Broader Scientific Literature: The functional form of the misalignment follows Hoffmann et al., 2022, a study on the scaling of large language models. There are changes to the function for the optimal allocation of compute to account for empirical observations. The benchmarking itself follows Schrimpf et al., 2018. Linsley et al., 2023 showed that large DNNs rely on different visual features than those encoded by area IT. In light of these previous works, it might be expected that scaling up models and data would be insufficient to achieve better alignment for neural data. Therefore, the main findings of the paper are not all that surprising, and I find the novelty of the paper to be limited. The findings are empirical in nature and do not provide a deeper understanding of the reasons behind the observed scaling laws, other than the inductive biases of the models.
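For readers unfamiliar with the Hoffmann et al. (2022) functional form, a minimal sketch of fitting such a saturating scaling law is below. Everything here is an illustrative assumption, not taken from the paper: the constants, the synthetic "misalignment" data, and the variable names. For fixed exponents $(\alpha, \beta)$ the model $L(N, D) = E + A N^{-\alpha} + B D^{-\beta}$ is linear in $(E, A, B)$, so one can grid-search the exponents and solve the linear part by ordinary least squares:

```python
# Illustrative sketch (not the paper's code): fitting a Hoffmann et al. (2022)
# style scaling law  L(N, D) = E + A / N**alpha + B / D**beta  to synthetic
# "misalignment" values via a profiled least-squares fit.
import numpy as np

rng = np.random.default_rng(0)
N = 10 ** rng.uniform(6, 9, 300)   # model parameter counts (made up)
D = 10 ** rng.uniform(5, 8, 300)   # training-set sizes (made up)
E, A, alpha, B, beta = 0.30, 50.0, 0.35, 20.0, 0.30  # ground-truth constants
y = E + A / N**alpha + B / D**beta + rng.normal(0.0, 1e-3, N.size)

best = None
for a in np.arange(0.20, 0.51, 0.01):
    for b in np.arange(0.20, 0.51, 0.01):
        # With (a, b) fixed, the model is linear in (E, A, B).
        X = np.column_stack([np.ones_like(N), N**-a, D**-b])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a, b, coef)

_, alpha_hat, beta_hat, (E_hat, A_hat, B_hat) = best
print(f"alpha~{alpha_hat:.2f}, beta~{beta_hat:.2f}, E~{E_hat:.3f}")
```

The irreducible term $E$ is what produces the saturation the paper reports: as $N, D \to \infty$, alignment approaches $E$ rather than improving without bound.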
Essential References Not Discussed: To the best of my knowledge, the paper discusses all essential references.
Other Strengths And Weaknesses: The paper is well-written with good structure, sufficient details for making the work reproducible and a clear presentation of the results.
Other Comments Or Suggestions: Figure panel 3b is not described in the text. It would be helpful to include a brief description of the figure in the main text.
Questions For Authors: In line 313, right column, you speculate that factors other than task performance influence neural alignment. Do you have any ideas about what these factors might be?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your positive review and are glad you found the paper well-written, the results clear, and the evidence supporting our claims solid. Below, we address your questions point by point.
> In light of these previous works, it might be expected that scaling up models and data would be insufficient to achieve better alignment for neural data. Therefore, the main findings of the paper are not all that surprising and I find the novelty of the paper to be limited. The findings are empirical in nature and do not provide a deeper understanding for the reasons behind the observed scaling laws, other than the inductive biases of the models.
Our work substantially advances previous literature by providing a more systematic, controlled, and comprehensive analysis that isolates how individual factors—model parameters, dataset size, and compute resources—each influence alignment with multiple brain regions and behavior. Earlier studies on the other hand used heterogeneously pre-trained models and compared to limited brain datasets.
- **Generalization of brain regions:** Prior works focused narrowly (e.g., only IT in Linsley et al., 2023) or used noisy fMRI signals (Conwell et al., 2022). We evaluate alignment across the full primate ventral stream (V1–IT) using high-resolution intracortical recordings, along with object recognition behavior—both novel in this context.
- **Model comparability:** Previous studies used pretrained models with inconsistent training recipes, making comparisons hard. Our models are all trained from scratch under uniform conditions, enabling a fair, apples-to-apples comparison across architectures and datasets.
- **Our comprehensive study led to updated findings:** We show that task performance does correlate with neural alignment (Fig. 6a), but with diminishing returns—contrary to prior conclusions. Moreover, our work quantifies the individual effects of data, parameters, and compute (Figs. 2–4), and provides a parametric framework for understanding scaling (Figs. 4–7), going well beyond simple correlations in earlier work.
- Finally, we contribute new insights such as a graded scaling effect across the visual hierarchy and a clear behavioral/neural alignment dissociation (Fig. 5), offering deeper understanding of how brain-like representations emerge. We’ll emphasize these distinctions more clearly in the revised manuscript and welcome suggestions for where to further highlight them.
> Figure panel 3b is not described in the text.
We thank the reviewer for pointing out this oversight. Figure 3b illustrates that model families with weaker inductive biases (e.g., ViT and ConvNeXt) begin training with lower neural alignment scores and consequently require larger training datasets to achieve alignment comparable to architectures with stronger inductive biases (e.g., ResNet, EfficientNet). Additionally, we find that recurrence, as implemented in CORnet architectures, provides a significant initial advantage in alignment relative to purely convolutional models, particularly in low-data regimes. However, this advantage diminishes with increased training data. Overall, these findings emphasize that strong inductive biases—such as convolution and recurrence—facilitate better alignment when data is limited, whereas extensive task-driven optimization eventually mitigates differences across architectures. We will integrate this explanation into the main text of the revised manuscript.
> In line 313, right column, you speculate that factors other than task performance influence neural alignment. Do you have any ideas about what these factors might be?
Our findings suggest that architectural inductive biases significantly influence neural alignment independently from task performance. Additionally, as illustrated in Figure 5b, higher cortical regions and behavioral alignment benefit more from task optimization compared to early visual areas. Moreover, supplementary analysis (Figure S7) shows that correlations between alignment and task performance also follow the cortical hierarchy. These observations imply that incorporating stronger biologically-inspired priors (e.g., convolutional constraints, local receptive fields, recurrence) specifically for modeling early visual regions, combined with more flexible, data-driven layers for higher cortical areas—akin to the design philosophy of architectures like VOneNet—may yield improved neural alignment. Furthermore, explicitly incorporating neural data into training procedures (via co-training or fine-tuning, see our response to the reviewer **weze**) represents a promising additional strategy to surpass the current limitations of purely task-optimized models.
Thanks again for your review. In light of the systematic advances over prior work—such as our controlled experimental setup, broader neural benchmarks, and novel findings on graded scaling and behavioral dissociation—we kindly ask you to consider raising your score.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed and structured rebuttal. The authors put forward good points regarding the novelty of their work and I encourage them to highlight these better in the final version of the manuscript if it gets accepted. I will raise my score to 4. | Summary: The paper introduces scaling laws for task-optimized models in fitting neural recordings. These laws stem from the observation that neural networks trained on classification tasks have emerged as the most effective models for decoding neural activity in the brain. The study then evaluates a measure of alignment across neural and behavioral benchmarks using publicly available datasets from BrainScore. Finally, the authors draw conclusions about how dataset size and architecture type influence behavioral responses, shedding light on their roles in neural decoding performance.
Claims And Evidence: While the claim that 'architectures with stronger inductive biases are more sample- and compute-efficient' is compelling, the evidence provided is limited to convolutional neural networks (CNNs). Other architectures with well-established inductive biases—such as CORnet or recurrent neural networks (RNNs)—are notably absent from the analysis. Including these architectures would strengthen the validity of the claim, as their exclusion leaves critical gaps in testing the generalizability of the hypothesis.
Methods And Evaluation Criteria: yes.
Theoretical Claims: I checked the derivation for scaling laws, it seems correct.
Experimental Designs Or Analyses: I checked the experimental design for:
* Total flops vs Alignment score.
* Parameter size vs Alignment score.
* Number of samples vs Alignment score.
* Learning dynamics and architecture nature.
* Adversarial finetuning and benefits to neural fit.
Supplementary Material: I reviewed all the supplementary material.
I checked the details on datasets, the validation on private data, the models that have been pretrained on publicly available repos such as torchvision, the training evolution, the performance of models that have not been trained, the effect of the training objective, and the SimCLR training impact on regions.
Relation To Broader Scientific Literature: The paper offers a wide range of experiments that validate the scaling laws; however, the results described seem to resemble findings that have been reported before. For instance, the effect of model size on neural alignment has been previously reported (Linsley et al., 2023; Conwell et al., 2022).
Essential References Not Discussed: The cites cover most of the literature on the field.
Other Strengths And Weaknesses: Strengths:
The paper offers a good view of the progress in the field in terms of neural and behavioral benchmarks.
It offers a good overview, covering a very long list of models.
Weakness:
The main limitation I find with the paper is that it gives little indication of how to move forward, or of how the scaling laws can inform architecture or data-diet development.
Other Comments Or Suggestions: No.
Questions For Authors: * How can the scaling laws inform the future selection of architectures and data to improve our understanding of the visual cortex?
* How can biological inductive bias be quantitatively described, so that the claim "Architectural Bias influences alignment in behavior" is easier to understand?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review - we’re glad you found the paper informative and comprehensive in its overview of models and progress on neural and behavioral benchmarks. We respond to each of your comments below.
> Claims And Evidence - limited model set:
The reviewer might have missed this in the paper, but a core focus of our work is to evaluate a variety of architectures (Figs 2, 3, S2) from standard CNNs (ResNet, AlexNet, EfficientNet) to recent ones (ConvNeXt), Vision Transformers (ViT, DaVit, FastViT, LeViT, MaxxViT, MobileViT) and CORnet. The recurrent architecture CORnet is notably present in all our analyses (lines 133-145 left column; Figs 3, 4, 6) but we do not observe a difference in its alignment at larger scales. Interestingly in Fig 3b, CORnet initially exhibits improved alignment in low-data conditions, indicating that recurrence might provide a meaningful inductive bias for efficiency under limited training samples. However, as the amount of training data increases, the advantage offered by recurrence diminishes, suggesting that deeper feedforward models can approximate recurrent processing by effectively "unrolling" recurrent computations through additional layers. Thus, recurrence remains beneficial primarily in data-constrained scenarios, providing sample efficiency advantages, but larger datasets reduce its relative advantage. Inspired by the reviewer’s question, we will highlight this point more prominently in the updated paper.
> The paper offers a wide range of experiments that validate the scaling laws, however the results described seem to resemble facts that were reported before.
Please see our first response to the reviewer **6Eek**.
> Weaknesses
Please see the answer below for your first question.
> How can the scaling laws inform future selection of architectures data to improve our understanding of the visual cortex?
Our scaling laws explicitly guide future architecture and dataset development across varying computational budgets. Specifically, our results suggest that architectures with biologically-inspired inductive biases—like convolution or recurrence—achieve superior alignment more efficiently, especially under limited compute (Figs 2, 3b, 4b). Furthermore, our findings strongly advocate for developing richer and more ecologically valid datasets, which consistently yield greater alignment improvements compared to increasing model complexity alone (Fig 3a, Sec 3.3). Additionally, the graded effect of scaling observed across the cortical hierarchy (Fig. 5) suggests that early visual areas (V1/V2) benefit relatively little from scaling alone, highlighting the need for stronger biologically-informed inductive biases early in the processing pathway, e.g. VOneNet. Finally, we hypothesize that to push neural encoding models beyond the current alignment plateau, explicitly incorporating neural recordings into model training may be necessary. Our preliminary results on fine-tuning models on IT neural data (Papale et al., 2024) are promising: https://imgur.com/a/okfmRu8 (anonymous link for figures). Fine-tuning on a single region from another dataset improves neural alignment, with a stronger effect for a weakly-biased model (ViT-Small) than for a CNN (ResNet50).
We will clearly discuss these directions and practical implications in our revised manuscript.
> How biological inductive bias can be quantitatively described so it can be easy to understand the claim "Architectural Bias influences alignment in behavior?
We note that the exact subtitle ("Architectural Inductive Bias Influences Alignment and Scaling Behavior") was not specifically about behavioral alignment but rather about scaling properties in general, i.e. the “behavior of scaling”. We see how the phrasing can be misleading and will revise it for clarity.
Our results show that biological inductive biases mainly affect the scaling properties of neural alignment, especially in low-data regimes. Specifically, Figure 5b illustrates clearly that behavioral alignment benefits significantly more from task optimization compared to neural alignment. Further, Figure 5a demonstrates that for behavioral alignment, models with strong versus weak inductive biases closely follow the same scaling trajectory, while significant differences remain for neural benchmarks. Indeed, after extensive training, models with weaker inductive biases (e.g., ViTs) even achieve slightly higher absolute behavioral alignment scores than models with stronger inductive biases.
Quantitatively, these biases shift neural alignment scaling more than behavior. Exploring other priors—like local connectivity or predictive coding—is an exciting direction.
Thank you again for your review. We kindly ask the reviewer to consider raising their score and take into account the results that might have been missed in the first pass, such as the diversity of architectures, the novelty of our findings, and the impact on future model building. | null | null | null | null | null | null |
A Unified View on Learning Unnormalized Distributions via Noise-Contrastive Estimation | Accept (poster) | Summary: In this work the authors present a unification of noise contrastive estimation (NCE) losses. In particular they consider a class of risks based on optimizing the density ratio of the model density compared to a known noise density (canonically a uniform distribution on a set known to contain the support of the true distribution) with a convex function in a Bregman-Divergence-style loss. From here they consider "$\alpha$-centered NCE", where the risk is normalized in some sense, and "$f$-conditional NCE" where the noise density is based on the training samples.
For these settings they prove that the population version of these losses will indeed work in a realizable modeling scenario (Prop 1.1, Prop 2.1, Prop 2.2). In addition they show that flexible forms of these losses, using the correct loss parameters, include several well-known NCE losses, demonstrating that the framework presented covers useful cases (Theorems 3.1, 3.2), and they follow up with asymptotic and finite-sample analysis (middle of Section 3 and onwards).
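For readers less familiar with this construction, the Bregman-divergence-style risk described above can be sketched as follows. This is the standard density-ratio formulation (as in Gutmann & Hirayama, 2011); the paper's exact parameterization may differ. Writing $r_\theta(x) = p_\theta(x)/q(x)$ for the model-to-noise density ratio, $r^*$ for the true ratio, and $f$ for a strictly convex function, minimizing the Bregman divergence $B_f(r^*, r_\theta)$ under the noise measure $q$ and dropping $\theta$-independent terms gives the risk

$$\mathcal{R}(\theta) = \mathbb{E}_{q}\left[f'(r_\theta(X))\, r_\theta(X) - f(r_\theta(X))\right] - \mathbb{E}_{p}\left[f'(r_\theta(X))\right],$$

where the second term uses the identity $\mathbb{E}_q[r^*(X)\, g(X)] = \mathbb{E}_p[g(X)]$. By strict convexity of $f$, the population risk is uniquely minimized at $r_\theta = r^*$ in the well-specified case, which is the mechanism behind consistency results of the kind stated in Prop 1.1.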
## Update after rebuttal:
Nothing to update, I keep my score.
Claims And Evidence: This is a purely theory paper so this isn't so applicable.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I have not checked the proofs carefully, but scanned them, and nothing immediately stands out as being problematic.
Experimental Designs Or Analyses: N/A
Supplementary Material: Other than a quick scan of the proofs, I have not.
Relation To Broader Scientific Literature: This does indeed seem to be a nice framework covering many NCE estimators with have previously been explored and includes some nice general results on the behavior of these. As stated in the intro, this covers MLE, MC0MLE, Global GISO, along with others.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Overall the paper seems very strong to me. It is a natural sort of paper which generalizes many previous results and expands upon them.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort in reviewing our paper. We will incorporate all the comments from the reviews and revise our manuscript accordingly. | Summary: This paper provides a unified perspective on various estimators for learning unnormalized distributions (also known as energy-based models) using Noise-Contrastive Estimation (NCE). Specifically, they introduces $\alpha$-Centered NCE ($\alpha$-CentNCE) and f-Conditional NCE (f-CondNCE) as generalized versions of traditional NCE.
Building on this, their contributions include
1. they show that estimators such as Maximum Likelihood Estimation (MLE), Monte Carlo MLE (MC-MLE), and GlobalGISO can be viewed as special cases of $\alpha$-CentNCE.
2. f-Conditional NCE (f-CondNCE) generalizes previous conditional noise-contrastive estimation methods and clarifies their relationship with score matching.
3. they obtain finite-sample convergence rates for different estimators under regularity assumptions, which are novel for most NCE-based estimators.
4. they provide conditions under which these estimators achieve the parametric rate of convergence.
The paper is mainly a theoretical paper.
Claims And Evidence: yes
Methods And Evaluation Criteria: This paper is not a methodology-focused paper but rather a theoretical one. The proposed framework of $\alpha$-Centered NCE ($\alpha$-CentNCE) and f-Conditional NCE (f-CondNCE) is sound, with supporting theoretical results.
Theoretical Claims: I haven't been able to check every detail of the proof but the proofs of their main theorems 4.1-4.3 are correct to me.
Experimental Designs Or Analyses: The paper has no experiments. It is a pure theoretical paper.
Supplementary Material: Yes, the proofs of their main theorems 4.1-4.3.
Relation To Broader Scientific Literature: The problem of estimating unnormalized probability distributions that this paper focuses on is fundamental in energy-based models (EBMs) and appears in various fields, including generative modeling, density estimation, and graphical models. There exist many standard estimators, including Maximum Likelihood Estimation (MLE), Monte Carlo MLE (MC-MLE), and GlobalGISO. For these, the paper reframes seemingly different estimators under a unified NCE-based framework, showing that various inference methods can be viewed as special cases of α-Centered NCE (α-CentNCE). Moreover, the paper connects to score matching (SM), introduced by Hyvärinen (2005), which provides an alternative estimation principle that avoids computing the partition function. Previous research (Ceylan & Gutmann, 2018) suggested that f-CondNCE approximates Score Matching in the limit as the noise approaches zero, whereas this paper challenges this claim, showing that f-CondNCE does NOT converge to Score Matching but instead exhibits diverging variance in the small-noise regime.
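For context, the Score Matching objective referenced here is Hyvärinen's (2005) standard form; it sidesteps the partition function because the score $\nabla_x \log p_\theta(x)$ does not depend on the normalizing constant:

$$J_{\mathrm{SM}}(\theta) = \mathbb{E}_{p_{\mathrm{data}}}\left[\tfrac{1}{2}\left\|\nabla_x \log p_\theta(X)\right\|^2 + \mathrm{tr}\!\left(\nabla_x^2 \log p_\theta(X)\right)\right].$$

The small-noise question is whether conditional-noise contrastive objectives recover $J_{\mathrm{SM}}$ as the perturbation vanishes; the paper's contribution here is to show that for f-CondNCE the variance of the empirical objective diverges in that limit.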
Essential References Not Discussed: I am not an expert in this area but to my best knowledge, the authors tried their best to relate works that are essential to understanding their key contributions.
Other Strengths And Weaknesses: Strengths:
1. The paper theoretically unifies learning methods for unnormalized models, including different estimation techniques (MLE, MC-MLE, Score Matching, etc.), whereas these estimators had previously been developed independently, often with limited understanding of their connections until this paper.
2. It also introduces f-Conditional NCE (f-CondNCE), which extends conditional NCE and clarifies its theoretical behavior. Importantly, f-CondNCE corrects a misleading interpretation in prior work (Ceylan & Gutmann, 2018), showing that it does NOT converge to Score Matching in small-noise regimes.
3. The paper establishes finite-sample convergence rates for a broad class of NCE-based estimators, including $\alpha$-CentNCE and f-CondNCE. Moreover, they showed that most of these estimators achieve a parametric convergence rate of O(n^{-1/2}), which is the best possible rate in standard statistical settings.
Weaknesses:
1. The paper suggests that $\alpha$-CentNCE offers advantages over traditional Noise-Contrastive Estimation (NCE) in certain cases, but it does not fully formalize when and why $\alpha$-CentNCE is strictly better (though I believe this is more a future direction than something to be completely solved in this one paper). For example, the paper proposes that centering may reduce variance but does not formally characterize how the variance changes as a function of $\alpha$. Studying this would provide guidance on how to pick $\alpha$ in practice; I would assume it involves some bias-variance tradeoff analysis.
2. The paper proves that f-CondNCE suffers from variance explosion in the small-noise limit, debunking prior claims, but it leaves an open theoretical problem: can a modified version of f-CondNCE control variance growth while still approximating Score Matching?
3. The paper mentions that noise selection is crucial for the performance of NCE-based methods. However, the paper does not derive any results on optimal noise selection or analyze how different choices of $q_n(x)$ affect statistical efficiency.
4. I am not sure if these are standard assumptions in this area, but the paper assumes that the sufficient statistic $\psi(x)$ is bounded, whereas in commonly used models such as Gaussian models, where $\psi(x)=x$, and log-linear models, where $\psi(x)=\log(x)$, it is not. Also, the paper assumes that the noise distribution $q_n(x)$ is fixed and independent of the learned model, while I believe that in practice this noise distribution should be tuned and selected depending on the data.
Other Comments Or Suggestions: NA
Questions For Authors: Please see the four questions listed in "Other Strengths And Weaknesses".
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful assessment of our manuscript and their constructive feedback. Below, we respond to the points raised under *Weaknesses* to clarify our contributions. In summary, we believe that all the issues raised are important and warrant separate, dedicated future investigations.
* As pointed out by the reviewer, studying the $\alpha$-CentNCE and its advantage over the NCE counterpart $f_\alpha$-NCE is a valuable direction for future work. We leave this for subsequent study.
* As noted in Remark 3.3 and Remark 4.3, $f$-CondNCE suffers from variance explosion. However, our finite-sample analysis assumes that a single conditional sample $y_i$ is drawn for each data point $x_i$ (see line 201). As alluded to in Remark 3.4, this variance may be controlled by using a large number of conditional samples to reduce statistical noise. This reflects a fundamental trade-off between computational efficiency and statistical accuracy. A more thorough analysis of this trade-off is left for future work.
* Choosing an optimal noise distribution for NCE is an important but open problem in the literature; see, for example, the recent study by Chehab et al. (UAI 2022). We will add a paragraph discussing this point to clarify the current understanding in the literature.
* As the reviewer noted, we assume bounded exponential family distributions throughout. This assumption is fairly standard in the theoretical analysis of such estimators to enable tractable analysis; see, for example, (Shah et al., 2023). One possible justification is that many real-world distributions have bounded statistics and can thus be modeled by bounded exponential family distributions. That said, extending the analysis to unbounded distributions is of theoretical interest and also important for handling heavy-tailed data. In Section 5, we remark on the technical challenges involved in such an extension under `Beyond Bounded Exponential Families`. We leave this theoretical direction for future work.
* Lastly, as the reviewer pointed out, for NCE-type arguments to work well in practice, using a data-adaptive noise distribution is crucial, as a mismatch between the noise and data distributions can significantly degrade performance. A theoretical investigation into data-adaptive choices of noise distributions is another compelling direction for future research. | Summary: This paper presents a unified framework for learning unnormalized distributions through noise-contrastive estimation (NCE), introducing two variants: alpha-CentNCE and f-CondNCE. It demonstrates that alpha-CentNCE generalizes existing methods like MLE, MC-MLE, and GlobalGISO, while f-CondNCE reveals limitations in prior connections to score matching, showing diverging variance in small-noise regimes. The analysis provides novel finite-sample convergence guarantees for exponential families, establishing theoretical foundations for several NCE-based estimators previously studied in isolated contexts.
Claims And Evidence: The claims are supported by explicit connections to established methods like MLE and GlobalGISO via Theorem 3.1. The analysis leverages rigorous optimization principles (Bregman divergences) and builds convergence guarantees for exponential families using prior frameworks (Shah et al., 2023), offering a structured unification of disparate estimators through NCE variants.
Methods And Evaluation Criteria: The proposed methods are well-aligned with the problem of learning unnormalized distributions. By showing that alpha-CentNCE recovers MLE (Fisher, 1922), MC-MLE (Geyer, 1994), and GlobalGISO (Shah et al., 2023) as special cases (Theorem 3.1), the authors demonstrate consistency with established estimators for exponential families. Similarly, connecting f-CondNCE to pseudo-likelihood (Besag, 1975) and ISO (Vuffray et al., 2016) aligns with prior theoretical frameworks for Markov random fields. The use of Bregman divergences further roots the approach in foundational optimization principles, ensuring compatibility with existing density ratio estimation literature (Sugiyama et al., 2012). These connections validate the methods’ coherence with both classical and modern paradigms.
Theoretical Claims: I’ve taken a brief look at the proof, and it seems fine for now.
Experimental Designs Or Analyses: This is a theoretical paper, so no experiments need to be reviewed.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper proposes a unified NCE-based framework that connects disparate estimators (e.g., MLE, pseudo likelihood, GlobalGISO) as special cases of alpha-CentNCE or localized NCE, resolving their fragmented theoretical treatment. This view aligns with and generalizes prior results, such as Shah et al. (2023)’s GlobalGISO analysis and Besag (1975)’s pseudo likelihood, while clarifying their implicit ties to contrastive learning. Building on this framework, the authors derive new results, including finite-sample convergence rates for NCE estimators and a counterexample showing f-CondNCE’s variance diverges in small-noise regimes—contradicting earlier claims of equivalence to score matching. The synthesis enables systematic extensions of classical methods under a single theoretical lens.
Essential References Not Discussed: No
Other Strengths And Weaknesses: This paper is not well-written and is somewhat difficult to follow.
Other Comments Or Suggestions: typo:
- On page 1, in the right column, line 4: there is a double "is."
Questions For Authors: Your unified framework elegantly connects existing estimators (e.g., MLE, pseudo-likelihood, ISO) via alpha-CentNCE, but how generalizable is this unification to non-exponential family models? For instance, do the finite-sample convergence guarantees for alpha-CentNCE extend to broader classes of unnormalized distributions, such as energy-based models with deep parameterizations, or are they inherently limited to bounded exponential families?
Additionally, while your analysis reveals divergence in f-CondNCE’s variance under vanishing noise, what practical guidance does this imply for choosing noise distributions in real-world applications?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort in reviewing our manuscript and for the valuable feedback. Below, we provide responses to the questions.
* We highlight that the discussion in Section 3 does not assume exponential family distributions, except in the comparison with GlobalGISO, which is specifically designed for exponential family distributions. (We remark that GlobalGISO requires the underlying model to be an exponential family distribution in order to assume that a certain statistic is analytically computable.) However, as pointed out by the reviewer, our current asymptotic and finite-sample analysis is only applicable to bounded exponential family distributions. In Section 5, we discuss the technical challenges of extending the results to unbounded distributions under `Beyond Bounded Exponential Families`. Analyzing this type of estimator for energy-based models beyond exponential family distributions is an important but open problem.
* While our Theorem 3.4 reveals the asymptotic behavior of the empirical conditional NCE objective, we currently do not provide practical guidance on the choice of the noise distribution. As we mention in Remark 3.4, the statistical noise in the empirical conditional NCE objective can potentially be controlled by using a large number of *slicing vectors* (which essentially corresponds to samples from the conditional distribution). We believe this direction merits a separate investigation and leave it for future work. | Summary: The paper provides a unified perspective on noise-contrastive estimation (NCE) methods for learning unnormalized distributions, integrating several previously separate approaches under a common framework. It introduces two new variants: $\alpha$-centered NCE ($\alpha$-CentNCE) and $f$-conditional NCE ($f$-CondNCE), which generalize prior estimators and clarify their relationships. The analysis also revisits the theoretical properties of $f$-CondNCE, challenging prior claims about its connection to score matching.
Claims And Evidence: Theorems and propositions accompany the claims. I read through the remarks and comparisons but did not check the detailed proofs.
Methods And Evaluation Criteria: Formulating the problem as an instance of Bregman divergence minimization makes sense, as it appears to align with the existing approaches in the literature.
Theoretical Claims: The main paper does not appear to contain proof sketches or outlines. Detailed proofs in the appendix should essentially follow the work of Shah et al. (2021b, 2023) and are left unchecked.
Experimental Designs Or Analyses: The paper does not contain experimental/numerical evaluations.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: The paper builds on a rich body of noise-contrastive estimation (NCE) work. It generalizes and unifies several estimators previously developed in the research communities. For example, it connects MLE (Fisher, 1922), MC-MLE (Geyer, 1994), and GlobalGISO (Shah et al., 2023) as "special cases" of its proposed $\alpha$-centered NCE framework. Finally, the paper provides a finite-sample convergence guarantee for these estimators.
Essential References Not Discussed: I'm not an expert on this specific line of research, but the current literature discussion looks good to me.
Other Strengths And Weaknesses: Here are the strengths:
- The paper presents a unified theoretical framework for NCE-based estimators, connecting prior methods such as MLE, MC-MLE, GlobalGISO, and pseudo-likelihood estimation within a single perspective (and objective).
- The work corrects a misconception about CondNCE and score matching, showing that its variance diverges in the small-noise regime.
- The paper establishes finite-sample convergence guarantees, while most prior works provided only asymptotic consistency results.
There are also some weaknesses:
- The paper is highly theoretical and does not have empirical validation. Since we have a concrete (convex) optimization objective here, I wonder if the authors could find an exponential family and conduct experiments for, say, different $\alpha$ values.
- The finite-sample convergence rates rely on bounded statistics assumptions, which may limit applicability to real-world unnormalized models. The authors also pointed this out in the final discussions.
- The claim that $\alpha$-CentNCE unifies multiple estimators lacks a discussion of practical trade-offs. For example, under what conditions would one prefer the estimator in the middle of the second row of Table 2 over others?
Other Comments Or Suggestions: - Formulas and tables often do not appear on the same page where they are referenced. Maybe this is a property of the paper template, but having them on the same page would be nice.
- Adding proof sketches to help readers understand the underlying connections among the theorems/claims would be beneficial.
- On page 1, right panel, line 13 reads, "The motivation ... is (is) to ..."
Questions For Authors: Please look and respond to the questions and suggestions in the earlier sections. Thanks!
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s effort in reviewing our manuscript. We will incorporate the constructive feedback in our revision.
### Comments on Weaknesses
We acknowledge that our work primarily focuses on the theoretical unification of different estimators. As suggested by the reviewer, however, experiments involving certain exponential family distributions would indeed provide valuable insights into the behavior of the proposed estimators under various choices of $f$ and/or $\alpha$, as well as for unbounded distributions that go beyond the boundedness assumption in our analysis. In our revision, we will include experiments in the Appendix on the convergence rates of the estimators for both bounded and unbounded distributions, as demonstrated in (Shah et al., 2023). Although a thorough study of the trade-offs associated with different choices of $\alpha$ or $f$ warrants a separate investigation, we will include these experiments to offer an initial indication of such behaviors.
### On Suggestions
We thank the reviewer for the thoughtful suggestions following a careful reading of our manuscript. We will incorporate these points in our revision, including proof sketches in the main text using an additional page. | null | null | null | null | null | null |
Generalization and Robustness of the Tilted Empirical Risk | Accept (poster) | Summary: Building on the notion of tilted empirical risk, this paper develops upper bounds of generalization error for tilted empirical risk (defined as tilted empirical risk minus the regular population risk) under negative tilt and moment-bounded loss. The first set of results (Theorems 3.5 and 3.11) are for the case where there is no distribution shift, where both uniform convergence bounds and information-theoretic bounds are derived. The second set of results (Theorems 4.3 and 4.5) extend the first set to the case with distribution shift. Under distribution shift, the TV distance between the training and testing distribution arises in the upper bound. The paper then proceeds to discuss the optimized choice of the tilt value for a given pair of training and testing distributions and experimentally investigate the gain of TERM under optimal tilt with respect to ERM (from Appendix G, the optimal tilt seems to have been obtained via grid search). Finally, the paper studies TERM with KL regularization, deriving Gibbs posterior as the optimal solution (resembling its classical counterpart) and an information-theoretic generalization bound thereof.
Similar results for bounded losses are provided in Appendix H together with PAC-Bayes, Rademacher complexity and stability bounds.
Claims And Evidence: I did not fully verify the proofs, but the proof techniques appear standard and the theoretical results look correct. The experimental results look convincing.
Methods And Evaluation Criteria: Methods and evaluation protocols and criteria look sound to this reviewer.
Theoretical Claims: I went through all proofs but without tracing every step. The results make sense to me.
Experimental Designs Or Analyses: The experimental designs and results are valid.
Supplementary Material: I went through all appendix.
Relation To Broader Scientific Literature: The paper falls into the broad category of learning theory. It builds upon the work of Li et al, which introduces tilted empirical risk, and serves as its natural and necessary extensions.
Essential References Not Discussed: The coverage of existing generalization bounds on distribution shift and domain adaptation is not adequate. (BTW, when you want to measure distribution shift in practice, you are pretty much dealing with the scenario where you are given a labelled training set and an unlabelled testing set with a shifted distribution, in which case you are entering the regime of unsupervised domain adaptation.) In fact, there is a large volume of literature in this context, all of it missing from the discussion. Below is a tiny sample, and I encourage the authors to dig for more.
1. Zou, et al, Towards Robust Out-of-Distribution Generalization Bounds via Sharpness, ICLR2024.
2. Wang and Mao, On f-Divergence Principled Domain Adaptation: An Improved Framework, NeurIPS2024.
3. Ye et al, Towards a Theoretical Framework of Out-of-Distribution Generalization, NeurIPS2021
Regarding (in-distribution) information-theoretic and PAC-Bayes generalization bounds, the paper covers some early references, but more recent developments are largely left out. The authors may wish to consult a recent review article
Hellstrom, et al, Generalization Bounds: Perspectives from Information Theory and PAC-Bayes, Foundations and Trends in Machine Learning, 2025.
Other Strengths And Weaknesses: The work is comprehensive in its theoretical development on generalization upper bounds related to tilted empirical risk, but somewhat weak in terms of the application and usefulness of their results. Specifically,
1. What benefit can one exploit from this work?
2. Is there any algorithmic improvement that can be derived or inspired from the theoretical results of this paper?
The paper will be greatly improved if results along this line are adequately developed, and I am open to increase my score if these questions are appropriately addressed.
Other Comments Or Suggestions: The bounds on distribution shift all involve the TV distance between the training and testing distributions. Without knowing these distributions, the bounds cannot be computed or estimated. It would be much more useful if the authors could derive such bounds where the distribution shift is measured by the training and testing samples (rather than by their distributions), much like those developed in the domain adaptation literature, e.g., in the flavor of Theorem 3 in Ben-David et al., “A theory of learning from different domains”, 2009. If this can be done, one can exploit the optimal choice of the tilt value for the given training and testing samples (rather than for the two distributions, to which we do not have access in practice).
Questions For Authors: See Other Strengths And Weaknesses and Other Comments Or Suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
> Distribution shift and domain adaptation literature
**R1:** Thank you for introducing the works on generalization bounds under distribution shift and domain adaptation.
- Our focus in this work is on tilted empirical risk in supervised learning scenarios with clean or noisy training samples. In contrast, domain adaptation typically involves unlabeled samples, which differs from our setting. The type of distribution shift we consider primarily arises from noisy labels or outliers. Nevertheless, we will include the aforementioned references and others to expand the discussion on related work in this space and better contextualize our contributions.
- One of the key contributions of our work is the study of tilted empirical risk under unbounded loss functions, assuming only a bounded $(1+\epsilon)$-th moment of the loss. In contrast, works such as [1], [2], and [3] assume bounded loss functions. We hope that our results can be combined with those in [1–3] to extend their applicability to heavy-tailed scenarios. We will include these references in our revision to better situate our work within the existing literature.
Finally, thanks for mentioning [5]. We will include it in the discussion of the related work.
>What benefit can one exploit from this work? Is there any algorithmic improvement that can be derived or inspired from the theoretical results of this paper?
**R2:** The main aim of this work is to provide a theoretical foundation for the tilted empirical risk which is introduced in [4]. Furthermore, our results help to **bridge a theoretical gap** by showing how tilting affects generalization bounds and excess risks, beyond intuition or heuristics. This can be particularly useful for practitioners designing robust models in supervised learning tasks, including classification, regression, and robustness-aware optimization. For instance, algorithms could:
- Employ tilted empirical risk as an objective in training robust classifiers or regressors,
- Use our generalization guarantees to guide regularization schemes. In particular, a tilted Gibbs posterior as a learning algorithm is novel. We consider it as future work to study this algorithm in practice.
- Tuning $\gamma$ is a main bottleneck for deploying the tilted loss framework, which motivates, for example, automatic tuning of the tilt parameter. We observe that in many cases the data-driven $\gamma$ performs better than the ERM solution in the distribution shift scenario for both Gaussian and Pareto outliers. The data-driven approach for selecting the tilt under distribution shift is not proposed in [4]. In the no-distribution-shift scenario, we would like to emphasize that TERM with a small negative tilt value for the case of i.i.d. samples had not been explored in prior work, including [4], and the usefulness of TERM in this scenario is actually uncovered by the theoretical developments.
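As a concrete illustration of the tilted objective discussed above, a minimal sketch of the tilted empirical risk $\frac{1}{\gamma}\log\big(\frac{1}{n}\sum_i e^{\gamma \ell_i}\big)$ with a negative tilt is given below; the loss values and the tilt value are toy choices made here, not taken from the paper or [4].

```python
import math

def tilted_empirical_risk(losses, gamma):
    # TER(gamma) = (1/gamma) * log( (1/n) * sum_i exp(gamma * loss_i) )
    # gamma < 0 downweights large (outlier) losses; gamma -> 0 recovers ERM.
    n = len(losses)
    return (1.0 / gamma) * math.log(sum(math.exp(gamma * l) for l in losses) / n)

losses = [0.1] * 9 + [10.0]                 # nine inliers, one outlier (toy values)
erm = sum(losses) / len(losses)             # 1.09, dominated by the outlier
ter = tilted_empirical_risk(losses, -1.0)   # ~0.2, outlier largely suppressed
```

The negative-tilt direction is exactly the robustness regime analyzed in the paper; as $\gamma \to 0^-$ the objective recovers the plain empirical risk.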
> TV distance and other divergences
**R4:** Thanks for raising this point. For this purpose, we use Definition C.4 in [1]:
$$d_{\mathcal{H}\Delta\mathcal{H}}(\mu;\tilde\mu) := 2 \sup_{\mathcal{A}(h) \in \mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}} \Big| \Pr_{\mu}(\mathcal{A}(h)) - \Pr_{\tilde\mu}(\mathcal{A}(h)) \Big|$$
where $\mathcal{H}\Delta\mathcal{H}$ is defined as $\mathcal{H}\Delta\mathcal{H} := \{ h(x) \oplus h'(x) : h, h' \in \mathcal{H} \}$, $\mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}$ represents the learning algorithm space under the hypothesis class $\mathcal{H}\Delta\mathcal{H}$, and $\oplus$ is the XOR operator, e.g., $\mathbb{I}(h'(x) \ne h(x))$.
Then, applying Corollary C.7 in [1] to the proof of Proposition 4.2, we have
$$\frac{1}{|\gamma|}\Big|\log\big(E_{\mu}[\exp(\gamma\ell(h,Z))]\big) - \log\big(E_{\tilde\mu}[\exp(\gamma\ell(h,Z))]\big)\Big| \leq \frac{d_{\mathcal{H}\Delta\mathcal{H}}(\mu;\tilde\mu)}{\gamma^2} \, \frac{\exp(|\gamma|\kappa_u)-\exp(|\gamma|\kappa_s)}{\kappa_u-\kappa_s},$$
where $d_{\mathcal{H}\Delta\mathcal{H}}(\mu;\tilde\mu)$ can be estimated using training and test data samples. Note that, for the linear empirical risk, we cannot derive the result in terms of $d_{\mathcal{H}\Delta\mathcal{H}}(\mu;\tilde\mu)$ for unbounded loss functions. We clarify this discussion in the revised manuscript.
---
**References:**
- [1]: Zou, et al, Towards Robust Out-of-Distribution Generalization Bounds via Sharpness.
- [2]: Wang and Mao, On f-Divergence Principled Domain Adaptation: An Improved Framework.
- [3]: Ye et al, Towards a Theoretical Framework of Out-of-Distribution Generalization.
- [4]: Tian Li, et al. On tilted losses in machine learning: Theory and applications.
- [5]: Hellstrom, et al, Generalization Bounds: Perspectives from Information Theory and PAC-Bayes.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. I have no further questions and, in light of the clarifications provided, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Z1pS,
We just wanted to sincerely thank you for taking the time to carefully read our rebuttal and for your thoughtful consideration of our responses. We truly appreciate the constructive feedback you provided throughout the review process, and we are grateful for your support and for the updated evaluation of our work.
Your detailed comments and suggestions were very helpful to us in improving our paper, and we are glad that our clarifications could address your concerns.
Thank you again for your time, effort, and support.
Best regards,
Authors | Summary: This paper gives a detailed and extensive study of the tilted empirical risk, focusing on particular on generalization bounds. Both uniform convergence bounds and algorithm-dependent information-theoretic bounds are provided, and robustness guarantees under distribution shifts are analyzed. On the basis of the theoretical results, a data-driven approach for determining the level of tilt is proposed and evaluated experimentally. Finally, inspired by the information-theoretic bounds, a KL-regularized version is studied and the optimal posterior is determined and analyzed.
## update after rebuttal
I thank the authors for their response. I retain my positive evaluation.
Claims And Evidence: Yes. The experimental results do not necessarily support making strong claims, but the paper does not overstate them either.
Methods And Evaluation Criteria: Overall, yes. It is not clear whether the results reported in Table 2 are particularly informative, though, as the risk is very low already for normal ERM. Some applications to real-world datasets could be interesting, but the current approach makes sense in order to illustrate the results. Also, the experiments on linear regression indicate a sizeable gap between the data-driven tilt parameter and the optimal one. Some further discussion of this, and the potential sources of looseness, could be useful. (Presumably related to $n\rightarrow\infty$ step).
Theoretical Claims: I checked up to App. E in some detail and did not identify any issues.
Experimental Designs Or Analyses: See above
Supplementary Material: Only appendix as discussed above
Relation To Broader Scientific Literature: This paper is nicely positioned in the literature, and covers a lot of previously unaddressed questions in the use of tilted empirical risk minimization. The inclusion of practical guidelines for e.g. parameter selection strengthens this further.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The theoretical and algorithmic contributions of this paper are strong. However, the presentation could be improved. The main body of the paper consists of dense mathematical statements with various conditions and parameters, and not much intuition or analysis is provided. For example, providing the reader with some interpretation of the various cases in Thm 3.11 and the effect on the bound could be beneficial—at present, several shorthand notations need to be unpacked and cases with nested conditions need to be interpreted without hints.
Other Comments Or Suggestions: The content of the tables and figures is often small and somewhat blurry. The term “non-linear generalization error” is not particularly specific—using an alternative term could be good if possible.
1. $I_1$, $I_3$, and $I_4$ are used but seemingly no $I_2$.
2. In Thm. 4.5, “if (a) or (b) hold” before the equation is superfluous?
3. $P^*_{H|S}$ doubly defined in Eqs. (23) and (24)
4. Line 419 right column: “boudned”
Questions For Authors: 1. What is the proper interpretation of the cases in Thm. 3.11 and the $\zeta$ parameter?
2. Can you elaborate on when the proposed $\gamma_{\text{data}}$ is expected to work well or not? Can the approach be altered for finite-data settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
> Also, the experiments on linear regression indicate a sizeable gap between the data-driven tilt parameter and the optimal one. Some further discussion of this, and the potential sources of looseness, could be useful.
**R1:** In deriving $\gamma_{\text{data}}$, we employ an asymptotic approach assuming $n \to \infty$ (i.e., the sample size approaches infinity). Consequently, this introduces a theoretical gap between the data-driven and the optimal parameter selection. Nonetheless, the data-driven tilt demonstrates superior performance compared to the Empirical Risk Minimization (ERM) approach; therefore, TERM with a data-driven tilt can be helpful in practice. We will mention in the conclusion and limitations section that a better and more realistic data-driven approach to selecting $\gamma$ could be an area for future work.
> What is the proper interpretation of the cases in Thm. 3.11 and the $\zeta$ parameter?
**R2:** Theorem 3.11 is derived using the upper and lower bounds on the expected non-linear generalization error (Propositions 3.9 and 3.10). Note that, for the upper bound in Proposition 3.9, the sub-exponential assumption gives rise to two cases. Then, for small values of $\kappa_t$ and assuming $\frac{2I(H;S)}{|\gamma|^{1+\epsilon}\kappa_t^{1+\epsilon}} > n$, we can achieve the convergence rate of $O(n^{-\epsilon/(1+\epsilon)})$. However, for the lower bound, we introduced the $\zeta$ parameter to handle the logarithm function, as shown in the proof of Proposition 3.10. Similarly to the upper bound, for small values of $\kappa_t$ and $\zeta$, we can expect to achieve the same rate of $O(n^{-\epsilon/(1+\epsilon)})$.
> Can you elaborate on when the proposed $\gamma_{\text{data}}$ is expected to work well or not? Can the approach be altered for finite-data settings?
**R3:** To address the finite-data setting, we need to focus on minimizing the bound presented in Theorem 4.3, which involves a complex interplay of exponential, polynomial, and inverse power terms of $\gamma$. This complexity motivates our consideration of the asymptotic regime, which provides better theoretical insights into the parameter behavior. Empirically, our experiments validate this approach, showing that the data-driven tilt consistently outperforms the Empirical Risk Minimization (ERM) solution, which is desired.
Finally, thanks for pointing out some typos in the draft. They are fixed now. | Summary: This paper investigates the generalization error of the tilted empirical risk (TER), a non-linear risk metric for supervised learning introduced by Li et al. (2020). The study focuses on the robustness regime under negative tilt, where TER is used to mitigate the impact of noisy outliers.
The paper provides uniform convergence and information-theoretic bounds on the tilted generalization error (the difference between the population risk and the tilted empirical risk) for unbounded loss functions with a finite $(1+\epsilon)$-th moment. The paper extends TER’s applicability by analyzing its robustness against noisy training outliers. Theoretical guarantees for TER are provided under distribution shift, showcasing its stability compared to traditional empirical risk minimization. The paper includes experimental evaluations that validate the theoretical bounds.
Claims And Evidence: This paper is primarily theoretical, with detailed proofs provided to support the claims.
Methods And Evaluation Criteria: The evaluation in Section 5 appears solid. The achieved population risk using the data-driven approach for selecting gamma outperforms ERM in the presence of outlier noise.
Theoretical Claims: I did not verify all the proofs of the theoretical results in detail, but I reviewed the main steps for Proposition 3.2, Lemma 3.7, and Proposition 3.9, and they appear reasonable to me.
Experimental Designs Or Analyses: The experiments serve more as a sanity check and are limited to logistic regression and simple linear regression. This is OK, as the primary contribution of the paper is theoretical.
Supplementary Material: In addition to the proofs I reviewed, I also read the experimental details in Appendix G.
Relation To Broader Scientific Literature: The paper extends prior work on tilted empirical risk (Li et al., 2020) by establishing theoretical generalization bounds under negative tilt, connecting to broader studies on robust learning, generalization error analysis, and risk minimization in the presence of outliers and distribution shifts.
Essential References Not Discussed: Not I am aware of.
Other Strengths And Weaknesses: Strength: The results hold under more general conditions of bounded moments, rather than relying on the sub-Gaussian assumption or a bounded loss function.
Weakness: In Section 3.1, only finite hypothesis class bounds are provided, with limited discussion on how to generalize to a continuous hypothesis class.
Other Comments Or Suggestions: See Other Strengths And Weaknesses above.
Questions For Authors: The authors keep emphasizing negative gamma, which I understand is due to the inequality direction, requiring negative gamma to provide a valid upper bound. However, since all the bounds depend on the absolute value of gamma, what is the practical benefit of using a negative gamma? Is there any intuitive theoretical explanation?
For the discussion after Theorem 4.3, does the improvement of TER under distribution shift stem from its performance bound depending on total variation rather than KL divergence? Could the authors elaborate further on why TER with negative tilting is essential for achieving this improvement?
## update after rebuttal
Thanks for the rebuttal. I believe this is a solid paper, and I am maintaining my original score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and generally positive assessment of the paper. We will address their concerns as detailed below.
> finite hypothesis class
**R1:** We thank the reviewer for this valuable comment. Indeed, while Section 3.1 focuses on finite hypothesis class bounds, the approach can be naturally extended to continuous hypothesis classes using well-established techniques.
For example, one can construct an $\epsilon$-net over the hypothesis space, thereby discretizing it. In this construction, we select a finite subset $H' \subset \mathbb{R}^m$ such that for every $h \in H$ there exists an $h' \in H'$ with $\|h - h'\| \leq r$. By applying our finite-hypothesis result to this discretized set and controlling the approximation error through the Lipschitz property of the loss function, we effectively generalize our bounds to the continuous case. This method is exemplified in [2], which shows that an uncountable hypothesis space can always be converted to a finite one by quantization, i.e., by choosing the smallest set $H'$ such that for every $h \in H$ there is an $h' \in H'$ with $\|h - h'\| \leq r$; the Lipschitz maximal inequality (Lemma 5.7 in [3]) is derived via a similar quantization technique.
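The covering step above can be summarized in one display (a generic sketch with illustrative constants, not the paper's exact bound): for an $L$-Lipschitz loss and an $r$-net $H'$ of $H \subset \mathbb{R}^m$,

```latex
\sup_{h \in H}\bigl|\widehat{R}_n(h) - R(h)\bigr|
  \;\le\; \max_{h' \in H'}\bigl|\widehat{R}_n(h') - R(h')\bigr| \;+\; 2Lr,
\qquad \log\lvert H'\rvert \;\lesssim\; m \log\tfrac{1}{r},
```

so the finite-class bound is applied to $H'$, and $r$ is then chosen to balance the $\log\lvert H'\rvert$ term against the $2Lr$ approximation error.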
Alternatively, one may also derive infinite hypothesis space results using VC-dimension arguments in binary classification:
First, we observe that Lemma 2 in [1], a symmetrization lemma, applies to functions of the form $f(Z)=\exp(\gamma \ell(h,Z))\in[0,1]$. Using this lemma, we replace the expected value $\mathbb{E}[f(Z)]$ with its empirical average over an independent ghost sample. Next, we apply Bernstein's inequality to bound the deviation of the empirical average from the true expectation, substituting the variance term $\mathrm{Var}(\exp(\gamma \ell(h,Z)))$ by $|\gamma|^{1+\epsilon}\mathbb{E}[|\ell(h,Z)|^{1+\epsilon}]$. The VC-dimension of the hypothesis class enters when we apply a union bound over a finite cover of the induced function class, using Sauer's lemma to bound the growth function. Finally, as in the proofs of Propositions 3.3 and 3.4, we take the logarithm of both sides of the resulting inequality and rearrange terms to obtain the final generalization bound.
Furthermore, a similar approach yields upper bounds based on covering numbers, as introduced in [1].
We will clarify this discussion in the Appendix of the revised manuscript.
> negative and positive tilt
The theoretical guarantees developed for negative tilt cannot be directly extended to the positive-tilt scenario; for details, please see **R4** in our response to **Reviewer 8SUL**. Addressing this limitation and exploring positive tilt in TERM is an avenue for future work, as mentioned in the Conclusion section. The primary application of tilted empirical risk with negative tilt is enhancing robustness to outliers. We have expressed the bounds in terms of $|\gamma|$ to facilitate a more intuitive understanding of the results. We will add a remark on the theoretical challenges of positive tilt.
> Improvement of TER under distribution shift
Under an unbounded loss function with a bounded $(1+\epsilon)$-th moment for $\epsilon\in(0,1]$, we can derive an upper bound on the generalization error of the Tilted Empirical Risk (TER) with a negative tilt. Specifically, the negative tilt parameter $\gamma < 0$ ensures the boundedness of $\exp(\gamma \ell(h,Z))$, which allows us to establish an upper bound in terms of the total variation distance. The total variation distance is bounded for all outlier distributions, whereas the KL divergence can be unbounded; therefore, the upper bound on TER is bounded for all outlier distributions. In contrast, for the linear empirical risk, while one can derive an upper bound in terms of the KL divergence, that bound can be unbounded for some outlier distributions, due to the unboundedness of the KL divergence.
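As a small numerical illustration of this point (our own sketch, not taken from the paper): with a negative tilt, $\exp(\gamma\ell)$ lies in $(0,1]$, so a single outlier with a huge loss barely moves the tilted risk, while it dominates the linear empirical risk.

```python
import math

def tilted_risk(losses, gamma):
    # Tilted empirical risk: (1/gamma) * log( mean( exp(gamma * loss) ) ).
    n = len(losses)
    return (1.0 / gamma) * math.log(sum(math.exp(gamma * l) for l in losses) / n)

losses = [1.0, 1.2, 0.8, 1.1, 50.0]    # four "clean" losses plus one outlier
erm = sum(losses) / len(losses)        # linear empirical risk: dragged up by the outlier
ter = tilted_risk(losses, gamma=-1.0)  # negative tilt: exp(-50) ~ 0, outlier damped

print(erm, ter)  # ERM is dominated by the outlier; TER stays near the clean losses
```

Here the linear risk is pulled above 10 by the outlier, while the tilted risk with $\gamma=-1$ stays near the clean losses.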
---
**References:**
[1]: O. Bousquet, S. Boucheron, and G. Lugosi. "Introduction to Statistical Learning Theory."
[2]: A. Xu and M. Raginsky. "Information-Theoretic Analysis of Generalization Capability of Learning Algorithms."
[3]: R. van Handel. "Probability in High Dimension."
Claims And Evidence: The comparison between TERM and ERM is not sufficiently addressed. L292-L312 in Section 4 claim that an upper bound for ERM in terms of total variation is not feasible. A lower bound on ERM's generalization error would have supported this argument. Additionally, the comparison between generalization gaps is insufficient, because the total excess risk is the sum of the empirical risk and the generalization gap; an upper bound on the total excess risk would consolidate the result.
Methods And Evaluation Criteria: The insights are not fully addressed for the in-distribution generalization of TERM (section 3). How does its excess risk compare to that of ERM? It is stated in the abstract that TERM has a novel application under no distribution shift. However, its advantage over ERM is not revealed in this case.
Theoretical Claims: I did not observe obvious errors throughout the theory, though I have not checked the proofs. The results are built on a finite hypothesis class and negative tilts.
Experimental Designs Or Analyses: Simulation studies are conducted to validate the data-driven selection of the tilt.
Supplementary Material: I have not checked the proof in the appendix.
Relation To Broader Scientific Literature: This paper is the first to establish generalization bounds for TERM, as claimed in the Related Work section. Notably, TERM improves robustness against outliers with negative tilt and improves robustness under subpopulation shift with positive tilt. However, this paper only addresses the negative tilt, thereby somewhat limiting its significance.
Essential References Not Discussed: Related works are clearly discussed.
Other Strengths And Weaknesses: This paper is well-written, with coherent logical flow. Propositions serve as proof sketches.
Other Comments Or Suggestions: NA
Questions For Authors: Is there an analytical form for the $\gamma$ that minimizes the upper bound in Theorem 4.3?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
>Comparison between TERM and ERM
**R1:** We should clarify that we cannot derive an upper bound for ERM in terms of the total variation distance when the loss function is unbounded, as it would require bounding the quantity
$$\sup_{\ell(h,Z)\in\mathcal{L}_{\epsilon}}\big|\mathbb{E}_{Z\sim \mu}[\ell(h,Z)]-\mathbb{E}_{\tilde Z\sim \tilde \mu}[\ell(h,\tilde Z)]\big|,$$
where $\mathcal{L}_{\epsilon}$ is the set of loss functions with bounded $(1+\epsilon)$-th moment.
Note that if the loss function can take arbitrarily large values, then small differences between the distributions $\mu$ and $\tilde \mu$ can lead to potentially unbounded differences in expected risk.
>Total excess risk
**R2**: Due to space limitations, we only briefly mentioned in Lines 290–292 that *"Using Lemma 3.7, we can derive an upper bound on the excess risk under distribution shift."* Specifically, this upper bound on the excess risk is twice the generalization error bound obtained via the uniform convergence approach. As a result, we can compare the excess risk bounds for both TER and ER. The upper bound is as follows:
$$
\begin{aligned}
\mathfrak{E}_{\gamma}(\mu)\leq{}& \frac{4\exp(|\gamma| \kappa_s)}{(1-\zeta)|\gamma|}\sqrt{\frac{|\gamma|^{1+\epsilon}\kappa_s^{1+\epsilon}B(\delta)}{n}} +\frac{8\exp(|\gamma| \kappa_s)B(\delta)}{3n|\gamma|(1-\zeta)} \\
&+\frac{2|\gamma|^{\epsilon}}{1+\epsilon}\kappa_u^{1+\epsilon}+\frac{2\,\mathbb{TV}(\mu,\tilde{\mu})}{\gamma^2}\cdot\frac{\exp(|\gamma|\kappa_u)-\exp(|\gamma|\kappa_s)}{\kappa_u-\kappa_s},
\end{aligned}
$$
where $B(\delta)= \log(\mathrm{card}(\mathcal{H}))+\log(2/\delta)$. Note that, under distribution shift, our excess risk bound for TER is in terms of the total variation distance. In contrast, we cannot derive an upper bound on the excess risk of the linear empirical risk with an unbounded loss function in terms of the total variation distance.
> In-distribution generalization of TERM
**R3:** Thank you for your comment. For TERM under no distribution shift, we conducted experiments in the Appendix using the data-driven $\gamma_{\mathrm{data}}$ derived from Theorem 3.5. In particular, in Lines 1519–1536 (Appendix G), we provide experiments for logistic regression without outliers. Furthermore, in Lines 1616–1619, we provide experiments showing a data-driven approach for linear regression without outliers (no distribution shift). We thus observe that TERM has an application in the no-distribution-shift scenario under heavy-tailed distributions.
> Positive tilt
**R4:** Characterizing the generalization of TERM with a positive tilt is not a goal of our paper. We would like to clarify the main challenges of deriving results for TERM with positive tilt ($\gamma>0$) under unbounded loss functions (**Assumption 4.1**). Certain results and theoretical tools in our work are specific to negative tilt and do not extend to positive tilt under the unbounded loss function assumption. Specifically:
- **Bernstein's inequality** and **the exponential term $\exp(\gamma\ell(h,z))$**: For positive tilt, the exponential term $\exp(\gamma\ell(h,z))$ becomes unbounded under Assumption 4.1. As a result, Bernstein's inequality, as utilized in Theorems 4.2 and 4.3 for negative tilt, cannot be applied to positive tilt. In contrast, for negative tilt, $\exp(\gamma \ell(h, z))$ remains bounded even for unbounded loss functions.
- **Lemma C.9:** The inequality
$$\mathbb{E}[X]-\frac{1}{\gamma}\mathbb{E}[e^{\gamma X}]\leq \frac{|\gamma|^{1+\epsilon}}{1+\epsilon}\mathbb{E}[|X|^{1+\epsilon}],$$
which holds for $X \geq 0$ and $\gamma < 0$, is not applicable to positive tilt. Notably, the proof of Lemma C.9 relies on the inequality $e^{\gamma X} \leq 1 + \gamma X + \frac{|\gamma X|^{1+\epsilon}}{1+\epsilon}$, which is valid only for $\gamma X \leq 0$.
Therefore, the theoretical guarantees developed for negative tilt cannot be directly extended to the positive tilt scenario. Addressing this limitation and exploring positive tilt in TERM is an avenue for future work, as mentioned in the Conclusion section.
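A quick numerical check (our illustration, with $\epsilon=0.5$) of the elementary inequality behind Lemma C.9 confirms it on a grid of non-positive $t=\gamma X$ and shows it failing for some positive $t$:

```python
import math

def rhs(t, eps):
    # Right-hand side of the elementary bound exp(t) <= 1 + t + |t|^(1+eps)/(1+eps).
    return 1 + t + abs(t) ** (1 + eps) / (1 + eps)

eps = 0.5
# Holds on a grid of t = gamma * X <= 0 (gamma < 0, X >= 0) ...
neg_ok = all(math.exp(t) <= rhs(t, eps) + 1e-12
             for t in (-5 + 0.1 * k for k in range(51)))
# ... but fails for some t > 0, which is why the lemma requires gamma * X <= 0.
pos_fail = any(math.exp(t) > rhs(t, eps) for t in (0.5 * k for k in range(1, 11)))
```

For example, at $t=1$ we have $e^1 \approx 2.718$ while the right-hand side is $2 + 1/1.5 \approx 2.667$, so the bound fails for positive tilt.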
> analytical form for the $\\gamma$
**R5**: The bound in Theorem 4.3 involves **exponential**, **polynomial**, and **inverse** powers of $\gamma$. There may be a numerical minimizer of this upper bound, but the complexity of the expression prevents us from solving for it analytically in a tractable way. For this purpose, we analyzed the behavior of the bound in the asymptotic regimes $\gamma \to 0^-$ and $\gamma \to -\infty$.
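A simple grid search over negative tilts locates such a minimizer numerically. The constants below are hypothetical stand-ins (not the paper's), chosen only to reproduce the qualitative shape: the bound blows up as $\gamma \to 0^-$ and as $\gamma \to -\infty$, with an interior minimum in between.

```python
import math

def bound(g, n=1000, eps=1.0, ks=1.0, ku=2.0, tv=0.1, B=5.0, zeta=0.5):
    # Stand-in for the Theorem 4.3-style bound: concentration terms growing like
    # exp(|g|)/|g|, plus an |g|^eps bias term and a total-variation shift term.
    # All constants here are illustrative, not taken from the paper.
    ag = abs(g)
    conc = (4 * math.exp(ag * ks) / ((1 - zeta) * ag)) * math.sqrt(
        ag ** (1 + eps) * ks ** (1 + eps) * B / n)
    conc += 8 * math.exp(ag * ks) * B / (3 * n * ag * (1 - zeta))
    bias = 2 * ag ** eps / (1 + eps) * ku ** (1 + eps)
    shift = (2 * tv / g ** 2) * (math.exp(ag * ku) - math.exp(ag * ks)) / (ku - ks)
    return conc + bias + shift

gammas = [-(0.01 + 0.02 * k) for k in range(200)]  # grid on (-4, 0)
best = min(gammas, key=bound)                      # numerical minimizer of the bound
```

With these illustrative constants the minimizer sits at a moderate negative tilt, consistent with the trade-off between the concentration and bias terms.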
Enhancing Statistical Validity and Power in Hybrid Controlled Trials: A Randomization Inference Approach with Conformal Selective Borrowing | Accept (poster) | Summary: This paper proposes to use Fisher Randomization Test (FRT) in RCTs when leveraging external controls (EC). Since FRT only uses the randomization distribution of a test statistic under the sharp null, it always provides valid type-I error control regardless of how the potentially biased ECs are incorporated. In this way, ECs are used as a tool for constructing more powerful test statistics. In doing so, the authors recognize the issue of bias in ECs, and propose to improve the test statistic by selectively borrowing ECs by thresholding their conformal p-values. To choose the threshold, a cross validation paradigm is also proposed. The proposed methods are demonstrated via simulations and real data illustrations.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I briefly checked the proofs which seem correct to me.
Experimental Designs Or Analyses: I checked the simulation setups and they seem sound.
Supplementary Material: I reviewed the proof and additional simulation results.
Relation To Broader Scientific Literature: - Propose the use of FRT in borrowing information from ECs in RCT analysis.
- Demonstrate the use of selective borrowing in developing more powerful test statistics.
- Demonstrate the use of conformal inference in selective borrowing.
Essential References Not Discussed: I didn't find important ones to my knowledge.
Other Strengths And Weaknesses: Strengths:
- The results are rich.
- Propose the importance of FRT in RCT analysis and suggest a new way of using ECs.
Weakness:
In general, this paper contains a lot of results but the motivation and results can be organized in a better way. I list some confusions when reading the paper:
- The comparison with existing methods that motivates the proposal is not fully reasonable. In particular, the hybrid doubly robust estimators are aimed at providing more accurate estimates, but the authors switch gears to a randomization test for the sharp null, which makes the proposal not fully comparable with the existing methods. This weakens the arguments motivating the current proposal.
- The definition of conformal p-values are not new in this paper, and it takes too much space in the paper.
- I'm not sure what the connection is between the MSE of $\hat\tau$ and the power of the resulting RCT. In reading the paper, I sometimes felt that the authors wanted to improve the MSE of the test statistic, but at other times that they wanted to improve the power of the RCT. I would suggest the authors clean up the storyline.
Other Comments Or Suggestions: N/A
Questions For Authors: - Is there a connection between MSE of $\hat\tau$ and power of RCT?
- Is it possible to get back to the estimation problem based on this approach?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful and constructive comments. Below, we have provided our detailed, point-by-point responses.
**1. Connection between MSE of $\hat{\tau}_\gamma$ and power of FRT**
(i) Variance and power: Theorem 2.4 (power analysis for consistent test statistics) shows that a lower variance of a consistent test statistic leads to a higher power of FRT.
(ii) Bias and power: Simulation-based power analysis demonstrates that borrowing biased ECs can severely reduce FRT power.
(iii) MSE and power: Taking both variance and bias into account, we observe that a lower MSE of the test statistic is associated with higher FRT power. While it remains theoretically challenging to link MSE and power when the test statistic is irregular, our experiments show that MSE-guided $\gamma$ selection offers a unified solution that performs well for both estimation accuracy and power.
We also revised our storyline as follows:
Our primary goal is to improve the power of the Fisher Randomization Test (FRT) for testing the null hypothesis of no treatment effect by borrowing external controls (ECs), compared to relying solely on RCT data (No Borrowing). This is guided by three key insights: (i) Theorem 2.4 shows that the power of FRT can be increased by reducing the variance of a consistent test statistic, which can be achieved by borrowing unbiased ECs to augment the small RCT control sample; (ii) borrowing biased ECs leads to inconsistency and significantly reduces the power of FRT; (iii) these insights motivate our use of Conformal Selective Borrowing to enhance power by borrowing unbiased ECs and discarding biased ones from a larger EC pool.
To this end, we introduce an intermediate objective: testing the exchangeability of each EC and selectively borrowing those deemed unbiased. We recognize that the power of this intermediate testing step is limited by the size of the RCT control group. To address this, we propose tuning the selection threshold $\gamma$ (the significance level of the exchangeability test) to directly target our primary objective. Although tuning based on empirical FRT power would be ideal, it is impractical because (i) it requires specifying an alternative hypothesis and (ii) it is computationally expensive to compute power across a grid of $\gamma$ values.
As a practical alternative, we use the MSE of the Conformal Selective Borrowing estimator as a proxy to guide $\gamma$ selection. This MSE-guided approach offers several benefits: (i) experiments show it improves FRT power over RCT-only analysis, even though the theoretical connection between MSE and FRT power is challenging due to the irregular nature of the test statistic; (ii) it yields strong selection performance and supports our intermediate objective (see Fig. 2 and Fig. 14); (iii) with MSE-guided $\gamma$, CSB serves as both a powerful test statistic and an accurate ATE estimator; (iv) the empirical MSE can be approximated by leveraging the No Borrowing estimator, and we provide a non-asymptotic excess risk bound for the adaptive procedure.
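To make the conformal selection step described above concrete, here is a minimal sketch (our own, with hypothetical conformity scores; the paper also uses conformalized quantile regression): each EC receives a conformal $p$-value from its score's rank among RCT-control calibration scores, and ECs with $p \le \gamma$ are discarded as likely biased.

```python
import random

def conformal_pvalues(calib_scores, ec_scores):
    # Split-conformal p-value: proportion of RCT-control calibration scores at
    # least as extreme as the EC's score, with the usual +1 correction.
    m = len(calib_scores)
    return [(sum(c >= s for c in calib_scores) + 1) / (m + 1) for s in ec_scores]

random.seed(0)
# Hypothetical conformity scores, e.g. absolute residuals of an outcome model
# fit on RCT controls; a larger score means the unit looks less exchangeable.
calib = [abs(random.gauss(0, 1)) for _ in range(50)]   # RCT-control calibration set
ec = [abs(random.gauss(0, 1)) for _ in range(20)]      # unbiased external controls
ec += [abs(random.gauss(10, 1)) for _ in range(5)]     # clearly biased external controls

gamma = 0.1  # selection threshold = significance level of the exchangeability test
pvals = conformal_pvalues(calib, ec)
selected = [i for i, p in enumerate(pvals) if p > gamma]  # borrow only these ECs
```

In this toy run the biased ECs receive the smallest possible $p$-value, $1/(m+1)$, and are excluded, while most unbiased ECs survive the threshold.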
**2. Conformal selective borrowing (CSB) as a powerful test statistic and a reliable, efficient estimator**
Although our primary motivation is to improve the power of FRT, the final method, CSB with MSE-guided $\gamma$ selection, applies to both estimation and testing:
(i) Theorems 3.7 and 3.8 provide non-asymptotic excess risk bounds for the proposed estimator.
(ii) In simulations, CSB shows better estimation performance than adaptive lasso selective borrowing (ALSB) in terms of MSE under small and moderate sample sizes. This is demonstrated in Figures 6(C) and 11(C), where both methods are compared purely on estimation accuracy without involving inference procedures.
(iii) In real-world experiments, CSB improves both robustness and efficiency over No Borrowing (NB) and Full Borrowing (FB). Specifically, CSB reduces the standard error by 20% compared to NB and mitigates the bias of FB. The ATE estimate under CSB (0.138) is close to NB (0.142), while FB overestimates it (0.241).
These results suggest that CSB is not only effective for testing under sharp nulls but also serves as a reliable and efficient estimator of treatment effects.
**3. Presentation**
Following the reviewer’s suggestion, we moved the full conformal $p$-value (which is computationally infeasible) and jackknife+ (a special case of CV+) to the appendix. We kept the split conformal $p$-value and CV+ $p$-value in the main text, as both are used in our experiments. | Summary: This paper proposes a method for combining (potentially biased) external controls (ECs) with data from a randomized control trial, in an effort to improve power to detect causal effects, without sacrificing Type 1 error (false positive) control. For controlling Type 1 errors, the key insight is to use a Fisher Randomization Test (FRT), which allows for computation of exact p-values for Fisher's "sharp null" hypothesis (potential outcomes are constant across treatment/control) with any test statistic, including those using potentially biased ECs. For improving power, there is no "free lunch" (as noted by the authors and other related work), but an approach is proposed for "selective borrowing" of less biased ECs, which is illustrated empirically in simulated data and a real-data application to a chemotherapy randomized trial.
**Update after rebuttal**: As stated below, I will keep my score. My impression of the paper is somewhere between a 3 and a 4.
Claims And Evidence: The main claims as I understood them:
* Their approach will have valid level / false positive rate in finite samples for the Fisher sharp null.
* Their power can be better than "no borrowing" (i.e., just using the RCT) if the bias is small
* Their approach is better suited to finite samples than other selective borrowing approaches like Gao et al. 2023.
The claims are supported by clear and convincing arguments, and backed up by synthetic experiments where the bias can be controlled. I did find one claim a little difficult to understand (regarding post-selection inference), which I have mentioned in "Questions for the Authors" below, but I think this is a minor point.
Methods And Evaluation Criteria: Yes, for the most part (one important missing baseline, excluded due to computational concerns). See my comment on "relation to the broader scientific literature".
Theoretical Claims: I did not see any issues in the proofs. I checked the proof of Theorem 2.3, which seemed fairly immediate from the definition of the FRT, and Theorem 2.4, which similarly seems to follow fairly directly from auxiliary lemmas from related work (which I did not check). While I did not review the proofs of Propositions 3.1-3.6 in depth, they seem to follow from standard arguments, though I'm a little less familiar with the conformal prediction literature. Theorems 3.7 and 3.8 follow from standard results (and some algebra, which I did not check in detail) in non-asymptotic statistics.
Experimental Designs Or Analyses: The experimental design for the synthetic data seems sound to me, and is broadly similar to how I would have set up a synthetic data experiment for this method, probing failure modes as a function of the bias. The real-data experiment is more an "illustrative application / case study" rather than an experiment that tests a particular hypothesis, but that's a fairly standard approach for these types of papers in my experience.
Supplementary Material: Yes, I skimmed the entire supplement, and read the related work (Section A), some of the proofs (Sections B.1, B.2, B.7, B.8, etc), and the comparison to ALSB in detail (Section C.4, Figure 6).
Relation To Broader Scientific Literature: This paper adds yet another method to a growing literature (well-documented in Section A) on combining observational and experimental data in an adaptive fashion, without making assumptions on the validity of the observational data.
The core insight of this paper, in my view, is that Fisher Randomization Tests can be used with any test statistic, and therefore they are a good candidate for "improvement" with external controls, since the false positive rate is controlled exactly regardless of how good the test statistic is.
Similar to other work in this area, this paper identifies, at least experimentally, a "no free lunch" phenomenon, i.e., there are moderate biases which can lead to loss of performance. However, beyond some intuition-building theory (e.g., Theorem 2.4), the authors do not formally analyze this relationship between bias and power. That said, I am sympathetic to the difficulty in doing so.
It is harder to judge the significance of the conformal selective borrowing approach. While it is novel, it's not clear (on its own, without the FRT component) how this approach compares to other selective borrowing approaches, especially given the lack of an apples-to-apples experimental comparison with the Adaptive Lasso Selective Borrowing (ALSB) approach.
Essential References Not Discussed: Overall, I found that the related work (in the Appendix) was quite comprehensive, but would have appreciated having more of that content in the main paper, if space permits.
Other Strengths And Weaknesses: Beyond the lack of direct apples-to-apples comparison with ALSB, I have two (relatively minor) concerns regarding clarity / significance.
1. I'm not sure how to square the null hypothesis tested under the FRT (the "sharp null") with the more conventional null hypothesis that e.g., the average treatment effect is equal to zero, which doesn't require that $Y_1 = Y_0$, but rather just that $E[Y_1] = E[Y_0]$. Hence, while the FRT controls the false discovery rate under the sharp null, it's not clear that it controls the false discovery rate under more conventional null hypotheses. It may be worth clarifying this point in the paper, or otherwise commenting on what is "lost" by relying on FRT.
2. It's not clear to me how novel / original some of the theoretical results are, and it would be worth clarifying which theoretical results are considered most novel by the authors. Theorem 2.3, for instance, seems like an obvious consequence of FRT, and I'm surprised that it's given as a Theorem in this paper, as opposed to being cited from elsewhere (though given my lack of familiarity with the FRT literature, I don't have a citation to provide).
Other Comments Or Suggestions: Some minor grammar / presentation points:
* On Line 056, right-hand column: "Let the binary treatment denote by $A$" -> "Let $A$ denote the binary treatment"
* Figure 1 is extremely small. Note that you can use figure* in two-column formats to get a figure that crosses both columns, which would be better for this figure.
Questions For Authors: I have listed these in priority order, with the first two questions being particularly important to clarify.
1. I am less familiar with randomization inference (a la the FRT). Does this procedure also provably control the false positive rate under the more conventional general null hypothesis that $E[Y_1 - Y_0] = 0$? I would assume not, since it takes $Y_1, Y_0$ as fixed, as opposed to random?
2. How does CSB compare to ALSB under an apples-to-apples comparison? For context: The comparison in Figure 6 is not "apples to apples", instead it shows FRT + CSB versus ALSB with asymptotic inference. It would be nice to see either (a) FRT+CSB versus FRT+ALSB, or (b) CSB and ALSB considered head-to-head where both use asymptotic inference.
3. Which of the theoretical results are the most novel, in the view of the authors? To me, it seems like most of the theory is relatively straightforward, and the main contributions are a bit more conceptual (e.g., the realization that one can use the FRT here), but I'm open to arguments that there are more novel theoretical contributions here.
4. What is the concern with "post-selection inference" here? I don't quite follow the claim (see 090-092 LHS, "We account for selection uncertainty in FRT and offer valid post-selection inference"). The only justification I see is on lines 222-225 (LHS), "By using $T(A) = |\hat{\tau}_{\gamma}|$ as the test statistic for FRT and allowing $\mathcal{E}(\gamma)$ to vary with resampling A in FRT could account for selection uncertainty and provide valid post-selection inference". This is mainly just a conceptual argument for why there is no post-selection inference concern, right? I.e., under the null, the entire procedure is run to get a sampling distribution of the test statistic, which can depend in arbitrary ways on the data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful and constructive comments. Below, we have provided our detailed, point-by-point responses.
**1. Clarification on sharp null hypothesis**
The sharp null hypothesis $Y_i(1) = Y_i(0)$ for all $i \in$ RCT states that there is no individual treatment effect for any unit. Another common form of the sharp null is conditional independence, $Y_i \perp A_i \mid X_i$, meaning the observed outcome is independent of treatment assignment given covariates. In contrast, the hypothesis that the average treatment effect (ATE) is zero is often called the "weak null."
In a finite-sample exact sense, FRT guarantees Type I error control under the sharp null but cannot guarantee it under the weak null. However, recent work has shown that FRT can also asymptotically control the Type I error under the weak null using studentized or pre-pivoted test statistics (Wu & Ding, 2021; Cohen & Fogarty, 2022). The RCT sample size is typically small in our context, such as trials for rare diseases. Asymptotic approximations may be unreliable, so we rely on the sharp null for a finite-sample exact test.
While this reliance on the sharp null rather than the weak null may be viewed as a limitation of FRT, to our knowledge, there is currently no testing procedure that controls Type I error in a finite-sample exact sense under the weak null without additional distributional assumptions. Section 6 also discusses possible future directions beyond the sharp null.
**2. Significance of the Conformal Selective Borrowing (CSB) and comparison to Adaptive Lasso Selective Borrowing (ALSB)**
Compared to existing selective borrowing approaches such as ALSB, the significance of CSB can be summarized as follows:
(i) Model-free flexibility: CSB is a model-free approach that allows flexible choice of conformal scores depending on data characteristics. For example, in our real-world application where the outcome exhibits heavy tails and heteroscedasticity (see Figure 14), we use conformalized quantile regression (Romano et al., 2019) for selection. Distance-based scores such as nearest-neighbor conformity scores (Shafer & Vovk, 2008) can be used for binary outcomes. This flexibility is difficult to achieve with model-based methods like ALSB.
(ii) Estimation: CSB performs better than ALSB in terms of MSE under small and moderate sample sizes. This is demonstrated in Figures 6(C) and 11(C), where both methods are compared purely on estimation accuracy without involving inference procedures.
(iii) Computation: CSB is compatible with the FRT, while ALSB is not readily applicable with FRT due to its computational complexity. This highlights an advantage of CSB when exact finite-sample inference is desired.
(iv) Apples-to-apples comparison under asymptotic inference (Asym): We conducted a comparison between CSB+Asym and ALSB+Asym (see [here](https://anonymous.4open.science/r/doc-E6B4/sim_csb_asym.pdf)). CSB+Asym generally achieves better Type I error control than ALSB+Asym and performs comparably when $b=1$.
**3. Novelty and implications of theoretical results**
Theorem 2.3 builds on classical FRT results (e.g., Lehmann and Romano, 2005, Theorem 15.2.1), but to our knowledge, this is the first formal result establishing the validity of FRT in the context of hybrid controlled trials. It confirms that FRT can be applied in this setting and clarifies how it should be applied to ensure validity. Specifically, Theorem 2.3 provides the following practical guidance and important caveats:
(i) Type I error control is guaranteed only when we permute assignments according to the actual experimental design, that is, permuting treatment assignments only within the RCT while keeping the EC assignments fixed. An important caveat is that permuting across all treated and control units, including ECs, would invalidate the theorem.
(ii) To maintain validity, the test statistic should vary with the permuted treatment vector $A$, meaning the selected set of ECs should also be updated under each permutation to account for selection uncertainty. Fixing the EC selection across permutations would also invalidate the theorem.
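A toy sketch of this permutation scheme, with hypothetical data and a crude stand-in for the conformal selection rule (the real method's scores differ); the key points are that treatment is permuted only within the RCT and that EC selection is redone under every permuted assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 40 RCT units (20 treated / 20 control) and 30 ECs.
A_obs = np.r_[np.ones(20, int), np.zeros(20, int)]
y_rct = rng.normal(size=40)
y_ec = rng.normal(size=30)

def select_ecs(y_ctrl):
    """Stand-in for conformal selection: keep ECs inside the RCT-control range."""
    return y_ec[(y_ec >= y_ctrl.min()) & (y_ec <= y_ctrl.max())]

def stat(A):
    """Difference in means; EC selection is redone under each assignment A."""
    ctrl = np.r_[y_rct[A == 0], select_ecs(y_rct[A == 0])]
    return y_rct[A == 1].mean() - ctrl.mean()

t_obs = stat(A_obs)
# Permute treatment only within the RCT; ECs stay fixed as controls.
null = np.array([stat(rng.permutation(A_obs)) for _ in range(999)])
p_frt = (1 + np.sum(np.abs(null) >= np.abs(t_obs))) / (1 + len(null))
```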
Theorems 3.7 and 3.8 provide novel non-asymptotic MSE bounds that guide the design of the adaptive selection threshold in finite samples. Please also see our response to Reviewer f66S, Point 1.
**4. Post-selection inference**
The reviewer's understanding is correct. The selection uncertainty is fully incorporated into the reference distribution by allowing the selected set to vary with the resampled treatment vector $A$ in the FRT. Therefore, there is no post-selection inference concern when following this principle. However, as noted in the previous point, a key caveat is that the selected EC set should not be fixed during permutation, as this would ignore selection uncertainty and invalidate the test.
**5. Presentation**
We moved the related work to the main text, revised Line 056, and enlarged Figure 1.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response - those clarifications are very helpful. I am inclined to keep my score (I'm somewhere between a 3 and a 4 after the response). Reading the review of GG5D, I would also suggest being more explicit upfront that there is no free lunch here, that you have proposed a valid approach that "might" improve power, if all goes well.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful follow-up and valuable suggestions. We're pleased that your score has risen to 3.5, even though the official score may not reflect this according to the new scoring scale.
We fully agree that it is important to be upfront about the limitations. Our proposed method controls the Type I error for testing the sharp null and might improve the power of RCT-only analysis when the bias of ECs is either absent or detectable. When the bias is difficult to detect, our method may incur some power loss, though it still maintains valid Type I error control.
We also appreciate the importance of positioning our method within the context of the existing literature. The no-free-lunch limitation is recognized in existing review papers (Oberst et al., 2022; Lin et al., 2024), which point out that no method can uniformly and significantly outperform RCT-only analysis across varying levels of hidden bias, although different approaches optimize the risk-reward trade-off from different perspectives. The most challenging scenarios are those where bias is present but complex to correct or difficult to detect. Our main difference from existing literature is twofold: (i) we prioritize exact Type I error control in small samples first, then seek to improve power; (ii) we optimize the risk-reward trade-off between no borrowing and full borrowing from the perspective of conformal selective borrowing, motivated by our real data, where some ECs are unbiased while others are not.
We will incorporate this important discussion more explicitly in the introduction section to better align with your and Reviewer GG5D’s helpful feedback.
---
Summary: This paper proposes a randomization inference framework that can combine the data from randomized controlled trials with external controls. The proposed method controls the Type-I error in finite samples by leveraging conformal inference to select appropriate samples from external controls. In particular, the selection threshold provides an interpolation between no-borrowing and full-borrowing approaches, which can be tuned by minimizing the mean squared error. Some simulation results and real-world applications show the applicability of the proposed method.
Claims And Evidence: Basically, the claims in the submission are clear and supported by proofs or simulation studies. However, I feel that the bounds in Theorems 3.7 and 3.8 are loose and not easy to comprehend.
In addition, I don't quite understand why the authors can relate $\hat{\tau}_{\hat{\gamma}}$ to super efficiency on the second column of Line 295. Can the authors provide some theoretical justification to this claim?
Methods And Evaluation Criteria: The proposed methods make sense to me. However, for the simulation setups, the dimension of covariates is $p=2$, which is too small and won't lead to meaningful conclusions. Can the authors try some larger values for $p$? Also, the current covariates $X$ have independent coordinates. Can we introduce any dependence within the coordinates of $X$?
Theoretical Claims: I basically checked all the proofs, and they look correct. A minor question is how $\Phi$ is related to the variance of $\epsilon_{\gamma}$. Can the authors provide some short discussion on it?
Experimental Designs Or Analyses: I checked the experimental results and already pointed out my concerns for the simulation setups above.
Moreover, for Figures 5 and 10, the authors should also report the value of $\hat{\gamma}$. The reason is that the proposed method CSB under the "optimal" value $\hat{\gamma}$ does not perform as well as it does under other values of $\gamma$. This is a severe issue that may limit the impact of this paper.
Supplementary Material: Yes, I read all parts of the supplementary materials.
Relation To Broader Scientific Literature: This paper combines the approaches in conformal inference literature with other randomization tests to propose a new framework for combining the data from randomized controlled trials with external controls. The estimator relies on the one proposed by Li et al. (2023), but the authors also relax the mean exchangeability condition imposed by Li et al. (2023).
Li, X., Miao, W., Lu, F., and Zhou, X.-H. Improving efficiency of inference in clinical trials with external control data. Biometrics, 79(1):394–403, 2023.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: As I mentioned above for the simulation results, the proposed method under the optimal choice of the selection parameter does not seem to perform as well as it does under other, arbitrary choices.
Other Comments Or Suggestions: 1. Second column of Lines 178 to 182: The sentence "Based on..." seems to repeat what has been discussed in the last paragraph.
2. Second column of Line 225: There should be a division in the definition of $p_j^{split}$. The same issues happen to $p_j^{jackknife+}$ and $p_j^{cv+}$.
3. The plot in Figure 2(D) needs to be zoomed in.
Questions For Authors: See above.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']
Ethical Review Concerns: The paper exceeds the 8-page limits and potentially discloses the authors' identities.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful and constructive comments. Below, we have provided our detailed, point-by-point responses.
**1. Bound in Theorems 3.7 and 3.8**
We explain the key terms in Theorems 3.7 and 3.8 and their practical implications as follows:
(i) The term $c\Delta|\delta_1|$ involves $\delta_1$, the bias of the consistent No Borrowing estimator, which is of order $1/n$, and $\Delta$, the maximum bias across all $\gamma \in \Gamma$. This motivates us to apply pre-propensity-score matching on all ECs to prevent potentially large bias during the data preparation phase.
(ii) The term $c\Delta\Phi\sqrt{\log (|\Gamma|/\iota)}$ involves $\Phi$, the largest standard deviation proxy $\phi$ across $\gamma \in \Gamma$, which is of order $O(1/\sqrt{n})$. The grid size $|\Gamma|$ is fixed (e.g., 10 in our experiments), making this term well-controlled.
(iii) The term $\\max (c\Phi^2\sqrt{\log (|\Gamma|/\iota)}, c\Phi^2\log (|\Gamma|/\iota))$ arises from sub-exponential tail bounds and scales similarly to (ii).
(iv) The terms $\\max\_{\gamma\in\Gamma} |\hat{V}(\hat{\tau}\_{\gamma} - \hat{\tau}\_1) -\kappa_{\gamma}^2|$ and $\\max\_{\gamma\in\Gamma} |\hat{V}(\hat{\tau}\_{\gamma}) - \sigma^2_{\gamma}|$ correspond to the estimation errors for $\kappa^2_{\gamma}$ and $\sigma^2_{\gamma}$. This motivates the use of a sufficiently large number of bootstrap replicates to ensure accurate variance estimation.
Overall, Theorems 3.7 and 3.8 provide concrete guidance for implementing $\gamma$ selection in practice and justify the design of our MSE-guided adaptive procedure, even if the bounds themselves are conservative due to their non-asymptotic and worst-case nature.
**2. Clarifying the connection to super-efficiency-type behavior**
The reviewer questions the super efficiency of $\hat{\tau}_{\hat{\gamma}}$. We acknowledge that this was a misstatement; our intent was not to claim that it is super-efficient, but rather that it exhibits behavior similar to the Hodges estimator. We have revised the sentence as follows:
"This phenomenon highlights that $\hat{\tau}_{\hat{\gamma}}$ behaves similarly to the Hodges estimator (Le Cam, 1953) and to integrated estimators in data fusion (Yang et al., 2023; Oberst et al., 2022): improving upon the baseline estimator (here, the No Borrow estimator) in certain regions of the parameter space (where there is no bias in ECs) inevitably leads to worse performance in other regions (where the bias in ECs is difficult to detect)."
**3. Additional simulations**
We additionally consider $p = 5$ and $X \sim N(0, \Sigma)$, where $\Sigma$ is a Toeplitz matrix with $(i,j)$-th entry $\Sigma_{ij} = \rho^{|i-j|}$ and $\rho = 0.3$, to introduce dependence among the coordinates of $X$. We did not consider larger $p$ since the sample size is small, with only 25 RCT controls. The simulation results (see [here](https://anonymous.4open.science/r/doc-E6B4/sim_supp.pdf)) show similar patterns and demonstrate the robustness of our method.
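For reference, the covariate design described above can be generated as follows (a sketch; the sample size and seed are illustrative):

```python
import numpy as np

p, rho, n = 5, 0.3, 200
idx = np.arange(p)
# Toeplitz covariance with (i, j)-th entry Sigma_ij = rho^{|i-j|}
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
# Draw n correlated covariate vectors X ~ N(0, Sigma)
X = np.random.default_rng(0).multivariate_normal(np.zeros(p), Sigma, size=n)
```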
**4. $\Phi$ and $\epsilon_\gamma$**
$\Phi$ is the largest standard deviation proxy $\phi$ over $\gamma \in \Gamma$. If $\epsilon_\gamma$ is Gaussian, then $\phi$ equals its standard deviation. We previously mislabeled $\phi$ as a variance proxy and have corrected this.
**5. Value and performance of $\hat{\gamma}$**
Since $\hat{\gamma}$ is adaptive and varies across simulation replicates, we separately report its values in Figure 4 and analyze its behavior in Section C.2.
We do not expect $\hat{\gamma}$ to outperform all fixed $\gamma$ values uniformly. Prior work has shown that no method can uniformly outperform No Borrowing (corresponding to $\gamma = 1$) without additional assumptions (Oberst et al., 2022; Lin et al., 2024). Instead, our proposed $\hat{\gamma}$ is designed to improve power when the bias of ECs is either absent or detectable. When the bias is difficult to detect, it is reasonable that our method may incur some inevitable power loss, though within an acceptable range. Importantly, Conformal Selective Borrowing+FRT always controls the Type I error, even in such challenging cases.
**6. Presentation**
We removed the sentence "Based on...", corrected the missing division symbols, and zoomed in on Figure 2(D).
**7. Ethical issue**
As per ICML guidelines, the Impact Statement is not counted toward the 8-page limit. We initially understood this to apply to the Software and Data section and had no intention of violating the page limit.
We did not disclose any identities or URLs in the submission. We mentioned the availability of the R package solely to highlight the applicability and reproducibility of our work. We sincerely apologize if our identity may have been inadvertently revealed in the package documentation, which is external to the submission; this was entirely unintentional and not meant to compromise anonymity. To avoid any potential concern, we have removed the Software and Data section from the manuscript.
---
Summary: Authors study integration of external (historical) controls into randomized controlled trials in a principled way using conformal p-values, to guarantee type-1 error rates with finite samples under potential violation of the exchangeability assumption between trial and external controls.
### update after rebuttal
I thank the authors for their detailed response. In particular, the motivation/logic they outline in the beginning through 3 items: (i), (ii), (iii) are helpful and a more detailed and polished version should definitely be included in the updated manuscript. While those items helped me understand the contributions of the manuscript a little better, I still have concerns regarding sample splitting in the RCT to test ECs against: this decreases power (reduced sample size in the RCT, as the samples we use for testing ECs should not be used again in downstream analyses). While there is some empirical evidence suggesting improved power overall, the fundamental tradeoff here should be more clearly analyzed theoretically. Therefore, I maintain my score.
Claims And Evidence: The thing that bothers me the most is that it is not clear what the authors claim to do and what they end up doing. I can understand the idea of using conformal inference to "select" external controls based on their conformity score, which is straightforward. However, the following question is not answered sufficiently in my opinion:
- You use RCT controls to select in the first place. In that case, why would you not still be limited statistically by the number of controls in the RCT?
The question above is not answered/discussed verbally or as a theorem. By the latter, I mean the following. The authors claim that their new approach controls the type-1 error and improves power. I do not see a result where they "prove" they improve power. They should also be clearer that this "power" only relates to identifying whether or not the treatment has any effect, and not necessarily the size of the effect.
A criticism to this conformal approach would be that each control is tested for pooling individually, and then the high probability guarantees are obtained straightforwardly by union bound. As authors briefly mention, this will suffer from low power and may introduce bias. This to me is the point where the paper is making its main contribution via an improved selection mechanism (correct me if I am wrong). However, my main issue here is the following:
- You motivate minimizing the MSE of the estimator as a proxy to guide better selection and leverage the consistency of the estimators in doing so. That consistency is prone to the same issues you claim to be immune to: asymptotic validity, etc.
- The results you have here again do not connect to an improved power for the FRT, but rather seem to be self-contained and remain as a proxy/heuristic to improve the power.
Please correct me if I am wrong. My main challenge with this paper is that I do not understand what the single most important result it is trying to prove is (higher power for the FRT?) and how the things you do/prove in the paper connect to that. I think the flow of the methodology and the motivation of the paper are not very clear as it stands.
Methods And Evaluation Criteria: Real-world experiment is well described and interesting. It is a suitable experiment to run for this paper.
Theoretical Claims: I did not check the proofs carefully. Most results seem to be standard from the conformal inference literature.
Experimental Designs Or Analyses: The conformal selective borrowing approach in the real-world experiments does not seem to make a big difference compared to the no borrowing approach. An exception is slightly reduced p-values and SEs, which might be a function of sample size alone. This falls in line with my earlier intuition/concern regarding the lack of improved power, as the selection is still limited by the RCT (see Claims And Evidence).
Supplementary Material: There is not one
Relation To Broader Scientific Literature: Current manuscript focuses on a specific problem and claims to improve it using ideas from conformal inference. It does not relate to the broader scientific literature in a significant way.
Essential References Not Discussed: There is a vast body of work on historical controls, especially in the epidemiology literature. Current manuscript does a poor job covering that.
Other Strengths And Weaknesses: See Claims And Evidence
Other Comments Or Suggestions: See Claims And Evidence
Questions For Authors: See Claims And Evidence
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful and constructive comments. Below, we have provided our detailed, point-by-point responses.
**1. Clarifying the objective and the role of MSE minimization in improving FRT power**
Our primary objective is to improve the power of the Fisher Randomization Test (FRT), which is limited when using only RCT data (No Borrowing). To achieve this, we rely on three key insights:
(i) Variance and power: Theorem 2.4 (power analysis for consistent test statistics) shows that power can be improved by reducing the variance of a consistent test statistic, which can be achieved by borrowing unbiased ECs to augment the limited sample size of the RCT control arm;
(ii) Bias and power: borrowing biased ECs renders the full borrowing estimator inconsistent, severely reducing the power of FRT;
(iii) Bias-variance trade-off and power: these motivate the use of Conformal Selective Borrowing to improve FRT power of RCT-only analysis by borrowing unbiased ECs and discarding biased ones from a large EC pool.
Based on above insights, we introduce our intermediate objective: testing the exchangeability of each EC. We acknowledge that the number of RCT controls statistically limits the power of this exchangeability testing. Therefore, we propose tuning the selection threshold $\gamma$ (i.e., the significance level of the EC exchangeability test) to optimize our primary objective directly. While using empirical FRT power to tune $\gamma$ would be ideal, this approach (i) requires specifying an alternative hypothesis and (ii) is computationally intensive, as computing FRT power across a grid of $\gamma$ values is costly.
This leads to our final choice: using the MSE of the Conformal Selective Borrowing estimator as a proxy to optimize $\gamma$. Our MSE-guided $\gamma$ selection offers several advantages: (i) experiments show it achieves our primary goal of improving FRT power compared to RCT-only analysis, though we acknowledge that the theoretical link between MSE and FRT power is challenging due to the irregularity of the test statistic; (ii) it yields strong selection performance and supports our intermediate objective (see Fig. 2 and Fig. 14); (iii) with MSE-guided adaptive $\gamma$, Conformal Selective Borrowing serves as both a powerful test statistic and an accurate ATE estimator; (iv) empirical MSE can be approximated by leveraging the No Borrowing estimator, and we provide a non-asymptotic excess risk bound for the adaptive procedure.
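Schematically, the MSE-guided grid search over $\gamma$ could look as follows; the per-gamma bias and variance numbers here are made up for illustration (in practice they come from the bootstrap-based estimates):

```python
import numpy as np

gammas = np.linspace(0.1, 1.0, 10)     # grid; gamma = 1 recovers No Borrowing
# Hypothetical per-gamma summaries (illustrative, not from the paper):
bias_sq = 0.05 * (1.0 - gammas) ** 2   # smaller gamma (more borrowing) -> more bias here
var_hat = 0.01 + 0.02 * gammas         # more borrowing -> lower variance
mse_proxy = bias_sq + var_hat
gamma_hat = gammas[np.argmin(mse_proxy)]  # pick the gamma minimizing the MSE proxy
```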
**2. Finite-sample validity of FRT**
For the validity of FRT (Type I error control), we do not rely on asymptotic arguments such as estimator consistency, as shown in Theorem 2.3. This is a key advantage of FRT over existing integrative methods whose validity depends on asymptotic theory.
For power improvement, the optimality of the selected $\gamma$ does rely on the consistency of the No Borrowing estimator, as shown in Theorem 3.8. However, our experiments show that Conformal Selective Borrowing with adaptive $\gamma$ improves FRT power even in small-sample settings.
**3. Significance in real-world experiments**
The real-world experiments show that Conformal Selective Borrowing (CSB) improves both robustness and efficiency over No Borrowing (NB) and Full Borrowing (FB): (i) Given that CSB+FRT theoretically controls Type I error in finite samples, which existing integrative methods do not, it yields a significant result (p < 0.05) compared to the borderline NB result (p = 0.055), addressing the underpower issue in the original study. (ii) CSB reduces the standard error by 20% compared to NB. (iii) CSB mitigates the bias of FB; the ATE estimate under CSB (0.138) is close to NB (0.142), while FB overestimates it (0.241). We acknowledge that the extent of improvement depends on the quality of the real data at hand. To further evaluate our method in finite samples, our simulations show that CSB can improve FRT power by up to 45% compared to NB.
**4. Related work on historical controls**
Due to the manuscript's space limitation at the original submission, we included the comprehensive literature review on historical control borrowing in Appendix A. We acknowledge the value of such a review in the main text and will move the related work on historical controls to the Introduction section at the resubmission. Please let us know if we have missed any relevant literature.
---
Right Time to Learn: Promoting Generalization via Bio-inspired Spacing Effect in Knowledge Distillation
Decision: Accept (poster)
Summary: This paper, inspired by the spacing effect, proposes Spaced KD, which distills the student model with a teacher that pretrains s steps ahead of the student. This paper demonstrates theoretically that Spaced KD produces flatter loss landscapes, and proves experimentally the superior performance of Spaced KD in both online KD and self KD scenarios.
## update after rebuttal
Further responses from the authors show that their method leads to a slight increase in training time without other additional overhead, and can be combined with other KD methods to improve performance.
Claims And Evidence: The authors claim that Spaced KD has a lower update frequency of the teacher model compared to online KD. However, Spaced KD updates the teacher model s steps ahead of the student model, as shown in Figure 1 and Algorithm 2, and updates the teacher model s times per advance, which does not reduce the number of updates to the teacher model. I would like the author to explain this.
Methods And Evaluation Criteria: Spaced KD is easy to understand and implement, and experimental results show that Spaced KD further improves distillation performance in both online and self KD.
However, I am still confused about how Spaced KD applies to self KD. [1] uses the final output to guide the intermediate output for self-distillation, so the teacher and student are the same model but with different capacities. This self-distillation is designed to reduce distillation overhead. If Spaced KD is used in self-distillation, is it necessary to additionally store the parameters of the teacher model, which is updated s steps ahead during training? How much would this increase the training overhead?
[1] Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In ICCV. 2019.
Theoretical Claims: There is no obvious error in the theoretical analysis presented in this paper.
Experimental Designs Or Analyses: 1. I suggest that the authors include the additional overhead that Spaced KD incurs compared to vanilla KD, such as memory and time during training.
2. It is recommended that the authors provide the accuracy of the teacher model in their experiments to show the performance gap between Spaced KD trained students and teachers. In other words, is it possible that Spaced KD-trained students can outperform teachers? As far as I know, some distillation methods have been able to enable their students to perform beyond the teacher model.
3. In Table 5, the authors should provide a comparison with other distillation methods.
4. The authors achieved further performance gains by using Spaced KD on other KD methods in Table 4. However, I consider these methods to be too dated and suggest that the authors use the most recent KD methods as a baseline to further demonstrate the generalizability of their method.
Supplementary Material: I have checked the code provided in the supplemental material and it seems to be fine.
Relation To Broader Scientific Literature: The method proposed in this paper is simple and easy to implement, and can be better combined with previous distillation methods to further improve performance.
Essential References Not Discussed: This paper presents preliminaries for understanding its method and cites relevant papers in Section 3. However, I would still recommend including citations and comparisons to more recent KD literature (years 2023 & 2024).
Other Strengths And Weaknesses: I believe that the method proposed in this paper can contribute to the development of bio-inspired KD algorithms, and the experimental results demonstrate the superior performance of Spaced KD. However, I am concerned about the overhead of Spaced KD during training, which, while easy to implement, has the potential to incur significant additional memory/time overhead, thus affecting distillation efficiency, especially for self-distillation with the goal of efficient distillation.
Other Comments Or Suggestions: It is recommended that the authors elaborate further on the details and overhead of Spaced KD.
Questions For Authors: I have no further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We provide a point-to-point response as follows. We hope you may consider this a sufficient reason to raise the score. If you have any further questions, please let us know.
**Q1: The additional overhead that Spaced KD incurs compared to vanilla KD.**
The computational and parameter overhead of our Spaced KD is essentially **identical** to that of online KD and self KD. Regardless of whether Spaced KD is introduced, the entire model (both teacher and student) employs the same network architecture and is trained for the same total number of epochs. In practice, since we need to train the teacher s epochs in advance and wait for the student to follow, this results in a slight delay in runtime (around 30%) but does **not** increase the computational overhead (i.e., the waiting teacher is frozen, not computing).
**Q2: Lower update frequency of the teacher model.**
Our claim "less frequent" means that, the teacher is distilled every s epochs in Spaced KD, rather than every iteration in online KD. We will clarify this in the final version.
**Q3: How Spaced KD applies to self KD and the additional parameter/training overhead.**
In self KD, the deepest layer (as the teacher) transfers knowledge to the shallow layers (as the student) at each training time step [1]. In its spaced version, we first train the model using the cross-entropy loss between the deepest layer's output and the ground-truth label for $s$ steps. We then train the model using the standard self KD loss [1] between the deepest layer's output and each shallow layer's output for $s$ steps. Therefore, our spaced version of self KD does **not** store additional parameters and does **not** increase the training overhead (i.e., the total number of epochs is not changed). We will make this clearer.
[1] Be your own teacher: Improve the performance of convolutional neural networks via self distillation. ICCV, 2019.
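A minimal sketch of the alternating schedule described above (the step granularity and labels are assumed for illustration; this is not the authors' code):

```python
def loss_mode(step, s):
    """Which loss the current training step uses under the spaced schedule:
    s steps of plain cross-entropy on the deepest head, then s steps that add
    the self-KD loss from the deepest head to the shallow heads, repeating."""
    return "ce" if (step // s) % 2 == 0 else "ce+kd"

# With s = 3, steps 0-2 use CE only, steps 3-5 add the self-KD loss, and so on.
schedule = [loss_mode(t, s=3) for t in range(12)]
```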
**Q4: The accuracy of the teacher model and its comparison to the student model.**
We would point out that in all experiments, "w/o KD" denotes the teacher's performance, "w/o Ours" and "w/ KD" denote the student's performance after online KD or self KD without our spacing effect, while "w/ Ours" denotes the student's performance after online KD or self KD with our spacing effect (referred to as the Spaced KD).
This is because existing online KD and self KD methods often employ (two copies of) the same network as both teacher and student to improve generalization of the model itself. Therefore, the student's performance will exceed the teacher's performance if the online KD and self KD methods work well.
**Q5: In Table 5, the authors should provide a comparison with other distillation methods.**
In addition to online KD in Table 5, we add more experiments of **self KD** with different corruption types and network architectures. As shown in the following table, Spaced KD can also largely improve the robustness of self KD in generalizing to different noisy scenarios.
|Attack|ResNet18 w/o Ours|ResNet18 w/ Ours|ResNet50 w/o Ours|ResNet50 w/ Ours|ResNet101 w/o Ours|ResNet101 w/ Ours|
|-|-|-|-|-|-|-|
|impulse_noise|50.65|**60.57**|62.18|**71.57**|59.33|**68.78**|
|zoom_blur|64.44|**68.60**|68.03|**72.13**|66.09|**71.16**|
|snow|61.30|**66.14**|64.72|**69.65**|64.03|**68.76**|
|frost|63.96|**67.80**|66.36|**71.81**|66.73|**70.21**|
|jpeg_compression|30.99|**34.67**|34.44|**35.34**|33.64|**34.76**|
|brightness|73.18|**75.92**|74.91|**79.19**|75.10|**78.91**|
**Q6 & Q7: Include citations and comparisons to more recent KD methods.**
Following your suggestion, we have included more recent KD methods, especially those for 2023-2024. For example, TSB [1] constructs superior "teachers" with temporal accumulator and spatial integrator. CTKD [2] controls the task difficulty level during the student’s learning career through a dynamic and learnable temperature. LSKD [3] employs a plug-and-play Z-score pre-process of logit standardization before applying softmax and KL divergence.
As shown in the following table, our Spaced KD can be combined with these methods and provide significant improvements under two datasets and two architectures.
|Dataset/Model|Method|TSB [1]|CTKD [2]|LSKD [3]|
|-|-|-|-|-|
|CIFAR-100/ResNet-18|w/o KD|67.65|67.86|67.94|
| |w/ KD|71.70|69.41|70.74|
| |w/ Ours|**72.82**|**71.12**|**71.76**|
|CIFAR-100/DeiT-Tiny|w/o KD|51.90|53.31|52.31|
| |w/ KD|52.63|54.20|52.93|
| |w/ Ours|**55.47**|**54.72**|**53.83**|
|Tiny-IN/ResNet-18|w/o KD|55.21|53.03|54.05|
| |w/ KD|59.92|58.78|59.30|
| |w/ Ours|**61.65**|**60.32**|**60.28**|
| Tiny-IN/DeiT-Tiny|w/o KD|40.29|40.82|39.65|
| |w/ KD|40.13|41.22|41.14|
| |w/ Ours|**43.36**|**41.60**|**41.48**|
[1] Online Knowledge Distillation by Temporal-Spatial Boosting. WACV, 2022. (requested by Reviewer C5Ts)
[2] Curriculum Temperature for Knowledge Distillation. AAAI, 2023.
[3] Logit Standardization in Knowledge Distillation. CVPR, 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' response, additional results suggest that Spaced KD can indeed improve the performance of some recent KD methods. However, I still have a few questions. However, I still have some concerns that I would like the author to address. As Reviewer bqdc suggests, the authors need to compare with more SOTA KD methods. Also, the author may have misunderstood my initial comment. It is hoped that the authors will compare their method to the SOTA KD methods propsed in other papers in Table 5, rather than just to their baseline, so that the performance advantages of their method can be further confirmed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your timely feedback. Since our Spaced KD is designed to improve generalization of a deep learning model itself with the same architecture and dataset, we mainly consider online KD, self KD, and relatively advanced methods of these two avenues in our paper (as well as our rebuttal results). It is worth noting that many state-of-the-art KD methods are not intended for this purpose (most of them focus on distilling a large teacher to a small student), making it difficult to adapt between our and their settings.
We have tried our best to search online / self KD methods and find that DTSKD [1] is the state-of-the-art, whose reported results outperform MixSKD (ECCV'22), PS-KD (CVPR'21), Tf-KD (CVPR'20), CS-KD (CVPR'20), etc. We run its officially released code and implement Spaced KD on it. For CIFAR-100 with VGG-16, ResNet-18 and ResNext-18, the improved performance of Spaced KD over DTSKD (running the default setting of its officially released code) is **+1.40%**, **+0.88%** and **+1.24%** under exactly the same configuration, which demonstrates the advantages of our approach.
Due to the very limited time during the rebuttal, we will finish the experiments with other architecture/dataset setups and add them in the final version.
[1] Dual teachers for self-knowledge distillation. Pattern Recognition, 2024.
---
Summary: The paper introduces Spaced Knowledge Distillation (Spaced KD), a novel method drawing on the biological spacing effect to enhance generalization in online and self-knowledge distillation by inserting intervals between teacher and student training steps. It makes notable contributions: firstly, the bio-inspired strategy of adding temporal intervals between teacher updates and student distillation, which results in flatter loss landscapes; secondly, the theoretical analysis that shows Spaced KD converges to flatter minima through Hessian trace analysis; and lastly, the empirical validation that reveals significant performance improvements.
Claims And Evidence: Performance improvements (Tables 1–2) are backed by extensive experiments.
Flat minima hypothesis is validated via noise robustness tests (Fig. 4, Page 8) and Hessian analysis (Sec. 4.2).
Methods And Evaluation Criteria: Methods: Spaced KD introduces a temporal interval s (e.g., 1.5 epochs) between teacher updates and student distillation, compatible with existing KD frameworks (Algorithms 1–3, Appendix).
Evaluation: Standard benchmarks (CIFAR-100, Tiny-ImageNet) and metrics (test accuracy) are appropriate. However, ImageNet-1K results are less comprehensive (Table 7, Page 14). The architectures validated in the experiments mainly include variants of ResNet, DeiT and PiT; including more diverse architectures, such as WRN and VGG, would help verify the effectiveness of the method. Additionally, it is mentioned (Page 2, Line 69) that the method has a plug-in effect for a wide range of self-distillation and online distillation methods. However, the improvements over existing methods (Table 4) do not cover the latest methods, and the validated architectures/datasets are only ResNet-18/CIFAR-100.
Theoretical Claims: Theorem 4.4 (Page 4) links Spaced KD to flatter minima via Hessian trace analysis. The proof (Appendix A.1) assumes over-parameterized models and linearized dynamics, which may not fully capture real-world DNN training.
Experimental Designs Or Analyses: Strengths: Ablation studies on interval sensitivity (Fig. 2, Page 6) and critical timing (Fig. 3, Page 7) are thorough.
Weakness: The baseline comparison lacks state-of-the-art methods (e.g., contrastive distillation). Tables 1–2 focus on older baselines (e.g., BAN, DML).
Supplementary Material: Appendix: Includes proofs, pseudo-code (A.10), and additional experiments (e.g., adversarial attacks in Table 13). However, some details (e.g., hyperparameters for transformer training) are missing.
Relation To Broader Scientific Literature: Connects KD with biological spacing effect (Page 1, Lines 30–40), leveraging prior work on flat minima (Keskar et al., 2016) and online/self-KD (Zhang et al., 2018).
Essential References Not Discussed: Online Knowledge Distillation by Temporal-Spatial Boosting
C. Li, Z. Wang and H. Qi, "Online Knowledge Distillation by Temporal-Spatial Boosting," doi: 10.1109/WACV51458.2022.00354.
A recent work on rehearsal-based KD with temporal intervals, which shares conceptual similarities but is not cited.
Other Strengths And Weaknesses: Originality: Novel integration of neuroscience principles into KD.
Clarity: Well-structured, but the pseudo-code (Appendix A.10) lacks implementation details (e.g., gradient accumulation for interval s).
Other Comments Or Suggestions: Clarity: The term "space interval" (Page 4) could be confused with spatial intervals; "temporal interval" is more precise.
Questions For Authors: Why is the optimal interval s = 1.5 epochs (Page 7)? Is this dataset-dependent? A theoretical justification is missing.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We provide a point-to-point response as follows. We hope you may consider this a sufficient reason to raise the score. If you have any further questions, please let us know.
**Q1: ImageNet-1K results are less comprehensive in Table 7.**
We would respectfully point out that the ImageNet-1K results are presented in both Table 7 and Table 8, including different KD paradigms (online KD and self KD) and network architectures (ResNet-18 and Deit-Tiny). Our Spaced KD provides improvements in all cases.
**Q2: Evaluation of more recent KD methods with more scenarios.**
As shown in the response to **Reviewer bqdc's Q1**, our proposed Spaced KD brings significant improvements to a range of more recent KD methods under two datasets and two architectures. We will add these results in the final version.
**Q3: The proof (Appendix A.1) assumes over-parameterized models and linearized dynamics, which may not fully capture real-world DNN training.**
We would respectfully argue that state-of-the-art DNNs are often **over-parameterized** to improve generalization. This property results in multiple local minima with similar training errors but different testing errors, often reflected in the flatness of loss landscape. We therefore adopt the over-parameterization assumption to analyze how to converge to a flatter loss landscape.
Also, we would point out that we assume **local linearization** around the convergence point, rather than global linearization. As shown in previous work [1], SGD eventually selects a loss minima with linear stability (i.e., low errors with moderate linear disturbance), therefore ensuring flat loss landscape and improving generalization.
We will add more explanations to these two assumptions in the final version.
[1] The alignment property of SGD noise and how it helps select flat minima: A stability analysis. NeurIPS, 2022.
**Q4: Some details (e.g., hyperparameters for transformer training) are missing.**
For transformer training, we adopt the well-established training pipeline tailored for the benchmark datasets [1-3]. We will add more detailed descriptions in the final version.
[1] Efficient Training of Visual Transformers with Small Datasets, NeurIPS. 2021.
[2] Locality Guidance for Improving Vision Transformers on Tiny Datasets. ECCV, 2022.
[3] Logit Standardization in Knowledge Distillation. CVPR, 2024.
**Q5: One related work [1] is not cited.**
Thanks. We have conceptually analyzed this work and empirically demonstrated Spaced KD's plug-in benefit on it (see response to **Reviewer bqdc's Q1**).
**Q6: The pseudo-code (Appendix A.10) lacks implementation details (e.g., gradient accumulation for interval s).**
We would clarify that our proposed Spaced KD inherently avoids gradient accumulation through its dual-loop design. The hyperparameter s (i.e., temporal interval) operates as a temporal decoupler rather than a gradient accumulation window. With the outer loop, the teacher model updates its parameters immediately after each batch (Algorithm 2, line 5), with no gradient retention. With the inner loop (when $|\mathcal{R}| = s$), the teacher’s parameters remain frozen while the student updates using the cached samples, ensuring no backward passes occur through the teacher in this phase. This design strictly segregates the gradient flows: the teacher’s gradients are computed and applied instantly in the outer loop, while the student’s gradients are confined to the inner loop without cross-interval persistence. We will add more explanations to make this clearer.
**Q7: The "temporal interval" is more precise than "space interval".**
Following your suggestion, we will modify our expression for better clarity.
**Q8: Why is the optimal interval s = 1.5 epochs (Page 7)? Is this dataset-dependent? A theoretical justification is missing.**
We theoretically demonstrate that there exists a desirable temporal interval to improve generalization of online KD (as well as self KD) and empirically investigate the selection of specific values (see online KD in Table 6, and self KD in the response to **Reviewer bqdc's Q2**). We find that the desirable temporal interval and its effectiveness are related to the strength of SGD-induced variability (target of KD paradigms, learning rate, batch size, etc.), and are relatively **insensitive** to different network architectures and benchmark datasets.
Since the SGD-induced variability of online KD (targeting the entire network) is larger than that of self KD (targeting a few network blocks), the former requires a smaller temporal interval to aggregate such variability to obtain an appropriate teacher-student gap. We therefore select s=1.5 for online KD and s=4.0 for self KD as the default implementation. We further investigate the impact of learning rate and batch size in Fig.5, which together with the temporal interval affect the overall performance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the careful revision. The experimental analysis can be enhanced in the formal version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your timely feedback. We will definitely include the additional experiments and the enhanced analysis in the formal version.
---
Summary: This paper proposes a Spaced KD strategy, inspired by the spacing effect in biological learning and memory. Overall, the experiments verify the effectiveness of the proposed Spaced KD compared with self KD (Zhang et al., 2019). However, many KD methods have been proposed in recent years, and comparisons with SOTA KD methods are missing.
Claims And Evidence: Overall the claims are well supported. However, whether the space KD is compatible with other SOTA KD methods is not well analyzed.
Methods And Evaluation Criteria: The evaluation criteria is reasonable
Theoretical Claims: The theoretical basis is the spacing effect in biological learning and memory. I think it is correct.
Experimental Designs Or Analyses: 1. Missing comparisons with SOTA KD methods
2. Missing experiments on whether the proposed strategy is compatible with SOTA KD methods.
Supplementary Material: The supplementary material well supports some claims in the main draft.
Relation To Broader Scientific Literature: No problems with this part.
Essential References Not Discussed: Missing review on latest KD methods.
Other Strengths And Weaknesses: 1. Missing comparisons with SOTA KD methods
2. Missing experiments on whether the proposed strategy is compatible with SOTA KD methods.
3. Missing review on latest KD methods.
4. In online KD, the knowledge distillation performance will be affected by the capacity gap. The discussion in 'Teacher-Student Gap' is shallow. It is not clear when and why the Spaced KD is not effective in online KD.
5. Missing discussions on the selection of Intervals for self KD and online KD.
6. Missing failure analysis. It is not clear when Spaced KD does not work.
Other Comments Or Suggestions: See [Other Strengths And Weaknesses]
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We provide a point-to-point response as follows. We hope you may consider this a sufficient reason to raise the score. If you have any further questions, please let us know.
**Q1: Review, comparison, and compatibility experiment on latest KD methods.**
Following your suggestion, we have included more recent KD methods. Here we discuss some representative ones: TSB [1] constructs superior "teachers" with temporal accumulator and spatial integrator. CTKD [2] controls the task difficulty level during the student’s learning career through a dynamic and learnable temperature. LSKD [3] employs a plug-and-play Z-score pre-process of logit standardization before applying softmax and KL divergence.
As shown in the following table, our Spaced KD can be combined with these methods and provide significant improvements under two datasets and two architectures.
|Dataset/Model|Method|TSB [1]|CTKD [2]|LSKD [3]|
|-|-|-|-|-|
|CIFAR-100/ResNet-18|w/o KD|67.65|67.86|67.94|
| |w/ KD|71.70|69.41|70.74|
| |w/ Ours|**72.82**|**71.12**|**71.76**|
|CIFAR-100/DeiT-Tiny|w/o KD|51.90|53.31|52.31|
| |w/ KD|52.63|54.20|52.93|
| |w/ Ours|**55.47**|**54.72**|**53.83**|
|Tiny-IN/ResNet-18|w/o KD|55.21|53.03|54.05|
| |w/ KD|59.92|58.78|59.30|
| |w/ Ours|**61.65**|**60.32**|**60.28**|
| Tiny-IN/DeiT-Tiny|w/o KD|40.29|40.82|39.65|
| |w/ KD|40.13|41.22|41.14|
| |w/ Ours|**43.36**|**41.60**|**41.48**|
[1] Online Knowledge Distillation by Temporal-Spatial Boosting. WACV, 2022.
[2] Curriculum Temperature for Knowledge Distillation. AAAI, 2023.
[3] Logit Standardization in Knowledge Distillation. CVPR, 2024.
**Q2: Selection of intervals for self KD and online KD.**
We theoretically demonstrate that there exists a desirable temporal interval to improve generalization of online KD (as well as self KD) and empirically investigate the selection of specific values (see online KD in Table 6, and self KD in the following table). We find that the desirable temporal interval for a given KD paradigm is relatively stable across different network architectures and benchmark datasets.
Since the SGD-induced variability of online KD (targeting the entire network) is larger than that of self KD (targeting a few network blocks), the latter requires a larger temporal interval to aggregate such variability to obtain an appropriate teacher-student gap. We therefore select s=1.5 for online KD and s=4.0 for self KD as the default implementation.
Temporal interval of self KD:
|Interval (epochs)|0|1|2|3|4|8|
|-|-|-|-|-|-|-|
|CIFAR-100/ResNet-18|73.25|74.27|75.15|74.30|75.73|76.41|
|CIFAR-100/ResNet-50|75.73|76.67| 79.27|79.89|79.43|79.44|
|CIFAR-100/ResNet-101|76.16|75.97| 79.02|79.27|79.24|79.64|
**Q3 & Q4: More discussion of ``Teacher-Student Gap'' in online KD and its failure analysis.**
We agree that the capacity gap between teacher and student is critical to the online KD performance. In previous studies of (online) KD, the teacher-student gap is regulated in various dimensions, such as the training data, network architecture, innate randomness (random initialization and SGD), etc. In this work, our main motivation lies in improving generalization of the model itself with **identical** network architectures, random initialization and data sources between teacher and student. We therefore characterize the teacher-student gap in online KD into the interplay of the SGD-induced variability and the temporal interval that aggregates such variability.
Theoretically, we demonstrate that a proper temporal interval between teacher and student (i.e., a proper teacher-student gap) helps the model to find flatter local minima using SGD. This is because the teacher that is slightly ahead in training provides a well-defined trajectory, ensuring low errors along so-called informative direction to improve generalization. In contrast, the naive SGD only ensures low errors in random directions around the convergence point, which limits the "radius" of loss flatness (see Sec.4.2).
Empirically, we perform extensive experiments to validate our proposal in online KD across a range of datasets and architectures. Aligned with our theoretical analysis, the proposed Spaced KD is most effective only when the temporal interval is set to an appropriate value (Fig.2 and Table 6). When the temporal interval is too large, the spaced version of online KD is closer to offline KD and the improvement is compromised. This is because the overly large teacher-student gap causes the teacher and student to converge to different local minima, which violates the assumption of our theoretical analysis.
In the absence of differences in other spatial elements that produce the teacher-student gap, the effectiveness of our Spaced KD is regulated by the temporal interval in SGD. We believe that the effectiveness of our Spaced KD will be affected if the teacher-student gap produced by such spatial elements is too large, which is out of scope of the current focus.
Learning Initial Basis Selection for Linear Programming via Duality-Inspired Tripartite Graph Representation and Comprehensive Supervision
Accept (poster)
---
Summary: Linear Programming (LP) is fundamental to numerous real-world applications, driving significant investment and research into improving the Simplex method, a widely used algorithm for solving LPs. Over decades, various heuristics have been developed to enhance solver efficiency, one of which is choosing an optimal initial basis, a critical factor influencing solver performance.
In this paper, the authors propose a novel framework that leverages a new way of representing LPs and a specialized loss function to predict a superior starting point for the Simplex algorithm. Their approach is validated through extensive experiments, demonstrating state-of-the-art (SOTA) performance improvements over existing heuristics, thereby showcasing its potential to accelerate LP solvers effectively.
Claims And Evidence: Yes. The evidence supports the claims. I have some clarifying questions. Please see the sections below.
Methods And Evaluation Criteria: Overall methods are clear and well explained. I have a few questions.
- Fan et al. uses LIBSVM and STOCH datasets. Can the authors display their results on these two datasets as well?
- Fan et al. establish that their method is better than CPLEX, so it doesn’t seem necessary to compare the proposal with CPLEX.
- However, since Gurobi is also a widely accepted solver for LPs, is it possible to include Gurobi as one of the baselines (time or iterations)? This would help complete the picture of the initial basis selection problem and its standards.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experimental design looks good. All ablations are clearly showing the efficacy of their proposal.
I have a few proposals for an additional ablation for better understanding
- $L_{multi}$ - How are $\theta$ and $\mu_k$ selected? Did the authors do some hyperparameter tuning on these parameters? Perhaps, a simple ablation would be to do random sampling and compare that with the chosen one.
- Tripartite representation has also been well explained. However, just like how the effect of the number of message passing round is understood in Table 11 (in Appendix), it will be useful to have such a table for tripartite representation. I am particularly interested in what power does N=0 (i.e., only the messages passed back and forth from the global node).
There is a gnawing worry that the improvements might just be due to the increase in the number of parameters. It would be useful to have details on:
- The number of learnable parameters in bipartite vs tripartite representation
- For all the datasets, geometric mean of the number of nodes and edges in bipartite vs tripartite representation
- Comparison of runtime of a single pass between bipartite vs tripartite representation
Supplementary Material: Yes. All of it.
Relation To Broader Scientific Literature: The authors’ proposal for tripartite representation of LPs for GNN is novel. This could actually be used across several other deep learning-based heuristic design for solvers as well as other deep learning-aided combinatorial optimization problems.
Their work falls under the broader theme of making LP solvers faster using deep learning methods.
Their work has also identified core issues in dealing with predictions in LP such as label inconsistencies, which haven’t been addressed in the literature.
Corresponding to the specific problem of Initial Basis Selection, the authors improve upon their predecessor and report substantial improvements.
Essential References Not Discussed: It looks good.
Other Strengths And Weaknesses: Authors proposal is quite novel and the results show substantial improvement over the benchmarks. I have suggestions for adding a baseline and showcasing the performance on more datasets as used by their predecessor work. Additional suggestion on ablation will also make for a better work.
Other Comments Or Suggestions: - The notation in the Basis section on page 2 is wrong: $B_x$ should be in the subscript. In addition, it should be stated explicitly that [ ] means concatenation.
- $x_{N_x}$ is wrongly placed inside the second bracket
Questions For Authors: - What is the form of $l$ in Equation 4?
- It will be useful to understand when the tripartite and bipartite representations might look similar. Can the authors take a few examples, such as $l_x=0$, $u_x=\infty$, $l_s=-\infty$, $u_s=b$? Some examples would help in understanding how the representations might differ wildly between the two cases.
In general, I am optimistic about the proposal. I am inclined to increase my rating once I hear the authors' rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: ### Fan et al. uses LIBSVM and STOCH datasets. Can the authors display their results on these two datasets as well?
> It would certainly be helpful to compare results across all datasets from the original work. However, since these two datasets were generated by the authors and are not publicly available, we are unable to reproduce them. Instead, we incorporate other publicly available datasets to expand our evaluation.
### It will be useful to have such a table (Table 11 in Appendix)) for tripartite representation. I am particularly interested in what power does N=0 (i.e., only the messages passed back and forth from the global node).
> We add experiments using the tripartite graph-based GNN with different numbers of bidirectional message-passing layers (0, 2, 4 and 6, respectively), compared to the default solving mode without warm start. We use the base training setup, without the loss function for basic variable selection, the loss function for feasibility, or label preprocessing. Here are the iteration counts and solving times on the unpresolved datasets:
> Iteration number:
>
| datasets | n=0 | n=2 | n=4 | n=6 | default |
| -------------- | ----- | ----- | ----- | ----- | ---- |
| Mirp | 18798 | 14017 | 12109 | 10813 | 25432 |
| Anonymous | 60146 | 23583 | 23166 | 21342 | 35330 |
| Load_balancing | 4070 | 4357 | 4291 | 4512 | 7965 |
| geomean | 16633 | 11293 | 10637 | 10136 | 19271 |
> Solving time in seconds:
>
| datasets | n=0 | n=2 | n=4 | n=6 | default |
| -------------- | ----- | ----- | ----- | ----- | ---- |
| Mirp | 8.29 | 7.43 | 7.07 | 6.39 | 11.17 |
| Anonymous | 7.19 | 7.12 | 6.75 | 6.87 | 9.08 |
| Load_balancing | 16.71 | 10.49 | 10.98 | 8.85 | 7965 |
| geomean | 9.99 | 8.22 | 8.06 | 7.30 | 11.39 |
> Due to time constraints, we have not yet completed testing on the Mirp2 dataset, but we will provide the results later. It is evident that bidirectional message passing is crucial as it effectively utilizes the coefficients in the constraint matrix. Additionally, adding more bidirectional message passing layers results in a slight improvement.
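> As an aside on how the geomean rows in the tables above are typically obtained: a minimal, illustrative sketch of the standard geometric-mean computation in plain Python (not code from the paper):
>
> ```python
> import math
>
> def geomean(values):
>     # Geometric mean via the mean of logs, numerically stable
>     # for large iteration counts.
>     return math.exp(sum(math.log(v) for v in values) / len(values))
>
> # Per-dataset iteration counts for n=0 from the table above
> print(round(geomean([18798, 60146, 4070])))
> ```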
### There is a knawing worry that the improvements might just be because of the increase in the number of parameters.
> The computational overhead of our tripartite graph-based model with two bidirectional message-passing steps is comparable to that of the bipartite graph-based model with six message-passing steps. As shown in Table 11 of the appendix, increasing the number of message-passing layers in the bipartite graph-based model does not necessarily improve prediction quality. Our GNN architecture effectively leverages its complexity, making full use of the computational overhead to predict a higher-quality basis.
### What is the form of $L$ in Equation 4?
### How are $\theta$ and $\mu_k$ selected? Did the authors do some hyperparameter tuning on these parameters?
> $\mu_k$ is the weight of the loss calculated from the $k^{th}$ label using Equation 2, which defines the weighted cross-entropy. The value of $\mu_k$ is given in Equation 16 and is determined based on experience rather than precise fine-tuning. It is worth investigating the hyperparameter settings for $\theta$, $k$, and $\mu_k$.
### It will be useful to understand when tripartite and bipartite representation might look similar. Can the authors take a few examples such as $l_x=0$, $u_x=inf$, $l_s=-inf$, $u_s=b$? Some examples that can help understand how representations might differ wildly under two cases.
> For LP problems in standard form:
\begin{equation}
\begin{aligned}
\min_{\boldsymbol{x} \in \mathbb{R}^n} \quad & \boldsymbol{c}^\top \boldsymbol{x} \\\\
\text{s.t.} \quad & \boldsymbol{A} \boldsymbol{x} \leq \boldsymbol{b}, \\\\
& \boldsymbol{x} \geq 0,
\end{aligned}
\end{equation}
> we have $l_x = 0$, $u_x = \infty$, $l_s = -\infty$, $u_s = b$. In the tripartite GNN, the initial message passing occurs only between the global node and nodes in $V_{primal}$ (see Figure 1(a)), which differs from the bipartite GNN. The other difference is that the tripartite GNN introduces additional message passing between $V_{dual}$ and $V_{primal}$ via identity edges—an interaction that may be redundant in this case. As a result, the bipartite GNN could be preferable since it maintains a similar structure to the tripartite GNN while incurring lower computational overhead.
> However, for LP problems in a more general form, as shown in Equation (1), where $l_x$, $u_x$, $l_s$, and $u_s$ are all finite numbers, the initial message passing between the global node and other nodes introduces richer information into $V_{dual}$, $V_{primal}$ and $C$. Subsequent message passing between $V_{dual}$, $V_{primal}$ and $C$ can then fully exploit these enriched node embeddings. In this scenario, the tripartite representation differs significantly from the bipartite one, allowing the tripartite GNN to exhibit greater expressiveness.
---
Summary: The paper proposes a novel approach for selecting an initial basis for linear programming (LP) solvers using a duality-inspired tripartite graph neural network (GNN).
The following are the three main contributions:
- A tripartite graph representation for LP problems inspired by duality theory, which enhances feature extraction and GNN expressiveness
- Novel loss functions targeting basic variable selection and basis feasibility, along with multi-level labels from the solving path
- Data preprocessing schemes to address label inconsistencies in solver-derived data
The approach significantly outperforms state-of-the-art methods in predicting initial basis with higher accuracy, reducing the number of iterations and solving time in LP solvers.
## update after rebuttal
I thank the authors for the detailed rebuttal, their response sufficiently addresses my concerns, hence I am updating my scores.
Claims And Evidence: The claims in the paper are supported by convincing evidence. The authors demonstrate that the proposed approach
- outperforms the state-of-the-art bipartite model across datasets.
- reduces iterations for the previous state-of-the-art when used for a warm start in the HiGHS LP solver.
- they also include ablation studies to demonstrate the value of each component (basic variable selection, feasibility loss, and preprocessing)
Methods And Evaluation Criteria: The authors evaluate their approach on four well-known datasets (Mirp1, Mirp2, Anonymous, and Load Balance) from MIP problems relaxed to LP and compare it with the state-of-the-art bipartite GNN model (Fan et al., 2023). In addition, they perform a detailed ablation study to demonstrate the utility of each of the proposed approach's core components.
The experiments are well-designed to test the paper's key claims, with appropriate metrics to evaluate prediction quality and performance (iterations).
Theoretical Claims: The claims are generally well-supported with mathematical formulations and examples, particularly regarding duality theory and the message-passing mechanism in the tripartite graph.
Experimental Designs Or Analyses: Refer to Methods and Evaluation Criteria
Supplementary Material: Yes
The supplementary material provides the details about,
- mathematical formulations of duality in linear programming
- message passing mechanism on the tripartite graph
- details of dataset, and model set up
- analysis of labels given by the LP solvers
Relation To Broader Scientific Literature: The authors acknowledge prior work in these areas and clearly articulate how their approach extends beyond previous methods. The contribution of the paper addresses the specific gaps in the literature in the intersection of deep learning and mathematical optimization.
- the paper extends the bipartite graph representation of optimization problems
- builds upon on work for tripartite graph representations and extends it to handle general LP form
- addresses a fundamental issue in learning-based optimization
Essential References Not Discussed: While the paper covers most relevant literature, it does not discuss neural network-based approaches for predicting optimal solutions to optimization problems beyond basis selection.
Other Strengths And Weaknesses: Weaknesses
- limited discussion of computational overhead introduced by the more complex GNN architecture
- it is unclear how the proposed approach will scale to very large LP instances beyond those in the test datasets
Other Comments Or Suggestions: Refer to other sections
Questions For Authors: - How does the computational cost of the tripartite GNN compare to the bipartite model, and how does this balance with the solver time savings? It would be valuable to include runtime comparison of the GNN inference time versus the time saved in the solving process
- What are the limitations of your approach for very large-scale LP problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: ### limited discussion of computational overhead introduced by the more complex GNN architecture
### How does the computational cost of the tripartite GNN compare to the bipartite model, and how does this balance with the solver time savings? It would be valuable to include runtime comparison of the GNN inference time versus the time saved in the solving process
> The computational overhead of our tripartite graph-based model with two bidirectional message-passing steps is comparable to that of the bipartite graph-based model with six message-passing steps, typically taking less than 1 second. As shown in Table 11 of the appendix, increasing the number of message-passing layers in the bipartite graph-based model does not necessarily lead to better prediction quality. Our GNN architecture effectively leverages its complexity, fully utilizing the computational overhead to predict a higher-quality basis. In simpler LP cases, this added overhead may slightly increase the total solving time, including inference time. However, for more complex or larger problems, the impact of this overhead becomes negligible.
### unclear how the proposed approach will scale to very large LP instances beyond those in the test datasets
### What are the limitations of your approach for very large-scale LP problems?
> Our approach scales effectively to large LP instances. In the Mirp2 dataset, some instances are so large that they require several minutes to several hours for the solver to process. We demonstrate the acceleration achieved on these large instances as follows:
>
| instances | iter_default | iter_tripartiteBMP | time_default | time_tripartiteBMP |
| ----------------------- | ------------ | ------------------ | ------------ | ------------------ |
| LR1_DR04_VC05_V17b_t360 | 561299 | 409428 | 491.48 | 250.7 |
| LR1_DR05_VC05_V25b_t360 | 706451 | 777452 | 474.74 | 513.63 |
| LR1_DR08_VC05_V40b_t180 | 301789 | 309473 | 143.06 | 128.84 |
| LR1_DR08_VC10_V40b_t120 | 267155 | 298875 | 112.76 | 131.6 |
| LR1_DR12_VC10_V70a_t180 | 836561 | 619425 | 751.76 | 606.93 |
| geomean | 484672 | 448942 | 309.27 | 265.74 |
> The iter_* columns represent the number of iterations, while the time_* columns indicate the solving time in seconds. In practice, the GNN model is particularly useful for large LP problems, as its computational overhead is negligible compared to the overall solving time. However, for extremely large LPs—which translate to massive graphs—memory constraints may require the use of graph sampling techniques like GraphSAGE during both training and inference.
---
Summary: This paper proposes a GNN-based approach for learning initial basis selection in LP, aiming to accelerate the simplex method. Inspired by LP duality, the authors introduce a tripartite graph representation to better capture problem structure. Additionally, they design new loss functions to improve basic variable selection and basis feasibility, along with data preprocessing schemes to reduce label inconsistencies.
## Update after rebuttal
I thank the authors for their thoughtful and detailed rebuttal. The clarifications regarding the distinction between basis accuracy and solver acceleration were particularly helpful and addressed my earlier confusion about the paper’s core claims.
I also appreciate the additional experiments and analysis related to the tripartite graph representation and the choice of GNN architecture. The authors not only provided ablation studies with varying message passing depths but also included comparisons with Graph Transformer architectures. Their justification for using the GraphConv-based model — highlighting interpretability, computational efficiency, and consistency with related work — is convincing and well-supported by the new results.
Given these clarifications and improvements, I believe the authors have sufficiently addressed my concerns. I have therefore increased my score to a positive recommendation.
Claims And Evidence: While the paper presents an interesting method, the writing is sometimes contradictory, leading to confusion about the claims:
- The abstract states, "a closer initial basis does not always result in greater acceleration," but later mentions "achieving high prediction accuracy." It is unclear what accuracy refers to in this context—closeness to the optimal basis or actual solver acceleration.
- Similarly, the introduction states, "A better starting point may be closer to the potential optimal solution in terms of logical pivot distance, often resulting in fewer simplex iterations," which contradicts the earlier claim that closeness does not always lead to acceleration.
- Clarifying these statements would improve the paper’s logical consistency and ensure a clearer understanding of the contributions.
Methods And Evaluation Criteria: The paper proposes a tripartite graph representation to replace the standard bipartite graph representation used in LP-based GNN models. This is a reasonable design choice, but the experimental validation is limited:
- The tripartite graph should be tested against multiple GNN architectures (not just one) to demonstrate its effectiveness across different models.
- The impact of the tripartite representation should be better isolated from other factors (e.g., loss functions, preprocessing).
Theoretical Claims: The theoretical motivation for the tripartite representation is reasonable, as it leverages LP duality.
Experimental Designs Or Analyses: The experiments primarily compare the tripartite GNN to a single bipartite GNN model. To strengthen the evaluation:
- More GNN architectures should be tested to confirm the robustness of the tripartite representation.
- The impact of each proposed modification (graph representation, loss functions, preprocessing) should be evaluated separately to better understand their individual contributions.
Supplementary Material: None
Relation To Broader Scientific Literature: The paper correctly cites LP and MIP learning-based methods, but it would benefit from a broader discussion on alternative warm-starting techniques beyond GNNs.
Essential References Not Discussed: No major missing references, but the paper could compare its approach to other LP warm-starting methods that do not use GNNs.
Other Strengths And Weaknesses: Strengths
- The tripartite graph representation is a novel adaptation based on LP duality.
- The method shows strong experimental improvements over the SOTA bipartite GNN model.
- The data preprocessing steps help address label inconsistencies in LP solvers.
Weaknesses
- Potential desk rejection risk: The paper omits required formatting elements (e.g., line numbers, "Anonymous Authors"), which could lead to desk rejection.
- Logical inconsistencies: The discussion on closeness of the initial basis vs. solver acceleration is contradictory.
- Limited experimental diversity: The tripartite graph representation should be tested on multiple GNN architectures to confirm its general effectiveness.
Other Comments Or Suggestions: None
Questions For Authors: See Weaknesses. I find the proposed method interesting and promising, but the writing inconsistencies, formatting errors, and limited experimental validation raise concerns. I am assigning a borderline score for now but am open to raising my score if the authors provide strong clarifications and additional experiments in their response.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: ### The abstract states, "a closer initial basis does not always result in greater acceleration," but later mentions "achieving high prediction accuracy." It is unclear what accuracy refers to in this context—closeness to the optimal basis or actual solver acceleration.
### Similarly, the introduction states, "A better starting point may be closer to the potential optimal solution in terms of logical pivot distance, often resulting in fewer simplex iterations," which contradicts the earlier claim that closeness does not always lead to acceleration.
### Clarifying these statements would improve the paper’s logical consistency and ensure a clearer understanding of the contributions.
> Accuracy measures how close the basis is to the optimal solution. We follow the same equation as in [Fan et al., 2023] (page 13) to compute accuracy. In general, a closer basis tends to reduce the number of iterations, which can, in turn, shorten solving time. Therefore, achieving high accuracy is valuable. Additionally, it serves as an indicator of the GNN model’s ability to learn and approximate the optimal basis.
>
> However, a closer initial basis does not always guarantee greater acceleration. If the initial basis is invalid, Phase I may require significant time to obtain a valid one, which could still be far from optimal. While predicting a closer basis is an intuitive way to speed up the solver, additional refinements are necessary to achieve actual performance gains.
>
> Our work improves the model's ability to predict a closer basis. More importantly, we go beyond accuracy (closeness) by prioritizing actual solver acceleration. Through a detailed analysis of the LP problem, we introduce additional techniques that significantly enhance practical solving speed, which remains the ultimate objective.
### The tripartite graph should be tested against multiple GNN architectures (not just one) to demonstrate its effectiveness across different models.
### The impact of the tripartite representation should be better isolated from other factors (e.g., loss functions, preprocessing).
> We add experiments using the tripartite graph-based GNN with different numbers of bidirectional message passing layers (0, 2, 4 and 6, respectively), compared to the default solving mode without warm start. We use base training without the loss function for basic variable selection, the loss function for feasibility, or label preprocessing. Here are the iteration counts and solving times on the unpresolved datasets:
> Iteration number:
>
| datasets | n=0 | n=2 | n=4 | n=6 | default |
| -------------- | ----- | ----- | ----- | ----- | ---- |
| Mirp | 18798 | 14017 | 12109 | 10813 | 25432 |
| Anonymous | 60146 | 23583 | 23166 | 21342 | 35330 |
| Load_balancing | 4070 | 4357 | 4291 | 4512 | 7965 |
| geomean | 16633 | 11293 | 10637 | 10136 | 19271 |
> Solving time in seconds:
>
| datasets | n=0 | n=2 | n=4 | n=6 | default |
| -------------- | ----- | ----- | ----- | ----- | ---- |
| Mirp | 8.29 | 7.43 | 7.07 | 6.39 | 11.17 |
| Anonymous | 7.19 | 7.12 | 6.75 | 6.87 | 9.08 |
| Load_balancing | 16.71 | 10.49 | 10.98 | 8.85 | 7965 |
| geomean | 9.99 | 8.22 | 8.06 | 7.30 | 11.39 |
> Due to time constraints, we have not yet completed testing on the Mirp2 dataset, but we will provide the results later. It is evident that bidirectional message passing is crucial as it effectively utilizes the coefficients in the constraint matrix. Additionally, adding more bidirectional message passing layers results in a slight improvement. We also plan to evaluate the performance using other GNN architectures, such as GraphTransformer and GAT, and will share the results once available.
### The paper correctly cites LP and MIP learning-based methods, but it would benefit from a broader discussion on alternative warm-starting techniques beyond GNNs.
> Thanks for your suggestions. We will add more literature on warm-starting techniques and update the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed and thoughtful response. The clarifications provided have addressed most of my concerns, especially regarding the distinction between basis accuracy and actual solver acceleration. I appreciate the effort to explain the nuanced relationship between closeness to the optimal basis and practical performance improvements.
Regarding the point I raised earlier — “The tripartite graph should be tested against multiple GNN architectures (not just one) to demonstrate its effectiveness across different models. The impact of the tripartite representation should be better isolated from other factors (e.g., loss functions, preprocessing).” — I acknowledge that the authors have conducted ablation studies with different numbers of bidirectional message passing layers, which is helpful for understanding the influence of GNN depth. This is a valuable addition.
However, my original concern was slightly broader. The current GNN architecture used in this paper appears to be based on the one proposed by Qasim et al. (2019). While this is a reasonable and relevant choice, the field of graph neural networks has evolved rapidly in recent years, with more expressive and powerful architectures such as Graph Attention Networks, Graph Transformers, and others being widely adopted in various domains.
My question is: why was this particular architecture chosen over more recent alternatives? Was it based on empirical performance, computational considerations, or compatibility with the tripartite representation? I believe a brief discussion or justification of the architectural choice would make the contribution more robust and provide readers with better insight into the design decisions.
Again, thank you for the substantial improvements and clarifications in the revision.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the detailed and thoughtful feedback. We are glad to hear that the clarifications regarding basis accuracy and solver acceleration were helpful.
Regarding the choice of GNN architecture: we appreciate the reviewer’s suggestion to evaluate our tripartite representation across a broader range of architectures. As correctly noted, our current model is based on the architecture proposed by Qasim et al. (2019). We chose this architecture for three main reasons:
### Comparative Consistency:
This architecture is also used in the state-of-the-art work by Fan et al., which our method builds upon. To ensure a fair and direct comparison, we adopted their GNN design as a baseline.
### Analytical Interpretability:
We provide a detailed analysis of the message passing behavior of this architecture in Appendix A.2 (p.15). In particular, we aim for the amount of message passing to be proportional to the magnitude of the corresponding coefficients in the constraint matrix and the projection of changes in variables (or constraints) onto the respective rows (or columns). This behavior aligns well with the structure and nature of LP problems.
### Computational Efficiency:
Compared to more recent models such as Graph Transformers, this architecture is lightweight and computationally efficient, making it more practical for real-world applications.
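As a loose illustration of the coefficient-proportional message passing described under Analytical Interpretability, here is a minimal numpy sketch of one bidirectional step on an LP bipartite graph. The layer sizes, the ReLU nonlinearity, and the random weights are placeholders of our own, not the exact GraphConv architecture used in the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bmp_layer(A, h_con, h_var, W_c, W_v):
    """One bidirectional message-passing step on an LP bipartite graph.

    A      : (m, n) constraint matrix; |A[i, j]| scales the message on edge (i, j)
    h_con  : (m, d) constraint-node embeddings
    h_var  : (n, d) variable-node embeddings
    W_c/W_v: (2d, d) learnable projections (random placeholders here)
    """
    # constraints aggregate from variables, weighted by the coefficient magnitudes
    msg_to_con = np.abs(A) @ h_var                              # (m, d)
    h_con = relu(np.concatenate([h_con, msg_to_con], axis=1) @ W_c)
    # variables aggregate back from the updated constraints
    msg_to_var = np.abs(A).T @ h_con                            # (n, d)
    h_var = relu(np.concatenate([h_var, msg_to_var], axis=1) @ W_v)
    return h_con, h_var

rng = np.random.default_rng(0)
m, n, d = 4, 6, 8                                               # toy LP: 4 constraints, 6 variables
A = rng.standard_normal((m, n))
h_con, h_var = rng.standard_normal((m, d)), rng.standard_normal((n, d))
W_c, W_v = rng.standard_normal((2 * d, d)), rng.standard_normal((2 * d, d))
h_con, h_var = bmp_layer(A, h_con, h_var, W_c, W_v)
print(h_con.shape, h_var.shape)  # (4, 8) (6, 8)
```

Messages into each node scale with the magnitude of the corresponding constraint-matrix coefficients, which is the behavior analyzed in Appendix A.2.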
To further address the reviewer’s concern, we have added results using Graph Transformer architectures on the unpresolved datasets (excluding mirp2 due to time constraints). The following tables summarize the results:
Iteration count:
| datasets | bipartite-transformerConv | tripartite-transformerConv | tripartite-graphConv |
| -------------- | ----- | ----- | ----- |
| Mirp | 16022 | 16146 | 14017 |
| Anonymous | 4561 | 4804 | 4357 |
| Load_balancing | 23946 | 21124 | 23583 |
| geomean | 12051 | 11790 | 11294 |
Solving time in seconds:
| datasets | bipartite-transformerConv | tripartite-transformerConv | tripartite-graphConv |
| -------------- | ----- | ----- | ----- |
| Mirp | 7.24 | 6.96 | 6.04 |
| Anonymous | 9.87 | 9.75 | 5.74 |
| Load_balancing | 6.37 | 6.44 | 10.61 |
| geomean | 7.69 | 7.59 | 7.17 |
These results show a modest advantage of our tripartite representation over the bipartite baseline. Additionally, we observe that the GraphConv generally outperforms the TransformerConv, further supporting our claim that the GraphConv architecture aligns well with the structure of LP problems and is expressive enough to capture the necessary properties.
While a full exploration of alternative GNN backbones is beyond the scope of this work, we view this as an important and promising direction for future research. We thank the reviewer again for raising this valuable point.
---
Summary: The paper proposes a new Graph Neural Network model for predicting the initial basis in the simplex method for solving linear programming (LP) problems. They use a tripartite graph that includes a global node and also nodes for dual variables, in addition to the nodes for constraints and primal variables in the bipartite graph from previous work. Also, a new loss function is introduced to improve basic variable selection and basis feasibility. The proposed model significantly outperforms the state-of-the-art method in terms of prediction accuracy, reducing the number of iterations and solving time required by the LP solver. The effectiveness of each design component is evaluated through an ablation study.
Claims And Evidence: The paper presents experimental results on several standard LP datasets (Load Balancing, Anonymous, MIRP relaxed from MILP instances) to support their claims. They compare their model against the existing SOTA (bipartite GNNs) and the default solver setting. The tables and figures show improvements in prediction accuracy and solver performance metrics (iterations and time). They also include an ablation study to demonstrate the effectiveness of each component of their proposed method.
A few aspects could be improved:
* Comparison with other warm-start heuristics. The paper mentions a few non-ML methods for basis selection. It would be important to include them as a baseline, though previously probably has shown they are no better than the bipartite GNN.
* The instances used are from MILP benchmark with relaxed integral constraints. The paper doesn’t quite justify why these benchmarks are important for LP solving that translate to real-world impact. In other words, why not use benchmarks designed for LP instead of MILP?
Methods And Evaluation Criteria: See above comments about benchmarks and other baselines.
In addition, the instances selected seem easy to solve; on average, the runtime is less than a minute. It would be interesting to see how well this method performs on hard instances.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments are well-designed to demonstrate the effectiveness through prediction accuracies, runtime, and numbers of iterations.
However, at this point, it is unclear how effective the method is for larger/harder instances (e.g., instances that take 5 minutes or more to solve).
Supplementary Material: I didn't review the appendix.
Relation To Broader Scientific Literature: The paper studies an important problem related to linear optimization.
Essential References Not Discussed: Not that i'm aware of.
Other Strengths And Weaknesses: No other strengths or weaknesses i want to point out.
Other Comments Or Suggestions: A writing issue: there should be a space between words and citations; some of these spaces are missing.
Questions For Authors: In ablation study, why is tripartite without P (label preprocessing) the best?
Are the improvements statistically significant?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: ### Comparison with other warm-start heuristics. The paper mentions a few non-ML methods for basis selection. It would be important to include them as a baseline, though previously probably has shown they are no better than the bipartite GNN.
> Thanks for your suggestion! We will incorporate these non-ML methods as baselines in our paper.
### The instances used are from MILP benchmark with relaxed integral constraints. The paper doesn’t quite justify why these benchmarks are important for LP solving that translate to real-world impact. In other words, why not use benchmarks designed for LP instead of MILP?
> The current pure LP benchmarks are very diverse from instance to instance and large-scale -- this may be partly due to the strength of commercial solvers like Gurobi, CPLEX and COPT. In fact, there are few benchmarks containing similar distributions for training and testing data to evaluate learning-based methods, as learning-based LP solvers are still at an early stage.
> In contrast, the MILP benchmarks contain rich instances that can be divided into training and testing ones.
### In addition, the instances selected seem easy to solved, that is, on average the runtime is only less than a minute. It would be interesting to see how well this method performs on hard instances (e.g, instances that takes 5 minute or even more to solve).
> Our approach scales effectively to large LP instances. In the Mirp2 dataset, some instances are so large that they require several minutes to several hours for the solver to process. We demonstrate the acceleration achieved on these large instances as follows:
>
| instances | iter_default | iter_tripartiteBMP | time_default | time_tripartiteBMP |
| ----------------------- | ------------ | ------------------ | ------------ | ------------------ |
| LR1_DR04_VC05_V17b_t360 | 561299 | 409428 | 491.48 | 250.7 |
| LR1_DR05_VC05_V25b_t360 | 706451 | 777452 | 474.74 | 513.63 |
| LR1_DR08_VC05_V40b_t180 | 301789 | 309473 | 143.06 | 128.84 |
| LR1_DR08_VC10_V40b_t120 | 267155 | 298875 | 112.76 | 131.6 |
| LR1_DR12_VC10_V70a_t180 | 836561 | 619425 | 751.76 | 606.93 |
| geomean | 484672 | 448942 | 309.27 | 265.74 |
> The iter_* columns represent the number of iterations, while the time_* columns indicate the solving time in seconds. In practice, the GNN model is particularly useful for large LP problems, as its computational overhead is negligible compared to the overall solving time. However, for extremely large LPs—which translate to massive graphs—memory constraints may require the use of graph sampling techniques like GraphSAGE during both training and inference.
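For clarity, the geomean rows above are geometric means over the five instances; a quick sanity check in Python, with the solving-time columns copied from the table:

```python
import math

def geomean(xs):
    """Geometric mean: exp of the mean of the logs."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# solving times in seconds, copied from the table above
time_default   = [491.48, 474.74, 143.06, 112.76, 751.76]
time_tripartite = [250.70, 513.63, 128.84, 131.60, 606.93]

print(geomean(time_default))    # ≈ 309.3, matching the table's 309.27
print(geomean(time_tripartite)) # ≈ 265.7, matching the table's 265.74
```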
### A writing issue: There should be a space between words and citations. Some of them are missing.
> Thanks and we have polished the paper.
### In ablation study, why is tripartite without P (label preprocessing) the best? Are the improvements statistically significant?
> From Table 8, we observe that the only reason the tripartite model without P outperforms the one with P is due to a significant improvement on the Mirp1 dataset. This may be because, while processing the labels to reduce inconsistency helps the GNN learn and achieve better accuracy, a closer initial basis may sometimes be invalid, requiring more time in Phase I to repair it. However, this is not a general trend, as adding P consistently leads to better acceleration across all other datasets. | null | null | null | null | null | null |
---
Towards Escaping from Class Dependency Modeling for Multi-Dimensional Classification
Accept (poster)
Summary: The paper proposes an approach to multi-dimensional classification (MDC), named DeCOupling Multi-dimensional classification (DCOM). Different from most MDC methods which explicitly model class dependencies through classifier chains or probabilistic graphical models (PGMs), DCOM captures partial class dependencies by conditioning on original features and latent variables computed from the original features. DCOM is evaluated on a set of benchmark datasets and shows its effectiveness in MDC.
Claims And Evidence: - The assumption of Theorem 3.3 may not be easily satisfied in practice. Can the authors provide examples to demonstrate how it can be held?
- The authors claim that the learned latent factor can capture "critical feature information". However, only empirical results are provided and there are no theoretical results or visualizations for justifying its effectiveness.
Methods And Evaluation Criteria: The evaluation criteria (HS, EM and SEM) are appropriate for the MDC problem.
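For reference, HS, EM and SEM are typically computed as follows — a sketch based on definitions common in the MDC literature (the paper's exact formulations may differ slightly):

```python
import numpy as np

def mdc_metrics(Y_true, Y_pred):
    """Standard MDC metrics over label matrices of shape (n_examples, q)."""
    correct = (Y_true == Y_pred)                  # per-dimension correctness
    q = correct.shape[1]
    hs = correct.mean()                           # Hamming Score: average per-dimension accuracy
    em = correct.all(axis=1).mean()               # Exact Match: all q dimensions correct
    sem = (correct.sum(axis=1) >= q - 1).mean()   # Sub-Exact Match: at most one dimension wrong
    return hs, em, sem

# toy labels with q = 3 dimensions (illustrative, not from the paper)
Y_true = np.array([[0, 1, 2], [1, 0, 2], [2, 2, 1], [0, 0, 0]])
Y_pred = np.array([[0, 1, 2], [1, 1, 2], [2, 0, 0], [0, 0, 0]])
hs, em, sem = mdc_metrics(Y_true, Y_pred)
print(hs, em, sem)  # 0.75 0.5 0.75
```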
Theoretical Claims: I did not find issues in the derivations in the main text.
Experimental Designs Or Analyses: - The experimental design is mostly sound for validating DCOM's core contributions.
Weaknesses:
- No further analysis or visualization is performed for the learned latent factor.
- The authors claim that DCOM is computationally superior to MDC methods based on learning a graphical model. However, no experiments are performed to support this.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: DCOM links some aspects of latent variable models to MDC. The effectiveness of latent variable models (e.g. Gaussian mixture models and variational auto-encoders) has been demonstrated in generative modeling tasks in previous studies.
Essential References Not Discussed: To my knowledge, no.
Other Strengths And Weaknesses: Strengths:
- The paper is well-written.
Weaknesses:
- The proposed method lacks significant technical novelty.
- The authors did not compare DCOM to MDC methods built upon PGMs. The claim that DCOM is more computationally efficient is not supported by experimental evidence.
- The learned latent variable lacks interpretability.
Other Comments Or Suggestions: N/A.
Questions For Authors: Please see above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 1
---
Rebuttal 1:
Rebuttal: We want to express our sincere gratitude for your invaluable comments and suggestions. According to your comments in *Claims And Evidence*, *Experimental Designs Or Analyses* and *Other Strengths And Weaknesses*, we summarize the following questions.
The point-to-point responses are given as follows:
-**Q1: The assumption of Theorem 3.3 may not be easily satisfied in practice. Can the authors provide examples to demonstrate how it can be held?**
Answer for Q1: We sincerely appreciate your insightful question regarding the practical validity of the assumption of Theorem 3.3.
The basic condition of Theorem 3.3 is that the joint probability $p_{Y}(y_{\diamond 1},y_{\diamond 2},\dots,y_{\diamond q}) \to 0$, which can guide the selection of high-frequency class combinations when constructing $\mathcal{D}'$ in Eq.(10) (please refer to the answer for Q2 for Reviewer Kvke). Take dataset *Flare1* as an example: its 323 examples present 14 distinct class combinations, of which only 3 correspond to more than 10 examples. In other words, the empirical probability of most class combinations is less than 0.03.
As for the low-frequency class combinations involved in Theorem 3.3, we have
$$p^{\mathrm{(jt)}}(y_{\diamond},x_{\diamond},z_{\diamond})\le \int_{x_{\diamond}}\int_{z_{\diamond}}p^{\mathrm{(jt)}}(y_{\diamond},x_{\diamond},z_{\diamond})\,dz_{\diamond}\,dx_{\diamond}\le p_{Y}(y_{\diamond 1},\dots,y_{\diamond q})\to 0,$$
Accordingly, $p^{\mathrm{(jt)}}-\delta$ also tends to 0, which makes Assumption 3.4 easy to satisfy.
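The selection of high-frequency class combinations for $\mathcal{D}'$ can be illustrated with a toy label matrix. The rows and the threshold below are illustrative placeholders, not the real *Flare1* data:

```python
from collections import Counter

# toy stand-in for an MDC label matrix (each row is a class combination
# across q = 3 dimensions); illustrative only
labels = [
    (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0),
    (1, 0, 0), (1, 0, 0), (1, 0, 0),
    (0, 0, 1), (0, 0, 1),
    (1, 1, 0),
]

counts = Counter(labels)
n = len(labels)
# empirical joint probability of each class combination
probs = {combo: c / n for combo, c in counts.items()}

# combinations kept as high-frequency vs. low-frequency (covered by Theorem 3.3)
threshold = 0.25  # arbitrary illustrative cutoff
high = [c for c, p in probs.items() if p >= threshold]
low = [c for c, p in probs.items() if p < threshold]
print(high)  # [(0, 1, 0), (1, 0, 0)]
print(low)   # [(0, 0, 1), (1, 1, 0)]
```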
-**Q2: The authors claim that the learned latent factor can capture "critical feature information". However, only empirical results are provided and there are no theoretical results or visualizations for justifying its effectiveness.**
Answer for Q2: Thanks for the comment. Since most MDC data sets are tabular, we did not previously provide visualizations of the latent vectors. For image data sets, we use a pretrained ResNet-18 as the feature extractor before applying the proposed approach, so the obtained latent vectors are still tabular.
Figure 1-3 at **anonymous** link <https://anonymous.4open.science/r/DCOM-C8FE/ICML25_authors_response.pdf> present the t-SNE visualizations for the first fold of dataset *Voice*, *TIC2000* and *Flickr*, which show latent factors capture more compact manifold representations than the original features. We will incorporate these additions in the next version.
This dilemma urges us to develop CNN-based MDC approaches to obtain feasible feature maps directly in the future. Thanks again for this insightful comment.
-**Q3: The authors claim that DCOM is computationally superior to MDC methods based on learning a graphical model. However, no experiments are performed to support this.**
Answer for Q3: Thanks for the comment. In Section 2 (*Related Work*), we discuss the limitations of existing graphical MDC approaches, which are widely recognized in previous works [1][2]. However, computational efficiency is neither the primary focus nor the core contribution of this paper. For comparison with the latest MDC approach *PIST* [3], which shares a similar deep learning architecture, we list running times (500 epochs for training) on several data sets below, which demonstrate our efficiency:
| Dataset | Time for DCOM (s) | Time for PIST (s) |
|---|---|---|
|Flare1|85.97|421.94|
| Enb|112.94|471.63|
| Jura|89.88|306.05|
|Song|167.77|765.10|
| Oes10|699.84|10219.91|
-**Q4: The proposed method lacks significant technical novelty.**
Answer for Q4: Thanks for the comment. We would like to provide a more comprehensive response to this important concern.
Existing MDC approaches mainly focus on modeling class dependencies, where the difficulty stems from the intrinsic coupling among multiple dimensions. To address this issue, this paper proposes a dependency-free framework and establishes a novel technical route for solving MDC problems. As shown in our experimental results, the proposed approach, which employs only conventional MLPs, yields statistically significant performance improvements over state-of-the-art dependency-modeling baselines.
-**Q5: The authors did not compare DCOM to MDC methods built upon PGMs. The claim that DCOM is more computationally efficient is not supported by experimental evidence.**
Answer for Q5: Please refer to the answer for Q3.
-**Q6: The learned latent variable lacks interpretability.**
Answer for Q6: Please refer to the answer for Q2.
***
[1] B.-B. Jia and M.-L. Zhang, "Decomposition-based classifier chains for multi-dimensional classification," IEEE Transactions on Artificial Intelligence, vol. 3, no. 2, pp. 176–191, 2022.
[2] M. Zhu, S. Liu, and J. Jiang, "A hybrid method for learning multidimensional Bayesian network classifiers based on an optimization model," Applied Intelligence, vol. 44, no. 1, pp. 123–148, 2016.
[3] T. Huang, B.-B. Jia, and M.-L. Zhang, "Deep multi-dimensional classification with pairwise dimension-specific features," in Proceedings of IJCAI'24, pp. 4183–4191.
---
Summary: This submission proposes a feature augmentation approach for multi-dimensional classification (MDC). I think the key motivation is to seek a set of augmented features $\mathbf{Z}$ to fulfill the partial conditional independence (6). In theory, it might work thanks to the notion of conditional independence of random variables. In practice, it might be hard to analyze whether that partial conditional independence is fulfilled or not.
Under the partial conditional independence (6), the joint conditional log-likelihood (5) becomes decomposable as given in (11). Under the assumption of parameter independence, i.e., the parameters of local models used to estimate $\mathcal{H}_j$, $j = 1, \ldots q$, are independent, we can train the local models separately. Section 3.3 details three components of the training loss and introduces the trade-off parameters $\alpha$ and $\beta$.
Experiments are conducted on seventeen data sets to assess the potential advantages of the proposed approach, namely DCOM, compared with other MDC approaches recalled in Section 4.1.3. The empirical evidence suggests that DCOM can provide more promising results, compared to other competitors, and the performance of DCOM seems to be robust to the change of $\alpha$ and $\beta$ and the cardinality of the augmented feature set $\mathbf{Z}$. However, it is not entirely clear to me which type of encoding network has been employed in the experiments.
Claims And Evidence: I think the claims made in the submission are supported by rather clear evidence. However, I would recommend the author(s) to add experiments to assess the robustness of DCOM under the presence of either noisy features or small data sets.
This is because these factors, which can appear in relevant applications, may affect the quality of the augmented feature set $\mathbf{Z}$. I guess under the presence of either noisy features, the augmented feature set $\mathbf{Z}$ may amplify the noisy level of the data. In the case of small data sets, the augmented feature set $\mathbf{Z}$ may amplify the overfitting of the local models $\mathcal{H}_j$, $j = 1, \ldots q$.
Methods And Evaluation Criteria: I think the key assumptions of the proposed DCOM have been either stated in the submission or can be derived from the submission. Commonly used evaluation criteria in the MDC setting have been used in the submission.
Theoretical Claims: I did my best to check the correctness of the theoretical claims. I haven't found any major issue.
Experimental Designs Or Analyses: On one hand, I think the soundness/validity of any experimental designs and analyses is good and seems to be in favor of the proposed DCOM.
On the other hand, I would recommend the author(s) to add experiments to assess the robustness of DCOM under the presence of either noisy features or small data sets.
Supplementary Material: I have read the entire supplementary material.
Relation To Broader Scientific Literature: It seems to me that considerable efforts in seeking scalable MDC approaches are based on the notion of conditional independence of random variables, decomposability of the loss/utility and parameter independence. Therefore, this submission may reasonably complement the existing literature on MDC.
Essential References Not Discussed: I think essential references are discussed.
Other Strengths And Weaknesses: To my knowledge, DCOM can be seen as a probabilistic graphical model learning approach with multiple latent variables. However, employing parametric models to estimate the local probability distribution may greatly facilitate the optimization of the loss/utility and scalability of the learning phase under suitable assumptions on the structure constraints and (decomposability of) the loss/utility. Therefore, the authors might consider making a connection between DCOM and the literature on that research topic.
Other Comments Or Suggestions: I would use bold capital letters to denote sets of random variables, such as $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ (instead of $X$, $Y$, and $Z$)
Questions For Authors: Q1: I couldn't find details of the encoding network. If I missed some part of the paper, could you give me a pointer? Otherwise, could you add it in the next version?
Q2: Could you make the relations between $log p_{\mathbf{X} | \mathbf{Z}}(\boldsymbol{x}_i | \boldsymbol{z}_i)$ in equation (13) and $\boldsymbol{z}_i$ clearer?
Q3: How do $\mu_{ia}$ and $\delta_{ia}$ related to $\boldsymbol{z}_i$?
Q4: Which kind of reconstruction network has been used in the experiments?
Q5: Have you assessed the robustness of DCOM under the presence of either noisy features or small data sets? Please all refer to "Claims And Evidence" for my detailed comments on this point.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We want to express our sincere gratitude for your invaluable comments and suggestions. The point-to-point responses are given as follows:
-**Q1: I couldn't find details of the encoding network. If I missed some part of the paper, could you give me a pointer? Otherwise, could you add it in the next version?**
Answer for Q1: Thank you for the comment. Please kindly refer to **the right column of page 7 (lines 375–384) in Section 4.1.4 (*IMPLEMENTATION DETAILS*)**: the encoding network $\mathcal{G}$ is a **multi-layer perceptron (MLP) with one hidden layer**, configured with a hidden dimension of 512. The output dimension, i.e., the dimensionality of the latent variable $Z$, is also set to 512. All activation functions are ReLU, each followed by a dropout layer with dropping probability 0.5. We hope this clarification helps elucidate the details of the encoding network, and we will ensure that the implementation details are more prominently highlighted in future revisions.
-**Q2: Could you make the relations between $\log p_{X|Z}(\boldsymbol{x}_{i}|\boldsymbol{z}_i)$ in equation (13) and $\boldsymbol{z}_i$ clearer?**
Answer for Q2: We sincerely apologize for the lack of clarity and thank you for identifying this ambiguity. Below we provide a detailed clarification of the relationship between $\log p_{X|Z}(\boldsymbol{x}_i|\boldsymbol{z}_i)$ in Eq.(13) and the latent variable $\boldsymbol{z}_i$:
The term $\log p_{X|Z}(\boldsymbol{x}_i|\boldsymbol{z}_i)$ represents the log-likelihood of reconstructing input $\boldsymbol{x}_i$ given the latent variable $\boldsymbol{z}_i$, where $\boldsymbol{z}_i$ serves as the **input to the reconstruction network $\mathcal{R}$**.
The reconstruction network $\mathcal{R}$ will then output the mean vector $\boldsymbol{\mu}_i$
and standard deviation vector $\boldsymbol{\sigma}_i$, i.e., key parameters of the assumed multivariate Gaussian $\mathcal{N}(\boldsymbol{\mu}_i,\boldsymbol{\sigma}_i^2 \mathbf{I})$.
-**Q3: How do $\mu_{ia}$ and $\delta_{ia}$ related to $\boldsymbol{z}_i$?**
Answer for Q3: Please kindly refer to the answer for Q2.
Considering that there is no $\delta_{ia}$ in Eq.(13), we suppose $\delta_{ia}$ in your question is $\sigma_{ia}$.
$\boldsymbol{z}_i$ serves as the **input to reconstruction network $\mathcal{R}$** and $\mathcal{R}$ will **output** the mean vector $\boldsymbol{\mu}_i$ and standard deviation vector $\boldsymbol{\sigma}_i$.
$\mu_{ia}$ and $\sigma_{ia}$ are the $a$-th element of $\boldsymbol{\mu}_i$ and $\boldsymbol{\sigma}_i$ respectively.
So they are related directly through the reconstruction network $\mathcal{R}$ as the input and output vectors.
Besides, Eq.(13) aims at inducing the reconstruction loss, i.e., Eq.(14). Thus in the practical algorithm, the reconstruction network $\mathcal{R}$ only needs to output mean parameters $\boldsymbol{\mu}_i$.
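To make this concrete, below is a minimal sketch of the standard diagonal-Gaussian log-likelihood (a generic form, not necessarily the paper's exact Eq.(13)): with $\boldsymbol{\sigma}_i$ held fixed, maximizing it over $\boldsymbol{\mu}_i$ reduces to a squared-error reconstruction loss, which is consistent with $\mathcal{R}$ only needing to output $\boldsymbol{\mu}_i$ in practice.

```python
import numpy as np

def gaussian_log_lik(x, mu, sigma):
    # log N(x; mu, diag(sigma^2)) -- standard diagonal-Gaussian form
    return float(np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                        - 0.5 * ((x - mu) / sigma) ** 2))

# With sigma fixed at 1, the negative log-likelihood equals
# 0.5 * ||x - mu||^2 plus the constant (d/2) * log(2*pi):
x = np.array([1.0, -2.0])
mu = np.array([0.5, -1.0])
nll = -gaussian_log_lik(x, mu, np.ones(2))
sq = 0.5 * np.sum((x - mu) ** 2) + np.log(2 * np.pi)
```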
-**Q4: Which kind of reconstruction network has been used in the experiments?**
Answer for Q4: Please kindly refer to the answer for Q1. All implementation details are presented in **the right column of page 7 (lines 375–384) in Section 4.1.4**. All neural networks ($\mathcal{G}$, $\mathcal{R}$, $\mathcal{T}$ and {$\mathcal{H}_j|1\le j\le q$}) are implemented as **multi-layer perceptrons (MLPs) with one hidden layer**, configured with a hidden dimension of 512 (except for the feature extractor for image data sets).
-**Q5: Have you assessed the robustness of DCOM under the presence of either noisy features or small data sets? Please also refer to "Claims And Evidence" for my detailed comments on this point.**
Answer for Q5: Thank you for raising this critical question about the robustness of DCOM under noisy features or limited data set sizes. Actually, considering that the conditional probability $p(\mathcal{G}(\boldsymbol{x})|\boldsymbol{x})$ inherently describes a deterministic event, we introduce a minor perturbation on $\boldsymbol{x}$ before inputting it into the encoding network $\mathcal{G}$ (please kindly refer to Eq.(4)). In other words, we have added random noise to the original MDC data sets, as in Denoising Auto-Encoders [1]. We will make this clearer in the revised version.
As for small data sets, we have rigorously evaluated DCOM across datasets of varying sizes to assess its generalization capability (please kindly refer to Table 5 in Appendix B). Small data sets such as *Flare1*, *Oes97* and *Jura* involve fewer than 400 samples each. Though these results demonstrate DCOM's ability to handle limited data, we acknowledge (as correctly noted by you) that the performance advantage is more pronounced on larger datasets. This observation aligns with fundamental learning theory about the data requirements of deep models.
***
[1] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML '08), 2008. | Summary: In this paper, the authors propose a new method called DCOM to avoid class dependence modeling in multi-dimensional classification tasks. DCOM introduces an additional estimation of the gap between the joint probability and the product of marginal probabilities. Empirically, the authors verify the effectiveness of DCOM across various multi-dimensional classification tasks.
Claims And Evidence: Most of the claims in this paper are convincing with theoretical and empirical evidence. However, I think it would be better to empirically verify that the performance of previous methods is limited by dependence modeling and DCOM successfully escapes from this issue.
Methods And Evaluation Criteria: The introduction of an additional estimation is an interesting idea to solve the dependence modeling, and the proposed methods make sense. Besides, the authors verify their method both theoretically and empirically.
Theoretical Claims: The proof of Theorem 3.3 is clear and I do not find errors.
Experimental Designs Or Analyses: The experiments focus on the performance in multi-dimensional classification tasks. I think it would be better to introduce additional experiments to empirically verify that the performance of previous methods is limited by dependence modeling and DCOM successfully escapes from this issue.
Supplementary Material: The supplementary material is well organized.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: I think most of essential references are discussed.
Other Strengths And Weaknesses: 1. The paper is easy to follow and the core idea is straightforward.
2. The theoretical motivation and empirical results cooperate well.
3. The empirical improvements are not marginal, which verifies the effectiveness of DCOM.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Is it possible to design a strategy and dynamically adjust the coefficient of the three loss terms?
2. Is it possible to evaluate the influence of dependence modeling in multi-dimensional classification tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We want to express our sincere gratitude for your invaluable comments and suggestions. The point-to-point responses are given as follows:
- **Q1: Is it possible to design a strategy and dynamically adjust the coefficient of the three loss terms?**
A1: Thank you for the comment. It is indeed possible. In our current experiments, Figures 3 and 4 of Appendix C.2 show that our approach achieves relatively stable performance as the coefficients vary within a broad range. Therefore, the coefficients of the three loss terms are fixed at 1 during training. This static strategy is chosen to simplify the optimization landscape.
As for possible dynamical strategies, we may suggest adopting gradient balancing techniques [1][2] to automatically align gradient magnitudes across loss terms. This is indeed a promising direction that could further enhance our model's adaptability.
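As a rough illustration of what such gradient balancing could look like, here is a simplified sketch in the spirit of GradNorm-style magnitude alignment (our own toy version, not the cited algorithms themselves):

```python
import numpy as np

def balance_gradients(grads):
    # Rescale each loss gradient to the mean gradient norm, so that no
    # single loss term dominates the update direction (illustrative only).
    norms = [np.linalg.norm(g) for g in grads]
    target = float(np.mean(norms))
    return [g * (target / n) for g, n in zip(grads, norms)]

g1 = np.array([3.0, 4.0])   # norm 5
g2 = np.array([0.6, 0.8])   # norm 1
b1, b2 = balance_gradients([g1, g2])  # both rescaled to norm 3
```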
- **Q2: Is it possible to evaluate the influence of dependence modeling in multi-dimensional classification tasks?**
A2: Thank you for raising this critical question about evaluating the influence of dependence modeling in MDC. Your observation highlights a fundamental limitation in current methodological practices. Existing MDC approaches mainly evaluate the influence of dependence modeling implicitly through the downstream **classification metrics** in terms of HS, EM and SEM compared to the baseline approach (BR [3], a classic approach which ignores the dependence modeling completely).
Given the indefinability of dependence, explicit and direct evaluation methods for the influence of dependence modeling may be inaccessible. We suggest seeking possible theoretical definitions of dependence from information theory [4] and further exploring appropriate evaluation methods.
***
[1] Yu, T., Kumar, S., Gupta, A., et al. Gradient Surgery for Multi-Task Learning. 2020. DOI: 10.48550/arXiv.2001.06782.
[2] Chen, Z., Badrinarayanan, V., Lee, C.-Y., et al. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks. 2017. DOI: 10.48550/arXiv.1711.02257.
[3] Zhang, M.-L., Li, Y.-K., Liu, X.-Y., and Geng, X. Binary relevance for multi-label learning: An overview. Frontiers of Computer Science, 12(2):191–202, 2018.
[4] Tishby, N. The information bottleneck method. 1999. DOI: 10.1145/345508.345578. | Summary: This paper mainly focuses on multi-dimensional classification tasks and points out that existing works mainly focus on designing effective class dependency modeling strategies but fail to address the intercoupling of multiple classes. To solve this problem, this paper proposes a method, DCOM, to identify a latent factor that encapsulates the most salient and critical feature information.
Claims And Evidence: The claims in the paper are well supported by evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense.
Theoretical Claims: I did not carefully check all theoretical claims in the paper, but the theory parts overall look sound.
Experimental Designs Or Analyses: The experimental designs and analyses are reasonable.
Supplementary Material: I checked all supplementary material.
Relation To Broader Scientific Literature: The contributions are good.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Pros:
- The motivation of the paper is good and clear.
- The paper is well written.
- The paper is well supported by both theoretical and empirical results.
Cons:
- I did not notice the obvious weakness of the paper.
Other Comments Or Suggestions: N/A
Questions For Authors: - Could you please provide more explanations for the assumptions mentioned in Eq. (3)?
- Does Theorem 3.3 mean that the candidate set will be reduced with the mild assumptions (Assumption 3.4)?
- I am curious about whether VLMs/MLLMs can perform such tasks better than the conventional deep learning methods. An intuition here is that the powerful foundation models can directly handle the multimodal data, like the image and text labels in this paper. It would be interesting to discuss this in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We want to express our sincere gratitude for your invaluable comments and suggestions. The point-to-point responses are given as follows:
- **Question 1: Could you please provide more explanations for the assumptions mentioned in Eq. (3)?**
Answer for Q1: Eq.(3) serves as a sufficient yet non-necessary condition for ensuring the equivalence between Eq.(1) and Eq.(2). Recall that in MDC, each feature variable $X$ is associated with $q$ class variables $Y_j (1\le j\le q)$, forming a composite class variable $Y=(Y_1,\dots,Y_q)$. Therefore, the class variable considered in Eq. (1) is $Y=(Y_1,\dots,Y_q)$, which models the **joint class structure**. However, existing MDC approaches mainly adopt Eq.(2) as the loss function which considers $q$ scalar class variables $Y_j (1\le j\le q)$ independently and aggregates errors across dimensions via summation (i.e., $\sum_{j=1}^{q}$ over the dimension index).
One straightforward way to impose equivalence between Eq.(1) and Eq.(2) is to assume that each summand is equivalent, i.e., Eq.(3), which actually assumes the partial conditional independence among class variables. We use “partial” here because the assumption does not require Eq.(3) to hold universally for all possible values of $Y$ and $X$, but rather only under the observed training distribution (which is finite and empirically sampled). We will make this clearer in the revised version.
- **Question 2: Does Theorem 3.3 mean that the candidate set will be reduced with the mild assumptions (Assumption 3.4)?**
Answer for Q2: Yes, precisely. Eq.(9) involves modeling the class space $\mathcal{Y}$, whose cardinality $|\mathcal{Y}|=\prod^{q}_{j=1}K_j$ grows exponentially with the number of dimensions $q$, resulting in a computationally intractable hypothesis space. By focusing on high-frequency class combinations, the full joint space $\mathcal{Y}$ is reduced to a sparse subset $\mathcal{D}'$ which is computable in practical algorithms. As shown in the following table, the candidate set is reduced dramatically (with the frequency threshold $c$ set to 0.1% when constructing $\mathcal{D}'$).
| Dataset | $\|\mathcal{Y}\|$ | $\|\mathcal{D}'\|$ |
|---|---|---|
|Adult|490|37|
|BeLaE|3125|17|
|CoIL2000|4800|140|
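The construction of the sparse subset described above can be sketched as follows (a minimal reconstruction from the rebuttal's description; the function and variable names are ours):

```python
from collections import Counter

def candidate_set(labels, c=0.001):
    # Keep only class combinations whose empirical frequency is at least c;
    # `labels` holds tuples (y_1, ..., y_q) observed in the training set.
    counts = Counter(labels)
    n = len(labels)
    return {y for y, k in counts.items() if k / n >= c}

# Rare combinations are pruned from the exponentially large joint space:
labels = [(0, 1)] * 90 + [(1, 1)] * 9 + [(2, 0)] * 1
frequent = candidate_set(labels, c=0.05)  # {(0, 1), (1, 1)}
```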
- **Question 3: I am curious about whether VLMs/MLLMs can perform such tasks better than the conventional deep learning methods. An intuition here is that the powerful foundation models can directly handle the multimodal data, like the image and text labels in this paper. It would be interesting to discuss this in the paper.**
Answer for Q3: We sincerely appreciate the insightful suggestion regarding the potential of VLMs and MLLMs for MDC. We fully agree that foundation models like CLIP or GPT-4V could revolutionize MDC by unifying visual and textual reasoning—a direction we are actively exploring in ongoing work.
However, our current work faces a practical limitation: most existing MDC data sets (except for the exemplar data set *DeepFashion*) represent class labels as numerical indices (0, 1, $\dots$) rather than rich semantic descriptors. This format significantly constrains our ability to leverage the text-image alignment capabilities that make VLMs/MLLMs so powerful.
Besides, we do believe VLMs/MLLMs will likely demonstrate superior performance with more semantically-rich MDC data sets in the future and will incorporate this important discussion in our revised manuscript to better highlight both the current challenges and future opportunities in applying VLMs/MLLMs models to MDC. | null | null | null | null | null | null |
Stochastic Forward–Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets | Accept (poster) | Summary: This paper tackles an important practical issue: how to train diffusion models without directly accessing large volumes of clean (and potentially copyrighted) data. The authors propose a novel method called Stochastic Forward–Backward Deconvolution (SFBD). The approach begins with pretraining on a small set of clean images and then leverages a large noisy dataset through a forward–backward iterative process. The work is grounded in a solid theoretical framework based on density deconvolution, where the authors derive that the optimal convergence rate for learning from noisy samples is $O((\log n)^{-2})$. This theoretical insight motivates the use of even a modest amount of clean data to guide the deconvolution process. Extensive experiments on CIFAR-10 and CelebA demonstrate that SFBD outperforms several baselines, including methods such as TweedieDiffusion and SURE-Score, in terms of FID scores.
Claims And Evidence: Most claims are supported by either theoretical analysis and experimental results well. However, I have one question:
Proposition 2 is stated in terms of the infinity norm of the difference between the characteristic functions of the data distribution and the iterative distribution. The result shows that this norm depends not only on the iteration number $K$ but also on the $\ell_2$ norm of $u$, which implies there is always a non-trivial gap when $\|u\|$ is large enough, no matter how large $K$ is. In this sense, the theorem should be augmented with a discussion of how the original distance/KL divergence between the two distributions behaves when the gap between the characteristic functions is large only for large $\|u\|$.
Methods And Evaluation Criteria: It would be better to also demonstrate experimental results on larger datasets, such as ImageNet.
Theoretical Claims: Correct.
Experimental Designs Or Analyses: The results are only shown for at most 4 iterations; it would be better to show more iterations and, specifically, how fast the results converge to the optimal state.
Supplementary Material: .
Relation To Broader Scientific Literature: .
Essential References Not Discussed: .
Other Strengths And Weaknesses: 1. The derivation of the convergence rate $O((\log n)^{-2})$ under a Gaussian noise model provides deep insight into the limitations of learning from solely corrupted data. This analysis justifies the necessity of even a small amount of clean data.
2. The experimental results, though limited, provide good insights.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for the review. Below are our responses to your comments:
**Q1.** Regarding the non-trivial gap when $|\mathbf{u}|$ is large in Prop 2.
**A1.** Thank you for this insightful comment. We agree that the bound appears to grow with $\|\mathbf{u}\|$, and we would like to clarify that in practice, the behavior of characteristic functions for large $\|\mathbf{u}\|$ is typically negligible. Specifically, characteristic functions tend to decay rapidly—often at an exponential or super-polynomial rate—for distributions with smooth and bounded densities.
To make the discussion concrete, consider the 1D case: the characteristic function $\phi(u)$ is the Fourier transform of the probability density function. When the density is k-times differentiable, it is well known (e.g., Lemma 4 on page 514 of [2]) that the characteristic function satisfies $|\phi(u)| = o(|u|^{-k})$. This implies that for sufficiently large $|u|$, the magnitude of the characteristic function becomes negligible.
Therefore, under the assumption that both $p_{\rm data}$ and $p_0^{(k)}$ are smooth and have bounded support, it suffices to ensure that their characteristic functions are close within a compact domain $|u| < U$ for some $U > 0$. This local closeness in the Fourier domain translates to closeness in the original densities, and hence the distributions. We will include this clarification in the final version of the manuscript.
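This decay is easy to check numerically with an empirical characteristic function (a small self-contained illustration using our own Gaussian example, not the paper's $p_{\rm data}$):

```python
import numpy as np

def ecf(samples, u):
    # Empirical characteristic function: Monte-Carlo estimate of E[exp(i*u*X)]
    return np.mean(np.exp(1j * u * samples))

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)       # smooth density => rapid CF decay
small_u = abs(ecf(x, 0.5))         # close to exp(-0.125) ~ 0.88
large_u = abs(ecf(x, 6.0))         # true value exp(-18), i.e. negligible
```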
**Q2.** Regarding addtional results on bigger datasets
**A2.** While we acknowledge that further experiments on larger datasets would strengthen the empirical study, our ability to scale up is constrained by the limited computational resources available at our academic institution. For instance, according to EDM’s official repository [1], training a diffusion model on ImageNet requires 32×A100 GPUs for 13 days—resources that are currently beyond our reach.
To partially address your comment, we conducted an experiment on the Tiny ImageNet dataset, which contains 100,000 images. We selected 1,000 clean images as copyright-free samples for pretraining and set the noise level to 0.2. Using a reduced batch size of 256 (compared to the recommended 4096 in EDM), the pretrained model achieved an FID of 36.74. After the first fine-tuning iteration, the FID dropped to 20.41. This result reflects a trend consistent with our findings on CIFAR-10 and CelebA, further corroborating our claims.
**Q3.** The results are only shown for at most 4 iterations, it is better to show more iterations and specifically, how fast the results would converge to the optimal state.
**A3.** In most of our experimental settings, SFBD converges within four iterations (see [[link](https://shorturl.at/I71Vq)] for results from additional iterations). This is why we focus on reporting the FID trajectories over the first four iterations in our original submission.
Moreover, when the noise level is fixed, SFBD tends to converge more quickly as the number of clean samples used for pretraining and fine-tuning increases. A larger clean dataset allows the model to start closer to the true data distribution (see also Section 5 – The Importance of Pretraining). Conversely, when the number of clean samples is fixed, increasing the noise level also accelerates convergence. We hypothesize this is because higher noise levels obscure more information in the noisy data, prompting SFBD to rely more on the clean samples, enabling faster adaptation through the fusion of complementary information from noisy samples. We will incorporate this discussion in the revision.
[1] T. Karras et al. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.
[2] Feller, W. An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, 1971. | Summary: This work puts forward a new training framework, dubbed Stochastic Forward–Backward Deconvolution (SFBD), for training diffusion models on noisy datasets while mitigating data memorization and copyright concerns.
Moreover, the theoretical analysis demonstrated that training solely on corrupted data is inefficient, whereas pretraining on a small fraction of clean data significantly improves performance.
Importantly, experimental results showed state-of-the-art sample quality with as little as 4% clean data in certain settings.
By bridging the gap between deconvolution theory and practical generative modeling, the proposed SFBD provides a scalable solution for privacy-preserving training.
Claims And Evidence: The paper claims that training solely on noisy data is inefficient due to slow convergence rates but can be significantly improved by pretraining on a small fraction of clean data. While the theoretical analysis supports this claim, the extent to which pretraining on limited clean data generalizes to diverse datasets or real-world scenarios is not entirely explored. Additional experiments on different datasets and noise levels could further strengthen this conclusion.
Methods And Evaluation Criteria: This paper suggests that the proposed method may help address potential copyright issues, as it enables the recovery of noisy images without accessing their complete clean information.
Theoretical Claims: For Theorem 1 & 2:
The theorem claims that the optimal convergence rate for estimating the data density from noisy samples is $\mathcal{O}((\log n)^{-2})$, which implies that training diffusion models solely on corrupted data is nearly infeasible.
However, it derives a lower bound of $\mathcal{O}((\log n)^{-2} \cdot K)$ on the discrepancy between the modeling distribution and the target data distribution.
Since this is a lower bound, how can one restrain the discrepancy between the modeling distribution and the target data distribution?
Moreover, how does $K$ determine the bound? Is it very sensitive, or just a constant?
Experimental Designs Or Analyses: Are the comparisons with the baseline models all based on the same model backbone? Otherwise the comparison is unfair.
Two datasets seem insufficient to verify model performance.
Supplementary Material: The supplementary part includes the theoretical analysis and the visualizations. I try my best to check it.
Relation To Broader Scientific Literature: The key contributions of this paper build on existing research in diffusion models, deconvolution theory, and generative modeling with noisy data. By connecting ideas from these fields, it offers new theoretical insights and practical techniques to enhance training efficiency, improve sample quality, and support privacy-preserving generative modeling.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: weaknesses:
1. While the method improves sample quality, the paper does not provide a detailed analysis of training efficiency, computational overhead, or scalability compared to standard diffusion model training.
2. Although the paper claims that training on noisy data can mitigate copyright risks, it does not provide a formal analysis or guarantees regarding information leakage or potential reconstruction of copyrighted content.
3. The theoretical analysis seems to be of little help for model convergence, especially the derived lower bound.
Other Comments Or Suggestions: Evaluating the proposed method on a broader range of datasets would significantly enhance the assessment of its robustness.
While CIFAR-10 and CelebA provide useful benchmarks, incorporating larger and more diverse datasets, such as ImageNet for high-resolution images or LSUN for complex scenes, would better demonstrate the model’s generalizability.
Questions For Authors: 1. How many training iterations are required for the proposed method?
2. Can you explain more about the $K$ in $\mathcal{O}((\log n)^{-2})$?
3. Since the training process still requires some clean images, does it truly help resolve the data leakage issue?
If the authors can address most of my concerns, I will improve the initial score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our paper. Below are our responses to your comments:
**Q1.** How many training iterations are required?
**A1.** In most of our experimental settings, SFBD converges within four iterations. (see [[link](https://shorturl.at/I71Vq)] for results from additional iterations.) This is why we originally report the FIDs over the first four iterations.
When the noise level is fixed, SFBD converges faster with more clean samples, as the pretrained model starts closer to the true data distribution (see also Sec 5: The Importance of Pretraining). Conversely, with a fixed number of clean samples, higher noise levels also speed up convergence. We hypothesize this is because higher noise levels obscure more information in the noisy samples, prompting SFBD to rely more on the clean data and thus more rapidly adapt by combining complementary features from the noisy sources.
**Q2.** Regarding "training efficiency, computational overhead, or scalability compared to standard DM"
**A2.** While SFBD involves multiple fine-tuning steps, each step has a similar cost to standard diffusion training, as it minimizes the regular conditional score-matching loss.
Most training time is spent on pretraining and the first fine-tune, together matching the cost of training a regular diffusion model. Later fine-tuning steps are much faster—each taking less than 1/4 of standard training time in our setup. Overall, using SFBD with 4 fine-tunes takes about 1.75× the time of training a regular diffusion model.
Backward sampling adds cost but is parallelizable. With 8 RTX 6000 GPUs, each sampling step takes under 30 minutes.
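The 1.75x figure above follows from simple accounting (our reconstruction of the arithmetic, assuming the stated per-step costs):

```python
# Pretraining plus the first fine-tune together cost roughly one standard
# diffusion training run; each of the three remaining fine-tunes costs
# about a quarter of a standard run in this setup.
base = 1.0
later_finetunes = 3 * 0.25
total_relative_cost = base + later_finetunes  # ~1.75x standard training
```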
**Q3.** Baseline model selections
**A3.** Our implementation follows EDM [1] with identical hyperparameters (using the FFHQ‑64×64 config for CelebA). As noted in EDM Appx F.3, the UNet backbones are nearly identical to those in DDPM and DDIM. AmbDiff, EMDiff, and TweedieDiff also use EDM backbones, so their architectures match ours. For SureScore, we use results from [2], which employs a slightly different UNet but with a similar parameter count.
**Q4.** The theoretical analysis seems no help for the model convergence, especially the derived lower bound.
**A4**. Thms 1 and 2 are not meant to describe SFBD’s convergence, but to illustrate the difficulty of estimating clean distributions from noisy data. Based on density deconvolution theory, they provide matching upper and lower bounds, implying a sample complexity of $\mathcal{O}((\log n)^{-2})$. This poor rate shows that training solely on noisy data is impractical, justifying SFBD’s use of limited clean samples for pretraining.
**SFBD’s actual convergence guarantee is given in Prop 2**, which shows a rate of $\mathcal{O}(1 / \sqrt{K})$, where $K$ is the number of iterations in Algorithm 1.
*Note*: The $K$ in Prop 2 and Alg 1 refers to the same quantity. In contrast, the $K$ in Thm 2 is an unrelated constant. To avoid confusion, we’ll revise the notation in Thm 2 to use $C’$ in the updated version.
**Q5.** Can you explain more about the $K$ in $\mathcal{O}((\log n)^{-2})$?
**A5.** In Thm 2, given $p_{\rm data}$ and the noise level $\sigma_\tau$, the constant $K$ is a fixed positive value. (**Note**: This quantity will be renamed to $C'$ in the revised version to avoid confusion, as it is only used in Thm 2 and is unrelated to the iteration count $K$ in Alg 1 and Prop 2.)
**Q6.** Regarding the data leakage issue.
**A6.** The SFBD algorithm does not address potential leakage from the clean images used in pretraining. However, since these images are public and copyright-free, such leakage is not a concern in our setting. We also note that:
- Some clean data is essential, as the problem is otherwise intractable;
- Clean images can be copyright-free or obtained with user consent;
- Pretrained models may be sourced from public datasets, though with potential quality trade-offs (as shown in ablation studies).
SFBD is specifically designed to prevent leakage from sensitive data by exposing the model to only one corrupted version of each sensitive sample during training. This inherently limits memorization and reconstruction of private or copyrighted content. Notably, SFBD does not require clean images or the pretrained model to be released or shared through secure channels.
To further support our privacy claims, we follow [3] by computing similarity scores between generated and sensitive samples [[link](https://shorturl.at/19EOf)]. Results show no reconstruction of sensitive content. These findings will be included in the revision.
**Q7.** Regarding addtional results on bigger datasets
**A7.** See Reviewer BKmX - **A2**.
[1] T. Karras et al. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.
[2] An Expectation-Maximization Algorithm for Training Clean Diffusion Models from Corrupted Observations. Bai et al. 2024.
[3] Consistent Diffusion Meets Tweedie. Daras et al., 2024. | Summary: This paper addresses the challenge of training diffusion-based generative models using datasets that are intentionally corrupted with noise to mitigate concerns around memorization and copyright infringement.
However, the authors show that in practice, the convergence rate for learning from noisy samples is extremely poor, on the order of $O((\log n)^{-2})$, making effective training from only noisy data infeasible.
The key insight is to pretrain a diffusion model on a small set of clean (copyright-free) data and then iteratively refine it using the large noisy dataset. The algorithm alternates between backward sampling (a denoising step using the current model) and updating the denoiser with these generated samples. Over time, the model’s outputs converge to the true data distribution even though it only sees a tiny fraction of clean data.
The authors theoretically validate SFBD’s convergence and demonstrate its practicality via empirical studies on CIFAR-10 and CelebA. Impressively, SFBD achieves competitive image quality with only 4% clean images (FID of 6.31 on CIFAR-10), outperforming other methods designed to train on noisy data.
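The alternating backward-sampling/refitting scheme described in the summary can be illustrated on a toy 1D Gaussian deconvolution problem (our own analogue with a posterior-mean denoiser and a one-parameter "model"; the actual SFBD networks and samplers are in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, s = 2.0, 0.8                       # clean mean; known noise std
clean = rng.normal(mu_true, 1.0, size=20)   # small clean set ("pretraining")
noisy = rng.normal(mu_true, 1.0, size=50_000) + rng.normal(0.0, s, size=50_000)

mu = float(clean.mean())                    # model after pretraining
for _ in range(25):                         # forward-backward iterations
    # backward step: posterior-mean denoising under the current model N(mu, 1)
    denoised = (s**2 * mu + noisy) / (1 + s**2)
    # update step: refit the model on the denoised samples
    mu = float(denoised.mean())
```

Starting from only 20 clean samples, the iteration contracts toward the noisy-sample mean, recovering the clean mean far more accurately than the pretraining estimate alone.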
Claims And Evidence: Yes.
The approach is justified using theoretical guarantees, specifically the convergence rate to the true clean distribution is analyzed and bounded.
The claims are supported by the empirical studies, showing that the approach is effective also in practice.
Methods And Evaluation Criteria: Almost.
The approach assumes that the noise level of the noisy data is within the trajectory of the diffusion process, which in many practical cases does not hold.
Theoretical Claims: Yes. They seem correct.
- Proposition 1
- Theorem 1
- Theorem 2
Experimental Designs Or Analyses: Yes, the method is compared to other methods designed to generate images given access only to noisy images. However, this comparison is not fair: the other methods do not require a set of clean images. For the comparison to be fair, one needs to test those methods after denoising the dataset with a denoiser network trained on the given clean images.
Supplementary Material: Yes. A, B, D, and E.
Relation To Broader Scientific Literature: The task discussed in the paper is very important, specifically when one does not have access to the true data and only the noisy version of the data is accessible. The approach shows that even in this case one can still learn the true data distribution in a very effective way.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
- The paper is well written.
- The approach is novel and interesting.
- Compared to other methods this approach shows great improvement.
- The method is supported by theoretical justification.
Weaknesses:
- The approach assumes that the noise level of the noisy data is within the trajectory of the diffusion process, which in many practical cases does not hold.
- The method requires access to clean data at the initial stage.
- The method is compared to other approaches, but not fairly. The other methods do not require a set of clean images; for the comparison to be fair, one needs to test those methods after denoising the dataset with a denoiser network trained on the given clean images.
- The performance of the approach degrades significantly when the clean data is out of distribution (Figure 2C: when the model is pretrained on truck images, it performs poorly on horse images).
Other Comments Or Suggestions: N/A
Questions For Authors: - How does the method perform when you use an off-the-shelf denoiser trained on generic data?
- What will happen in the blind case when the noise variance is unknown?
- How does using a denoiser trained on multiple noise levels (blindly) affect the performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your comments. We will first clarify the distinction between our framework and standard denoising methods, and then address your comments point by point.
**How our method differs from a standard denoising algorithm.** SFBD alternates between denoising samples and fine-tuning the denoiser, allowing it to blend high-frequency details from clean data with global structure from noisy samples (see Section 6.3). Unlike standard denoising methods that reconstruct exact clean inputs, SFBD trains the denoiser to recover the full data distribution, resulting in more realistic samples.
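As a loose intuition for this alternation (a toy sketch of ours, not the paper's actual algorithm), consider a 1D Gaussian analogue where the "denoiser" is a posterior mean and "fine-tuning" re-estimates the clean-distribution parameter from the denoised samples; all names, the prior, and the setting below are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (ours): clean data x ~ N(2, 1); we only observe
# y = x + N(0, sigma^2) plus a tiny clean subset. The "denoiser" is the
# Gaussian posterior mean E[x | y] under the current estimate mu of the
# clean mean (prior N(mu, 1) with variance assumed known).
true_mean, sigma = 2.0, 1.0
x = true_mean + rng.standard_normal(10_000)       # unseen clean data
y = x + sigma * rng.standard_normal(10_000)       # large noisy dataset
clean = true_mean + rng.standard_normal(20)       # small clean subset

mu = clean.mean()                                 # "pretraining" on clean data only
for _ in range(10):                               # alternate denoise / fine-tune
    # posterior-mean denoising of every noisy sample under prior N(mu, 1)
    denoised = (mu + y / sigma**2) / (1.0 + 1.0 / sigma**2)
    mu = denoised.mean()                          # re-fit the "denoiser" parameter

print(mu)  # converges toward the clean-data mean (here, near 2.0)
```

In this caricature the iterates contract toward the noisy-sample mean, so the final estimate blends the clean-data initialisation with information from the much larger noisy set, mirroring the qualitative behaviour described above.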
To highlight SFBD’s advantages, we compare the FID of its denoised samples at each iteration ($\mathcal{E}_k$ in Algorithm 1) with the full training set, alongside results from a state-of-the-art off-the-shelf denoiser, Restormer [1].
**CIFAR-10 (4% clean images)**
| $\sigma$ | Model | Iter 1 | Iter 2 | Iter 3 | Iter 4 |
|-|-|-|-|-|-|
| 0.30 | SFBD | 6.16 | 3.42 | 2.68 | 2.35 |
| | Restormer | 53.87 | | | |
| 0.59 | SFBD | 10.23 | 7.47 | 6.31 | 6.54 |
| | Restormer | 99.99 | | | |
| 1.09 | SFBD | 12.68 | 9.39 | 9.08 | 10.14 |
| | Restormer | 132.69 | | | |
**CelebA**
| Setting | Model | Iter 1 | Iter 2 | Iter 3 | Iter 4 |
|-|-|-|-|-|-|
| 50 clean images, $\sigma = 0.2$ | SFBD | 47.69 | 10.05 | 5.63 | 3.93 |
| | Restormer | 18.90 | | | |
| 1,500 clean images, $\sigma = 1.38$ | SFBD | 9.05 | 5.76 | 4.56 | 3.98 |
| | Restormer | 227.91 | | | |
The results show that SFBD can produce significantly more realistic samples than regular denoisers.
Below, we provide point-by-point responses to your comments.
**Q1.** How does the method perform when you use an off-the-shelf denoiser trained on generic data?
**A1.** As shown above, Restormer-denoised images consistently yield much higher FIDs than SFBD (reported in the paper). Since generative models trained on these images can’t surpass the FID of their targets, using off-the-shelf denoisers like Restormer results in significantly worse performance. We will include this discussion in the revision.
**Q2.** Regarding Experimental Designs Or Analyses
**A2.** The upper block of Table 1 assumes full access to both clean copyright-free and sensitive samples, so the results naturally outperform those from models trained on denoised data—making our method appear weaker, not stronger.
For the algorithms in the lower block, their original designs either assume only noisy data or use clean samples in their own specific way. Adapting them to leverage denoised samples would require substantial changes and could also raise fairness concerns. To maintain a consistent and fair comparison, we used available clean data for pretraining when applicable.
As noted in **A1**, the reported results in the above tables serve as upper bounds for models trained on denoised samples—whether from a pretrained denoiser (SFBD-Iter 1) or an off-the-shelf one (Restormer). Importantly, final SFBD models (after multiple training iterations) still outperform these upper bounds. We will clarify this in the revision to avoid potential concern.
**Q3.** Regarding the case that noise variance is unknown.
**A3.** In our setting where noise is **intentionally** added to protect sensitive samples, it is reasonable to assume known noise variance (either directly controlled by the model developer or communicated to it securely).
To make the proposed framework compatible with scenarios where the noise level is unknown, it could be extended by incorporating noise level estimation techniques, such as those used in blind denoising methods. We consider this an interesting direction for future work.
**Q4.** How does using a denoiser trained on multiple noise levels (blindly) affect the performance?
**A4.** The denoiser Restormer, discussed in the tables above as well as in **A1** and **A2**, is designed to handle multiple noise levels in a blind manner. As shown in our results and discussions, generative models trained on images denoised by Restormer cannot achieve performance comparable to those trained using SFBD.
**Q5.** Regarding "The approach assumes that the noise level of the noisy data is within the trajectory of the diffusion process, ..."
**A5.** In theory, diffusion models can handle noise levels ranging from zero to infinity. We would appreciate it if you could provide more details or clarification on this concern so we can better understand and address the issue.
**Q6.** Regarding "The performance of the approach degrade significantly when the clean data is out of the distribution"
**A6.** We included these ablation studies to support our theoretical claims and offer additional insights into the framework’s behavior. While we agree that using in-distribution clean data yields better results, the main takeaway is that pretraining on out-of-distribution data still provides a clear benefit compared to no pretraining at all.
[1] Restormer: Efficient Transformer for High-Resolution Image Restoration. S. W. Zamir et al. CVPR 2022 | Summary: The authors consider the problem of training diffusion models with a small set of clean data and a large set of noisy data. This follows a line of recent works on developing techniques for training diffusion models under corruption in the training set. The main finding of this work is that without clean data performance is fundamentally limited for datasets of finite sizes. The authors develop a technique that significantly improves the performance by leveraging a small set of clean samples.
Claims And Evidence: Yes, the claims in this work are supported by clear evidence.
Methods And Evaluation Criteria: I find the benchmarking of this work unfair/incomplete.
* Missing baselines:
* The work [How much is a noisy image worth? Data Scaling Laws for Ambient Diffusion](https://arxiv.org/abs/2411.02780) (ICLR 2025) proposes the same idea (leveraging a few clean examples) and reaches the same finding (performance with noisy data only is limited but a few clean samples can lead to a dramatic increase in performance). Despite that, the paper is not discussed/cited and there is no benchmarking against it. I believe the authors were probably unaware of this work, but it is super relevant and should be extensively discussed and benchmarked against.
* Another useful baseline to compare against would be the training on noisy data first and then fine-tuning on clean data. This is in some sense what is happening post-training in the foundational models (e.g. see the [EMU](https://arxiv.org/abs/2309.15807) paper).
* Unfair implementation of baselines:
* Ambient Diffusion is a method developed for training diffusion models with linear corruptions (e.g. random inpainting). It is unclear how the authors use this method for the denoising case.
* The authors report that TweedieDiff achieves FID of 167.23 without clean data and FID of 65.21 with some clean data points on CIFAR-10 for $\sigma=0.2$. However, the work "How much is a noisy image worth" reports: a) FID 12.12 without clean data and without consistency, b) FID 11.93 without clean data and with consistency, and c) FID 60.73 with naive full sampling on a model that was just trained with times $t: \sigma_t \ge 0.2$. Using 10% of clean data further drives FID to 2.50. The reported FIDs of 167.23 and 65.21 in the submission paint an unfair and very pessimistic picture for the baselines.
I also believe that for the evaluation to be more rigorous, the pair of (FID, memorization) needs to be reported.
Theoretical Claims: I checked the proofs about MISE, Proposition 1 and the proposed method. The MISE proofs seem to be heavily based on prior work. I also have a question regarding MISE; wouldn't it make more sense if we weight the integral with p_{data}(x)? Why are errors in very unlikely datapoints (according to $p_{data}(x)$) important?
Experimental Designs Or Analyses: I have several concerns regarding the experimental validation, as listed above. Repeating here:
I find the benchmarking of this work unfair/incomplete.
* Missing baselines:
* The work [How much is a noisy image worth? Data Scaling Laws for Ambient Diffusion](https://arxiv.org/abs/2411.02780) (ICLR 2025) proposes the same idea (leveraging a few clean examples) and reaches the same finding (performance with noisy data only is limited but a few clean samples can lead to a dramatic increase in performance). Despite that, the paper is not discussed/cited and there is no benchmarking against it. I believe the authors were probably unaware of this work, but it is super relevant and should be extensively discussed and benchmarked against.
* Another useful baseline to compare against would be the training on noisy data first and then fine-tuning on clean data. This is in some sense what is happening post-training in the foundational models (e.g. see the [EMU](https://arxiv.org/abs/2309.15807) paper).
* Unfair implementation of baselines:
* Ambient Diffusion is a method developed for training diffusion models with linear corruptions (e.g. random inpainting). It is unclear how the authors use this method for the denoising case.
* The authors report that TweedieDiff achieves FID of 167.23 without clean data and FID of 65.21 with some clean data points on CIFAR-10 for $\sigma=0.2$. However, the work "How much is a noisy image worth" reports: a) FID 12.12 without clean data and without consistency, b) FID 11.93 without clean data and with consistency, and c) FID 60.73 with naive full sampling on a model that was just trained with times $t: \sigma_t \ge 0.2$. Using 10% of clean data further drives FID to 2.50. The reported FIDs of 167.23 and 65.21 in the submission paint an unfair and very pessimistic picture for the baselines.
I also believe that for the evaluation to be more rigorous, the pair of (FID, memorization) needs to be reported.
Supplementary Material: I went over all the Supplementary Material. Section E (Experiment Configurations) could benefit from explaining how the baselines are trained/used.
Relation To Broader Scientific Literature: This paper follows a line of recent works on training diffusion models with corrupted data and its implications for performance, copyright and memorization. The authors do a very good job in describing the state of the literature in Section 2 (Related Work). The main finding of this work is very interesting, and it is that a small set of clean data is essential for performance when the assumption of infinite data is violated (i.e. in all practical settings).
Essential References Not Discussed: As mentioned above, the authors miss the work [How much is a noisy image worth? Data Scaling Laws for Ambient Diffusion](https://arxiv.org/abs/2411.02780) (ICLR 2025) which proposes the same idea (leveraging a few clean examples) and reaches the same finding (performance with noisy data only is limited but a few clean samples can lead to a dramatic increase in performance).
Other Strengths And Weaknesses: I have already listed the main Weaknesses of the work. On the strengths side:
- The paper is clearly written.
- The topic is very interesting and the findings very intuitive.
- The finding confirms a recent finding from another work, which is in some sense, a positive thing since it shows its importance and its reproducibility.
- The connection to density deconvolution is novel and the sample-complexity results are interesting.
- The proposed framework, a sort of Expectation-Maximization applied to diffusion modeling, is very niche and it shows that there are other ways beyond consistency to extrapolate beyond the training data.
Other Comments Or Suggestions: Please update the running title as it is currently listed as "Submission and Formatting Instructions for ICML 2025".
Questions For Authors: Please include a discussion of the missed work and if possible, benchmark against it.
Please also clarify the questions I asked regarding how you implemented the baselines and if possible update the Tables using the (significantly improved) numbers from the related work. I understand that the rebuttal time is limited, but if you could also provide a comparison with the baseline of first training on everything and then fine-tuning on only the clean data points, that would be really useful.
I would also like to ask about the computational requirements for the method, as it seems to require K finetunings.
If my concerns are properly addressed, I will raise my score since I like the idea, the paper, and the analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the detailed feedback.
Before addressing specific points, we would like to clarify that the remark stating "the benchmarking of this work is unfair/incomplete" is, in our view, **inaccurate**. Specifically,
1. We were not aware of the missing baseline [1] before our submission. We hope you would understand since [1] is very recent and there is no way we could have compared to it. We did our best to discuss its relation to our method below (in **A1**), which will also appear in our revision.
2. Regarding the claim of unfair implementation: our version of TweedieDiff is not the same as in Daras25 [1] but faithfully follows the original paper (see **A4** for details). While we’re happy to include results from [1] and discuss the differences, we had no intention to “paint an unfair and very pessimistic picture for the baselines.”
**Q1.** Regarding a related work [1] published in ICLR 2025
**A1.** We agree that this very recent work [1] is relevant. We will include a detailed discussion and benchmark comparison in the revision. In particular:
Both papers show that learning the true data distribution from only noisy samples is theoretically possible but practically requires an infeasible number of samples. To address this, both incorporate a small amount of clean data to guide training.
However, they differ in approach. Our method uses density deconvolution, while Daras et al. build on Gaussian Mixture Models (GMMs), providing more precise modeling under stronger distributional assumptions. We believe that both methods reaching similar conclusions—despite their different foundations—reinforces the validity of each.
Methodologically, [1] applies Tweedie’s formula (consistency constraint) to recover the clean distribution. In contrast, ours introduces a novel forward-backward deconvolution strategy, offering a fresh perspective without the heavy computational cost of enforcing consistency.
(Benchmark Comparison)
We will include the following CIFAR-10 FID results in the revision:
- 50 clean images, σ = 0.2 (Table 1 setting): Daras25: 8.05; SFBD: 13.53
- 10% clean images, σ = 0.2: Daras25: 2.81; SFBD: 2.58
- 4% clean images, σ = 0.59: Daras25: 6.75; SFBD: 6.31
Daras25 performs better with very limited clean data, likely due to overfitting during our model’s pretraining on small datasets. This overfitting is lessened as more clean data becomes available, allowing SFBD to outperform. We attribute this to SFBD’s stable fine-tuning via score-matching loss, which avoids the extra constraints of consistency-based methods.
**Q2.** Regarding training models in a way similar to EMU
**A2.** We illustrate this on CIFAR-10 by pretraining a diffusion model on noisy images (σ = 0.2), then finetuning on either 50 or 4% clean images. FID trajectories [[link](https://shorturl.at/7B0gq)] show an initial drop followed by a rise—sharper with 50 clean images, slower with 4%. This is expected: finetuning on clean data improves performance at first but eventually causes the model to forget useful noisy features, leading to degradation.
**Q3.** Regarding ambient diffusion
**A3.** The reported value is from the arXiv version of [2], which states that "AmbientDiffusion is trained with the standard setting." Since it was omitted from the NeurIPS final version and its reliability is uncertain, we will remove it in the revision.
**Q4.** Regarding TweedieDiff
**A4.** **Our implementation closely follows the original TweedieDiff paper**. Comparing Daras25 [1] and Daras24 [3], we think the difference might be caused by the new implementation in Daras25 [[link](https://shorturl.at/226Rv)], which we were not aware of before our submission. We will include the results from both versions, including the new numbers from Daras25, and clarify their differences in the revision. We want to assure the reviewer that we tried our best to compare all baselines fairly, to the best of our knowledge.
**Q5.** Regarding memorization
**A5.** See Reviewer kYMA-**A6**.
**Q6.** Regarding MISE
**A6.** MISE is the standard metric in density estimation that evaluates error uniformly across all x, encouraging agreement both within and outside the support of $p_{\rm data}(x)$. Weighting the error by $p_{\rm data}(x)$ instead reduces the influence of low-likelihood regions, under-penalizing high-density estimates in areas where the true density is low. This can make models that generate unlikely or absurd samples appear to perform well, contradicting our goal of accurately modeling the full distribution.
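Written out explicitly (standard definitions; the notation is ours), the contrast drawn in **A6** is between the unweighted criterion and its $p_{\mathrm{data}}$-weighted variant:

```latex
\mathrm{MISE}(\hat p) \;=\; \mathbb{E}\!\int \bigl(\hat p(x) - p_{\mathrm{data}}(x)\bigr)^2 \,\mathrm{d}x
\qquad \text{vs.} \qquad
\mathbb{E}\!\int \bigl(\hat p(x) - p_{\mathrm{data}}(x)\bigr)^2\, p_{\mathrm{data}}(x)\,\mathrm{d}x .
```

The weighted form discounts errors wherever $p_{\mathrm{data}}(x)\approx 0$, which is exactly the behaviour **A6** argues against.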
**Q7.** Regarding computation requirement
**A7.** See Reviewer kYMA-**A2**.
[1] How much is a noisy image worth? Data Scaling Laws for Ambient Diffusion. Daras et al., 2025.
[2] An EM Algorithm for Training Clean Diffusion Models from Corrupted Observations. Bai et al., 2024.
[3] Consistent Diffusion Meets Tweedie. Daras et al., 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I expect that the authors will update their comparisons and baseline results, as promised.
I agree that the overlap with the other work reinforces the validity of the approach.
I am raising my score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful comments and engagement throughout the review process. We’re pleased that our responses have addressed your concerns. As noted, we will incorporate the promised comparisons and baseline results in the revised version to strengthen the final submission.
Sincerely,
The Authors | null | null | null | null | null | null |
Diffusion Counterfactual Generation with Semantic Abduction | Accept (poster) | Summary: The authors proposed a method that incorporates semantic abduction in diffusion models for the preservation of exogenous noise in counterfactual generations. The inference is conducted with a CFG-style amortised, anti-causally guided DDIM sampling. Results are shown on three datasets: Morpho-MNIST, CelebA-HQ and EMBED.
Claims And Evidence: It's ambiguous. The claim is that this is the first work to consider high-level semantic abduction for diffusion counterfactuals. If you strictly consider the meaning of the terminology "semantic abduction" that the authors used, this might be the first; however, I think the proposed method is not novel and makes only minor contributions among the existing diffusion counterfactual models. Using diffusion models for exogenous abduction is already proposed in DiffSCM [1] and using amortised, anti-causally guided inference is already proposed in DDCM [2], which entails high-level exogenous abduction. Both of these works are not compared in either related work or experiments.
[1] Sanchez, Pedro, and Sotirios A. Tsaftaris. "Diffusion causal models for counterfactual estimation." CLeaR (2022).
[2] Xu, Sihan, et al. "Inversion-Free Image Editing with Language-Guided Diffusion Models." CVPR. 2024.
Methods And Evaluation Criteria: No. On Morpho-MNIST, there is no benchmark comparison to prior diffusion-based counterfactual models and no benchmark comparison to state-of-the-art counterfactual models. On CelebA-HQ, there is no benchmark comparison, period. Poor benchmark comparisons had unfortunately been a bad norm of this field up until 2023, but they are not the norm anymore. Not everyone just simply uses VAE and HVAE anymore. The readers should very reasonably expect *at least* three benchmark models to be compared to this work:
- DiffSCM [1] – as the prior effort in applying diffusion model in counterfactual modeling
- DDCM [2] – as the state-of-the-art diffusion model for image editing that already proposed the CFG-style [4] *amortised, anti-causally guided inference*.
- VCI [3] – as the current state-of-the-art in counterfactual modeling.
I know this sounds like a lot, but unfortunately that just means this work hasn’t done enough. In my judgement, it is not ready for publication. Again, any reader should *very reasonably* expect these three works being compared to the proposed method for the reasons I listed. I noticed that [1] and [3] are cited, but [2] is not even cited. (Note that image editing is essentially the same as image counterfactual, especially for a work that isn't theoretically-driven. On an empirical level, the difference is simply whether you use the term "counterfactual" and frame it within a causal formulation or not. On observational datasets, there is no difference.) As for the results, the proposed method does not beat HVAE by a noticeable margin (if it beats HVAE at all) according to Table 1, while VCI [3] beat HVAE by a wide margin. So, it is very reasonable for me to assume that this work performs sub-par compared to state-of-the-art without further benchmark comparison.
Since I do not have benchmark comparisons to evaluate the results, I tried to judge the results by empirical comparison. However, the empirical results the authors showed in Figure 3. b) do not show good ability in exogenous noise abduction either – in most of these pictures, the intervention *greatly* changed the person’s hair style and exhibited some denoising ability (which you do not want for counterfactual modeling) – this seems very sub-par compared to DDCM and VCI, and I’m not sure if it even beats the old non-variational and non-diffusion based method [5].
[1] Sanchez, Pedro, and Sotirios A. Tsaftaris. "Diffusion causal models for counterfactual estimation." CLeaR (2022).
[2] Xu, Sihan, et al. "Inversion-Free Image Editing with Language-Guided Diffusion Models." CVPR. 2024.
[3] Wu, Yulun, Louie McConnell, and Claudia Iriondo. "Counterfactual Generative Modeling with Variational Causal Inference." ICLR (2024).
[4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." (2022).
[5] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." ICCV. 2017.
Theoretical Claims: N/A. No theoretical claims in this paper.
Experimental Designs Or Analyses: I checked the soundness of experimental design, and I think the evaluation metrics used for MorphoMNIST are problematic. For a dataset where *the counterfactual truth is available*, there is no reason to use composition and reversibility in favor of the error between counterfactual generation and counterfactual truth. I believe the authors are well-aware that composition, effectiveness, and reversibility are metrics proposed to evaluate results on datasets where *counterfactual truth is not available*. For MorphoMNIST, there is no reason to use composition – the MSE between reconstruction and original image, and reversibility – the MSE between cycle reconstruction and original image, in favor of the counterfactual prediction error – the MSE between counterfactual construction and counterfactual image. A naive model that does nothing other than returning the original image can achieve the perfect composition and reversibility. The counterfactual prediction error, on the other hand, is a true counterfactual evaluation metric on layer 3 of Pearl’s 3 layers of causality and should always be preferred when it’s attainable.
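For readers unfamiliar with the metrics being contrasted, they can be written as follows (the notation is ours: $a(\cdot)$ abducts exogenous noise, $g$ generates an image, $\mathrm{pa}$/$\mathrm{pa}'$ are the factual/intervened parents, and $x^{\mathrm{cf}}$ is the ground-truth counterfactual image):

```latex
\begin{aligned}
\text{Composition:} &\quad \lVert g(a(x), \mathrm{pa}) - x \rVert_2^2 \\
\text{Reversibility:} &\quad \bigl\lVert g\bigl(a\bigl(g(a(x), \mathrm{pa}')\bigr), \mathrm{pa}\bigr) - x \bigr\rVert_2^2 \\
\text{Counterfactual error:} &\quad \lVert g(a(x), \mathrm{pa}') - x^{\mathrm{cf}} \rVert_2^2
\end{aligned}
```

A model that simply returns its input unchanged scores perfectly on the first two but is exposed by the third, which is the point the paragraph above makes.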
## Update After Rebuttal
I have raised the score to a strong acceptance as the authors have put in an abundant amount of efforts in additional experiments. However, this is the one issue that the authors did not give me an adequate answer for, and since this year's ICML does not seem to allow further discussion with the authors after their **Reply Rebuttal Comment** is posted, I am highlighting my intended response to their **Reply Rebuttal Comment** here for readers' information:
---
*Thanks for the detailed response. Great work in rebuttal. I have raised my score to a 4, assuming that the comparison to DiffSCM will actually be in the final version of the paper. I do not think your argument makes sense regarding issue 1, and the concern you [linked](https://openreview.net/forum?id=oeDcgVC7Xh&noteId=is9vzvhbQQ) was literally a clarification that composition and reversibility should be used in CelebA, where counterfactual truth is **not** attainable. So I do not see how this is related to the issue I raised here.*
> *MSE/MAE remain meaningful for composition and reversibility because the target is uniquely defined (ie. the observed image)*
*I think this is such a poor argument for using composition and reversibility over counterfactual error. The counterfactual image **is** uniquely defined, it is just not observed for real-world datasets. It is literally a unique target by the definition of counterfactual and the consistency assumption. You can correct me on this if I'm wrong: I don't believe there is any randomness to the counterfactual image generated by morpho-MNIST when intervention is given. Not only is the target uniquely defined, it **is** the target you care about -- ultimately, you don't care about how good your model can do reconstruction or cycle reconstruction, you care about how good your model can do counterfactual generation. The former is an indirect estimate of model's ability in exogenous noise abduction when the direct estimate, i.e. the latter, cannot be evaluated.*
> *These metrics are also sensitive to sharpness and local artefacts. A poor but blurry counterfactual may yield low MSE [1,2,8].*
*This is the flaw of pixel-wise MSE, it has nothing to do with the comparison of counterfactual error vs. composition & reversibility as all of them are pixel-wise MSE. If this is your argument why counterfactual error is flawed, I can use the exact same argument to tell you composition & reversibility are flawed as well.*
*You are contradicting yourself with these two arguments. You said "MSE/MAE remain meaningful for composition and reversibility because the target is uniquely defined (ie. the observed image)" yet you also link works [1,2,8] where MSE is calculated on uniquely defined observed image to argue MSE/MAE is flawed. Just like the reviewer in the concern you linked, you don't have a coherent argument why composition & reversibility should be preferred over counterfactual error.*
*[1] Zhang et al. "The unreasonable effectiveness of deep features as a perceptual metric." CVPR 2018*
*[2] Wang & Bovik, "Mean squared error: Love it or leave it?" IEEE 2009 - Fig 2*
*[8] Wang et al, "Image quality assessment: from error visibility to structural similarity." IEEE 2004*
---
Supplementary Material: N/A. No supplementary material in this paper.
Relation To Broader Scientific Literature: As discussed in "Claims And Evidence" and "Methods And Evaluation Criteria", I do not find any evidence of this work exceling prior works in diffusion-based counterfactual models and state-of-the-art counterfactual models.
Essential References Not Discussed: Yes, prior works in diffusion-based counterfactual models [1][2] and state-of-the-art counterfactual models [3] are not discussed in related work or compared in the experiment section. See "Claims And Evidence" and "Methods And Evaluation Criteria" for details.
[1] Sanchez, Pedro, and Sotirios A. Tsaftaris. "Diffusion causal models for counterfactual estimation." CLeaR (2022).
[2] Xu, Sihan, et al. "Inversion-Free Image Editing with Language-Guided Diffusion Models." CVPR. 2024.
[3] Wu, Yulun, Louie McConnell, and Claudia Iriondo. "Counterfactual Generative Modeling with Variational Causal Inference." ICLR (2024)
Other Strengths And Weaknesses: Strengths:
The paper is clear and well written. Figures are clean and illustrative.
Weaknesses:
See "Claims And Evidence", "Methods And Evaluation Criteria", and "Experimental Designs Or Analyses".
Other Comments Or Suggestions: I think trying to abduct high-level semantics to aid diffusion models is interesting and intuitive, and I do not want to discourage the authors from keeping exploring this direction. But I think you definitely have to show more evidence of how it works better than / different from existing diffusion models to be more convincing.
Questions For Authors: No further questions beyond the weaknesses I listed. If the authors can address those that'd be great.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and are encouraged by their comments that semantic abduction is “interesting and intuitive”. Below, we address the major points raised:
> **“Semantic Abduction” and Novelty**
Refer to our response to reviewer vbtG16 for details regarding the novelty and our contributions.
> **Quality of Abducted Exogenous Noise**
We acknowledge the reviewer’s observation regarding Fig 3b. The denoising training procedure may influence the apparent challenges in exogenous noise abduction. However, we note that in Fig 3c and App H - Fig 16, dynamic semantic abduction and $p_\varnothing > 0.1$ improves identity preservation of hairstyle, skin colour, facial structure and backgrounds, which is especially challenging given that we choose not to center crop faces as in existing methods, such as [2,3]. Additionally, we refer to Fig 5 in [2], which shows changes in skin colour, and Fig 2e/2g in [4], which shows changes in background and image fidelity, which the dynamic abduction method largely corrects. Additionally, referring to Table 6 in [4], our effectiveness is on par with theirs.
> **“For MorphoMNIST, there is no reason to use composition...”**
We agree that the method suggested by the reviewer is valid. Instead, we follow [5,6], which use ground truth morphological measurement functions provided by the MorphoMNIST library to measure the intensity, thickness, and slant of the counterfactual digits directly. Both methods are valid since they use ground truth mechanisms to evaluate counterfactuals. For MorphoMNIST, we only used pseudo-oracles for attributes which don’t have a ground truth mechanism, i.e. digit class ($d$).
> **“Proposed method does not beat HVAE by a noticeable margin…”**
Table 1 shows that for many settings, i.e. spatial and semantic mechanisms with $\omega=3, p_\varnothing=0.1$, our model exceeds HVAE effectiveness. Notably, the spatial and semantic mechanisms improve digit class accuracy by ~4%. When considering effectiveness across all parents holistically, while the target intervention may be successful in the baselines, the image's defining characteristic (digit class) is not preserved.
As requested, we also train VCI (which outperforms DiffSCM) for the scenario in Fig 2a. Again, all interventions cause large drops in digit class effectiveness:
| Interv. | MAPE(t) | MAPE(i) | MAPE(s) | Acc(d) | Rev. ($L_1$) |
|-|-|-|-|-|-|
| do(s) | 3.08e-2 | 6.52e-3 | 9.07e-2 | 90.04 | 3.19e-2 |
| do(d) | 2.63e-2 | 6.32e-3 | 9.12e-2 | 94.62 | 1.98e-2 |
| do(t) | 3.08e-2 | 6.26e-3 | 9.13e-2 | 92.97 | 2.10e-2 |
| do(i) | 6.99e-2 | 2.61e-2 | 3.97e-1 | 82.52 | 3.75e-2 |
with Comp. ($L_1$) = 1.31e-2; the full table is provided [here](https://imgur.com/a/BYuLIgx).
We also compare against VCI trained in a simpler setting using only $i$ and $t$ from the dataset generated with Fig 2a, and also notice large margins of improvement:
| | MAPE(t) | MAPE(i) | Rev. ($L_1$) | MAPE(t) | MAPE(i) | Rev. ($L_1$) | Comp. ($L_1$) |
|-|-|-|-|-|-|-|-|
| VCI | 5.09e-2 | 9.49e-3 | 1.17e-2 | 9.58e-2 | 5.89e-2 | 2.74e-2 | 6.65e-3 |
| Spatial | 3.45e-2 | 6.31e-3 | 3.19e-2 | 5.52e-2 | 1.07e-2 | 3.53e-2 | 9.26e-4 |
where the first three cols are for do(t) and the next three are for do(i), with the full table provided [here](https://imgur.com/a/IojOgnj).
Note that since our choice of metrics differs from that in VCI, the margins of improvement are not comparable across works.
Additionally, we run VCI on CelebAHQ without center cropping, such that effectiveness can be evaluated with our pseudo-oracles, and we notice a large drop in the counterfactual fidelity:
| | F1(g) | F1(s) |
|-|-|-|
| do(g) | 3.39 | 97.84 |
| do(s) | 95.58 | 33.81 |
with examples [here](https://imgur.com/a/DRhkuUY).
We will include DDCM[8] in our related work. DDCM, which uses text-based LDMs, is challenging to compare fairly with our method due to model size, dataset requirements, and prompt engineering. Their sampling method in Alg 1 in [8] complements our work and naturally extends our mechanisms alongside other sampling strategies[7,9]. This is outside the scope of this paper.
We appreciate the reviewer’s detailed feedback and believe these clarifications and additional experiments demonstrate that our work offers an improved and valuable perspective on diffusion-based counterfactual generation.
[1] https://arxiv.org/abs/2212.12570
[2] https://arxiv.org/abs/2410.12730
[3] https://www.jmlr.org/papers/volume23/21-0080/21-0080.pdf
[4] https://arxiv.org/abs/2303.01274
[5] https://arxiv.org/abs/2306.15764
[6] https://github.com/biomedia-mira/causal-gen/blob/e0e4e22f8ad972b9d3b7dd662fa77d9d7c845078/notebooks/eval_example.ipynb
[7] https://arxiv.org/abs/2206.00927
[8] https://arxiv.org/pdf/2312.04965
[9] https://arxiv.org/abs/2406.08070
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response and I am very impressed with the effort you put in the updated experiments, so I have raised the score by 1. However, I still have a big problem with the metrics you used for morphoMNIST that I think you ignored. Besides, I still have concern that comparison to prior diffusion models in image counterfactual / image editing are not shown. Therefore, I'm going to give the authors a tractable objective: if you solve one of these two issues, I'm going to further raise the score to a 3. If you solve both, I'll raise it to a 4.
---
**Issue 1**: morphoMNIST metric
I'm not asking for ground truth measurement for the observed factors; I understand you already used those for effectiveness. What I'm saying is that using composition (MSE between reconstruction and original image) and reversibility (MSE between cycle reconstruction and original image) over counterfactual error (MSE between counterfactual construction and true counterfactual image) does not make sense when the true counterfactual image is available, which is the case for morphoMNIST. The counterfactual error should be used instead of composition and reversibility -- that is a direct estimate of exogenous noise abduction in counterfactual generations.
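For concreteness, the three pixel-wise criteria above could be sketched as follows (a minimal illustration assuming NumPy image arrays; the function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Pixel-wise mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

# x ......... observed image
# x_rec ..... reconstruction of x (null intervention)
# x_cycle ... cycle reconstruction (counterfactual of the counterfactual)
# x_cf ...... generated counterfactual
# x_cf_gt ... ground-truth counterfactual (available for morphoMNIST)

def composition(x, x_rec):
    """Reconstruction vs. original."""
    return mse(x, x_rec)

def reversibility(x, x_cycle):
    """Cycle reconstruction vs. original."""
    return mse(x, x_cycle)

def counterfactual_error(x_cf, x_cf_gt):
    """Generated counterfactual vs. true counterfactual."""
    return mse(x_cf, x_cf_gt)
```

The point of contention is simply which pair of arrays is compared: the first two only ever compare against the observed image, while the third compares against the true counterfactual when a ground-truth mechanism can generate it.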
I have some doubt regarding your VCI experiments, as the results shown in [2] (Figure 12) do not exhibit any drop in digit class fidelity, especially on intensity, which is arguably the easiest intervention. Yet you show a large drop of digit class fidelity specifically on intensity. I find that hard to believe, and I suspect something may be wrong with your classifier. Regardless, you can generate the counterfactual ground truth with morphoMNIST, and if you just show the counterfactual error on the full image along with effectiveness as I suggested, it will be super clear which method is the best. It is a comprehensive evaluation of both exogenous noise and class fidelity. If you only evaluate effectiveness / class fidelity, then yes, conditional VAE-based models such as conditional diffusion models are always going to have the advantage -- interventional inference is what the conditional VAE objective is set out to do.
---
**Issue 2**: Baseline model
While I believe the authors that VCI outperforms DiffSCM, I think the latter baseline is more essential to this paper, because it is very important to show comparison to a prior diffusion model in image counterfactuals / image editing as an alternative diffusion-based proposal. Readers would very much want to know in what aspects your proposed novelties made a difference compared to prior diffusion work. DDCM's main algorithm (Algorithm 1) has nothing to do with text or prompt engineering, but I will give you a pass as I do acknowledge the potential heavy workload in adaptation. While I don't believe your model can beat DDCM, I fully believe your model can beat DiffSCM pretty easily because I don't think DiffSCM was a coherent effort. However, DiffSCM was kind of a landmark work of applying diffusion models to counterfactual modeling, and it would be nice to show the comparison such that your paper would be more self-contained. After all, both yours and DiffSCM are non-theoretical papers, so regardless of how much better your high-level idea sounds compared to DiffSCM, empirical evidence is probably the only compelling evidence.
---
**Note**: I confirmed with AC that this year's ICML does not allow more than 2 rounds of interactions during the discussion phase, instead, reviewers can update their comments to add additional information. Therefore, I have posted my response to the Authors' **Reply Rebuttal Comment** below in the **Update** paragraph of my original review.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for increasing their score and for providing very valuable, concrete suggestions for further improving our paper. We have now added the requested comparisons and experiments which we hope address the remaining issues.
---
**Issue 1**
As requested, we have computed MSE/MAE between predictions and the "true" counterfactual for two Morpho-MNIST SCMs: (i, t)-only and (i, t, s, d). While the results confirm the competitiveness of our methods, we believe this is not the most meaningful method of evaluation. Pixel-wise metrics are fundamentally flawed when the target is not unique - there are multiple equally plausible counterfactuals that accurately obey interventions which may exhibit spatial ambiguity (such as rotations or translations). Even small spatial differences may induce large changes in pixel-wise metrics despite being unrelated to effectiveness. These metrics are also sensitive to sharpness and local artefacts. A poor but blurry counterfactual may yield low MSE [1,2,8]. Concerns about MSE for counterfactual error have been raised in the VCI reviews by reviewer QZf3 (see link: [https://openreview.net/forum?id=oeDcgVC7Xh&noteId=dkGjENbOX](https://openreview.net/forum?id=oeDcgVC7Xh&noteId=is9vzvhbQQ)).
MSE/MAE remain meaningful for composition and reversibility because the target is uniquely defined (i.e. the observed image). We would also like to stress that the soundness of counterfactuals cannot be assessed by any metric alone, so it is not a question of choosing one over the other. Hence, we report composition and reversibility in addition to effectiveness measured on intervened variables, following the common evaluation strategies in [3,4,5]. Please also refer to the theoretical results in [6,7] which support our evaluation approach.
Regarding the classifier, upon double-checking, we are confident that it works correctly (we provide implementation and performance in App. D2), and we use the same classifier to evaluate all models, which show good performance. The drop in digit class fidelity for VCI is related to the model performing poorly for low-intensity interventions, with generated images being almost empty. Please see do(i) visuals generated by the VCI codebase during training at 50 and 280 epochs: https://imgur.com/a/DVDLstb. We attribute this to instability during training, with frequent loss spikes, across all VCI mechanisms we trained: https://imgur.com/a/L31aody.
**(i,t)-mechanism**
| | do(t) - $L_1 (10^{-2}) / L_2 (10^{-3})$ | do(i) - $L_1 (10^{-2}) / L_2 (10^{-3})$ |
|-|-|-|
| VCI | 1.19/2.53 | 1.43/3.74 |
| Semantic ($p_\varnothing=0.1, \omega = 1$) | 1.89/4.29 | 1.28/3.33 |
**(i,t,s,d)-mechanism**
| | do(t) - $L_1 (10^{-2}) / L_2 (10^{-3})$ | do(i) - $L_1 (10^{-2}) / L_2 (10^{-3})$ | do(s) - $L_1 (10^{-2}) / L_2 (10^{-2})$ |
|-|-|-|-|
| VCI | 1.73/4.44 | 3.83/2.22 | 3.11/1.23 |
| Spatial ($p_\varnothing=0.5, \omega = 1.5$) | 1.70/4.85 | 1.04/2.42 | 3.31/1.39 |
| Semantic ($p_\varnothing=0.1, \omega = 1.5$) | 1.65/4.35 | 1.49/3.45 | 3.10/1.26 |
---
**Issue 2**
We would like to thank the reviewer for insisting on the comparison to DiffSCM. We agree this comparison adds value. As suggested, we have now added this comparison by taking DiffSCM’s original source code (which is limited to digit interventions), ensuring that its training hyperparameters follow those in their App E, Table 2, and generating counterfactuals with the range of guidance scales provided in their Fig 4. We then trained our digit-conditional spatial and semantic mechanisms to compare with DiffSCM (and VCI). As correctly predicted by the reviewer, our methods outperform DiffSCM.
| | Comp $L_1$ | Acc % | Rev $L_1$ |
|-|-|-|-|
| VCI | 2.05e-2 | 92.48 | 6.71e-2 |
| DiffSCM ($\omega = 3$) | 8.20e-3 | 17.02 | 2.62e-2 |
| Spatial ($p_\varnothing = 0.1, \omega = 1.5$) | 1.23e-2 | 99.63 | 5.12e-2 |
| Semantic ($p_\varnothing = 0.1, \omega = 1.5$) | 6.83e-3 | 97.46 | 5.05e-2 |
---
[1] Zhang et al. "The unreasonable effectiveness of deep features as a perceptual metric." CVPR 2018
[2] Wang & Bovik, "Mean squared error: Love it or leave it?" IEEE 2009 - Fig 2
[3] Hao et al. "Natural Counterfactuals With Necessary Backtracking." Neurips 2024
[4] Ribeiro et al. “High Fidelity Image Counterfactuals with Probabilistic Causal Models.” ICML 2023.
[5] Melistas et al. “Benchmarking Counterfactual Image Generation.” NeurIPS 2024 Track on Datasets and Benchmarks
[6] Monteiro et al. MEASURING AXIOMATIC SOUNDNESS OF COUNTERFACTUAL IMAGE MODELS. ICLR 2023
[7] Halpern, Axiomatizing Causal Reasoning. JAIR 2000 - Sec 3
[8] Wang et al, "Image quality assessment: from error visibility to structural similarity." IEEE 2004 | Summary: This paper explores diffusion models for counterfactual image generation by incorporating semantic abduction to enhance high-level semantic identity preservation causal consistency. The authors propose a structural causal model (SCM)-based framework that integrates diffusion models for counterfactual reasoning, leveraging spatial, semantic, and dynamic abduction mechanisms. The paper introduces amortized anti-causal guidance to improve intervention fidelity and evaluates the approach using counterfactual soundness metrics across datasets, including Morpho-MNIST, CelebA-HQ, and EMBED.
## update after rebuttal
Thank the authors for their rebuttal. Some of my concerns have been addressed. The experimental section could be further improved by offering a more comprehensive comparison with strong baseline models. Additionally, it would have been beneficial to see results on a broader range of real-world datasets, which would better highlight the practical value of the approach to the vision community. I maintain my original score of weak accept.
Claims And Evidence: Most claims are well-supported with experimental results and theoretical justification.
However, claims related to scalability and complexity may need further validation.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem, but the benchmark datasets are relatively simple, and additional evaluations on more complex, real-world datasets would strengthen the work.
Theoretical Claims: The paper presents theoretical justifications for integrating semantic abduction and counterfactual trajectory alignment (CTA) within diffusion models, particularly in the context of structural causal models (SCMs). The high-level formulation appears sound.
Experimental Designs Or Analyses: The overall experimental setup is well-designed, including multiple real datasets and quantitative metrics.
Further improvements could include larger-scale, more complex datasets and some human perceptual evaluation.
Supplementary Material: Yes, I have reviewed the appendix.
Relation To Broader Scientific Literature: The key contributions of this paper build on prior work in counterfactual generation, causal inference, and diffusion models, integrating these areas in a novel way.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The approach has promising applications in medical imaging and causal reasoning tasks.
Other Comments Or Suggestions: n/a
Questions For Authors: - What is the computational cost of the method and have you considered efficiency optimizations to make diffusion-based counterfactual generation more practical?
- How does your method compare to causal disentanglement approaches in generative modeling?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments regarding our framework’s “promising applications in medical imaging and causal reasoning tasks” with “well-designed” experiments and our integration of existing concepts in a “novel way”.
> **Scalability and Computational Costs**
Like most diffusion-based approaches, our method incurs high computational costs at generation time. We acknowledge that techniques such as distillation [1], consistency models [2], and higher-order solvers for sampling [3] can improve efficiency; however, these optimisations are beyond the scope of our current work. We focus on exploring the trade-offs involved in using diffusion models for counterfactual generation, thereby setting the stage for future work addressing efficiency concerns.
Notably, our approach to dynamic semantic abduction via CTA leverages causally conditioned diffusion models to improve identity preservation with a single-step optimisation. This contrasts with methods that rely on less robust, image-specific test-time optimisations - such as expensive multi-step optimisations or heuristics about self and cross-attention maps - often requiring extensive fine-tuning for each image [4, 5, 6, 7]. Our single-step method simplifies and enables rapid testing on larger datasets with a simple model selection process (Fig 5). Dynamic semantic abduction via CTA improves identity preservation of backgrounds, hairstyles, and skin colours (see Fig 3c and App H - Fig 16).
Furthermore, using the latent diffusion model paradigm, our approach can be readily scaled to higher-dimensional images [8]. Our work performs comparably to existing variational methods like HVAEs [9] and VDVAEs [10], which can be challenging to scale to deeper architectures for modelling complex images due to the need to compute and match posteriors [11, 12]. In contrast, the diffusion paradigm uses a fixed posterior, offering a more computationally scalable alternative. We can provide further exposition about this in sec 3.1 for clarity.
> **Comparison to Causal Disentanglement Approaches in Generative Modelling**
In response to reviewer kmep, we provide additional results on VCI [13], whose results improve upon DEAR [14], a popular causal disentanglement generative modelling approach. Our results show that our methods improve effectiveness in the scenario in Fig. 2c, and in a simpler causal modelling problem involving only thickness and intensity, whilst reversibility decreases as guidance scale increases due to trajectory deviation, as described in Sec 3.3. We also focus on methods where an SCM can be assumed instead of methods that infer the assumed causal graph but may struggle with scaling to larger parent sets [16]. We intend to provide further exposition about the challenges associated with defining an SCM in our limitations, and future work could explore incorporating causal discovery frameworks akin to [17].
When compared to CausalDiffAE [17], we condition on $c_{sem} = (z_{sem}, pa)$, whereas they condition solely on $z_{sem}$, which also encodes the parent SCM in the style of [16]. Unfortunately, we could not reproduce the results in CausalDiffAE for the main paper or during the rebuttal period; instead, we have included another baseline from the aforementioned work on VCI [13] in response to reviewer kmep. On inspection of Fig 2a in CausalDiffAE, we believe that their framework may perform abduction poorly when all parents are unobserved, as in many real-world scenarios, given their poor preservation of digit style.
[1] https://arxiv.org/abs/2303.01469
[2] https://arxiv.org/abs/2202.00512
[3] https://arxiv.org/abs/2206.00927
[4] https://arxiv.org/abs/2211.09794
[5] https://arxiv.org/abs/2309.15664
[6] https://arxiv.org/abs/2405.01496v1
[7] https://arxiv.org/abs/2212.12570
[8] https://arxiv.org/abs/2112.10752
[9] https://arxiv.org/abs/2306.15764
[10] https://arxiv.org/abs/2303.01274
[11] https://arxiv.org/abs/2401.06281
[12] https://arxiv.org/abs/2208.11970
[13] https://arxiv.org/abs/2410.12730
[14] https://www.jmlr.org/papers/volume23/21-0080/21-0080.pdf
[15] https://arxiv.org/abs/2004.08697
[16] https://arxiv.org/abs/2004.08697
[17] https://arxiv.org/abs/2404.17735 | Summary: This paper studies the image counterfactual generation problem using diffusion models. Specifically, the authors propose a suite of deep causal mechanisms, spatial mechanism, semantic mechanism, and anti-causal mechanism, for tractable counterfactual generation with respect to composition, reversibility, and effectiveness. The authors also propose a dynamic classifier-free training approach to study the trade-off between composition and effectiveness of generated counterfactuals. Experiments on three datasets (one synthetic and two real-world) are performed to evaluate the soundness of generated image counterfactuals.
## Update After Rebuttal
I appreciate the authors taking the time to address my questions and concerns. I believe this work has some limitations; however, after the rebuttal period, I believe the authors did a good job responding to all the reviewers' concerns. I will keep my score at 3 and lean towards **acceptance** of this paper. However, it would be good if the authors made their contribution clearer in the writing and distinguished it clearly from different approaches (DCM, CausalDiffAE, DiffSCM). Since there are only a few diffusion counterfactual baselines and the authors have included some results from DiffSCM (in response to Reviewer kmep), I am satisfied with the comparisons. Ideally there would be comparisons with more diffusion counterfactual baselines. I will note that DiffSCM is not completely related to the paradigm proposed here. Although DiffSCM proposes a general formulation for arbitrary causal graphs, it only applies to classifier-based guidance (e.g., image and its label) in practice. Therefore, I do not think this baseline is all that much related to methods such as this paper, DCM, and CausalDiffAE which explicitly focus on modeling the causal structure.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental design and analysis. Specifically, the use of the theoretically grounded compositionality, reversibility, and effectiveness metrics have been shown to be good axiomatic metrics to evaluate the soundness of generated counterfactuals. Furthermore, I checked the visual quality of generated counterfactuals for all datasets. Overall, the evaluation is sound, extensive, and holistic.
Supplementary Material: Yes, I reviewed all of the additional experimental results provided in the supplementary material, especially Appendix E-I.
Relation To Broader Scientific Literature: This paper provides a general framework and formalization for counterfactual generation using diffusion models. Specifically, the paper extends the general structure of Pawlowski et al and Ribeiro et al to provide a suite of deep causal mechanisms for tractable counterfactual generation. The causal mechanisms serve as formal generalizations of previous diffusion counterfactual methods, namely DCM (Chao et al), CausalDiffAE (Komanduri et al), and DiffSCM (Sanchez et al).
Pawlowski et al. Deep Structural Causal Models for Tractable Counterfactual Inference. NeurIPS 2020.
Ribeiro et al. High Fidelity Image Counterfactuals with Probabilistic Causal Models. ICML 2023.
Chao et al. Modeling Causal Mechanisms with Diffusion Models for Interventional and Counterfactual Queries. TMLR 2024.
Komanduri et al. Causal Diffusion Autoencoders: Toward Counterfactual Generation via Diffusion Probabilistic Models. ECAI 2024.
Sanchez et al. Diffusion Causal Models for Counterfactual Estimation. CLeaR 2022.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
- The paper is written exceptionally well with clear intuitions and formulations contextualizing diffusion-based counterfactual generation with respect to composition, reversibility, and effectiveness, which are fundamental notions in counterfactual inference.
- The experiments are extensive and show that the proposed abduction mechanisms are quite effective in counterfactual generation on several different counterfactual soundness metrics. Furthermore, the authors experiment on facial image data and a breast cancer image dataset, which further underscores the application of the method in real-world domains.
## Weaknesses
- The proposed mechanisms seem to be generalizations of existing methods. For instance, the spatial mechanism is essentially a generalized version of DCM (Chao et al.), the semantic mechanism is a generalized version of DiffAE (Preechakul et al.) and CausalDiffAE (Komanduri et al.), and the anti-causal mechanism is a generalization of DiffSCM (Sanchez et al.). Although the comprehensive formulation is important, the main contribution is not clear.
- The authors take a causal interpretation of classifier-free guidance to show that the masking probability of the semantic conditioning acts as a trade-off between composition and effectiveness. However, Komanduri et al also take a very similar causal interpretation of classifier-free guidance. There does not seem to be a substantial difference between the two interpretations.
Other Comments Or Suggestions: N/A
Questions For Authors: - Could the authors provide clarifications on the differences between the proposed mechanisms and related work as pointed out in the weakness section?
- Could the authors explain the difference between their causal interpretation of classifier-free guidance in Eq (16) and the one proposed by CausalDiffAE (Komanduri et al)?
- What is the motivation behind learning the $\phi$ token for dynamic classifier-free guidance? Traditionally, one would just use a zero mask and jointly train a conditional and unconditional diffusion model.
- How does this framework translate to text-to-image diffusion models? For existing pretrained models, such as Stable Diffusion, it can be unrealistic to retrain them using this sort of paradigm. How would one perform high-fidelity counterfactual image generation using these pretrained models?
Komanduri et al. Causal Diffusion Autoencoders: Toward Counterfactual Generation via Diffusion Probabilistic Models. ECAI 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and are pleased that the paper was found to be “written exceptionally well with clear intuitions” and that the experiments “are extensive and show that the proposed abduction mechanisms are quite effective”. Below we address weaknesses and questions:
> **“Although the comprehensive formulation is important, the main contribution is not clear”...**
Concretely, our work distinguishes itself from existing approaches by:
- Spatial Abduction: We use causally conditioned DDIM inversion rather than the unconditional forward process [1,2,3], such that spatial noise can help preserve image structure [4]. We go beyond DiffSCM by incorporating multiple causal conditions (discrete or continuous), and beyond DCM by demonstrating our mechanisms on complex vision datasets. We also adopt a new SCM for MorphoMNIST, including a challenging relationship between slant and digit.
- Semantic Abduction: Our semantic mechanisms condition diffusion models on a semantic encoding and causal conditions from the parent’s SCM, whereas CausalDiffAE [2] incorporates [5], which is widely acknowledged to be sensitive to a large number of parents and susceptible to unstable training [6,7], into diffusion models, diminishing the benefits of the stable training objective of diffusion models. DiffAE does not provide a method for amortised guidance. Our analysis goes significantly beyond CausalDiffAE and DiffAE, exploring how our amortised guidance procedure trades off effectiveness for identity preservation.
- Dynamic Abduction: We introduce a novel, general, dynamic semantic abduction framework, which we implement via CTA, that learns image-specific guidance tokens during the abduction step in counterfactual inference. These tokens enhance the retention of attributes that cannot be trivially measured and included as causal conditions via the SCM, such as background, hairstyle, and skin colour (Fig 3c, App H - Fig 16).
Our formulation provides a unifying perspective between image editing using pre-trained diffusion models and counterfactual image generation methods. As such, we set the stage for future work on model identifiability and incorporating techniques from LDM-based methods to improve dynamic abduction, which has been a primarily empirical field. Additionally, by carefully considering which variables should be regarded as exogenous or observed within our formalism, we can evaluate counterfactuals via metrics grounded by intuitive soundness axioms and more formally explain tradeoffs between intervention-faithfulness (effectiveness) and identity preservation (composition) when using diffusion models for counterfactual inference or image editing.
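To make the spatial mechanism concrete, the following is a minimal sketch of deterministic DDIM inversion with causal conditioning; the noise-predictor interface, the toy step schedule, and all names are illustrative assumptions rather than the actual model:

```python
import numpy as np

def ddim_invert(x0, eps_model, alphas_bar, cond):
    """Deterministic DDIM inversion: run the DDIM update towards higher
    noise levels, conditioning the noise predictor on the causal parents
    `cond` at every step, to abduct a spatial latent x_T from x0."""
    x = x0.copy()
    a_prev = 1.0  # alpha_bar at t = 0 (clean image)
    for a in alphas_bar:  # alpha_bar decreasing, i.e. increasing noise
        eps = eps_model(x, cond)
        # Predict the clean image at the current noise level,
        # then re-noise it to the next (higher) noise level.
        x0_pred = (x - np.sqrt(1.0 - a_prev) * eps) / np.sqrt(a_prev)
        x = np.sqrt(a) * x0_pred + np.sqrt(1.0 - a) * eps
        a_prev = a
    return x

# Sanity check: with a zero noise predictor, inversion only rescales
# the image by sqrt of the final alpha_bar.
x0 = np.ones((4, 4))
x_T = ddim_invert(x0, lambda x, c: np.zeros_like(x), [0.9, 0.5], cond=None)
```

Running the same update in the opposite direction (decreasing noise levels) with the same conditional noise predictor recovers the image, which is what makes the abducted latent usable for counterfactual generation.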
> **Difference between their causal interpretation of classifier-free guidance in Eq (16) and the one proposed by CausalDiffAE**
We condition on $c_{sem} = (z_{sem}, pa)$, whereas they condition solely on $z_{sem}$ which also encodes the parent SCM in the style of [5].
> **What is the motivation behind learning the ϕ token for dynamic classifier-free guidance?**
As stated in sec 3.3, we train our guided mechanisms in the standard way [8]. We only use CTA to perform dynamic semantic abduction during counterfactual inference. Given that the CTA optimisation is performed w.r.t composition, the guidance tokens can learn additional semantic information not captured by $z_{sem}$ or $pa$, thereby improving the preservation of hair colour, skin tone and backgrounds in counterfactuals (Fig 3c, App H - Fig 16).
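Schematically, classifier-free guidance with a learnable guidance token, plus a single closed-form gradient step on that token (loosely mirroring the single-step dynamic abduction described above), might look as follows; the linear stand-in denoiser and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def eps_model(x_t, token):
    """Stand-in denoiser: a fixed linear map plus a token-dependent bias.
    Purely illustrative; not a trained network."""
    return 0.9 * x_t + token

def guided_eps(x_t, cond_token, null_token, omega):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction towards the conditional one with scale omega."""
    e_cond = eps_model(x_t, cond_token)
    e_null = eps_model(x_t, null_token)
    return (1.0 + omega) * e_cond - omega * e_null

def token_step(x_t, target_eps, cond_token, null_token, omega, lr=0.1):
    """One gradient step on the guidance token so the guided prediction
    moves towards a target noise estimate. Because eps_model is linear
    in the token, the gradient of the MSE loss is available in closed
    form: dL/d(null_token) = -2 * omega * (guided - target) / N."""
    g = guided_eps(x_t, cond_token, null_token, omega)
    grad = -2.0 * omega * (g - target_eps) / g.size
    return null_token - lr * grad
```

A single such step shrinks the gap between the guided prediction and the target, which is the sense in which an image-specific token can absorb semantic information not captured by the fixed conditions.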
> **How does this framework translate to text-to-image diffusion models?**
Our general dynamic abduction procedure (eq 21-22) is analogous to many test-time optimisations for LDM-based image editing [4, 9, 10, 11], with the fundamental difference that fine-grained causal control can be challenging via text conditioning alone [12]. For an LDM, dynamic abduction using the spatial mechanism with amortised guidance (eq 19) can be formulated trivially within our framework. Natural extensions of our work involve replacing CTA with the test-time optimisation techniques cited above. Further, we demonstrate that shorter DDIM strides with single-step guidance token updates at each timestep improve identity preservation over semantic abduction. In contrast, many LDM-based methods opt for longer DDIM strides with computationally expensive multi-step guidance token optimisations.
[1] https://arxiv.org/abs/2210.11841
[2] https://arxiv.org/abs/2202.10166
[3] https://arxiv.org/abs/2404.17735
[4] https://arxiv.org/abs/2211.09794
[5] https://arxiv.org/abs/2004.08697
[6] https://ieeexplore.ieee.org/document/10021114
[7] https://arxiv.org/abs/2411.19556
[8] https://arxiv.org/abs/2207.12598
[9] https://arxiv.org/abs/2309.15664
[10] https://arxiv.org/abs/2405.01496v1
[11] https://arxiv.org/abs/2403.02981
[12] https://arxiv.org/abs/2212.12570
[13] https://arxiv.org/abs/2410.12730
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors providing clarifications to my questions and concerns. Currently, the core contribution of this work is still a bit unclear to me. I understand the suite of mechanisms provided. However, they really seem to be generalizations of existing approaches (e.g., DCM [1], CausalDiffAE [2], DiffSCM [3]). For the semantic abduction mechanism, conditioning on $c_{sem}=(z_{sem}, pa)$ does not explicitly encode any causal mechanisms. For instance, if we consider DeepSCM [4], we can see that the causal relationship between variables is explicitly modeled via a normalizing flow. In my view, it seems that $c_{sem}$ should be a compact representation that explicitly embeds causal relationships (similar to the role $z_{causal}$ plays in CausalDiffAE).
The following is my rationale for keeping my score at a 3. I believe this paper is quite well written and provides a generalized framework for diffusion-based counterfactual modeling. Furthermore, I believe the experiments are extensive and serve as interesting case studies into the utility of each of the mechanisms with robust metric evaluations. That said, the reason I am not inclined to give anything higher than a 3 is because I believe the fundamental contribution is weak compared to existing diffusion-based counterfactual generation work (e.g., DCM [1], CausalDiffAE [2], DiffSCM [3]) as well as prior counterfactual generation work in the VAE setting (e.g., DeepSCM [4], CausalHVAE [5]).
[1] Chao et al. Modeling Causal Mechanisms with Diffusion Models for Interventional and Counterfactual Queries. TMLR 2024.
[2] Komanduri et al. Causal diffusion autoencoders: towards counterfactual generation via diffusion probabilistic models. ECAI 2024.
[3] Sanchez et al. Diffusion Causal Models for Counterfactual Estimation. CLeAR 2022.
[4] Pawlowski et al. Deep Structural Causal Models for Tractable Counterfactual Inference. NeurIPS 2020.
[5] Ribeiro et al. High Fidelity Image Counterfactuals with Probabilistic Causal Models. ICML 2023.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to read our response.
Regarding contributions, we would like to highlight that the existing diffusion-based models performed poorly in our experiments, and we believe that our work marks a fundamental advancement in generating high-quality counterfactuals with diffusion models. We tackle key limitations of previous work: CausalDiffAE [2] clearly does not preserve identity in their Fig 2a; DiffSCM [3] is limited to a single discrete parent and admits poor effectiveness and image fidelity in their Fig 3b (see our 2nd response to reviewer kemp); and DCM [1] shows no evidence of being directly applicable to complex imaging datasets. We believe the poor identity preservation in CausalDiffAE is due to $z_{causal}$ encoding both semantic identity and causal parents, so an intervention affects identity. By keeping $z$ separate from $pa$, we are able to better preserve identity with our semantic mechanism. The strength of our method over previous diffusion-based work is further confirmed by the additional experiments we carried out as part of the rebuttal. We also outperform existing VAE-based models [4,5,6].
In terms of methodological contribution, please note that in addition to the spatial and semantic methods you mention, we also present dynamic abduction, an entirely novel method that has not been discussed previously and is highly effective for preserving backgrounds, hairstyles, and skin colours (Fig 3c + App H - Fig 16). Current VAE-based methods struggle with these.
[1] Chao et al. Modeling Causal Mechanisms with Diffusion Models for Interventional and Counterfactual Queries. TMLR 2024.
[2] Komanduri et al. Causal diffusion autoencoders: towards counterfactual generation via diffusion probabilistic models. ECAI 2024. - Fig 2a
[3] Sanchez et al. Diffusion Causal Models for Counterfactual Estimation. CLeAR 2022. - Sec. 3.4 & 5
[4] Pawlowski et al. Deep Structural Causal Models for Tractable Counterfactual Inference. NeurIPS 2020.
[5] Ribeiro et al. High Fidelity Image Counterfactuals with Probabilistic Causal Models. ICML 2023.
[6] Wu et al. Counterfactual Generative Modeling with Variational Causal Inference. ICLR 2025 | Summary: The paper “Diffusion Counterfactual Generation with Semantic Abduction” explores the use of diffusion models for counterfactual image generation, a task that requires maintaining identity, visual fidelity, and causal consistency. The authors argue that while diffusion models have achieved state-of-the-art synthesis quality, they lack structured semantic control for counterfactual reasoning. To address this, the paper introduces a new framework that integrates diffusion models with structural causal models (SCMs), leveraging semantic abduction to improve identity preservation in generated counterfactuals. The proposed approach introduces mechanisms such as spatial, semantic, and dynamic abduction to refine causal interventions while maintaining perceptual consistency. The study evaluates its methods across synthetic and real-world datasets, including face images and medical imagery, demonstrating improvements over existing generative approaches like VAEs and GANs. However, challenges such as computational efficiency, dataset limitations, and reliance on human-defined causal graphs are acknowledged. The work presents an important step towards using generative models for more controlled, interpretable, and robust counterfactual reasoning.
Claims And Evidence: The paper’s core claims—that diffusion models generate high-quality counterfactuals and semantic abduction improves identity preservation—are well-supported through experiments and ablation studies. However, it overstates the ease of defining causal structures, as real-world causal relationships are often ambiguous. Additionally, while results on faces and medical images are promising, generalization to more complex, diverse datasets is not fully explored. More testing on real-world counterfactual scenarios would strengthen its conclusions.
Methods And Evaluation Criteria: The proposed method—integrating diffusion models with causal reasoning—is a logical and promising approach for counterfactual image generation, as diffusion models excel at high-quality synthesis while causal inference ensures meaningful interventions. The use of semantic abduction to refine counterfactual edits also makes sense, as it helps preserve identity and realism.
However, the evaluation criteria have some limitations. While the paper uses perceptual similarity metrics, identity preservation scores, and qualitative comparisons, it lacks human evaluation to assess the realism and plausibility of generated counterfactuals. Additionally, the datasets (faces and medical images), though relevant, are relatively small and lack diversity, making it unclear how well the method generalizes to more complex, real-world counterfactual reasoning tasks. More robust benchmarking on larger and varied datasets would improve the evaluation.
Theoretical Claims: The paper does not contain formal, theorem-based proofs that require rigorous correctness checks. Instead, it presents a theoretical framework for integrating diffusion models with causal inference, specifically through semantic abduction.
Experimental Designs Or Analyses: Yes, I reviewed the validity of the experimental design and analysis. The experimental design is well-structured and supports key claims, but the lack of large-scale, diverse datasets and human validation weakens its generalizability. More real-world testing and sensitivity analysis on causal assumptions would improve robustness.
Supplementary Material: I reviewed all parts of Supp.
Relation To Broader Scientific Literature: This paper builds on diffusion models for high-quality image synthesis (Ho et al., 2020) and extends them to counterfactual generation, an area previously dominated by GANs and VAEs (Goyal et al., 2020). Unlike prior methods, it integrates Structural Causal Models (SCMs) to enforce causal consistency, bridging generative modeling with causal inference. The introduction of semantic abduction refines counterfactual edits, improving identity preservation and realism beyond standard interventions.
Essential References Not Discussed: The related works are well discussed.
Other Strengths And Weaknesses: Please refer to previous section
Other Comments Or Suggestions: Please refer to previous section
Questions For Authors: Please refer to previous section
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for recognising the significance of our work. We're encouraged by reviewers acknowledging our framework as "an important step towards using generative models for more controlled, interpretable, and robust counterfactual reasoning". We also appreciate the positive remarks regarding our integration of diffusion models with counterfactual inference, the novelty of semantic abduction, and the strength of our experimental design and ablation studies. Additionally, we want to highlight further the novelty, generality, and effectiveness of our dynamic abduction framework, which also offers opportunities to integrate alternative test-time optimisations from LDM-based image editing into structured counterfactual inference beyond CTA.
> **"It lacks human evaluation to assess the realism and plausibility of generated counterfactuals."**
We agree that incorporating human evaluation to assess the plausibility and realism of generated counterfactuals would add value to our analysis. In particular, expert assessment by radiologists for EMBED and experienced annotators for CelebAHQ could provide additional insights beyond our chosen perceptual metrics. However, such user studies are time-consuming and challenging to set up and not feasible as part of the rebuttal. We acknowledge this limitation and plan to include it as a key direction for future work in our discussion section.
> **"However, it overstates the ease of defining causal structures, as real-world causal relationships are often ambiguous."**
We recognise that our manuscript may have understated the challenges involved in specifying causal structures, especially in real-world applications. We will expand the limitations section to acknowledge the ambiguity in causal discovery, the assumptions required to define SCMs, and the fact that our results are contingent on the accuracy of our SCM. Note that if assumptions in an SCM are shown to be incorrect, the transparency of our framework means that developers can redesign their SCM, and either retrain our mechanisms under new assumptions or perform more faithful simulated interventions.
For MorphoMNIST, we leverage the known data-generating process within our DSCM directly for the parents of the image. In the CelebAHQ experiments, we adopt a simplified SCM involving attributes such as smiling and eyeglasses with mild label noise, which can be reliably predicted from observed images using pseudo-oracles, consistent with prior work (see Appendix F, [1]). For the EMBED dataset, our SCM is derived from clinical insights presented in [2] and validated by radiologists.
We also note that discovering causal structure is an active and open research area [3,4], and future work could focus on jointly learning causal structure and counterfactual generation, taking inspiration from the theoretical discussion in Appendix C in [5].
> **"Generalization to more complex, diverse datasets is not fully explored. More testing on real-world counterfactual scenarios would strengthen its conclusions."**
We appreciate the reviewer’s concern regarding generalisability. While we agree that broader testing is important, we would like to highlight that the EMBED artefact removal task addresses a clinically important and underexplored problem in a real-world medical imaging scenario. The EMBED variant proposed by [2] contains many patients with significant variations in tissue density and anatomical structure. We believe this makes EMBED a suitably complex testbed for evaluating the robustness of our counterfactual generation methods, directly addressing a real-world challenge of breast cancer detection in screening mammography. Compared to many existing works, we felt that including this real-world medical imaging application sets our work apart from the many ML papers that primarily test on readily available benchmarks.
Furthermore, we have obtained improved results on EMBED since the initial submission, achieving over 90% accuracy in triangular artefact removal. These updated findings will be included in the paper's final version to support our semantic mechanisms' efficacy further.
[1] https://arxiv.org/abs/2303.01274
[2] https://arxiv.org/abs/2410.03809
[3] https://arxiv.org/abs/2004.08697
[4] https://arxiv.org/abs/2307.05704
[5] https://arxiv.org/abs/2404.17735 | null | null | null | null | null | null |
Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs | Accept (poster) | Summary: The paper "Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs" introduces POST (Privacy Of Soft-prompt Transfer), a framework that enables efficient and privacy-preserving soft prompt tuning and transfer for Large Language Models (LLMs).
Key Contributions:
Privacy-Preserving Soft Prompt Tuning: POST allows users to tune soft prompts locally on a small, distilled model instead of directly on a large LLM, avoiding the need to share private data with external providers.
Knowledge Distillation for Transferability: The LLM provider distills a small version of their model, which the user uses for tuning. The tuned prompt is then transferred back to the large LLM using only a small public dataset.
Differential Privacy Integration: POST incorporates differential privacy (DP) to ensure that sensitive user data remains protected even during prompt tuning.
Efficient and Effective Transfer: The method significantly reduces computational costs while maintaining strong task performance, outperforming existing soft prompt transfer approaches.
Experiments & Findings:
Evaluations on multiple LLM architectures (RoBERTa, GPT2-XL, LLaMA2-7B) show that POST achieves high-utility prompt transfer while preserving privacy.
It significantly reduces computational overhead compared to direct prompt tuning on large models.
Claims And Evidence: Yes, most claims are well-supported by empirical evidence, though some could benefit from clearer acknowledgment of limitations.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem. The paper uses knowledge distillation, differential privacy, and public dataset-based transfer, which align well with the goal of efficient and privacy-preserving prompt tuning. The evaluation includes multiple LLMs (RoBERTa, GPT2-XL, LLaMA2-7B) and diverse classification and generation tasks, making the results robust. However, the impact of public dataset choice on privacy and transferability could be further analyzed.
Theoretical Claims: The paper does not have any theoretical claims.
Experimental Designs Or Analyses: Yes, the experimental design is generally sound. The authors use multiple LLMs, diverse classification and generation datasets, and compare POST against relevant baselines (Zero-Shot Transfer, DP-OPT). The LiRA attack for privacy evaluation and runtime analysis for efficiency are well-executed.
Supplementary Material: I reviewed the supplementary material, including Appendix B (Dataset details), Appendix C (Runtime analysis, baseline methods), and Appendix D (Ablation studies on KD, prompt length, transfer loss, and public dataset selection). The additional experiments and ablations provide useful insights, particularly on knowledge distillation, prompt length, and public dataset impact. No major issues were found, but a more detailed discussion on public dataset selection risks could improve clarity.
Relation To Broader Scientific Literature: The paper's contributions relate to broader literature in several key ways:
Soft Prompt Tuning & Transfer: It builds on prior work on soft prompt tuning (Shin et al., 2020; Lester et al., 2021) and addresses the challenge of cross-model prompt transfer, improving on methods that require private data sharing (Su et al., 2022) or suffer from poor performance (Wu et al., 2023).
Privacy-Preserving Adaptation: Unlike prior differential privacy (DP) methods that protect model outputs (Duan et al., 2023), POST ensures privacy during prompt tuning without leaking private data.
Knowledge Distillation for Transfer: It repurposes knowledge distillation (Hinton et al., 2015; Sanh et al., 2019) to enhance soft prompt transfer, rather than just compress models.
Efficiency & Benchmarking: The framework reduces computational costs compared to full fine-tuning and outperforms existing private prompt transfer methods (Hong et al., 2023) across multiple tasks.
Essential References Not Discussed: The paper provides a thorough review of related work, covering soft prompt tuning, knowledge distillation, and privacy-preserving techniques. It cites key contributions in these areas, including prior work on soft prompt transfer (Su et al., 2022; Wu et al., 2023), differential privacy (Dwork et al., 2006; Duan et al., 2023), and knowledge distillation (Hinton et al., 2015; Sanh et al., 2019).
Other Strengths And Weaknesses: Strengths:
1) Innovative Transfer Method: The paper proposes a novel framework, POST, which introduces a structured approach to soft prompt transfer via knowledge distillation. This is a significant technical improvement over prior works that either required private data access or suffered from severe transfer degradation.
2) Efficient Soft Prompt Tuning: By using a distilled smaller model for local tuning, POST significantly reduces the computational cost of optimizing prompts compared to full LLM fine-tuning.
3) Strong Empirical Support: The framework is rigorously tested across multiple model architectures (RoBERTa, GPT-2 XL, LLaMA-2 7B) and five classification plus two open-ended generation tasks, showing strong transfer performance while preserving privacy.
Weaknesses:
1) Limited Theoretical Analysis of Transferability: While the empirical results support the effectiveness of POST, the paper lacks a theoretical discussion on why soft prompts trained on a distilled model remain transferable. The connection between knowledge distillation’s preservation of model decision boundaries and its impact on prompt representation is not analyzed in depth. Including theoretical insights or at least empirical ablations on feature space alignment between the small and large models would strengthen the argument.
2) Prompt Transfer Loss Function Justification: The prompt transfer objective consists of two KL-divergence terms (Equation 3), but there is no principled justification for the chosen balance parameter α. The weighting scheme appears to be chosen heuristically based on downstream task performance. A more rigorous exploration of how α interacts with factors such as the student model's generalization error and the zero-shot capabilities of the target LLM is needed.
3) Prompt Transfer Loss Function Is Not Well-Justified: The prompt transfer objective (Equation 3) consists of two KL-divergence terms, but the balance between them is controlled by an empirically chosen hyperparameter α. There is no discussion on how to select α beyond empirical tuning, nor any exploration of its effect on convergence or transfer quality.
Other Comments Or Suggestions: Section 4.3 (Privacy-Preserving Prompt Transfer): The explanation of the loss function (Equation 3) would be clearer with a brief discussion on the intuition behind the two KL-divergence terms and why they are necessary for effective transfer.
Questions For Authors: 1) How does the choice of public dataset affect the quality of prompt transfer?
a) The paper mentions that a small public dataset is used for transferring the prompt but does not provide a systematic analysis of how dataset properties (e.g., domain similarity, label space overlap, dataset size) impact transfer success.
b) Could the method fail if the public dataset is too different from the private task dataset? Have you experimented with mismatched public datasets?
2) Why does the KL-divergence transfer loss (Equation 3) use a fixed weighting factor α, and how was it selected?
a) The transfer objective combines two KL-divergence terms with a fixed α, but there is no theoretical justification for how α should be set.
b) Have you explored dynamic or task-adaptive weighting schemes for α? Could tuning α differently for various LLMs improve performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive feedback! We address the main points one by one below:
>**A more detailed discussion on public dataset selection risks**
We investigate the effect of the choice of public data in Table 15 in Appendix (section D.1), where we consider both public datasets from similar task domains as well as those from different task domains compared to the private dataset. Our experimental results show that selecting a public dataset with a similar task to the private dataset is helpful for effective prompt transfer.
To limit the privacy leakage and obtain good performance, we showed that only the target task should be considered when selecting the public dataset. In Table 15 (Appendix D.1), we find that the best transfer performance can be achieved with public datasets from the same task family, even if the datasets are not very similar and have, e.g., different numbers of classes. Our framework assumes that the model provider has knowledge of only the task domain, not the user’s actual data. This is a reasonable assumption, as task metadata is often available in practical settings.
For instance, cloud providers and ML platforms (e.g., Google Vertex AI, OpenAI’s API) typically encourage users to specify the task type or provide task demonstrations (i.e., few-shot learning) when fine-tuning models to optimize performance [A]. Similarly, many AI service providers, such as NVIDIA’s NeMo framework, offer task-specific APIs explicitly designed for applications like chatbots, summarization, and generation, which inherently expose task-level information [B].
>**Why do soft prompts trained on a distilled model remain transferable? Theoretical insights or at least empirical ablations on feature space alignment between the small and large models**
We conducted an ablation study to investigate the impact of feature alignment. We use a model with the same architecture and initial parameters as our distilled 2-layer LLaMA. Instead of applying knowledge distillation on the BookCorpus dataset, we directly fine-tune this model on BookCorpus to achieve comparable performance to the distilled smaller model. This model has no feature space alignment with the large model. We report this model’s transfer performance.
| Dataset | PT on Distilled model | PT on non-distilled model | Transfer with the distilled model | Transfer with non-distilled model |
|-|-|-|-|-|
| SST2| 78.78 | 78.24 | 90.02| 85.38|
| IMDB| 79.95 | 79.40 | 87.29| 82.45|
| tweet | 54.12| 54.79| 61.15 | 45.65 |
| arisetv | 77.92| 72.07| 86.71 | 70.65 |
| mpqa | 83.82 | 84.16 | 87.37 | 81.52|
Transfer results for the distilled model are always higher. This indicates that using knowledge distillation to preserve alignment between the small and large models is essential in our designed prompt transfer procedure.
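For readers unfamiliar with the distillation term referenced here, the classic soft-label objective of Hinton et al. (2015) can be sketched as follows. This is a generic illustration under standard assumptions, not the authors' exact training recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(student_logits, teacher_logits, temperature=2.0, eps=1e-12):
    """Soft-label distillation term: temperature-scaled KL(teacher || student).
    Generic sketch of the Hinton et al. (2015) objective, not the paper's
    exact recipe. The T^2 factor keeps gradient magnitudes comparable
    across temperatures."""
    p_t = [p + eps for p in softmax(teacher_logits, temperature)]
    p_s = [p + eps for p in softmax(student_logits, temperature)]
    return temperature ** 2 * sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
```

The loss is zero when the student matches the teacher exactly and grows as their output distributions diverge, which is what "preserving alignment" between the small and large models amounts to here.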
>**How to select $\alpha$ in the prompt transfer loss (Equation 3)? Have you explored dynamic or task-adaptive weighting schemes for $\alpha$? Could tuning $\alpha$ differently for various LLMs improve performance?**
We conducted a thorough ablation study on $\alpha$ (see Table 19), evaluating transfer performance across $\alpha$ values ranging from 0.0 to 1.0. The results indicate that multiple $\alpha$ values lead to good transfer performance, indicating the robustness of our POST to the choice of $\alpha$.
Additionally, we clarify the intuition behind the transfer loss design in Appendix D.5 and propose a heuristic method (Equation 6) for finding an optimal $\alpha$, which supports tuning $\alpha$ differently for different LLMs to improve performance in practice.
>**How dataset properties (e.g., domain similarity, label space overlap, dataset size) impact transfer success?**
We investigated the influence of the domain similarity of the public dataset on the transfer performance (see Appendix D.1). The findings demonstrate that selecting a public dataset with a task domain similar to the private task is helpful for effective prompt transfer.
>**What if the public dataset is too different from the private task dataset?**
Table 15 presents the transfer performance using public datasets from different task families. It is observed that the best transfer performance can be achieved when using same-domain data, while datasets with different tasks can lead to performance degradation.
---
We provide the reference for all the responses.
**References:**
[A] OpenAI Fine-Tuning Guide – https://platform.openai.com/docs/guides/fine-tuning
[B] NVIDIA NeMo Framework – https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
[C] Duan, H., Dziedzic, A., Papernot, N., and Boenisch, F. “Flocks of stochastic parrots: Differentially private prompt learning for large language models.” NeurIPS 2023. | Summary: This paper addresses the problem of inefficiency and privacy risks in soft prompt tuning on LLM provider-hosted models. The authors therefore propose a prompt transfer framework called POST that first distills a smaller model to better match the teacher's behavior, then tunes a prompt on the small model using private data, and finally transfers the prompt to the larger model with a non-private dataset.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed three-step prompt transfer method makes sense for addressing data privacy and prompt-tuning efficiency.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The experimental settings are generally comprehensive; the proposed POST method is compared with 1. zero-shot performance on private data, 2. performance of private-data prompt tuning on the target model, 3. performance of private-data prompt tuning on the source distilled model, and 4. directly transferring the learned prompt to the target model without using public data. POST significantly outperforms the directly transferred prompt and achieves better performance than the distilled source model, demonstrating its practical effectiveness.
Supplementary Material: The authors have uploaded the code for replicating the experiments.
Relation To Broader Scientific Literature: The proposed method sits at the intersection of knowledge distillation, soft prompt tuning and privacy-preserving ML. It improves soft prompt transferability in Wu et al., 2023 and privacy issue in Su et al., 2022.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The proposed method is well-motivated
2. The analysis is comprehensive
Weaknesses:
1. The quality of the transferred prompt heavily depends on the distilled model. For complex tasks, where the full target LLM can effectively learn a strong prompt, a weakly tuned prompt on the small model will not transfer well, as the distilled model may fail to capture the necessary task structure.
Other Comments Or Suggestions: No
Questions For Authors: See weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for the positive feedback! We address the main points one by one below:
>**The quality of the transferred prompt heavily depends on the distilled model.**
We fully agree with the Reviewer. The influence of the compressed model size (number of layers) on performance and compute time is presented in Table 18 in our paper, and we show the results in the table below for the Reviewer's convenience. Indeed, our findings show that while larger distilled models generally improve performance, they also increase computational costs for the user.
|# layers in distilled version|1 layer|2 layers (paper)|3 layers|
|-|-|-|-|
|sst2|84.52|87.73|88.53|
|imdb|78.01|83.96|83.64 |
|tweet|50.65|54.55|61.50 |
|arisetv|53.62|82.73|86.45|
|Distill time|6h 04min|6h 45min|7h 35min|
>**For complex tasks, where the full target LLM can effectively learn a strong prompt, a weakly tuned prompt on the small model will not transfer well, as the distilled model may fail to capture the necessary task structure.**
Beyond investigating the trade-offs of different compression ratios, we also designed a heuristic approach to optimize the $\alpha$ in Equation 6, which balances the first and second loss terms in the transfer loss function (Equation 3). Specifically, when the full target LLM has a strong zero-shot performance or the small model has a weak performance, we set a larger $\alpha$, reducing the reliance on the small model’s behavior while primarily incorporating the directional change induced by the prompt. This adjustment enhances the effectiveness of our POST method in scenarios where the small model has weak performance.
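For illustration only, the role of $\alpha$ described above can be sketched as a convex combination of two KL terms. The distribution names below are hypothetical placeholders standing in for the two terms of the transfer loss, not a reproduction of the paper's actual Equation 3:

```python
import math

def kl_div(p, q, eps=1e-12):
    """KL(p || q) for categorical probability vectors given as lists."""
    return sum((pi + eps) * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def alpha_weighted_loss(behaviour_p, behaviour_q, direction_p, direction_q, alpha):
    """Convex alpha-combination of two KL terms, mirroring the *shape* of a
    transfer loss L = (1 - alpha) * KL_behaviour + alpha * KL_direction.
    With alpha near 1, the behaviour term (reliance on the small model) is
    down-weighted in favour of the prompt-induced direction term."""
    return ((1.0 - alpha) * kl_div(behaviour_p, behaviour_q)
            + alpha * kl_div(direction_p, direction_q))
```

Setting $\alpha = 0$ recovers the pure behaviour-matching term and $\alpha = 1$ the pure direction term, which is the trade-off the heuristic in Equation 6 navigates.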
We perform a wide range of experiments using different $\alpha$ values and compare the best-performing $\alpha$ with the one output by our heuristic, as shown in Table 19 in Appendix D.5. For the Reviewer's convenience, we show partial results of Table 19, i.e., the performance of Roberta-base, in the following table (transferred accuracies are presented as mean (standard deviation)):
|$\alpha$|sst2|imdb|tweet|arisetv|
|-|-|-|-|-|
|heuristic|0.76|0.77|0.14|0.41|
|0.0|80.54 (1.23)|78.22 (0.55)|57.75 (1.88)|83.41 (0.91)|
|0.1|81.94 (1.87)|79.20 (2.92)|57.68 (1.95)|83.09 (1.45)|
|0.2|83.33 (0.78)|80.89 (1.85)|**58.13 (2.16)**|83.62 (0.77)|
|0.3|84.67 (0.73)|80.57 (1.16)|58.08 (1.15)|83.62 (1.26)|
|0.4|86.81 (0.46)|79.86 (1.20)|54.77 (1.04)|**83.94 (0.79)**|
|0.5|88.61 (0.46)|82.16 (1.55)|54.10 (0.74)|80.11 (1.73)|
|0.6|**88.61 (0.78)**|81.82 (1.06)|52.88 (0.44)|76.57 (0.84)|
|0.7|87.88 (0.29)|80.89 (1.59)|52.33 (0.20)|71.26 (0.64)|
|0.8|87.35 (0.24)|**82.65 (0.95)**|51.32 (0.24)|68.68 (0.57)|
|0.9|86.47 (0.64)|81.52 (1.39)|49.38 (0.55)|63.04 (0.85)|
|1.0|85.82 (0.85)|80.57 (1.32)|48.42 (1.13)|58.57 (0.24)|
We observe from the results that the heuristic can successfully identify a good range for $\alpha$ and is usually not far off the empirical optimal values.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional results. I would like to keep my score as 4.
---
Reply to Comment 1.1.1:
Comment: We appreciate the Reviewer's positive feedback and for maintaining the high score. | Summary: This paper proposes POST (Privacy Of Soft-prompt Transfer), a framework designed to efficiently and privately transfer soft prompts for adapting large language models (LLMs) to private downstream tasks. The core innovation involves locally tuning soft prompts on a small, distilled model derived from a larger LLM, optionally under differential privacy constraints, and subsequently transferring these prompts back to the large LLM via a small public dataset. The method reduces computational requirements and enhances privacy, avoiding the direct sharing of sensitive data with LLM providers. Experimental evaluations using various classification and generation tasks show POST's improvements over existing prompt transfer methods.
Claims And Evidence: The claims in the paper are generally well-supported by empirical results, including evaluations on several datasets (sst2, imdb, tweet, arisetv, mpqa, disaster, trec) across classification and open-ended generation tasks. However, more rigorous exploration into why certain public datasets work better for transfer than others would enhance the clarity of some claims about dataset suitability.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and clearly defined for the targeted privacy and efficiency goals. The datasets selected for evaluation (sst2, imdb, etc.) are standard benchmarks, which strengthens the credibility of the results. However, additional clarification or justification for the choice of public datasets is needed.
Theoretical Claims: The paper does not contain explicit theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are sound. The authors thoroughly evaluated their methods, including ablations for the impact of the number of public samples, the number of transfer steps, and model compression ratios. However, a clearer explanation of the hyperparameter choices (especially α in the transfer loss) is needed.
Supplementary Material: I reviewed portions of the supplementary material, specifically the detailed experiments related to the ablations of knowledge distillation hyperparameters and the influence of public dataset size. These additional details are valuable for understanding the sensitivity and practical implications of the method.
Relation To Broader Scientific Literature: The paper positions itself clearly within the broader literature on soft prompts, knowledge distillation, and differential privacy, effectively differentiating itself by combining these aspects into a cohesive framework. The distinction and improvement over previous methods like DP-OPT and zero-shot transfer (ZST) are explicitly demonstrated.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
1. Innovative combination of knowledge distillation, differential privacy, and prompt tuning.
2. Comprehensive experimental validation demonstrating clear benefits over existing methods.
3. Significant practical implications for privacy-sensitive applications of LLMs.
Weaknesses:
1. The choice and generality of the public dataset for prompt transfer are not fully justified.
2. Potential limitations in transfer performance to LLMs with significantly different architectures or training paradigms are not fully explored.
Other Comments Or Suggestions: An in-depth analysis or case study illustrating practical privacy impacts beyond membership inference attacks would further strengthen the narrative.
Questions For Authors: 1. How robust is POST's performance when the public dataset significantly differs from the private dataset in distribution or domain? Would this impact the transfer accuracy significantly?
2. Can the authors clarify how sensitive the method is to the choice of the α parameter in the prompt transfer loss? How was α tuned in practice?
3. How would the approach scale to very large LLMs (e.g., 70B+ parameters)? Would computational constraints significantly limit the practicality of POST in such scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the Reviewer's insightful comments! Below, we address each comment in detail:
>**More rigorous exploration into why certain public datasets work better for transfer than others; Additional clarification or justification for the choice of public datasets.**
We thoroughly analyzed the effect of public dataset selection in Table 15 in the Appendix (section D.1). Our experimental results demonstrate that public datasets from the same task domain as the private dataset yield better transfer performance than others. This aligns with the intuition that task similarity enhances knowledge transfer. Based on these findings, we suggest selecting a public dataset from a task similar to the private data for effective prompt transfer.
>**A clearer explanation of the hyperparameter choices (especially $\alpha$ in the transfer loss)**
We conducted an ablation study for the parameter $\alpha$ in Table 19 in the Appendix (section D.5).
We evaluated the transfer performance across various $\alpha$ values, ranging from 0.0 to 1.0. The results indicate that multiple $\alpha$ values lead to good transfer performance.
Furthermore, we designed a heuristic to find an optimal $\alpha$ as shown in Equation 6 in the Appendix (section D.5). The intuition behind the heuristic is that if the target model performs well, it needs to mimic the behavior of the smaller model less and only incorporate the directional change induced by the prompt, leading to a larger $\alpha$. It can be observed from Table 19 that the designed heuristic successfully identifies a good range for $\alpha$ and is usually not far off the empirical optimal values.
Thus, for each dataset and model, we selected the $\alpha$ according to the heuristic we proposed in the same appendix section (D.5).
>**Potential limitations in transfer performance to LLMs with significantly different architectures or training paradigms.**
In our setup, the LLM provider distills a small model from the LLM with the aim of guaranteeing good transfer performance. The transfer performance between different architectures is orthogonal to our problem setup.
>**Practical privacy impacts beyond membership inference attacks**
Membership inference attacks are widely recognized as a gold standard for evaluating privacy leakage. Notably, in its most stringent form, our POST framework incorporates DP, which, by definition, ensures that the presence or absence of any individual data point remains indistinguishable and provides theoretical protection against **every possible attack**.
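As background for this DP point, training under DP in this style typically bounds each example's influence via per-example gradient clipping plus calibrated Gaussian noise, which is what makes the guarantee attack-agnostic. A generic DP-SGD-style sketch (not POST's actual implementation; the function and parameter names here are illustrative):

```python
import numpy as np

def dp_mean_gradient(per_example_grads, clip_norm=1.0, noise_mult=1.0, seed=0):
    """DP-SGD-style aggregation: clip each per-example gradient to L2 norm
    clip_norm (bounding any single example's influence), sum, then add
    Gaussian noise calibrated to that sensitivity."""
    rng = np.random.default_rng(seed)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
# With noise disabled, the first gradient (norm 5) is clipped to norm 1,
# the second (norm 0.5) passes unchanged; clipped average: [0.45, 0.6].
print(dp_mean_gradient(grads, noise_mult=0.0))
```

Because each example's contribution is clipped before noise is added, the released update is insensitive to any single data point, which is the source of the "every possible attack" protection.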
>**Can the authors clarify how sensitive the method is to the choice of the $\alpha$ parameter in the prompt transfer loss? How was $\alpha$ tuned in practice?**
We performed an ablation study on different $\alpha$ values in Table 19, confirming that our approach is robust to a range of $\alpha$-s. Additionally, we designed a heuristic to find an optimal alpha (see Equation 6). The underlying intuition is that if the target model exhibits strong zero-shot performance, it requires less direct imitation of the small model. Instead, it should primarily incorporate the directional changes induced by the prompt, ensuring effective transfer while maintaining the target model’s inherent capabilities.
>**How would the approach scale to very large LLMs (e.g., 70B+ parameters)?**
The LLM providers are assumed to have the computational resources to distill the target models into smaller ones, as they already manage to train such large language models. The overhead on the client end should not depend on the scale of the model used by the LLM provider. | Summary: This paper proposes POST (Privacy Of Soft-prompt Transfer), a framework designed to enable efficient and privacy-preserving transfer of soft prompts between LLMs. It has three major steps:
* Knowledge Distillation: The LLM provider first distills a smaller local model from the original large LLM.
* Local Prompt Tuning: The user tunes soft prompts on the small model using their private data (with optional differential privacy for additional protection).
* Privacy-Preserving Prompt Transfer: The LLM provider transfers the tuned prompt back to the large LLM using a small public dataset, avoiding any access to private data.
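As a rough toy illustration of these three steps (entirely my own construction: the linear "models", the additive "prompt", and the MSE transfer loss are simplified stand-ins for the paper's actual architecture and loss), the flow could be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_priv, n_pub = 8, 64, 32

# Hypothetical stand-ins: "models" are linear scorers and a "soft prompt" is
# an additive input offset (real soft prompts are prepended embeddings).
W_large = rng.normal(size=d)
W_small = W_large + 0.1 * rng.normal(size=d)   # step 1: distilled small model

X_priv = rng.normal(size=(n_priv, d))
y_priv = np.sign(X_priv @ rng.normal(size=d))  # private labels in {-1, +1}
X_pub = rng.normal(size=(n_pub, d))            # small public transfer set

def tune_prompt(W, X, target, steps=500, lr=0.02):
    """Gradient descent on an MSE surrogate loss w.r.t. the prompt only."""
    p = np.zeros(d)
    for _ in range(steps):
        resid = (X + p) @ W - target    # per-example residuals
        p -= lr * 2 * resid.mean() * W  # dL/dp = (2/n) * sum_i resid_i * W
    return p

# Step 2: tune a prompt locally on the small model with private data.
p_small = tune_prompt(W_small, X_priv, y_priv)

# Step 3: transfer -- make the prompted large model mimic the prompted small
# model on public data only; the private data never leaves the user.
teacher = (X_pub + p_small) @ W_small
p_large = tune_prompt(W_large, X_pub, teacher)

before = np.mean((X_pub @ W_large - teacher) ** 2)
after = np.mean(((X_pub + p_large) @ W_large - teacher) ** 2)
print(before, after)  # transfer reduces the mismatch with the teacher
```

Note that the transfer step only sees public inputs and the prompted small model's outputs, mirroring how POST keeps private data local.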
Claims And Evidence: No obvious wrong claims found.
Methods And Evaluation Criteria: Methods are fine for the assumed scenario, but the problem setting itself is unrealistic to me.
Theoretical Claims: The paper doesn't have theoretical claims.
Experimental Designs Or Analyses: I did check the experiments in the paper. Detailed comments are in "Strengths and Weaknesses" below.
Supplementary Material: no
Relation To Broader Scientific Literature: I don't see clear contributions of the paper.
Essential References Not Discussed: No obvious missing references found.
Other Strengths And Weaknesses: **Strengths:**
The paper is well-written and easy to follow.
**Weaknesses:**
My major concern is the unrealistic Problem Setting:
* The proposed approach assumes that the LLM provider will release a distilled smaller version of their strong and close-sourced LLM. However, this is impractical in real-world scenarios because the distilled model parameters can inadvertently reveal crucial pretraining details, such as the data weighting of different training corpora [1].
* The approach also requires the LLM provider to execute the proposed soft prompt transfer, which involves selecting a public dataset that is expected to be similar to the private dataset used by the user. This assumption is problematic because, in a proper setting, the LLM provider should not have knowledge of users' private data.
Important models and baselines are also missing in the experiments:
* The experiments are conducted using base models such as Llama2-7B and GPT2-XL. However, real-world LLM services typically deploy instruction-tuned models (e.g., Llama2-7B-Instruct), which exhibit significantly stronger zero-shot problem-solving capabilities. The absence of instruction-tuned models makes the "Full ZS" baseline unconvincing.
* Prompt tuning generally performs worse than full fine-tuning or LoRA fine-tuning while offering only marginal reductions in training cost since it still requires backpropagation through the full model. The key advantage of prompt tuning is its reduced parameter storage requirements. Therefore, a crucial missing baseline is full fine-tuning on the compressed model. Demonstrating superiority only over the prompt-tuned compressed model ("Compressed PT") is not sufficient; the study should compare against a fully fine-tuned small model to justify the proposed approach. For example, if the fully finetuned compressed model is better than the proposed approach, why should the users keep paying for the LLM provider's service?
[1] Detecting Pretraining Data from Large Language Models, ICLR 2024.
Other Comments Or Suggestions: no
Questions For Authors: * What's the expected similarity between the private and public datasets in the proposed framework?
* I understand the soft prompt sent from the private data owner to the LLM provider should definitely have DP guarantees. Otherwise the privacy is not protected at all. Why do you say the DP in this step is just "optional"?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the Reviewer's constructive review and address each point as follows:
>**The problem setting itself is unrealistic to me.**
Soft prompts are exposed via public APIs, such as NVIDIA NeMo, as we highlight in the abstract and introduction of our paper. Could the reviewer kindly clarify specific concerns regarding the “*realism of the problem setting*” in our work?
>**I don't see clear contributions of the paper.**
We enumerated the contributions of our work at the end of the introduction.
>**distilled model parameters can reveal pretraining details [1].**
The problem of protecting pre-training details is orthogonal to our work. However, we do agree that the LLM provider might not want to disclose such details.
To address the reviewer’s comment, we follow the setup from the paper cited by the Reviewer [1] and use the Min-K% Prob attack to perform a Membership Inference Attack on the WikiMIA [1] dataset with the Pythia-2.8B model. We test two models distilled from Pythia-2.8B, with 3 and 4 layers, respectively. Our results are depicted below and show that the AUC and TPR@1%FPR of the distilled models are much smaller than those of the base model. This shows that the knowledge distillation in our work can even reduce pretraining data leakage.
||Pythia 2.8B|3-layer distilled|4-layer distilled|
|-|-|-|-|
|ROC-AUC|0.7103|0.5737|0.5750|
|TPR@1%FPR|0.0980|0.0196|0.0196|
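For context, the Min-K% Prob attack of [1] scores a sample by averaging only the k% smallest token log-probabilities; members tend to contain fewer highly surprising tokens, so a higher score suggests membership. A minimal sketch with made-up log-probabilities:

```python
import numpy as np

def min_k_percent_score(token_logprobs, k=20):
    """Average of the k% lowest token log-probabilities (Min-K% Prob, [1])."""
    lp = np.sort(np.asarray(token_logprobs, dtype=float))  # ascending
    n_keep = max(1, int(len(lp) * k / 100))
    return lp[:n_keep].mean()

# Hypothetical log-probs: a "member" sequence has fewer surprising tokens.
member = [-0.3, -0.5, -0.2, -1.0, -0.4, -0.6, -0.3, -0.8, -0.2, -0.5]
nonmember = [-0.3, -4.2, -0.2, -5.1, -0.4, -3.8, -0.3, -0.8, -0.2, -0.5]

print(min_k_percent_score(member))     # higher score (closer to 0)
print(min_k_percent_score(nonmember))  # lower score (more negative)
```

Sweeping a threshold over such scores for known members and non-members yields the ROC-AUC and TPR@1%FPR figures reported in the table above.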
>**The LLM provider shouldn't have knowledge of users' private data.**
Our framework assumes that the model provider has knowledge of only the task domain, not the user’s actual data. This is a reasonable assumption, as task metadata is often available in practical settings. For instance, cloud providers and ML platforms (e.g., Google Vertex AI, OpenAI) typically encourage users to specify the task type or provide task demonstrations (i.e., few-shot learning) to fine-tune models [A]. Similarly, many AI service providers, such as NVIDIA’s NeMo framework, offer task-specific APIs explicitly designed for applications like chatbots and summarization, which inherently expose task-level information [B].
Importantly, our definition of *privacy leakage* follows DP and is measured at the individual datapoint level. Our approach follows the practical setups. We also offer a general statement regarding the privacy-utility trade-off at the end of our response.
>**Additional baseline 1: Zero-shot performance of instruction-tuned models**
We appreciate the suggestion and have included the zero-shot performance of instruction-tuned Llama2-7b-chat in the following table:
|SST2|IMDB|Tweet|arisetv|mpqa|
|-|-|-|-|-|
|81.42|85.01|44.55|77.29|54.37|
The instruction-tuned model has very close zero-shot performance to the non-instruction-tuned model. This is because we convert the problem of classification to a prefix infilling problem for a non-instruction-tuned model and aggregate the results with multiple ground truth text labels, see Table 7 in the Appendix. This enhances the robustness of our evaluation and allows the non-instruction-tuned model to effectively handle classification tasks.
>**Advantage of prompt tuning**
Prompt tuning offers advantages for LLM providers as it enables batch processing, allowing multiple tasks (from different users) to be handled in a single inference pass with prompts. In contrast, LoRA or fine-tuned models require separate inference passes through the model, significantly increasing computational costs and, even more importantly, requiring separate models per task [C]. The efficiency gain of soft prompts provides a strong incentive for the LLM providers to adopt our framework.
>**Additional baseline 2: Full fine-tuning on the compressed model**
We have conducted an additional experiment evaluating full fine-tuning on the compressed model:
||SST2|IMDB|tweet|arisetv|mpqa|
|-|-|-|-|-|-|
|Llama2-7b-compressed|83.02|83.39|59.21|78.41|85.23|
Full fine-tuning on the compressed model indeed outperforms prompt tuning on the compressed model, however, it still underperforms POST transfer. This is because the compressed model, although it maintains some similarity to the teacher model, is still not powerful enough.
>**expected similarity between the private and public datasets**
Table 15 in Appendix D.1 investigates the impact of public dataset selection. We categorize datasets into three groups based on task similarity and observe that public datasets with tasks similar to the private dataset yield better transfer performance. Thus, the public dataset is expected to come from a task domain similar to that of the private dataset in our framework.
>**Why the DP is “optional”?**
Our framework is designed to be modular, allowing private data owners to balance privacy and utility trade-offs based on their specific requirements. DP is an optional component that can be integrated when strong privacy guarantees are needed, but its inclusion depends on the user’s trade-off preferences.
---
References included in answer to Reviewer Qi3Y.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the additional experiments — they help alleviate some of my concerns regarding unfair comparisons in the evaluation.
However, I still don't feel that my major concerns have been well addressed:
> Could the reviewer kindly clarify specific concerns regarding the “realism of the problem setting” in our work?
> Soft prompts are exposed via public APIs, such as NVIDIA NeMo.
My concerns regarding the problem setting are described in the "Weaknesses" section. Your framework assumes that the parameters of a distilled version of a closed-source LLM can be released to users for prompt tuning, which is different from the soft prompt APIs mentioned here.
> The problem of protecting pre-training details is orthogonal to our work.
I am not convinced by this point. On the contrary, it should be one of the most critical aspects to validate in advance. Your work proposes an LLM service based on a closed-source model, whose pretraining costs exceed hundreds of millions of dollars and whose training details are highly confidential. Therefore, releasing a distilled version of such a model would be a highly risky action — *yet this highly risky action becomes a fundamental prerequisite for your proposed framework*. To validate your approach, it is essential to thoroughly demonstrate that the distillation is secure against possible attacks, theoretically or empirically, such as "how large can the distilled model be while still guaranteeing protection?", etc. The single attack example you added is not sufficient, though I appreciate the effort.
> Cloud providers and ML platforms typically encourage users to specify the task type or provide task demonstrations (i.e., few-shot learning) to fine-tune models.
"Whether the service provider should know the user's private task information" might be debatable. However, the two examples you cite — OpenAI [A] and NeMo [B] — do not actually guarantee data privacy for users, based on the links you provided.
Overall, the assumptions underlying your proposed privacy-preserving LLM service framework — particularly regarding distillation — are not thoroughly justified. As such, it is difficult to envision how the proposed framework could be practically and securely implemented.
---
Reply to Comment 1.1.1:
Comment: >**Your framework assumes that the parameters of a distilled version of a closed-source LLM can be released to users for prompt tuning (...) To validate your approach, it is essential to thoroughly demonstrate that the distillation is secure against possible attacks, theoretically or empirically, such as "how large can the distilled model be while still guaranteeing protection?", etc. The single attack example you added is not sufficient, though I appreciate the effort.**
The additional experiment that we performed for the Reviewer runs a state-of-the-art MIA attack against the largest distilled models used in our work (3 and 4 layers), i.e., the ones with the highest potential of leakage. As we highlight in the paper, POST assumes an aggressive distillation to extremely small models, as this enables efficient local prompt tuning on user-hardware, as we show in Table 13 and Table 18. The question on “how large can the distilled model be while still guaranteeing protection”, therefore, does not capture the core of our method.
>**Overall, the assumptions underlying your proposed privacy-preserving LLM service framework — particularly regarding distillation — are not thoroughly justified. As such, it is difficult to envision how the proposed framework could be practically and securely implemented.**
To deploy POST, a concerned LLM provider can perform a multitude of state-of-the-art attacks against the distilled model to assess its risk before leakage. If the attacks yield a risk that the LLM provider deems intolerable, they can further compress the model, thereby, reducing leakage.
>**"Whether the service provider should know the user's private task information" might be debatable. However, the two examples you cite — OpenAI [A] and NeMo [B] — do not actually guarantee data privacy for users, based on the links you provided.**
We agree with the Reviewer that existing APIs do not guarantee user privacy. Our proposed POST framework offers a solution to this problem. Therefore, especially when executed with DP, it provides the formal guarantee that individual user data points will not leak more to the LLM provider than specified by the DP parameters.
Towards Understanding Catastrophic Forgetting in Two-layer Convolutional Neural Networks | Accept (poster) | Summary: This paper presents a theoretical analysis of catastrophic forgetting in continual learning using a two-layer neural network model. It constructs a multi-view data structure consisting of task-specific, general, and random features and examines the learning dynamics of these features. The study identifies two potential causes of catastrophic forgetting:
- Task-specific features have a stronger signal than general features.
- Task-specific features from previous tasks are treated as random features in future tasks.
Claims And Evidence: Ambiguity in Feature Representation:
- The data model defines features over $\left[e_1, e_2, e_{\text{rob}}\right]$, where the random feature for each task is expressed as $\alpha_\zeta e_{3-\tau}$.
- This implies that the random feature for Task 1 is $\alpha_\zeta e_2$, which coincides with the task-specific feature of Task 2.
- Consequently, the model appears to learn Task 2's feature during Task 1's training phase, and also learn Task 1's feature during Task 2's training phase, potentially confusing the conclusions.
Dependence on Coefficient Choices:
- The proof relies heavily on specific choices of $\alpha_u, \alpha_v, \alpha_\zeta$.
- However, Theorems 5.2 and 5.4 use identical values for $T_u^{(1)}$ and $T_u^{(2)}$, suggesting that for some $T \geq T_u^{(1)}, T_u^{(2)}$, both conclusions hold simultaneously--leading to a contradiction.
Line 191: Setting $\sigma_{\xi}=0$ is invalid under the last condition 3.5.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I reviewed the main framework of the proof but did not go through the step-by-step details.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I checked the experimental part.
Relation To Broader Scientific Literature: - The paper introduces a novel framework to analyze catastrophic forgetting in continual learning, offering valuable theoretical insights into its underlying causes.
- The data model distinguishing task-specific and robust (general) features provides a reasonable perspective for understanding catastrophic forgetting.
Essential References Not Discussed: The paper lacks discussions on several theoretical works related to continual learning, including:
- Zheng et al. (2024) – Understanding memory buffer-based continual learning.
- Li et al. (2023) – Fixed design analysis of regularization-based continual learning.
- Ding et al. (2024) – Understanding forgetting in continual learning with linear regression.
- Li et al. (2024) – Theory on mixture-of-experts in continual learning.
- Benjamin et al. (2024) – Continual learning with the neural tangent ensemble.
Other Strengths And Weaknesses: Two Additional Limitations:
Confusion in the Simplified Model (Section 4):
- The analysis in Section 4 is somewhat unclear.
- For instance, if $v_1=v_2$, the paper claims that the model performs well on both tasks during training and testing.
- However, this seems counterintuitive, as it suggests that task-specific vectors do not contribute meaningfully to classification.
- Are there missing conditions on task-specific vectors that would clarify their role in the model's performance?
Limited Generalization: The analysis is restricted to a two-task setting, making it unclear how well the results generalize to m-task scenarios.
Other Comments Or Suggestions: Above.
Questions For Authors: Above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer ZF1H for the insightful comments. We believe the reviewer has misunderstood our framework, and we provide an illustration of it in **Q3** of our response to reviewer iCej. We then answer the questions as follows.
**Q1**. Ambiguity in Feature Representation
**A1**. Yes, both $\mathbf{e} _1$ and $\mathbf{e} _2$ appear in both tasks; please see our example in **Q3** in our response to reviewer iCej. **The existence of random noise ensures that the task-specific feature can only be used for one task**. For this reason, we let the random feature be $\epsilon \alpha_{\zeta} \mathbf{e}_{3-\tau}$ rather than $y \alpha_{\zeta} \mathbf{e}_{3-\tau}$. Our definition of the task-specific feature is reasonable. Furthermore, task 2's feature $\mathbf{e} _2$ **is not** learned in task 1's training phase, as shown in Corollary 5.3.
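To make this concrete, here is a hedged numerical sketch of the data model as described above (my own construction: the patch structure is collapsed into a single vector, the noise $\xi$ is omitted, and the scaling values are chosen for illustration). In task 1, the direction $\mathbf{e} _2$ appears in every sample, but its sign $\epsilon$ is independent of the label $y$, so it carries no usable signal for task 1:

```python
import numpy as np

rng = np.random.default_rng(1)
e = {1: np.eye(3)[0], 2: np.eye(3)[1], "rob": np.eye(3)[2]}

def sample(task, alpha_u, alpha_v, alpha_zeta, n=1000):
    """Toy multi-view sample: task-specific + general + random patches."""
    y = rng.choice([-1, 1], size=n)    # label
    eps = rng.choice([-1, 1], size=n)  # random sign, independent of y
    X = (y[:, None] * alpha_u * e[task]              # task-specific feature
         + y[:, None] * alpha_v * e["rob"]           # general feature
         + eps[:, None] * alpha_zeta * e[3 - task])  # random feature
    return X, y

X, y = sample(task=1, alpha_u=1.0, alpha_v=0.5, alpha_zeta=1.0)
print(np.mean(y * X[:, 0]))       # correlation with e_1: exactly alpha_u
print(np.mean(y * X[:, 2]))       # correlation with e_rob: exactly alpha_v
print(abs(np.mean(y * X[:, 1])))  # correlation with e_2: near zero
```

In a second-stage task ($\tau = 2$) the roles of $\mathbf{e} _1$ and $\mathbf{e} _2$ swap, which is exactly the overlap the reviewer raises; the independence of $\epsilon$ from $y$ is what prevents task 1 from learning $\mathbf{e} _2$.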
**Q2**. The proof relies heavily on specific choices of $\alpha _{u}, \alpha _{v}, \alpha _{\zeta}$.
**A2**. Please see **Q3** in our response to reviewer iCej for clarification of our main ideas. While our theoretical results show that the conditions leading to CF depend on $\alpha _{u}, \alpha _{v}, \alpha _{\zeta}$, they help reveal the underlying causes of CF, as shown on Page 7, left column, lines 341-453. Empirical studies of $\alpha _{u}, \alpha _{v}, \alpha _{\zeta}$ in Figure 2 support our theoretical findings.
**Q3**. About contradiction from $T$.
**A3**. We want to clarify that at the beginning of the second stage we set $\mathbf{w} _{c}^{(2)}(0) = \mathbf{w} _{c}^{(1)}(T^{(1)} _{u})$. Here, $T^{(1)} _{u}$ refers to the time in the first stage while $T^{(2)} _{u}$ refers to the time in the second stage, **but not the total time in the first and second stages**. The entire process can be described as follows:
1. Initially, it takes $T^{(1)} _{u}$ epochs to reach the end of the first stage, after which the process transitions to the second stage. Theorem 5.2 holds in the end of the **first** stage.
2. After an additional $T^{(2)} _{u}$ epochs, the second stage concludes. Theorem 5.4 holds in the end of the **second** stage.
**There's no contradiction** for Theorems 5.2 and 5.4.
**Q4**. Setting $\sigma _\xi = 0$ is invalid under the last condition 3.5.
**A4**. Thanks for pointing this out. First, $\sigma _\xi = 0$ is only used for simplicity in section 4 and **does not impact the analysis of our main results**. Additionally, when setting $\sigma _\xi = 0$ in Section 4, all conditions involving $\sigma _\xi$ in Condition 3.5 should be removed. We will clarify this in our revised version.
**Q5**. The paper lacks discussions on several theoretical works.
**A5**. Thank you for your suggestions. Benjamin et al. (2024) introduce the concept of NTE in CL, reformulating a single neural network as an ensemble of fixed classifiers. Li et al. (2024) provide a theoretical study of mixture-of-experts models in CL. Ding et al. (2024) analyze factors contributing to forgetting under linear models in CL. Li et al. (2023) investigate $\ell _2$-regularized CL with two linear regression tasks. Zheng et al. (2024) examine memory-based CL under overparameterized linear models. **Our study focuses on CF in a two-layer CNN using multi-view data, which differs from these works.** We will include discussions on these related works in our revised version.
**Q6**. Confusion in the Simplified Model.
**A6**. We clarify that **the reviewer misses the condition** $\alpha _v \gg \alpha _u$ for learning $\mathbf{v}$ in the first stage. If $\alpha _v \gg \alpha _u$, it implies that $T^{(1)} _{v} \ll T^{(1)} _{u}$, and the model will **only capture the general feature $\mathbf{v}^{(1)}$ and not the task-specific feature** $\mathbf{u}^{(1)}$, so $\mathbf{u}^{(1)}$ will not contribute to classification in task 1. In the right column of Line 200, we similarly show that if $\alpha _u \gg \alpha _v$, the model will only capture the task-specific feature $\mathbf{u}^{(1)}$ and not the general feature $\mathbf{v}^{(1)}$ in the first stage, so $\mathbf{v}^{(1)}$ will not contribute to classification in task 1.
**Q7**. Limited Generalization.
**A7**. Please see our response to **Q1** to reviewer QA6F.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' rebuttal. While some concerns have been addressed, I am still confused about something as follows.
1. **Feature Setting:** As the authors responded to Reviewer iCej, the random feature is uncorrelated with the true label. Given this, why do the authors use $\epsilon \cdot e$ instead of just $e$? What does the $\epsilon$ represent in this context? In most theoretical works on feature learning, I’ve only seen one label $y$ involved, rather than both $y$ and $\epsilon$.
2. **Generalization:** I read the authors' response to Reviewer QA6F and examined the proof framework. I think the most challenging aspect of extending the analysis to an $m$-task setting is handling the relationship between $y_i$ and $\epsilon_j$, where $i$ and $j$ index different tasks. This is a central point of confusion for me in this work: how do the authors model or address the dependencies between $y_i$ and $\epsilon_j$, and between $y_i$ and $y_j$, in the learning dynamics? For example, when predicting new data from task 1 after training on task 2, shouldn’t some terms involving $y_1 y_2$ and $y_1 \epsilon_2$ appear? If I’m misunderstanding this, could the authors kindly clarify?
3. **Data Noise:** Regarding the data noise $\xi$, I don’t believe its presence can be ignored in the analysis of the main results. If data noise is involved, the analysis should also account for its impact on learning dynamics. I also question the SNR conditions: in Cao’s work, only one feature is considered, whereas this work involves two features, $u$ and $v$. What is the SNR condition for $v$? And where in the analysis is this condition used?
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. We address your further concerns as follows:
**Q1. Feature Setting.**
**A1.** The definition of $e$ and $ε$ can refer to **A3** to iCej, where $ε \sim U(\{+1,-1\})$ is a ($y$-independent) random noise term generating the random feature $\zeta$. We could also use $e _{3-\tau}$ instead of $ε\cdot e _{3-\tau}$. The only technical difference is **how we partition the samples into two sets** when studying the learning dynamics of $\zeta$:
$$
\begin{aligned}
\langle w^{(\tau)} _{c}(t+1), \zeta^{(\tau)}\rangle-\langle w^{(\tau)} _{c}(t),\zeta^{(\tau)} \rangle
&=-\frac{\eta}{n}\sum _{i=1}^{n}y^{(\tau)} _i ε^{(\tau)} _i\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)(\langle{w}^{(\tau)} _{c}(t), \zeta^{(\tau)}\rangle)^2α _\zeta^2\\
&=-\frac{\eta}{n}\Big(\sum _{i\in\mathcal{I} _{=}}\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)-\sum _{i\in\mathcal{I} _{\ne}}\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)\Big)(\langle{w}^{(\tau)} _{c}(t),\zeta^{(\tau)}\rangle)^2α _\zeta^2
\end{aligned}
$$
- With $ε\cdot e _{3-\tau}$, we define $\mathcal{I} _{=}=\{i:y _i=ε _i\}$ and $\mathcal{I} _{\ne}=\{i:y _i\ne ε _i \}$.
- If we instead used $e$ alone, the partition would be $\mathcal{I} _{=}=\{i:y _i=+1\}$ and $\mathcal{I} _{\ne}=\{i:y _i=-1\}$.
**Our analysis remains valid in either case**.
**Q2. Generalization.**
**A2. All elements in $\{y _i\} _{i\in[M]}\cup\{ε _i\} _{i\in[M]}$ are mutually independent**. In stage $\tau$, the model is **only** trained on task $\tau$. The update rules of the features depend solely on the current task's $y^{(\tau)}$ and $ε^{(\tau)}$, not on other tasks’ $y,ε$:
- **Task specific feature**. $ \langle w^{(\tau)} _{c}(t+1),u^{(\tau)} \rangle-\langle w^{(\tau)} _{c}(t),u^{(\tau)}\rangle=-\frac{\eta}{n}\sum^{n} _{i=1}y^{(\tau)} _iy^{(\tau)} _i\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)(\langle {w}^{(\tau)} _{c}(t),u^{(\tau)}\rangle)^2α _u^2$.
- **General feature** $\langle w^{(\tau)} _{c}(t+1),v^{(\tau)}\rangle-\langle w^{(\tau)} _{c}(t),v^{(\tau)}\rangle=-\frac{\eta}{n}\sum^{n} _{i=1}y^{(\tau)} _iy^{(\tau)} _i\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)(\langle {w}^{(\tau)} _{c}(t),v^{(\tau)}\rangle)^2α _v^2$.
- **Other task's specific feature (Random Feature)**. Refer to **A1** in this response.
We show how $y,ε$ affects the analysis:
- Stage 1 (learning on task 1): we study the order of each component (Lemma 4.1) at initialization, then study the learning speed of each component. Both $\max _{c\in[C]} \langle w^{(1)} _{c}(t),u^{(1)}\rangle$ and $\max _{c\in[C]}\langle w^{(1)} _{c}(t),v^{(1)} \rangle$ are **monotonically increasing**, with speeds depending on $α _u$ and $α _v$, respectively. **The direction of the update of the random feature depends on the sign of** $\sum _{i\in\mathcal{I} _{=}}\ell^{\prime}(F({x}^{(\tau)} _i),y^{(\tau)} _i)-\sum _{i\in\mathcal{I} _{\ne}}\ell^{\prime}(F({x}^{(\tau)} _i), y _i^{(\tau)} )$, and its speed depends on $α _\zeta$.
- Stage 2 (learning on task 2): we only care about the **order of components at the beginning (Corollary 5.3)**. While $y^{(1)}$ and $ε^{(1)}$ affect the direction of the random noise updates in Stage 1, their effects **do not explicitly appear in the order of each component in Corollary 5.3**. We then study learning speeds and directions of components in the second stage based on Corollary 5.3 and the update rules. **$y,ε$ in past tasks disappear from both the update rules and Corollary 5.3** in stage 2.
By iteratively analyzing the component orders at the start/end of each stage and learning speeds, the framework generalizes to $m$-task IL, **and $y^{(1)}y^{(2)},y^{(1)}ε^{(2)}$ will not appear in the analysis.**
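As a sanity check on these claimed dynamics, a toy simulation (my own caricature, not the paper's exact setting) of the update rules above, treating $-\ell'$ as a roughly constant positive quantity and capping each coefficient where the loss saturates, reproduces the claim that whichever feature has the larger $\alpha$ dominates learning:

```python
def simulate(alpha_u, alpha_v, eta=0.1, steps=400, a0=0.01):
    """Caricature of the update rules: with a squared activation and -l'
    treated as a positive constant, each feature coefficient evolves as
    a <- a + eta * alpha^2 * a^2, capped at 1 (where the loss saturates)."""
    a_u, a_v = a0, a0
    for _ in range(steps):
        a_u = min(a_u + eta * alpha_u**2 * a_u**2, 1.0)
        a_v = min(a_v + eta * alpha_v**2 * a_v**2, 1.0)
    return a_u, a_v

a_u, a_v = simulate(alpha_u=2.0, alpha_v=0.2)  # task-specific feature dominates
b_u, b_v = simulate(alpha_u=0.2, alpha_v=2.0)  # general feature dominates
print(a_u, a_v)  # a_u reaches the cap, a_v barely moves
print(b_u, b_v)  # the roles reverse
```

The quadratic growth term makes the head start from a larger $\alpha$ self-reinforcing, matching the stage-wise analysis in which the faster feature is learned first.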
**Q3. Data Noise.**
**A3.** We claim again that **we do not ignore $\xi$ in our main results in section 5, while the SNR condition requires $\sigma^3 _\xi$ to be relatively small compared with $nα^3 _u$**. $\xi$ is only omitted in section 4 for simplicity in illustrating intuitions. Following Cao et al. (2022b), we let $\xi$ be orthogonal to the common feature space spanned by the vectors $\{e _1,e _{rob},e _2\}$, so that the **update rules** of $\langle w^{(\tau)} _{c}(t),u^{(\tau)}\rangle$ and $\langle w^{(\tau)} _{c}(t),v^{(\tau)}\rangle$ (shown in **A2** of this response) **remain unchanged** regardless of whether $\xi=0$, which motivates us to set $\xi=0$ in section 4.
Unlike prior work, we involve two features. However, **no additional SNR condition is needed for** $v$ because:
1. When $α _v\leq o(α _u)$, the learning speed of $v$ is lower than that of $u$, and the SNR condition for $u$ is sufficient to guarantee that the model fits $u$ rather than the noise $\xi$.
2. When $α _v\geq\Omega(α _u)$, the SNR condition for $v$ (i.e., $nα^3 _v/\sigma^3 _\xi\geq\omega(1)$) immediately holds. This ensures the model fits $v$ over the noise $\xi$.
The SNR condition is used in Lemma 5.1 and Corollary E.1. The analyses show that **noise updates remain slow during the whole training process** under the SNR condition. Therefore, we ignore the effect of noise in section 4. | Summary: Continual learning is an important area in machine learning, and catastrophic forgetting is the most important problem in continual learning. Despite the empirical efforts on suppressing forgetting in continual learning, the theoretical understanding of catastrophic forgetting remains less studied, especially in CNN. In this work, the authors propose a novel framework to theoretically study catastrophic forgetting in a simple CNN model. Their analysis shows that the different distribution of features in different tasks causes catastrophic forgetting. They conduct experiments to verify their findings.
--------------------after rebuttal----------------------
Thanks to the authors for the reply. They have sufficiently addressed my questions and concerns. After checking the other reviewers' comments and the responses, I think this paper takes a nice step toward understanding CL. Further works may be inspired by this paper. Thus, I increase my score.
Claims And Evidence: The main claim of this paper is presented in Section 5, where the authors identify the condition leading to catastrophic forgetting as follows:
1. The task-specific feature has a relatively larger signal than the general feature.
2. The task-specific feature from one task appears as a random feature in the second task with a strong signal.
Methods And Evaluation Criteria: 1. Theoretically, the authors propose a framework to show the evidence for the main claim. They formally define catastrophic forgetting in Definition 3.3.
2. Experimentally, they present a numerical analysis with simulated data. By evaluating the performance of the model on the old and new tasks after training, they find that the signals of the general feature and the task-specific feature are relevant to catastrophic forgetting. Furthermore, they visualize the feature space on a real-world data set to show that the features learned after training on the old and new tasks are different.
Theoretical Claims: The theoretical claim in this paper consists of two sections: In Section 4, the authors study catastrophic forgetting in a simplified setting, and in Section 5 they show the main results. I have read the proofs in both sections, and their theoretical claim seems correct to me.
Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses presented in Section 6, and most of the analyses are reasonable and convincing to me.
Supplementary Material: There is no supplementary material provided by the authors.
Relation To Broader Scientific Literature: The main contribution of this paper is that they analyze the catastrophic forgetting problem in a specific CNN model, which has not been studied in recent works. As for the results in this paper, the authors analyze the reason behind the catastrophic forgetting of the features stored in the data. In the previous works, they show the reason behind the catastrophic forgetting from task dissimilarity, task order, and so on.
Essential References Not Discussed: Most of the important related works are included in this work. I have not found any other important references that should be discussed in this work.
Other Strengths And Weaknesses: **Strengths**
1. Catastrophic forgetting is a valuable and important problem to study in continual learning. The authors propose a novel framework and study this problem in the CNN model, which is less explored.
2. The authors analyze the training dynamics of continual learning in two stages, which is different from previous studies on studying the learning dynamics in the two-layer CNN model.
3. The authors conduct experiments on both simulated and real-world datasets to verify the theoretical results.
**Weaknesses**
1. The authors only study the case of two-task continual learning, which limits the contribution of the work.
2. In Section 6.2, the authors should add more description to illustrate how to distinguish the task-specific feature from the general feature in the real-world dataset.
3. The remarks about Condition 3.5 could benefit from further clarification. The authors argue that the condition on $\sigma_\xi$ is to ensure that ***"at the beginning of the training process, the network cannot easily classify the data with the task-specific feature"***, this claim requires a more detailed explanation.
4. The authors use Figure 5 to show that after training on the second task, the model forgets the learned feature from the first task. It is not convincing to me, because the signal of the feature in the second task may be larger than in the first task, and the authors only show the inner product of the features with the largest singular vector.
Other Comments Or Suggestions: 1. Line 79, right column, $\mathbb{O} \in \mathbb{R}^{d \times (P+3)} \to \mathbb{O} \subseteq \mathbb{R}^{d \times (P+3)}$.
2. Line 83, right column, $\mathbf{\xi}(\tau) \to \mathbf{\xi}^{(\tau)}$.
3. The notation can be improved. For example, the authors use both $n_{=}$ and $\vert \mathcal{I}_ {=} \vert$ to denote the size of $\mathcal{I}_{=}$.
Questions For Authors: 1. Why do some results, such as Corollaries 5.3 and 5.5, show only the order of the largest inner product among $C$ channels?
2. What is the motivation and intuition behind the random noise in the multi-view data model?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thanks for your positive comments. We answer the questions as follows.
**Q1**. The authors only study the case of two-task continual learning.
**A1**. Our framework can be naturally extended to $M$-task CL. A key modification is to assume $M+1$ basis vectors $\\{\mathbf{e} _{(\tau)}\\}^{M} _{\tau=1} \cup \\{\mathbf{e} _{rob}\\}$ exist in the common feature space. In task $\tau$, the data $\mathbf{x}$ with label $y$ contains features $\\{ y\alpha _u \mathbf{e} _{(\tau)}, y\alpha _v \mathbf{e} _{rob} \\} \cup \\{\epsilon\alpha^{(m,\tau)} _\zeta\mathbf{e} _{(m)}\\} _{m \in [M], m \ne \tau}$ along with background noise, where $\epsilon$ is defined in Definition 3.1.
**Proof sketch**:
- Step 1. Show that at the initialization, $\forall \tau \in [M],$
$$
\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{u}^{(\tau)} \rangle = \widetilde{\Theta}(\sigma _0\alpha _u), \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{v} \rangle = \widetilde{\Theta}(\sigma _0\alpha _v), \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\zeta}^{(\tau)} \rangle = \widetilde{\Theta}(\sigma _0\sigma _\zeta), \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\xi} \rangle = \widetilde{\Theta}(\sigma _0\sigma _\xi).
$$
- Step 2. For any $m \in [M]$, iteratively analyze the learning speed of each component and derive the order of $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{u}^{(\tau)} \rangle$ for any $\tau$, as well as $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{v} \rangle$, $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\zeta}^{(\tau)} \rangle$ and $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\xi} \rangle$ at the end of the stage $m$. Those orders depend on $\alpha^{(m,\tau)} _\zeta, \alpha _v, \alpha _u$. The technique is similar within two-task scenario.
- Step 3. Identify the conditions under which catastrophic forgetting (CF) occurs.
As a result, we believe **the conclusions and implications are similar to the two-task scenario**.
Similarly, our framework can be extended to **Class-IL**: We can **modify our loss from logistic loss to cross entropy loss** for multi-class classification. The main idea remains the same—analyzing the learning dynamics of different features and determining the order of each component at the end of each stage, just as in task-IL.
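As a quick numerical sanity check of the Step 1 initialization orders above, one can verify that with Gaussian-initialized channels the maximal inner product with a feature of strength $\alpha_u$ scales with $\sigma_0\alpha_u$ (up to a logarithmic factor in $C$). This is only an illustrative sketch; the dimensions and constants below are our own toy choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d, C = 500, 64                              # toy ambient dimension and channel count
sigma0, alpha_u = 0.1, 2.0                  # toy init scale and feature strength

u = np.zeros(d)
u[0] = alpha_u                              # feature vector with strength alpha_u
W = sigma0 * rng.standard_normal((C, d))    # C channels drawn i.i.d. N(0, sigma0^2)

m = float(np.max(W @ u))                    # max_{c in [C]} <w_c, u> at initialization
print(m / (sigma0 * alpha_u))               # O(1) up to a sqrt(log C) factor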
**Q2**. Distinguish the task-specific and general feature in real-world dataset.
**A2**. It is a good question. In real-world datasets, our empirical study in Figure 3 shows that the existence of the general feature is hard to satisfy. In most cases, the model learns the task-specific feature. We believe that distinguishing features is a challenging problem to be solved in future works.
**Q3**. More detailed explanation about $\sigma _\xi$
**A3**. $\sigma _\xi$ reflects the strength of noises in an image. Our condition on $\sigma _\xi$ is that $\omega(1)\leq \sigma _\xi \leq o(P^{-1}\sigma _{0}^{-1})$ and $n\alpha^{3} _{u}/\sigma^{3} _{\xi}\geq \omega(1)$. For the first condition, $\sigma _\xi \leq o(P^{-1}\sigma _{0}^{-1})$ ensures that at initialization, $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\xi} \rangle$ is small enough, while $\sigma _\xi \geq \omega(1)$ ensures that learning the features is hard under the noise. The second condition provides a lower bound for $\text{SNR}^3$, which is also shown in Cao et al. (2022b).
**Q4**. About Figure 5.
**A4**. First, Figure 5 demonstrates that in the second stage, the model relies on task 2's task-specific feature for classification. Additionally, in the bottom row, middle column of Figure 4, we observe that the data from the first task is no longer separable, unlike in the middle row, middle column of Figure 4. These empirical findings provide evidence that the model forgets the learned feature from the first task.
**Q5**. Why do some results, such as Corollaries 5.3 and 5.5, show only the order of the largest inner product among $C$ channels?
**A5**. We have shown that both $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{u}^{(\tau)} \rangle$ and $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{v} \rangle$ increase monotonically. The analysis of the largest inner product among $C$ channels demonstrates that **at least one channel learns the component**.
**Q6**. Motivation and Intuition behind the random noise in the multi-view data model?
**A6**. **The existence of random noise ensures that the task-specific feature can only be used for one task**. For this reason, we let the random feature be $\epsilon \alpha _{\zeta} \mathbf{e} _{3-\tau}$ rather than $y \alpha _{\zeta} \mathbf{e} _{3-\tau}$. Moreover, we use $\alpha _\zeta$ to control the strength of the random feature. When $\alpha _\zeta$ is zero, the random feature disappears; and as it increases, forgetting occurs. | Summary: This paper investigates the catastrophic forgetting (CF) phenomenon in continual learning (CL) for convolutional neural networks (CNNs) from the perspective of training dynamics. The paper considers a multi-view data model with four components: task-specific features, general features, random features, and background noise. The theoretical results in this paper show some reasons for the occurrence of CF: (1) the task-specific feature has a larger signal than the robust feature, which can genearize well in both tasks, causing the model to learn the task-specific feature rather than the robust feature; (2) the task-specific feature from one task acts as a random feature for another task, causing the model to forget the learned task-specific feature for task 1 while learning task 2. Experiments conducted on both simulated and real-world data sets validate the theoretical claims.
Claims And Evidence: Yes, the central claims are supported by evidence. The theoretical claims in this paper are all supported by strict proofs. The results are also validated by experiments on both simulated and real data sets.
Methods And Evaluation Criteria: Yes. For the theoretical part, the paper uses the classification accuracy to measure the performance of a model, which is very reasonable. For the experimental part on the real-world dataset, it is hard to decide which feature has a stronger signal; the paper uses the singular vector with the maximal singular value, and I think it makes sense.
Theoretical Claims: Yes, I have checked the proofs of the main theoretical results in this paper. I did not find any explicit errors in the proofs, and they appear to be correct.
Experimental Designs Or Analyses: Yes, the experimental design is reasonable. The main purpose of the experiments in this paper is to validate the findings derived from the theoretical results. In addition to the experiments on simulated data, the experiments on the real-world datasets are also very important, which makes the theoretical findings meaningful in the real-world.
Supplementary Material: No. There are no supplementary materials.
Relation To Broader Scientific Literature: The contribution of this paper lies in two parts. Firstly, for the CL community, this paper shows reasons for CF in CL, which provides a theoretical explanation for CF. Secondly, for the feature learning community, most works study cases where the parameters of the models are initialized according to a Gaussian distribution, while this paper also studies another scenario where the parameters are not randomly initialized (in the second stage). So this paper contributes to both CL and feature learning communities.
Essential References Not Discussed: I do not find any essential references that are not discussed.
Other Strengths And Weaknesses: Strengths
- The paper investigates CF in CL from a training dynamics perspective, it is a novel view from feature learning.
- The paper explains the reason of CF in CL through theoretical analysis. The results of the paper are meaningful and insightful, and are consistent with the validation experiments in this paper.
- The experimental results support the theoretical claims, making the results more convincing.
- The writing is reader-friendly; the authors show the insights of the learning process, which helps to understand the main ideas of this paper.
Weaknesses
- Although theoretical results identify the condition that leads to the CF, the main implication of the theoretical results remains unclear.
- This work studies CF based on learned features, but it is unclear how their analysis relates to over-parameterization, task similarity, and other factors identified in earlier research.
- In the right column of line 231, the authors claim that $\ell^{\prime}(\mathbf{z}_1)/\ell^{\prime}(\mathbf{z}_2)$ has the same order as $\exp(\mathbf{z}_2 - \mathbf{z}_1)$, but a theorem illustrating this is lacking.
Other Comments Or Suggestions: Typos:
- On line 99, the left column, "a lot of works study" but not "a lot of works studies".
- In the second part of Definition 3.3, it should be $(x,y) \sim \mathcal{D}_ {z}^{(1)}$ but not $(x,y) \sim \mathcal{D}_{z}^{(2)}$.
- On line 79, the right column, it might be "for any $T^{(2)}$" but not "for $T^{(2)}$".
- In the second and third equations in Theorem 5.4, $(x,y)$ is better than $(x_i,y_i)$.
Some Minor Issues:
- **The dimension of $e$'s**: On line 80, the right column, since $\mathbb{O}$ is a subspace in $\mathbb{R}^{d \times (P+3)}$ and $\\{ e_1, e_2, e_{rob} \\}$ span $\mathbb{O}$, they should be of dimension $d \times (P+3)$. While on line 116, the left column, the expression $I_d - e_1 e_1^T$ shows that $e_1$ is of dimension $d$.
- **The definition of $\ell$**: On line 151, the left column, the logistic loss is defined as a function of two variables $\ell(F(x),y)$. While on line 159, the left column, the derivative of $\ell$ is used as a scalar, which is confusing. I think a good way is to define the logistic loss as $\ell(F(x),y) = L(yF(x)) = \log \left( 1 + \exp{(-y F(x))} \right)$ so on line 159, we can use $L^\prime(y_i F(x_i))$.
- **The expression in Definition 3.3**: the expression "with a high probability $\delta_1$" is not proper, since correctly classifying the test data with high probability requires $1-\delta_1$ to be close to $1$. So it can be corrected as "with a high probability $1-\delta_1$". The expression in the second part faces the same problem; it might be better to change "with a high probability $\delta_2$" into "for a small $\delta_2$ that is close to $0$".
- **The definition of $\mathcal{I}$**: On line 117, the left column, $\mathcal{I}$ is the index set of $\mathcal{S}$, do you mean that $\mathcal{I} = [n]$?
- **The assumptions in Condition 3.5**: The usual concept of over-parameterization refers to the scenario where the number of parameters of the model exceed the size of the training dataset, in this paper, it means that $n \le d\times(P+3)$, which is a little far from $nP \le o(\sqrt{d})$. Maybe the order of the expression should be changed, for example, you can say "$nP \le o(\sqrt{d})$, i.e., the network is over-parameterized".
Questions For Authors: See the "Weaknesses" and "Other Comments Or Suggestions".
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank reviewer g9HH for the careful reading and insightful suggestions. We will fix typos and improve our paper based on these suggestions in our revised version. We answer the major concerns as follows:
**Q1**. The main implication of the theoretical results remains unclear.
**A1**. Intuitively, our results suggest that CF occurs under two key conditions:
- **The general feature is absent or has a weak signal while the task-specific feature has a relatively large signal**. If $\alpha _v \leq o(\alpha _u)$, after training in the first stage, only $\mathbf{u}^{(1)}$ is learned by the model.
- **The task-specific feature from the first task manifests as a random feature with a strong signal in the second task**. The model forgets $\mathbf{u}^{(1)}$ quickly while learning $\mathbf{u}^{(2)}$ in the second stage.
Furthermore, this implies that
1. **We can overcome CF by breaking these two conditions**. To empirically validate this, we conduct experiments in Figure 2. As shown in the first row, third column of Figure 2, when $\alpha _\zeta$ decreases or $\alpha _v$ increases, CF can be mitigated.
2. **CNN tends to learn the strongest feature rather than the most robust feature** with (S)GD, which suggests the need for designing robust training algorithms to prevent CF.
3. **CF is less likely to occur when learning on similar tasks**. We can quantify the similarity through the parameters $\alpha _u,\alpha _v,\alpha _\zeta$. For example, if we fix $\alpha _u,\alpha _\zeta$ and increase $\alpha _v$, the tasks become more similar. Our results suggest that CF is less likely to occur when learning on such similar tasks.
4. **We provide a novel perspective for understanding CF**. We believe our work can inspire future research to consider CF from the perspective of feature learning.
We will include the discussion in our revised version.
**Q2**. It is unclear how their analysis relates to over-parameterization, task similarity, and other factors identified in earlier research.
**A2**. **Our study is conducted in the over-parameterized regime** as we assume $n \ll d$. Regarding task similarity, **we can quantify the similarity through the parameters $\alpha _u,\alpha _v,\alpha _\zeta$**, as we mentioned in **A1** in this response.
**Q3**. In the right column of line 231, a theorem for $\ell^{\prime}(\mathbf{z} _1)/\ell^{\prime}(\mathbf{z} _2)$ is lacking.
**A3**. The formal theorem is provided in Lemmas A.4 and A.5 in Appendix A. We will clarify this in the main body of the paper in the revision.
**Q4**. About the dimension of $e$'s.
**A4**. We will rewrite it as "$\mathbb{O}$ is a subspace in $\mathbb{R}^{d}$". | Summary: The paper aims to provide a theoretical understanding of catastrophic forgetting in a two layered CNN using a multiview data model to understand the learning dynamics of different features. The key theoretical insights are that task specific features have a larger signal than the general features and task specific features of prior tasks appear as a random feature with a strong signal in other tasks. Authors provide theoretical proof and test their findings on both simulated and real world datasets.
## Update after rebuttal
I would like to thank the authors for addressing the questions and raised concerns. The clarifications in the rebuttal increase my confidence in the work and I will increase my score accordingly. I would still strongly request the authors to improve the readability of their work and provide more intuition for their main theorems so that the paper reaches a wider audience and has more impact on the community.
Claims And Evidence: The theoretical model is simplified (two-layer CNN, task-incremental setting with only two binary classification tasks). While insightful, it’s unclear if the findings generalize to deeper CNNs or more complex continual learning settings (e.g., class-incremental CL).
Methods And Evaluation Criteria: Task-incremental CL is less challenging than class-incremental CL. Results may not generalize. Only two tasks are studied—how does CF evolve across many tasks?
Theoretical Claims: Reviewer was not able to validate the correctness of the proofs and theoretical claims.
Experimental Designs Or Analyses: The numerical analysis in Section 6.1 states that the last layer of the CNN is fixed, but the authors don't provide a reason for why it is necessary to analyze the feature extraction capabilities of the model.
The experiments on real-world datasets are hard to follow, and the authors do not provide sufficiently convincing evidence for their claims. The expectation that models should be able to learn general features from earlier tasks that allow them to generalize on unseen tasks without even seeing these classes or training the model is unfounded. Similarly, using the performance on the current task and comparing it to unseen tasks to make the claim that the model tends to extract task-specific features isn't convincing. The authors seem to discount the difference between representation learning and learning classifiers on top of it to learn a decision boundary. Even if the model is able to learn general features in Task 0, we cannot expect the model to be able to classify objects in Task 4 without ever training the model on them. The same expectation is carried over to the t-SNE plots. Overall, the reviewer did not find Section 6.2 technically sound and convincing.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper provides theoretical insights to understand catastrophic forgetting in two layered CNN. However it lacks suggestions on how these insights can be utilized to design a CL method and there is limited evidence to suggest the findings will generalize to commonly used architectures on complex datasets.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: Strengths:
- The paper aims to understand catastrophic forgetting in CL and provides interesting insights which might be useful for the CL research community.
- Useful framework for analyzing feature learning in CL.
Weaknesses:
- In addition to the concerns raised earlier, the paper is difficult to follow and would benefit a lot from major restructuring and better contextualization of theoretical insights and findings. The authors can provide more intuition for their theorems and assumptions before delving into theoretical proofs. This would help a broader audience to understand the key takeaways of the paper.
- Missing discussions on other CL approaches (e.g., regularization-based, architectural methods).
Other Comments Or Suggestions: The analysis would be more useful for the research community if the authors can add a discussion on how these insights help us design better architectures/methods to improve CL ability of the model.
Questions For Authors: Q1) Can the authors comment on how they believe the insights from their analysis can benefit the CL research community and if these can be used to design better CL methods or benchmarks. Reviewer believe this is an important section missing in the paper.
Q2) Do the authors believe their findings would generalize to deeper architectures and more complex CL settings (e.g. Class-IL)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the thoughtful and insightful review, and we answer the concerns as follows:
**Q1**. Generalizing the simple theoretical model to complex settings:
**A1**. Studying the learning dynamics in a two-layer CNN with a multi-view data model is a widely used approach for understanding deep learning. Despite its simplicity, our theoretical findings are supported by numerical and real-world experiments.
We agree with the reviewer that task-IL is less challenging. However, **even within task-IL, a theoretical understanding of CF remains under-explored, particularly in CNNs**. Moreover, we believe our analysis provides a novel perspective on CF by analyzing feature learning. Our framework is flexible and can be extended to more complex settings; please refer to **Q1** in our response to Reviewer QA6F.
**Q2.** Why analyze feature extraction.
**A2.** By fixing the last layer, we can focus on the learning dynamics of features to find the hidden reason behind CF. **It is crucial for identifying whether and how certain features contribute to CF**. Our numerical experiment setup follows the theoretical framework.
**Q3**. Intuitions for theorems and assumptions.
**A3**. Yes, the intuition behind our assumptions is shown in Remark 3.6. We additionally provide intuitions:
**An example on two-task-IL to illustrate our data model** ($\epsilon \sim U(\\{+1,-1\\})$):
- Task 1: Data $\mathbf{x}$ with label $y$ contains features $\\{ y \alpha _u \mathbf{e} _1, y \alpha _v \mathbf{e} _{rob}, \epsilon \alpha _{\zeta}\mathbf{e} _2 \\}$ and noises.
- Task 2, Data $\mathbf{x}$ with label $y$ contains features $\\{ y \alpha _u \mathbf{e} _2, y \alpha _v \mathbf{e} _{rob}, \epsilon \alpha _{\zeta}\mathbf{e} _1 \\}$ and noises.
**Note**: All of $\\{ \mathbf{e} _1, \mathbf{e} _{rob}, \mathbf{e} _2 \\}$ appear in each task, but only $y\alpha _u\mathbf{e} _{\tau}$ and $y\alpha _v\mathbf{e} _{rob}$ are correlated with the true label in task $\tau$, which means the two features can both be used for classification in task $\tau$. $\epsilon\alpha _{\zeta}\mathbf{e} _{3-\tau}$ is called the random feature, which is uncorrelated with the true label. In such a model, **the task-specific feature is only used in its respective task**. $\alpha _u, \alpha _v, \alpha _{\zeta}$ control the strengths of the features.
To analyze the learning dynamics, we quantify the learning speed of feature components in the first and second stages.
1. **Initialization**, Lemma 4.1 shows the order of $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{u}^{(\tau)} \rangle, \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{v} \rangle, \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\zeta} \rangle, \max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\xi} \rangle.$
2. **First stage**, we analyze the learning speed of each component and find that when $\alpha _v \leq o(\alpha _u)$, $\mathbf{u}^{(1)}$ is captured by the model.
3. **Second stage**, Corollary 5.3 shows the order of each component at the beginning of the second stage. We then analyze the learning speed of each component and observe that $\max _{c\in[C]} \langle \mathbf{w} _{c}, \mathbf{\zeta} \rangle$ decreases at a rate depending on $\alpha _\zeta$. Specifically, when $\alpha _\zeta \geq \Omega(\alpha _u)$, the decline becomes significant, leading to forgetting.
4. **Conclusion** We identify two key conditions leading to forgetting:
1. $\alpha _v \leq o(\alpha _u)$. The common feature is absent or has weak signal.
2. $\alpha _\zeta \geq \Omega(\alpha _u)$. The random feature has a strong signal.
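The two conditions above can be illustrated end-to-end with a minimal numerical surrogate. The sketch below uses a linear classifier trained by full-batch logistic-loss gradient descent rather than the paper's two-layer CNN, and all constants ($d$, $n$, learning rate, feature strengths) are our own illustrative choices; it only mimics the data model $\{ y\alpha_u\mathbf{e}_\tau, y\alpha_v\mathbf{e}_{rob}, \epsilon\alpha_\zeta\mathbf{e}_{3-\tau} \}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, task, a_u, a_v, a_z, d=8):
    # axes: 0 -> e_1, 1 -> e_2, 2 -> e_rob (orthonormal basis directions)
    y = rng.choice([-1.0, 1.0], size=n)
    eps = rng.choice([-1.0, 1.0], size=n)   # random-feature sign, independent of y
    X = np.zeros((n, d))
    X[:, task - 1] = y * a_u                # task-specific feature y*a_u*e_tau
    X[:, 2] = y * a_v                       # general feature y*a_v*e_rob
    X[:, 2 - task] = eps * a_z              # random feature eps*a_z*e_{3-tau}
    return X, y

def train(w, X, y, lr=0.5, steps=400):
    for _ in range(steps):
        m = y * (X @ w)                              # margins y*F(x)
        g = -(y / (1.0 + np.exp(m)))[:, None] * X    # grad of log(1+exp(-yF))
        w = w - lr * g.mean(axis=0)
    return w

def acc(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

def continual_run(a_u, a_v, a_z, n=2000, d=8):
    X1, y1 = make_task(n, 1, a_u, a_v, a_z, d)
    X2, y2 = make_task(n, 2, a_u, a_v, a_z, d)
    w = 0.01 * rng.standard_normal(d)
    w = train(w, X1, y1)       # stage 1: learn task 1
    w = train(w, X2, y2)       # stage 2: learn task 2
    return acc(w, X1, y1)      # task-1 accuracy after stage 2

# weak general feature + strong random feature -> forgetting
print(continual_run(a_u=1.0, a_v=0.1, a_z=1.0))
# weak random feature -> task-1 performance retained
print(continual_run(a_u=1.0, a_v=0.1, a_z=0.1))
```

In our toy runs, a strong random feature ($\alpha_\zeta = \alpha_u$ with $\alpha_v \ll \alpha_u$) drives task-1 accuracy toward chance after stage 2, while a weak one ($\alpha_\zeta = 0.1\alpha_u$) leaves it essentially intact, mirroring conditions 1 and 2 above.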
**Q4**. Inconvincing evidence on real-world datasets.
**A4**. The reviewer misunderstands our feature definitions; we clarify them in **Q3** of this response. Based on our definition, we **evaluate the performance on the second task at the end of the first stage** (Figure 3) to study whether the feature learned at the end of the first stage is task-specific or general:
- If the model learns a general feature on the first task, it can achieve great performance on the unseen second task (Task-0 and Task-4 in CIFAR-10).
- If the model learns the task-specific feature on the first task, it cannot achieve great performance on the unseen second task (most cases in Figure 3). **This aligns with the condition $\alpha _v \leq o(\alpha _u)$**.
**Note: We do not compare the performance of the model on the current and unseen tasks.**
Additionally, Figures 4 and 5 analyze learning on Task-0 and Task-3 in CIFAR-10, where forgetting occurs:
- Figure 4 provides evidence for the existence of random features in real-world datasets, as features from task 1 and task 2 exhibit significant overlap, **which aligns with the condition $\alpha _\zeta \geq \Omega(\alpha _u)$**.
- Figure 5 demonstrates that in the second stage, the model relies on task 2's task-specific feature for classification, and task 2's task-specific feature does not help on task 1.
**Q5**. Impact on CL research.
**A5**. Please refer to **Q1** in our response to Reviewer g9HH.
Task-Aware Virtual Training: Enhancing Generalization in Meta-Reinforcement Learning for Out-of-Distribution Tasks | Accept (poster) | Summary: This paper tackles meta RL, where the goal is find optimal policies for MDPs that have the same state-action space but different transition and reward functions. This paper is similar to prior works like PEARL but additionally uses extra loss terms/objectives to improve performance. Their main ideas are:
- Virtual tasks generation as linear combinations of observed training tasks
- They propose a distance metric for tasks and train their latent representations of tasks to reflect this distance (bisimulation loss)
- They train latents for the same tasks to be consistent whether the policy is exploratory on optimizing for that task (on-off)
- Reconstruction loss for generated samples
- Task Preserving sample generation: They train the encoder-decoder to be consistent in the sense that the generated samples from some latent z^alpha are encoded to some value hat(z)^alpha that is close to z^alpha.
- They use a GAN based approach to ensure the generated samples are indistinguishable from real samples
- They regularize state generation with some real next states
- They train a policy on both real and virtual tasks.
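For concreteness, the virtual-task generation and task-preserving consistency ideas summarized above can be sketched with linear stand-ins for the learned encoder/decoder. Everything here (the shapes, the pseudo-inverse encoder, the choice of `alpha`) is an illustrative assumption of mine, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def virtual_latent(z_i, z_j, alpha):
    """Virtual-task latent as a convex combination of two training-task latents."""
    return alpha * z_i + (1.0 - alpha) * z_j

# toy linear stand-ins for the learned decoder/encoder pair
W_dec = rng.standard_normal((6, 4))   # latent -> generated transition sample
W_enc = np.linalg.pinv(W_dec)         # sample -> latent (exact left inverse here)

z_i, z_j = rng.standard_normal(4), rng.standard_normal(4)
z_a = virtual_latent(z_i, z_j, alpha=0.3)

sample = W_dec @ z_a                  # generated sample for the virtual task
z_hat = W_enc @ sample                # re-encode the generated sample

# task-preserving consistency: re-encoded latent should stay close to z_a
consistency_loss = float(np.sum((z_hat - z_a) ** 2))
print(consistency_loss)
```

In the paper the encoder and decoder are learned networks, so this cycle-consistency is enforced as a training loss rather than holding exactly as it does for the linear pair above.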
Claims And Evidence: The claims are effectively that all of the above techniques are necessary and performant. Their claims are empirically validated, where ablations show that performance significantly degrades if any technique is removed.
Methods And Evaluation Criteria: Yes, the proposed approach and techniques are well-motivated given the problem.
Theoretical Claims: I've checked the theoretical claims/proof in appendix A. It appears correct, but I believe there are some almost everywhere arguments that need to be addressed for completeness. The proposed metric may be a pseudo metric, though this has little practical implication.
Experimental Designs Or Analyses: The experiments are standard for Meta RL settings. Using prior implementations for baselines is good practice.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper introduces novel contributions to prior meta RL algorithms, and I consider it an improved version of PEARL. It also leverages ideas from GANS, VAEs, bisimulation, etc.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
- The method is thoroughly ablated
- The empirical results are on reasonably challenging benchmarks
- Each individual contribution is well-justified.
Weaknesses:
- This paper does not sufficiently address its limitations. Primarily, I am concerned about compute time. This paper introduces many new networks and loss terms, which greatly increases resource usage. In the appendix, the authors claim that their algorithms has better convergent performance. However, it would be more convincing if ablations with compute time on the x axis were provided to compare performance between approaches.
- The state regularization parameter epsilon is concerning to me for two reasons. 1) In some MDPs, taking a linear combination of states may not be sensible. For example, in a maze, two states may be feasible but their linear combination may be inside a wall. Alternatively, if the state is an image, linear combinations do not make sense. 2) The ablation of epsilon raises many questions. The ablation makes it seem like a small epsilon around 0.1 is best, with performance degrading as epsilon approaches 0 or 1. Furthermore, the algorithm is very sensitive to this choice. I would like to see a more thorough ablation on epsilon at a higher granularity to see what exactly is going on here. Perhaps there is some quadratic relationship, but there is not enough information at the moment to conclude this in my opinion.
- t-SNE is notoriously fickle and occasionally misleading. I would be curious to see other visualization techniques to check whether the expected relationships in the latent space are still preserved. For example, you could set the latent dimension to 2 so that it can be directly plotted, and see if anything interesting happens. This may degrade performance but is usually a good smell test. This is not a major issue, but is worth considering.
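The linear state blending questioned in the second weakness above can be sketched as follows. This is an illustrative helper, not the paper's exact scheme; the name `blend_states` and the direction of `eps_reg` are assumptions.

```python
import numpy as np

def blend_states(generated_next_state, real_next_state, eps_reg):
    # Regularize a generated next state toward a real one via linear blending.
    # Convention assumed here: eps_reg = 0 keeps the generated state,
    # eps_reg = 1 replaces it with the real state (the paper may differ).
    return (1.0 - eps_reg) * generated_next_state + eps_reg * real_next_state
```

The maze/image concern in the weakness corresponds to cases where such a convex combination of two feasible states need not itself be feasible.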
Other Comments Or Suggestions: Minor:
- Extra comma on line 84, right side.
- Typo line 104
- Comma line 99 right side
- Comma 246
- There may be some typos in Appendix D.1 relating to phi- and phi~. Perhaps use a different symbol than phi~, since phi- can be interpreted as phi with no gradients blocked.
Questions For Authors: 1) In figure 6, it seems like many of the baselines have fixed performance, such as LDM, Varibad, RL^2. Why is this?
2) Does the extra decoder in section D.1, which has a one hot encoding as input, cause some scaling issues? Please discuss how well this scales since one hot encodings are not generalizable, and whether you need a one hot for only training tasks or also test tasks.
Congrats on a good paper. I am willing to increase my score if questions and weaknesses are sufficiently addressed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. We would like to address the main concerns raised, particularly regarding (1) computational cost, (2) the validity of the state regularization method, and (3) t-SNE-based visualization.
**1. Computational Cost:** (common response with Reviewer q6Z7)
We appreciate the reviewer’s concern regarding the potential overhead of TAVT. While direct comparison of training time across all baselines is difficult (e.g., training time on the x-axis), we analyze the relative overhead of each TAVT component in a consistent setup. As this is a shared concern and in line with ICML’s policy encouraging shared responses, we kindly refer to **our detailed answer to Reviewer q6Z7**, which includes a full analysis. We thank you for your understanding and believe it adequately addresses the concern.
**2. Validity and Sensitivity of the State Regularization Method:**
We agree that linear state blending may be problematic in discrete or non-Euclidean environments (e.g., image-based or maze tasks). In such cases, an alternative is to use a discriminator-guided filtering strategy, as in recent data augmentation works [1], to retain only realistic generated states. This would improve applicability to more complex domains and is a promising direction for future work. In our study, however, we focused on continuous settings, where such blending is a reasonable approximation due to smooth and Euclidean state dynamics. We will also make sure to clarify this assumption in the revised version.
In addition, to address the reviewer’s insightful comment on $\epsilon_{\mathrm{reg}}$ sensitivity, we extended our ablation study to include a finer sweep over $\epsilon_{\mathrm{reg}} \in [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]$ on the Walker-Mass-OOD environment. The results, shown in Table R.2, confirm that small values around $0.1$ consistently lead to lower $Q$-function estimation bias and improved return, while both extremes ($0.0$ or $1.0$) result in performance degradation. This suggests a convex relationship and supports the reviewer’s observation that a regularization sweet spot exists.
**Table R.2:** Ablation on $\epsilon_{\mathrm{reg}}$ in Walker-Mass-OOD
|$\epsilon_{\mathrm{reg}}$|0.0|0.05|0.1|0.2|0.5|1.0|
|-|-|-|-|-|-|-|
|Avg. Q Bias|-24.1±25.6|91.4±21.2|**36.4**±23.5|65.1±26.8|98.4±36.4|73.1±32.6|
|Avg. Return|612.6±27.2|660.8±23.4|**745.6**±42.3|632.3±91.7|238.2±98.3|163.8±96.4|
[1] Zhang et al., "Discriminator-guided model-based offline imitation learning," CoRL 2023.
**3. Use of t-SNE and Latent Space Visualization:**
We thank the reviewer for the valuable suggestion regarding the limitations of t-SNE and the use of alternative visualization techniques. While t-SNE is a common tool for qualitatively assessing representation structure, we agree that it can be sensitive to hyperparameters and may not always preserve the true geometry of the latent space.
To address this, we additionally visualized task latents by directly setting the latent dimension to 2 in the Ant-Goal-OOD environment, thereby avoiding t-SNE entirely. The resulting plots (linked below) show raw 2D latent representations without projection, and the alignment with task goal positions confirms that our method preserves semantic structure even without dimensionality reduction artifacts.
Link: https://anonymous.4open.science/r/024/ant-goal-ood-2D.PNG
We will include these results in the revised version and appreciate the reviewer’s suggestion, which strengthened the empirical support for our metric-based task representation method.
**4. Minor Points and Additional Questions:**
We sincerely thank the reviewer for their detailed reading and thoughtful suggestions. We address the raised points as follows and will incorporate the necessary revisions in the updated version:
- Proposition 3.1: While the theoretical metric is defined via true expectations, our practical implementation relies on samples from learned transition models. As such, minor approximation errors may occur, particularly in unvisited regions of the state space across tasks. We will clarify this limitation.
- Typos and notation issues: We appreciate the reviewer’s careful attention and will correct all identified typos and symbols in the revision.
- Figure 6: On-policy methods generally require significantly more samples for convergence. Following common practice in meta-RL, we report their final performance when comparing with off-policy methods.
- One-hot encoding: The one-hot decoder is used only for training tasks. While scalability could be a concern with an extremely large number of training tasks (usually not large due to memory constraints), this has no impact on test-time generalization.
Once again, we sincerely thank the reviewer for their constructive feedback and positive recommendation. We believe these clarifications will further improve the clarity and completeness of our work.
---
Rebuttal Comment 1.1:
Comment: 1. Awesome
2. Interesting. Glad to see it's relatively smooth.
3. Thanks for running this ablation. It's interesting that the learned representation is split left to right by distance, rather than in rings spreading out from the origin or something similar.
Increasing my score to accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your kind and encouraging evaluation. We truly appreciate your thoughtful feedback and are grateful for your generous assessment of our work. Thank you again for your time and support. | Summary: In context-based meta RL, an agent is trained to assign a context vector to an MDP. It learns to identify the context, and to take actions (policy) based on it. This paper investigates the setting where an agent is tested in an MDP not seen during training.
Several ideas are introduced to improve generalization to unseen MDPs:
1. Using a bisimulation metric idea, contexts of bi-similar MDPs are trained to be similar.
2. The agent is trained on imagined data from contexts that are "mixes" of seen contexts.
3. A GAN is used to learn better reward+next state predictions for the imagined contexts.
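Idea 1 above (bisimulation-aligned contexts) can be illustrated with a sample-based sketch. The distance form (reward gap plus a transition-distribution gap) follows the usual bisimulation-metric shape, but the function name, the mean/std proxy for the transition term, and the weight `c` are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def task_distance(rewards_i, next_states_i, rewards_j, next_states_j, c=0.5):
    # Crude sample-based stand-in for a bisimulation-style distance between
    # two tasks: reward gap plus a cheap proxy for the distance between
    # next-state distributions (mean/std gaps, i.e. the 1-d Gaussian W2 form).
    reward_gap = abs(float(np.mean(rewards_i)) - float(np.mean(rewards_j)))
    trans_gap = (abs(float(np.mean(next_states_i)) - float(np.mean(next_states_j)))
                 + abs(float(np.std(next_states_i)) - float(np.std(next_states_j))))
    return reward_gap + c * trans_gap
```

Training would then encourage distances between task latents to match this quantity, so that bi-similar MDPs receive similar contexts.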
In experiments, agents are evaluated on MDPs that were not trained on. The variations in the MDPs are all low-dimensional - a single velocity direction, a 2/3 dimensional goal, a single mass coefficient, etc. The proposed method is shown to outperform previous methods.
## Post rebuttal:
The authors' rebuttal clarified the comparison with Mendonca et al., 2020 (MIER) and mostly addressed my main concern of comparison with baselines tailored for OOD meta RL. From the list of references the authors mention, Ajay et al., 2022 also seems like a relevant (and more recent) comparison, but the paper already includes a rather extensive experimental evaluation.
Overall, and after reading the other reviews, I'm more convinced that this is an incremental but solid contribution to OOD tasks in Meta RL, and is a good fit for ICML.
I have changed my score to weak accept.
Claims And Evidence: The experiments are thorough, and the paper is well written. The main idea makes sense, and extensive experiments support it (but see my question about missing baselines later).
One issue that should be emphasized: the task distributions here are all low-dimensional, and I suspect that this is a limitation of the method (like it is for other context-based meta RL).
Methods And Evaluation Criteria: The authors experiment with several benchmark tasks that are standard in meta RL, and the comparison makes sense.
It would be good if the authors explicitly discussed the reward functions used. I had to look in the code to verify that dense rewards were used. Dense rewards make some domains (like Ant-Goal) much easier, but this is a choice that has been popularized in several prior works.
I was missing baselines that explicitly account for distribution shift, except for LDM. Citing from the related work section: "some studies tackle distributional shift challenges through robust learning (Mendonca et al., 2020; Mehta et al., 2020; Ajay et al., 2022; Greenberg et al., 2023)". Why are these methods not compared with?
Theoretical Claims: The (fairly simple) claim on the bi-simulation metric being a metric looks correct.
Experimental Designs Or Analyses: In high level the experiments follow standard meta RL evaluations and look OK.
Supplementary Material: Looked at the proof, some experiment settings.
Relation To Broader Scientific Literature: The paper contribution is relatively narrow: within the scope of meta-RL with low-dimensional task distribution, it proposes several technical advances for improving generalization. The idea of training on virtual tasks has been proposed before, but the technical improvements proposed here make sense and are reported to work well.
The authors nicely discuss other meta RL studies on OOD generalization in the related work section, but these are mostly missing from the empirical evaluation.
Essential References Not Discussed: A relevant theoretical investigation of generalization in meta RL, and how virtual tasks can help generalization appears in [1] and should be mentioned. That paper also explicitly discussed the dimension of the task space, and why such methods are limited to low dimensional tasks.
[1] Rimon, Z., Tamar, A., & Adler, G. (2022). Meta reinforcement learning with finite training tasks-a density estimation approach. Advances in Neural Information Processing Systems, 35, 13640-13653.
Other Strengths And Weaknesses: The paper is very well written, easy to follow, and the appendix provides extensive details. Also, code is provided (although I did not run it).
This paper has a rather narrow scope: how to technically improve meta RL by improving the virtual tasks the agent is trained on. Within this scope, the authors have done a good job at coming up with several good ideas, and extensively evaluated their approach. The overall method is relatively complex (includes training a GAN, several loss terms, etc), but since the authors provide code, I guess that only time will tell if it gets picked up and used in more practical applications (beyond the toy benchmarks).
Overall, if it wasn't for my reservations about the missing baselines, I would have proposed to accept this paper, as it makes an incremental but clear contribution to meta RL.
Other Comments Or Suggestions: It may be worthwhile to discuss some future work. How can some of the ideas here be used beyond this specific application, or what problems are still missing in the current solution.
Questions For Authors: Why is there no empirical comparison with methods that explicitly account for distribution shift?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. We would like to address the main concerns raised, particularly regarding (1) comparison with baselines handling distributional shift, (2) the scope and practicality of our method, (3) clarification on reward functions, (4) the connection to theoretical work on generalization, and (5) additional future work.
**1. Comparison with Baselines Addressing Distributional Shift:**
We appreciate the reviewer’s insightful suggestion to compare with prior work that explicitly targets distributional shift. We would like to clarify that among the baselines the reviewer mentioned, MIER (Mendonca et al., 2020) is indeed already included in our empirical evaluation. We selected MIER as it is a PEARL-based method specifically designed to handle distributional shift, closely aligned with our setting, and it provides open-source code for reproducibility. Additionally, as the reviewer noted, LDM is also an algorithm that addresses OOD test tasks through virtual task generation, and is therefore included in our comparative evaluation. We regret that the citation for Mendonca et al. (2020) was inadvertently omitted, and we will correct this in the revised version to avoid confusion.
Moreover, we designed our comparison to cover a diverse range of meta-RL algorithms across categories, including both on-policy and off-policy methods (e.g., VariBad, RL$^2$, PEARL), methods with advanced task representations (e.g., CCM using contrastive learning, Amago using transformer-based in-context learning), as well as those employing virtual tasks (e.g., LDM). As a result, we focused on representative or most relevant algorithms per category rather than exhaustively including all prior works, due to experimental feasibility.
**2. Scope and Practicality of the Method:**
We acknowledge and appreciate the reviewer’s comment on the scope of our work. While our approach lies within the context of meta-RL with virtual task training, we argue that it has broader practical implications. One of the core contributions of TAVT is its ability to generalize across a wide spectrum of task shifts, including:
State transition variations (e.g., in Walker-Mass/Hopper-Mass), reward variations (e.g., Ant-Dir/Cheetah-Vel), and 3D task variation (ML1-Push/Reach).
By addressing these diverse sources of task variability, TAVT takes a step closer toward real-world applicability, where test tasks often differ substantially from those seen during training. We agree that practical validation in real robotic systems or high-dimensional continuous control domains remains a promising future direction, and we will add a discussion of this in the revised version.
**3. Clarification on Reward Functions:**
Thank you for highlighting the need to clarify the reward functions used in our experiments. We confirm that dense rewards were used in all environments, following standard practice in recent meta-RL literature. For example, Ant-Goal-OOD uses dense distance-based rewards as in PEARL and LDM. We will explicitly clarify these reward formulations in the main text and provide detailed specifications in Appendix G to ensure transparency.
**4. Connection to Theoretical Work on Generalization:**
We thank the reviewer for pointing out the connection to Rimon et al. (2022), which presents a theoretical framework for understanding generalization in meta-RL via density estimation. Their work highlights the challenge of generalizing to low-density regions in the task distribution. In contrast, our method complements this by densifying the latent task space through virtual task generation and bisimulation-aligned interpolation. This allows the agent to train on transitions that approximate those in unseen test tasks.
We see this connection as a valuable opportunity for synergy: combining our sample generation strategy with density-aware task weighting could lead to more principled and efficient task coverage, especially under limited task availability. We will cite and discuss this work in the revised paper and consider integrating its ideas in future research.
**5. Additional Future Work:**
We appreciate the reviewer’s encouragement to explore broader extensions. Beyond the theoretical connection mentioned above, one promising direction is the application of TAVT to offline meta-RL. In offline settings, OOD task generalization is especially challenging due to limited and biased data. We believe that our VT-based sample generation can augment offline datasets with behaviorally consistent transitions, helping mitigate the distributional gap between training and test tasks. We plan to explore this extension in future work.
Once again, we thank the reviewer for their detailed feedback and thoughtful suggestions. We hope our clarifications address the raised concerns and help situate our contribution within the broader meta-RL literature. | Summary: This paper introduces Task-Aware Virtual Training (TAVT), a novel meta-reinforcement learning algorithm designed to enhance generalization to out-of-distribution (OOD) tasks. The numerical results demonstrate that TAVT significantly enhances generalization to OOD tasks across various MuJoCo and Meta-World environments.
Claims And Evidence: The paper claims that TAVT enhances generalization to OOD tasks. This claim is supported by numerical results in MuJoCo and Meta-World environments. The results consistently show TAVT outperforming other meta-RL methods in OOD settings.
Methods And Evaluation Criteria: The method applies several mechanisms, including metric-based task representation, data generation, and state regularization to improve the OOD generalization ability of context-based RL. The proposed methods somehow make sense.
The evaluation criteria include comparing TAVT with various on-policy and off-policy meta-RL methods across MuJoCo and MetaWorld environments. The performance is measured by the average return on OOD test tasks.
Theoretical Claims: Yes
Experimental Designs Or Analyses: The experimental design involves comparing TAVT with other meta-RL methods in MuJoCo and MetaWorld environments. The authors analyze the task representation of TAVT and conduct ablation studies to evaluate the contribution of different components of TAVT.
Supplementary Material: No
Relation To Broader Scientific Literature: The method applies several mechanisms, including metric-based task representation, data generation, and state regularization to improve the OOD generalization ability of context-based RL.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
The paper is well-structured and clearly written.
The paper addresses an important problem in meta-RL: generalization to OOD tasks.
The paper proposes a novel and effective algorithm, TAVT, that outperforms existing methods.
The paper provides comprehensive experimental results and analysis to support its claims.
Weaknesses
The method applies several mechanisms, including metric-based task representation, data generation, and state regularization to improve the OOD generalization ability of context-based RL. The proposed methods somehow make sense. However, the mechanisms are not novel. For example, the data generation and data mix-up are widely applied to improve the generalization. The bisim loss in metric-based task representation is similar to the combination of the contrastive loss and reconstruction loss, which has been widely discussed in context-based RL papers.
The proposed method generates virtual task samples using GAN, which may increase the training cost of context-based RL.
Other Comments Or Suggestions: MetaWorld is a good benchmark with OOD test tasks. The most important feature is the heterogeneous types of tasks in ML10 and ML45. However, these scenarios are not tested by the paper.
Questions For Authors: In the bisim loss (3), is the computation cost of the task distance metric high?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. We would like to address the main concerns raised, particularly regarding (1) the novelty of our method, (2) benchmark coverage, and (3) computational cost.
**1. Novelty of Our Approach:**
We appreciate the reviewer’s comment on novelty. To address the concern, we clarify that our method is not a generic combination of task representation and VT, but a principled framework designed to answer a central question: **how can we generate virtual tasks that are truly task-aware**, that is, faithful to underlying task dynamics and semantics?
To this end, we propose a metric-based task representation that aligns task latents using the Bisimulation metric. Unlike contrastive methods that separate positive and negative pairs without modeling precise distances, the Bisimulation metric captures behavioral similarity through reward and transition dynamics. This enables the latent space to reflect task geometry, supporting meaningful interpolation and extrapolation. We also introduce an on-off alignment loss to enforce consistency between off-policy latents (used in training) and on-policy latents (used in meta-testing), which is essential for stable generalization. As shown in Fig. 1, our method yields more structured and consistent task representations compared to contrastive-learning-based methods like CCM.
Building on this aligned space, we present a task-preserving sample generation strategy using WGAN, coupled with a latent consistency constraint: generated virtual contexts should reconstruct their source latent when re-encoded. This ensures that generated data remains both realistic and task-consistent. As demonstrated in Fig. 3, our method effectively captures OOD task semantics, in contrast to prior VT-based methods that suffer from latent mismatch.
Together, these two components form the core of TAVT. We believe this task-aware design is the key to its strong OOD generalization, supported by our visualizations and ablations. While representation learning is a widely explored area in meta-RL, it remains a crucial pillar for generalization. Our work extends this line by explicitly aligning representations with task semantics during both training and deployment.
**2. Benchmark Scope:**
We appreciate the reviewer’s thoughtful suggestion regarding ML10 and ML45. These benchmarks feature nonparametric task variation (e.g., changes in object types or task semantics), which differ fundamentally from the parametric setups we target, such as variations in velocity, mass, or 3D goal positions of tasks. In parametric cases, virtual tasks can be generated meaningfully through latent interpolation, making VT-based augmentation both tractable and effective. In contrast, ML10/45 often require task decomposition or modular policies due to their discrete and heterogeneous structure.
For example, SDVT [1] tackles this by clustering tasks through task decomposition and applying per-cluster VT modules using LDM, rather than building a unified latent space. Accordingly, our method can be seen as complementary: integrating our task-aware VT generation within each sub-task cluster in nonparametric settings could enhance generalization. We will include this as a promising future direction in the revised version.
[1] Lee et al., "Parameterizing non-parametric meta-reinforcement learning tasks via subtask decomposition," NeurIPS 2023.
**3. Computational Cost:** (common response with Reviewer t99j)
We appreciate the reviewers’ concern regarding the potential computational overhead of TAVT. While it is difficult to directly compare training time across all on-policy and off-policy baselines, we provide a breakdown of the relative cost introduced by each component in TAVT in the same setup. As shown in Table R.1, adding the reconstruction loss and decoder contributes approximately 3% overhead compared to PEARL, while the metric-based representation (w/o $\mathcal{L}_{\text{gen}}$) adds about 11%. When the full TAVT is considered (using WGAN), the total overhead reaches approximately 18%.
Despite this increase, TAVT significantly outperforms all baselines across diverse OOD environments in MuJoCo and MetaWorld, as shown in Fig. 6. Even with extended training, other methods do not reach TAVT’s level of generalization. We therefore believe the added cost is reasonable and practical. This analysis will be added to the revised version for improved clarity.
**Table R.1:** Relative Per-Epoch Training Time Across Component Variants
|Method|Relative Training Time per Epoch|
|-|-|
|PEARL|100%|
|Recon only|103%|
|TAVT w/o $\mathcal{L}_{\text{gen}}$|111%|
|TAVT w/o on-off loss|116%|
|TAVT (full)|118%|
Once again, we thank the reviewer for their detailed feedback and insightful suggestions. We hope our clarifications help address the raised concerns and further highlight the contribution of our work within the broader meta-RL literature. | Summary: This paper provides a new method for meta Reinforcement Learning that focuses on generalizing to novel tasks. The paper provides a method for generating virtual tasks that takes in to account task characteristics. A metric-based task representation is learned using the Bisimulation metric. Furthermore task preserving sample generation is used to generate additional samples generalize to unseen tasks. With a combination of these, Task-Aware Virtual Training, trains a task latent dependent policy using Soft-Actor Critic. The paper compares the new method against several baselines across many different environments.
Claims And Evidence: - A claim is made that their task-preserving loss ensures that the virtual contexts better retain task latent information. If the meaning of this is that there is parity between the virtual latent and the output of the encoder applied to the virtual context, then I believe that Figure 3 makes a reasonable case for this claim.
- A claim is made that their exploration policy covers a _wider_ range of trajectories. This was compared with PEARL. The evidence in Appendix E shows, for the Ant-Goal problem, what trajectories from the exploration policies look like after the first and second episodes. I think that this claim is visually shown in the Figures.
- A claim is made that the proposed method outperforms the compared baselines, as well as that generalization is good. I believe that the results from Figure 6 and Table 3 showcase this.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the Proof of Proposition 3.1. I believe it is correct.
Experimental Designs Or Analyses: - It is unclear to me how many repeats of each experiment were carried out to obtain the results in Figure 6 and Table 3.
- Am I right in understanding that the averages in Figure 6 for example are also across the different test tasks (for each environment)?
Supplementary Material: I checked Appendix A and Appendix D.3
- In Equation D.4 and D.5, how are the values $z^\alpha_{\text{on}}$ and $\hat{c}_{\text{off}}^\alpha$ chosen?
Relation To Broader Scientific Literature: This paper does a good literature review, and also carries out comparisons to other meta learning algorithms. The paper builds on top of methods from Latent Dynamic Mixtures [1], and also uses the Bisimulation metric [2]
[1] Lee et al, Improving generalization in meta-rl with imaginary tasks from latent dynamics mixture. 2021
[2] Zhang et al, Learning invariant representations for reinforcement learning without reconstruction. 2021
Essential References Not Discussed: None to my knowledge.
Other Strengths And Weaknesses: ### Strengths
- Good literature review
- Generally written well
- Good experimental comparisons across many different method from the literature.
- Appendix F and G details a lot of the experimental setup. I believe this is sufficient for reproducibility.
- Good ablation studies are carried out.
### Weaknesses
- I had trouble understanding the algorithm described in words in Section 4.3 until I looked at the Algorithm 1 in the Appendix. Perhaps the steps of the algorithm can be explained step-by-step (by breaking into a numbered list for example).
- Limitations are discussed in the Appendix. Perhaps this should be done in the main text.
Other Comments Or Suggestions: ### Introduction
1. LDM is used before writing its expansion
2. Figure 1: Can a description of what the annotation (1) -> (8) represents in Figure 1(e) be written in the caption.
3. 'Despite its benefits, we identify _several_ key issues with the existing methods for VT construction' -> Since you identify two issues, wouldn't it be more prudent to say '[...] identify _two_ key issues [...].'?
### Section 2
4. After the definition of the task encoder loss, there is an extra comma.
5. End of Page 2: typo 'aallows'
6. End of Page 2: You mention a 'task decoder' for the first time as if it was mentioned previously in the section. Could you describe the 'task decoder' in the first paragraph of Section 2.2 please?
Questions For Authors: ### Introduction
1. 'First, generated VTs often fail to capture task characteristics accurately.' -> What is the justification for this? Is this from your own analysis or from the literature. If the latter, please give a reference.
2. Figure 1: Is there a notion in which there would be equivalent points (1) -> (8) in (b) - (d)?
### Section 2
3. What do you mean by 'allows extrapolation beyond the original latents'?
#### Section 4
4. line 193: What do you mean by '[...] virtual contexts generalize to unseen tasks.'? Do you mean that the generated contexts would correspond to real 'unseen' tasks? Or that the task latent corresponding to the generated contexts would be close to the virtual latent?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. We would like to address the main concerns raised, particularly regarding (1) issues of clarity, and (2) technical questions.
**1. Clarity Improvements in Weaknesses Part:**
We sincerely thank the reviewer for pointing out aspects of the paper that could benefit from improved clarity.
- Section 4.3 Algorithm Description: We appreciate the feedback that the algorithmic workflow in Section 4.3 may be difficult to follow in its current narrative form. To improve readability, we will revise this section to include a clearer step-by-step breakdown, either by adding a numbered list or directly referencing steps from Algorithm 1 in the appendix.
- Limitations in Appendix: We agree that the discussion of limitations is important and should be accessible to all readers. We will relocate the key points from the appendix into the main body of the paper, allowing them to be more visible and impactful.
**2. Technical Clarifications for Questions:**
Thank you for the thoughtful questions. Below are our clarifications, which will also be reflected in the revised manuscript:
- Averaging in Figure 6 and Table 3: All performance metrics in these figures are averaged over 5 random seeds. For each seed, the reported value reflects the average performance across all considered OOD test tasks in that environment. The shaded areas and error bars represent the standard deviation across these seeds. We will explicitly clarify this in both the main text and figure captions to avoid any ambiguity.
- Equations D.4 and D.5 (Source of $\mathbf{z}^\alpha$ and $\hat{\mathbf{c}}^\alpha$): The virtual latent $\mathbf{z}^\alpha$ is computed through linear interpolation between training task latents as described in Section 2.3. The virtual context $\hat{\mathbf{c}}^\alpha$ is then generated using the task decoder conditioned on $\mathbf{z}^\alpha$ as described in Section 4.2. We will revise the appendix to clearly connect these definitions across sections.
- "First, generated VTs often fail to capture task characteristics accurately": We would like to clarify that the statement was not referencing a claim made in prior literature, but rather reflects our own empirical observation based on the results shown representation of LDM in Figure 1. Specifically, although LDM also utilizes VT generation, we observe that its task representations do not align well with task semantics such as goal direction in the Ant-Goal environment. We will revise the manuscript to clearly indicate this.
- Figure 1 (1)→(8) Annotation: The indices (1) through (8) represent different goal directions in the Ant-Goal environment as defined in Figure 1(a). These were included in Figure 1(e) to illustrate that our method accurately reflects structured goal alignment. To improve comparison, we will add the same labeling to Figures 1(b–d) and enhance the caption to clarify this design.
- "allows extrapolation beyond the original latents": This refers to interpolating task latents using coefficients $\alpha$ beyond the $[0,1]$ range, effectively generating virtual tasks that lie outside the convex hull of the training task distribution. This helps the agent learn from extrapolated (OOD) regions.
- "[...] virtual contexts generalize to unseen tasks": This sentence explains the rationale for generating virtual contexts. Since real samples from unseen tasks are unavailable, the generated contexts do not correspond to any specific real task. As the reviewer correctly inferred, this is closer to the latter interpretation, virtual contexts are generated by decoding linearly interpolated task latents using the decoder’s generalization capability. We will clarify this point in the revised version.
**3. Other Comments and Typos:**
We are grateful for the reviewer’s careful reading. The following minor issues will be addressed in the revised manuscript:
- Expand "LDM" to "Latent Dynamics Mixture" at first mention.
- Correct phrasing such as “identify several key issues” to "identify two key issues."
- Fix typos like "aallows" and remove unnecessary commas.
- Introduce the term "task decoder" earlier in Section 2.2 for clarity.
Once again, we sincerely thank the reviewer for their constructive feedback and positive recommendation. We believe these clarifications will further improve the clarity and completeness of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications! I think that these answers reinforce my original score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your thoughtful and encouraging feedback. Your positive assessment means a great deal to us, and we’re truly grateful for the time and care you dedicated to reviewing our work. Thank you again for your support. | null | null | null | null | null | null |
Tracking The Best Expert Privately | Accept (poster) | Summary: The authors develop differentially private algorithms for prediction with expert advice under dynamic regret (tracking the best expert) across three adversary types: stochastic with shifting distributions, oblivious, and adaptive. They achieve sub-linear regret bounds for all cases, notably providing explicit regret guarantees against shifting stochastic and oblivious adversaries.
---After Rebuttal---
---
The authors fail to explain how to achieve sublinear dynamic regret in a very simple example, even at the discussion stage. Consider the following 2 loss vectors when $S=1$:
(a) $l_t(1)=1-l_t(2) = 0$ for all $t \le T/2$; $l_t(1)=1-l_t(2) = 1$ for all $t>T/2$;
(b) $l_t(1)=1-l_t(2) = 1$ for all $t \le T/2$; $l_t(1)=1-l_t(2) = 0$ for all $t>T/2$;
Note that, for each of these 2 loss vectors, we can find a sequence of $\\{j^*_t\\}$ suffering $0$ "loss".
Then, since expert 1 and expert 2 are symmetric, we consider choosing expert 1 with at least 50% probability at $t=1$. The agent's remaining strategy is then to decide when to jump to expert 2. The problem, however, is that if the agent jumps to expert 2 before $t=T/2$, the agent will inevitably suffer $O(T)$ regret in the worst case after $t=T/2$ (i.e., in case (b)). If the agent insists on staying with expert 1 before $t=T/2$, the agent will also suffer $O(T)$ regret in the worst case (i.e., in case (b)).
Similarly, by symmetry, we reach the same conclusion if the agent chooses expert 2 with at least 50% probability at $t=1$; the worst case is then case (a).
**Therefore, I have every reason to believe that the theory of this article is wrong, and hence I adjust the score to 1 and will defend this decision unless someone can point out a logical loophole in the above.**
(The authors can "edit" their own reply in the remaining rebuttal time to face this simple example directly instead of proposing an indirect answer to evade it.)
---Update 2---
---
Now, I realize that the above example is not a counter-example because the agent can switch experts more than $S$ times. I believe the writing of the paper should be substantially revised. In particular, the many uses of "abuse of notation" in the problem statement could confuse the reader. In line 162, the "comparison sequence" is very confusing without a prior definition.
I will increase the score as long as I can see a detailed plan for improving the clarity of the paper.
---Update 3---
---
Thank you for the detailed response. I am looking forward to the revised version if accepted. I have increased the score to 3 as I promised, and this score is conditional on the author revising their paper as in their latest response.
Claims And Evidence: The claims presented in this paper are strictly based on theoretical bounds; notably, the paper does not include any experimental validation to support these theoretical findings.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I did not check the correctness of any proofs.
Experimental Designs Or Analyses: Notably, the paper does not include any experimental evaluation, which I consider to be a notable weakness.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper investigates three distinct adversarial bandit frameworks with dynamic regret under differential privacy constraints: stochastic adversaries with shifting distributions, oblivious adversaries, and adaptive adversaries. By systematically addressing these settings and analyzing their dynamic regret, this work makes a comprehensive contribution to the research community.
Essential References Not Discussed: There is a lot of work on adversarial bandit in the literature. The author should pick a typical algorithm, such as Exp3 [1], to explain why it can or cannot be adapted to the current setting of dynamic regret with DP.
[1] Part three of Bandit Algorithms, Tor Lattimore and Csaba Szepesvári.
Other Strengths And Weaknesses: Table 1 is particularly valuable as it clearly outlines the lower bounds for the three settings. Although the upper bounds achieved by the authors' algorithms do not yet match all of these lower bounds, the results offer important insights and guidance for future research directions.
Other Comments Or Suggestions: See "Questions For Authors"
Questions For Authors: I find it difficult to gain intuitive insight into why dynamic regret is considered in this context. While static regret for adversarial bandits has been extensively studied in the literature, dynamic regret remains relatively under-explored. It would be helpful if the authors could provide a clearer, more intuitive technical insight for analyzing dynamic regret, particularly highlighting scenarios where algorithms that aim at static regret become suboptimal in terms of dynamic regret.
Additionally, it would be very helpful to justify the technical insight if the authors can explain the following example: Let the number of experts be 2. Consider a set $L=\\{0,1\\}^T$ that represents a set of loss functions. In particular, any $v \in L$ induces a sequence of loss functions (or say vectors) $\ell_t(1)=v_t$ and $\ell_t(2)=1-v_t$ for any $t=1,\dots,T$.
Then, it looks like no matter how your agent chooses each $J_t$ at each time $t$, the expected regret must be $O(T)$ in the worst case (i.e., for one of the loss vectors in the above set $L$). I understand that this may not be true due to the limitation of switching times $S$; however, your definition of expected regret in Lines 145 to 152 of the second column does not explicitly depend on $S$ (particularly for $j^*$). I believe this definition needs to be revised.
Can the authors explain the above and write the intuition behind them in the Introduction section? I think this will greatly increase the readability of the paper. Especially explain the case when $S=0$ or $S=1$ in the above example.
Although my current score is negative, I am very happy to increase my score if the author can satisfactorily address these questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We address the reviewer's concerns and questions below and hope that they will reevaluate their score accordingly.
> No experimental results
We acknowledge the reviewer’s concern regarding the lack of experiments. However, our work is intentionally theoretical, aimed at understanding fundamental rates and limits of private learning from experts. Theoretical contributions often provide key insights that guide future empirical research. While experimental validation is certainly valuable, we believe that our results are meaningful in their own right and align with the norms of theoretical research in this area.
> There is a lot of work on adversarial bandit in the literature. The author should pick a typical algorithm, such as Exp3 [1], to explain why it can or cannot be adapted to the current setting of dynamic regret with DP.
We note that our work focuses on the setting of experts where we have *full-information* rather than the setting of bandits. There are many papers that study our problem (dynamic experts under *full-information* feedback) in the non-private setting which we discuss in our related work section. We will add a discussion on the challenges in privatizing these existing algorithms. We briefly mention one challenge here with regards to the modified multiplicative weights algorithm of (Lu and Zhang (2019)). Their algorithm basically applies the same update as multiplicative weights and then projects to the clipped simplex. Unfortunately, this clipping operation is challenging for privacy as now we have to privatize each gradient separately instead of privatizing the sum of gradients via the binary tree mechanism. We will discuss this more carefully in the final version.
Lu, Shiyin, and Lijun Zhang. "Adaptive and efficient algorithms for tracking the best expert." arXiv preprint arXiv:1909.02187 (2019).
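For context, the binary tree (tree aggregation) mechanism mentioned above can be sketched as follows. This is our own illustrative Python sketch of the standard mechanism (the Laplace sampler and the dyadic decomposition are textbook choices, not code from the paper):

```python
import math
import random

def laplace(scale):
    # Inverse-CDF Laplace sampler.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_prefix_sums(stream, eps):
    """Binary tree (tree aggregation) mechanism: eps-DP estimates of all
    prefix sums of a stream with per-element sensitivity 1. Each element
    touches at most `levels` dyadic nodes, so adding Laplace(levels/eps)
    noise to every node yields eps-DP; each prefix sum then reads at most
    `levels` noisy nodes, giving O(log(T)/eps) error instead of O(T/eps)."""
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    noisy = {}  # (level j, block k) -> noisy sum of stream[k*2^j : (k+1)*2^j]
    out = []
    for t in range(1, T + 1):
        # Decompose the prefix [0, t) into dyadic blocks via the bits of t.
        est, pos = 0.0, 0
        for j in reversed(range(levels)):
            if t & (1 << j):
                k = pos >> j
                if (j, k) not in noisy:
                    noisy[(j, k)] = sum(stream[pos:pos + (1 << j)]) + laplace(levels / eps)
                est += noisy[(j, k)]
                pos += 1 << j
        out.append(est)
    return out
```

Privatizing the *sum* of gradients this way is cheap; the clipping/projection step discussed above is what breaks this structure, since it would force per-round privatization instead.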
> Motivation for dynamic regret
Dynamic regret is a stronger notion than static regret: it compares the performance of the algorithm to a sequence of changing experts (rather than a fixed expert). This is extremely important in scenarios where a distribution shift results in a new optimal expert. For example, in recommendation systems, a user's preferences might change over time.
To highlight the importance of dynamic regret, let's consider the following example. Consider the setting of two experts, where the first $T/2$ losses are such that $\ell_t(1) = 0$ and $\ell_t(2)=1$, and for the last $T/2$ losses we have $\ell_t(1) = 1$ and $\ell_t(2) = 0$. For this sequence of losses, a simple algorithm that gets $0$ static regret is one that plays expert $1$ for all rounds. However, the dynamic regret of this algorithm is $T/2$ because the optimal strategy for dynamic regret would play expert $1$ for $T/2$ rounds before switching to expert $2$. For this sequence of losses, one can generalize this argument to the Multiplicative Weights (MW) algorithm, which would obtain sublinear static regret, but linear dynamic regret since MW would wait roughly $T/2 - \sqrt{T}$ rounds before switching to expert $2$. This shows a scenario where an algorithm aimed at static regret suffers suboptimal dynamic regret.
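As a quick sanity check, the static/dynamic gap in this example can be computed directly. The following sketch (our own illustration; the helper name `regrets` is ours) evaluates the "always play expert 1" strategy on that loss sequence:

```python
def regrets(T):
    # Loss sequence from the example: expert 1 is best for the
    # first half, expert 2 for the second half.
    loss = [(0, 1)] * (T // 2) + [(1, 0)] * (T // 2)

    # Strategy: always play expert 1.
    alg_loss = sum(l[0] for l in loss)

    # Static comparator: best single fixed expert in hindsight.
    best_fixed = min(sum(l[j] for l in loss) for j in range(2))

    # Dynamic comparator with S = 1 switch: expert 1, then expert 2.
    best_dyn = sum(l[0] for l in loss[: T // 2]) + sum(l[1] for l in loss[T // 2:])

    return alg_loss - best_fixed, alg_loss - best_dyn

# regrets(1000) -> (0, 500): zero static regret, but dynamic regret T/2.
```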
> Definition of dynamic regret
As for the reviewer's example and question about the definition, we first clarify that the definition of expected regret in Lines 145 to 152 is the definition of *static regret* which does not involve any switching. The definition for *dynamic regret* is provided in lines 175-181 and the constraint on the number of switches $S$ is made explicit in the definition of regret here. Note that our definition of dynamic regret (lines 175 - 181) crucially relies on limiting the number of changes in the set of best experts we compare to ($S$ upper bounds this). Indeed, we mention in line 161 in the paper that "To make the problem tractable, we constrain the comparison sequence of experts to have at most $S$ changes". *This is crucial as otherwise we end up having linear regret, which is exactly what is happening in the reviewer's example.*
When $S=0$, dynamic regret becomes equivalent to static regret, since the sequence of comparison experts must consist of a single expert. When $S=1$, we compare our algorithm to sequences of experts that change at most once, e.g., as in the example above, expert $1$ for the first $T/2$ rounds and expert $2$ for the last $T/2$ rounds. The larger $S$ is, the stronger the notion of regret. However, as $S$ becomes too large, it becomes hard to provide meaningful guarantees on the regret.
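To make the role of $S$ concrete, the dynamic-regret notion described above can be written schematically as follows (our own rendering from the verbal description; the paper's exact notation may differ):

```latex
\mathrm{DR}_A(T, N, S)
  \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \ell_t(J_t)\right]
  \;-\; \min_{\substack{j_1^*,\dots,j_T^* \in [N] \\ \#\{t < T \,:\, j_{t+1}^* \neq j_t^*\} \le S}}
        \sum_{t=1}^{T} \ell_t(j_t^*)
```

Here the minimum is over comparator sequences with at most $S$ switches, so $S=0$ recovers static regret (a single fixed comparator) and larger $S$ strengthens the benchmark.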
We will add more clarification and intuitions regarding the parameter $S$ and its importance for the dynamic setting in the introduction.
---
Rebuttal Comment 1.1:
Comment: 1. Regarding the definition in Lines 145 to 152: I suggest the authors move this definition of static regret to the introduction or just under the title “2. Preliminaries.” It is very confusing to place a definition of static regret under the title “2.1. Prediction with Expert Advice and Dynamic Regret.”
2. Regarding $S=1$ in my example: It is still unclear how we can get sublinear regret when $S=1$. For simplicity, let’s consider the setting of an “oblivious adversary.” If your strategy is just “expert 1 for the first T/2 rounds, and expert 2 for the last T/2 rounds,” then under the losses $l_t(1) = 0$ and $l_t(2) = 1$ for all $t \leq T$, your strategy will still suffer $O(T)$ regret. I would expect a clearer explanation of how to achieve sublinear regret in my example of S=1. If I understand correctly, dynamic regret depends on the worst case of the loss vector (or say functions); and of course, different strategies may have different loss vectors as the worst case.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and address their comments below.
1. We thank the reviewer for the suggestion and will make sure to incorporate this change in the final version.
2. We would like to point out that we did not claim that “playing expert 1 for T/2 rounds and then switching to expert 2” is a good *algorithm* that minimizes dynamic regret for all sequences of losses. Rather, we stated that this is the optimal *strategy* for minimizing dynamic regret for just that *specific* sequence of loss functions that we considered (i.e. the first $T/2$ losses are such that $\ell_t(1) = 0$ and $\ell_t(2)=1$, and for the last $T/2$ losses we have $\ell_t(1) = 1$ and $\ell_t(2) = 0$).
That said, the reviewer is correct that this particular strategy does not achieve low dynamic regret when S=1 across all loss sequences. To achieve sublinear dynamic regret when S = 1 across all loss sequences (including the reviewer's example), one has to use online learning algorithms *specifically aimed at minimizing dynamic regret*. Fortunately, there are many such algorithms (see [1, 2, 3, 4, 5, 6]) and we have referenced these works in Section 1.2. These algorithms *modify* the standard Multiplicative Weights Algorithm to *explicitly* account for the fact that they are being evaluated against sequences of experts that can switch at most S times. At a high level, these algorithms ensure that the probability of playing any expert never drops too low (where "low" depends on $S$ and $T$). This allows the algorithm to recover fast enough in case an expert goes from having very large loss to very small loss. So, to obtain sublinear dynamic regret when S=1, one would not use the standard Multiplicative Weights Algorithm, but use a dynamic regret minimization algorithm, like those in [1, 2, 3, 4, 5, 6], which ensure "recoverability."
[1] Herbster, Mark, and Manfred K. Warmuth. "Tracking the best expert." Machine learning 32.2 (1998): 151-178.
[2] Wei, Chen-Yu, Yi-Te Hong, and Chi-Jen Lu. "Tracking the best expert in non-stationary stochastic environments." Advances in neural information processing systems 29 (2016).
[3] Zhang, Lijun, Shiyin Lu, and Tianbao Yang. "Minimizing dynamic regret and adaptive regret simultaneously." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
[4] Herbster, Mark, and Manfred K. Warmuth. "Tracking the best linear predictor." Journal of Machine Learning Research 1.Sep (2001): 281-309.
[5] Bousquet, Olivier, and Manfred K. Warmuth. "Tracking a small set of experts by mixing past posteriors." Journal of Machine Learning Research 3.Nov (2002): 363-396.
[6] Zhang, Lijun, Shiyin Lu, and Zhi-Hua Zhou. "Adaptive online learning in dynamic environments." Advances in neural information processing systems 31 (2018)."
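A minimal numerical sketch of this "recoverability" idea, loosely in the spirit of the Fixed-Share update of [1]. The learning-rate and mixing parameters below are illustrative assumptions of ours, not the paper's algorithm:

```python
import math

def fixed_share_loss(losses, eta, alpha):
    """Expected loss of a Fixed-Share-style learner: a multiplicative
    update followed by mixing with the uniform distribution, so that no
    expert's probability ever drops below alpha / N."""
    n = len(losses[0])
    p = [1.0 / n] * n
    total = 0.0
    for l in losses:
        total += sum(pi * li for pi, li in zip(p, l))
        # Multiplicative-weights step on the observed losses...
        w = [pi * math.exp(-eta * li) for pi, li in zip(p, l)]
        z = sum(w)
        # ...then share weight uniformly: every expert stays "alive",
        # so the learner can recover quickly after a switch.
        p = [(1 - alpha) * wi / z + alpha / n for wi in w]
    return total

T = 2000
seq_a = [(0, 1)] * (T // 2) + [(1, 0)] * (T // 2)  # reviewer's case (a)
seq_b = [(1, 0)] * (T // 2) + [(0, 1)] * (T // 2)  # reviewer's case (b)
eta, alpha = math.sqrt(2 * math.log(2) / T), 1.0 / T
# The best S=1 comparator suffers loss 0 on each sequence, so the
# learner's expected loss here equals its dynamic regret; it stays far
# below the linear Theta(T) loss of any "commit-then-jump" strategy.
```

On both sequences the expected loss (hence the dynamic regret) is sublinear, because the uniform mixing bounds how long recovery takes after the single switch.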
## ---RESPONSE TO UPDATE 2---
The reviewer is correct that although the sequence of experts is limited to S switches, the algorithm is not. We will make this clear in the final version. Before we discuss the plan, we would like to point out that we were not trying to be evasive. As the reviewer probably realized, minimizing dyn. regret is non-trivial even when S=1. That’s why we opted to reference existing dyn. regret minimizing algorithms, as we would not have enough space to specify these algorithms and their analysis.
Detailed plan:
- In the Introduction, we will include an example of how static regret minimizing algorithms can fail to achieve low dyn. regret by discussing what happens when $S$ goes from $0$ to $1$.
- Within Section 2, we will make a new subsection dedicated to static regret min. and move the definition of static regret that’s in Section 2.1 to this new subsection.
- Before line 162, we will define "comparison sequence of experts" and note that static regret can be written in terms of competing against the best *constant* sequence of experts.
- In Section 2.1, we will discuss existing ways to minimize dyn. regret and how these differ from minimizing static regret.
- In Section 2.1, we will remove all uses of “abuse of notation” and be explicit. In particular,
- In line 199, we will define dyn. regret under an oblivious adversary as $DR^o_A(T, N, S) := \sup_{l_1, \ldots, l_T} DR_A(f_{l_1}, \ldots, f_{l_T}, N, S)$, where $f_{l_t}$ is such that $f_{l_t}(j_t, j_{1:t-1}) := l_t(j_t).$
- In the right column on line 186, we will use a variable other than $\ell$ to define the function mapping $[N]$ and $\mathcal{Z}$ to $[0, 1]$ to avoid overloaded notation.
- In the right column on line 215-216, we will simply define $A \circ Adv(z_{1:T})$ to be the sequence of random plays of the learner $A$ when interacting with the adversary, given inputs $z_1,\ldots, z_T$.
- In our definition of dyn. regret, we will make clear that while the sequence of experts is limited to S switches, the learner is not, and this is crucial for obtaining sublinear regret. Subsequently, we will explain how increasing $S$ makes the problem harder. | Summary: This work studies the online private learning problem, in the setting of online prediction with experts. The main focus is the relaxed notion of dynamic regret, where the best expert of the baseline can change at most S times. The main result considers three different models of the adversary: shifting stochastic, oblivious and adaptive. For all three settings, the authors prove upper bounds, and they show lower bounds in the shifting stochastic and adaptive regimes. The latter indicates a separation between oblivious and adaptive online learning.
The paper uses different proof techniques: SVT-based algorithm for shifting adversary, reduction from static regret for oblivious adversary and privatization of non-private dynamic regret algorithm for adaptive adversary. For the lower bounds, authors reduce existing lower bounds for static regret.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Line 309, right column: it is claimed that when $S \ll N$, we have that $\log (N / S) = \Omega (\log N)$, which is not true generally. Consider, e.g., $N = S \log S \gg S$, but $\log (N / S) = \log \log S \ll \log N$.
Experimental Designs Or Analyses: Not applicable
Supplementary Material: Yes
Relation To Broader Scientific Literature: This work extends the literature on private online learning. Results connect private online learning with static regret to dynamic regret formulation. Authors use three different techniques of establishing upper bounds in this setting.
Essential References Not Discussed: Not to the best of my knowledge.
Other Strengths And Weaknesses: Strengths: The techniques used for providing upper bounds are broad and interesting. The result for the shifting adversary is nearly-optimal.
Weaknesses:
1. Upper bound for oblivious adversary is computationally inefficient.
2. It is hard to understand the tightness of the result for oblivious and adaptive adversary.
Other Comments Or Suggestions: 1. In Theorem 3.1, there should be additional $\log T$ factor in the first term.
2. Section 2 is not clearly written, ‘abuse of notation’ mentioned 3 times on the same page.
3. Formulation of Lemma A.3 is unclear, please rewrite.
Questions For Authors: 1. Lower bound for oblivious adversary is the same as the shifting stochastic one. However, authors use the result of (Asi et al., 2024) to prove upper bound for oblivious adversary. This paper also contains lower bound (for certain class of algorithms). Is this lower bound extendable to dynamic regret?
2. Authors often assume 'high-dimensional' regime $T < N$ or $N > T/S$. I think there should be more discussion on what would change if $N$ is small.
3. What is $j_t$ in Lemma A.2?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and finding our techniques to be broad and interesting. We address the reviewer's concerns and questions below and hope that they will reevaluate their score accordingly.
> Computational inefficiency
This is a good question and an important direction of future work. First, we note that despite its inefficiency, this result is important on its own because *it is the first known upper bound for oblivious adversaries and it establishes a separation in the achievable rates between oblivious and adaptive adversaries*. Unfortunately, our current attempts at designing efficient private algorithms against oblivious adversaries have been unsuccessful, as it is not clear how to privatize existing efficient non-private algorithms for dynamic regret. Perhaps central to the difficulty is the tension between lazy updating and obtaining small dynamic regret. Existing techniques for obtaining private online learning algorithms under static regret rely on privatizing existing (non-private) lazy algorithms that do not switch their played experts too often [1, 2]. Unfortunately, it is plausible that such lazy algorithms cannot obtain good dynamic regret, as switching which expert is played is crucial to ``tracking" the best expert. We will make sure to add a discussion of this in the camera-ready version.
That said, we do note that our algorithm against adaptive adversaries is efficient and also gives guarantees in the oblivious settings.
[1] Asi, Hilal, et al. "Private Online Learning via Lazy Algorithms." Advances in Neural Information Processing Systems 37 (2024): 112158-112183.
[2] Asi, Hilal, et al. "Private online prediction from experts: Separations and faster rates." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.
> Tightness of the result for oblivious and adaptive adversary
This is an important point of our work and we will clarify it in our final version. First, note that our bounds for oblivious adversaries are not tight, as can be seen from Table 1 in the paper. As for adaptive adversaries, the bounds are tight in the following sense. Suppose we ask: what is the best privacy (smallest $\varepsilon$) we can guarantee while still being able to learn anything useful (e.g., obtain sublinear regret)? Our upper and lower bounds provide a tight answer to this question: indeed, the lower bound implies that sublinear regret requires $\varepsilon = \Omega(\sqrt{S/T})$, and our upper bound (Theorem 5.2) shows that the dynamic regret is sublinear whenever $\varepsilon = \Omega(\sqrt{S/T})$. This shows that our setting has $\varepsilon \ge \sqrt{S/T}$ as a fundamental threshold for learning.
> Comments about lack of clarity and typos
We will fix these comments in the final version. More specifically, we will add the missing $\log(T)$ factor in theorem 3.1 and rewrite Lemma A.3 more clearly. We will also revise Section 2 by removing comments regarding ``abuse of notation" and be more explicit with our definitions.
> $\log(N/S) = \Omega(\log N)$ when $S \ll N$
The reviewer is right. When we wrote that $S \ll N$, we meant that $S$ is polynomially smaller than $N$, in the sense that $S = O(N^{1-\delta})$. We will clarify this in the final version.
> (Q1) Extension of the lower bound of (Asi et al., 2024)
The family of algorithms used in the lower bound proof of (Asi et al. 2024) are those that are lazy, in the sense that they do not switch their played expert frequently. While this is suitable for static regret, for dynamic regret we do want to switch experts, since we are competing with a sequence of switching experts. That said, the lower bound could be extended, but it would likely not be as meaningful.
> (Q2) Small $N$ regime
Our focus in this work was on the high-dimensional regime where $N$ is large. However, we agree with the reviewer and believe the small-$N$ regime is also of interest. Our algorithms work in that setting but are probably sub-optimal. Similarly to the static regret setting, we believe that versions of the binary tree mechanism are going to be optimal for small $N$. These algorithms usually have a worse dependence on $N$ ($N$ instead of $\log N$) but a much better dependence on $T$. We leave this question for future research.
> (Q3) what is $j_t$ in Lemma A.2?
Lemma A.2 proves certain properties about Algorithm 1 and uses notation from that algorithm ($j_t$ was defined in Algorithm 1). We will clarify this in the paper. | Summary: The paper studies the dynamic regret in online learning with differential privacy. The paper considers three adversaries: shifting distributions, oblivious, and adaptive. This paper provides both lower bound and upper bound for three different adversaries. Finally, similar to static regret, This paper establishes a fundamental separation between oblivious and adaptive adversaries for the dynamic setting.
## update after rebuttal
I maintain my score.
Claims And Evidence: In general, the claims are supported by sufficient evidence. The theoretical results seem sound, even though I could not understand all the proofs related to the privacy of algorithms.
Methods And Evaluation Criteria: The performance measure, the Dynamic regret and $\varepsilon$-differentially private, makes sense as it is the measure that is studied in the most similar works.
Theoretical Claims: I checked the general soundness and read a small part of the proofs, but I couldn’t read all of them given the length of the supplementary material.
Experimental Designs Or Analyses: The paper does not include experiments.
Supplementary Material: I read through Appendix A, part of B and C.
Relation To Broader Scientific Literature: This paper is the first to systematically incorporate dynamic regret minimization into private online learning, bridging an important gap. Previous research on private online learning had largely focused on the static regret case.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. By handling shifting stochastic, oblivious, and adaptive adversaries, the authors offer a comprehensive picture, including how much privacy “costs” in each scenario.
2. The paper provides both upper bound and lower bound for dynamic regret in private online learning settings.
Weaknesses:
1. Authors may give details of the proof for differential privacy of algorithms. For example, post-processing is a crucial step in demonstrating an algorithm’s privacy. Please either include a relevant theorem in the paper or cite papers.
2. Authors may provide experiments.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Can these results be extended to the bandit setting?
2. Is this a typo: 283-284: arg max -> arg min?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We address the reviewer's concerns and questions below and hope that they will reevaluate their score accordingly.
> Authors may give details of the proof for differential privacy of algorithms...
We will make sure to include theorems about basic privacy mechanisms and their privacy/utility guarantees in the camera-ready version. That said, we do prove the privacy guarantees for all our algorithms. For example, on lines 332-334 in the right column, we provide the proof of the privacy guarantee in Theorem 4.1. Similar privacy proofs for the other Theorems can be found in the Appendix.
> Authors may provide experiments
This paper is theoretical in nature as we aim at rigorously understanding fundamental limits in the context of dynamic regret minimization. While empirical validation can be valuable, our contributions stand independently as theoretical results that advance understanding in this area. Future work may explore experimental verification, but our primary goal here is to establish a solid theoretical foundation.
> (Q1) Can these results be extended to the bandit setting?
Yes, we do think that some of these results can be extended to the bandit setting, especially the results for stochastic and adaptive adversaries. With regards to oblivious adversaries, more care needs to be taken as one often uses unbiased estimates of the true loss under bandit feedback. This can cause issues in the privacy analysis as the sensitivity of the unbiased estimator can be large.
> (Q2) Is this a typo...
Yes, we thank the reviewer for catching this typo. We will make sure to fix it in the camera-ready version. | Summary: This paper studies differentially private online learning in the context of tracking the best expert, a problem where an algorithm dynamically selects from a set of experts to minimize cumulative loss over time. The authors develop differentially private algorithms for this problem under three types of adversaries: Shifting Stochastic Adversaries, where the data distribution changes up to S times;Oblivious Adversaries, which determine loss sequences independently of the algorithm. Adaptive Adversaries, which choose loss sequences based on the learner’s decisions. The paper provides upper and lower bounds on the dynamic regret of private algorithms in these settings and highlights fundamental limitations when learning privately against adaptive adversaries.
Claims And Evidence: The claim presented in this paper appears to be clear and correct.
Methods And Evaluation Criteria: The privacy and utility guarantees of the algorithms have been rigorously proven.
Theoretical Claims: The proof appears to be correct.
Experimental Designs Or Analyses: Not applicable; the proof appears to be correct.
Supplementary Material: I have reviewed most of the Supplementary Material.
Relation To Broader Scientific Literature: The paper contributes to differential privacy and online learning.
Essential References Not Discussed: The paper appears to have sufficient references.
Other Strengths And Weaknesses: Strength:
* The authors provide algorithms for three types of adversaries: shifting stochastic, oblivious, and adaptive.
* The results for each setting are equipped with upper and lower bounds.
Weakness:
* The algorithm for the oblivious adversary setting is not computationally efficient, as it relies on maintaining an exponentially large set of meta-experts.
Other Comments Or Suggestions: The paper is clearly written and well-structured. I don't have other comments.
Questions For Authors: The algorithm for oblivious adversaries is not computationally efficient and requires an exponential number of meta-experts. Is it possible to design an efficient algorithm that attains same-order regret bounds in this setting?
## update after rebuttal
Since my question has been answered, I will keep my positive score and vote to accept.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We address their concerns below and hope they will reevaluate their score accordingly.
> Is it possible to design an efficient algorithm that attains same-order regret bounds in this setting?
This is a good question and an important direction for future work. First, we note that despite its inefficiency, this result is important on its own because *it is the first known upper bound for oblivious adversaries and it establishes a separation in the achievable rates between oblivious and adaptive adversaries*. Unfortunately, our current attempts at designing efficient private algorithms against oblivious adversaries have been unsuccessful, as it is not clear how to privatize existing efficient non-private algorithms for dynamic regret. Perhaps central to the difficulty is the tension between lazy updating and obtaining small dynamic regret. Existing techniques for obtaining private online learning algorithms under static regret rely on privatizing existing (non-private) lazy algorithms that do not switch their played experts too often [1, 2]. Unfortunately, it is plausible that such lazy algorithms cannot obtain good dynamic regret, as switching which expert is played is crucial to "tracking" the best expert. We will make sure to add a discussion of this in the camera-ready version.
That said, we do note that our algorithm against adaptive adversaries is efficient and also gives guarantees in the oblivious setting.
[1] Asi, Hilal, et al. "Private Online Learning via Lazy Algorithms." Advances in Neural Information Processing Systems 37 (2024): 112158-112183.
[2] Asi, Hilal, et al. "Private online prediction from experts: Separations and faster rates." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023. | null | null | null | null | null | null |
Analysis of an Idealized Stochastic Polyak Method and its Application to Black-Box Model Distillation | Reject | Summary: The paper provides an analysis in different convex settings of the stochastic Polyak step size SPS* and a version with momentum/iterate averaging they call Iterate Averaging Adaptive Method (IAM). The paper analyzes convergence rates in both non-smooth and smooth settings, and shows that both SPS* and IAM are able to adapt to the interpolation noise level $\sigma_\ast^2$ to interpolate between $1/\sqrt{T}$ and $1/T$ rates. They also discuss the limitations imposed by needing access to $f_\xi(x_\ast)$, the value of the optimum on each batch, and presents a model distillation setup in which this is possible to estimate using the teacher network.
Claims And Evidence: Yes. The focus of this paper is on proving convergence rates in for SPS* and IAM for convex objectives and all proofs are presented in the appendix.
Methods And Evaluation Criteria: The paper evaluated SPS* and IAM (and the Adam variants) on a model distillation task, which makes sense as a way to circumvent the main issue with these methods, which is that they require knowledge of $f_\xi(x_\ast)$. While the experiments are fairly limited/small scale, they serve as a nice proof of concept.
Theoretical Claims: Yes. I checked the proofs in Appendices C, D.1-D.4 and skimmed the remaining proofs in the appendix.
Experimental Designs Or Analyses: n/a
Supplementary Material: See "Theoretical Claims"
Relation To Broader Scientific Literature: This paper continues a line of work extending the Polyak step size to the stochastic setting. They provide clean and simple proofs for convergence for SPS* and extend the same arguments to IAM. These methods also fall into the category of "parameter free" methods, although they require access to $f_\xi(x_\ast)$ which may be difficult to estimate.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- Both the main paper and the appendix are well written and have clear comparisons to prior work
- The proof sketch for SPS* (Appendix D.1) is very clear and makes it easy to read the full proof in D.2.
- The paper is forthcoming about its weaknesses (e.g. the need to estimate $f_\xi(x_\ast)$) and discusses situations in which this is possible.
Weaknesses:
- It is not clear that the theory is relevant for the experiments in Figure 1 as the setting is highly non-convex. However, this is a common issue faced by papers in convex optimization theory and is not the focus of the paper.
Other Comments Or Suggestions: Lines 307-308: There should be a square root for the SGD convergence rate to match eq. (42)
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
thank you for your thoughtful review, for doing the expert work of checking our proofs and appendix, and for catching the typo on Lines 307-308. Regarding the weakness you highlighted:
>It is not clear that the theory is relevant for the experiments in Figure 1 as the setting is highly non-convex. However, this is a common issue faced by papers in convex optimization theory and is not the focus of the paper.
We do of course agree. But we have written a small paragraph “Defense of convex optimization in machine learning” in response to reviewer UUoW's rejection of our work over the convexity assumption. We copy the relevant parts of our response here for your consideration, though it does not truly address your very valid point.
**Defense of convex optimization in machine learning.**
Stochastic non-smooth convex optimization has a surprising impact on deep learning practice.
For instance, the development of the Adagrad method [Duchi2011] was based on its convergence analysis in the non-smooth convex setting. Adagrad went on to be widely used in deep learning, until the development of exponentially weighted variants of Adagrad such as RMSprop and Adam. Thus non-smooth convex analysis played a significant role in inspiring Adam, the most popular optimizer for deep learning. Much more recently, the Schedule-free method [Defazio2024], which was developed to have fast last-iterate convergence in the non-smooth convex setting, set a new record in the AlgoPerf competition in the self-tuning category:
https://mlcommons.org/2024/08/mlc-algoperf-benchmark-competition/
The fact that non-smooth convex analysis is a reasonable model for predicting performance on deep learning, including training large language models, is now a well-established conundrum [Schaipp2025], whereas the convergence analysis of SGD in the non-convex setting has not had such an impact on deep learning practice. As such, convex optimization has a surprisingly important and active role to play in deep learning practice.
[Duchi2011] Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization." Journal of Machine Learning Research 12 (7): 2121–2159, 2011
[Defazio2024] Aaron Defazio, Xingyu Alice Yang, Ahmed Khaled, Konstantin Mishchenko, Harsh Mehta, Ashok Cutkosky, “The Road Less Scheduled”. Neurips Oral 2024
[Schaipp2025] Fabian Schaipp, Alexander Hägele, Adrien Taylor, Umut Simsekli, Francis Bach, “The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training”, arXiv:2501.18965, 2025 | Summary: This paper analyzes the stochastic gradient method with Polyak stepsizes, termed
SPS*, for minimizing the expectation of a family of convex functions $f_{\xi}$.
The method adaptively selects stepsizes based on knowledge of $f_{\xi}(x^*)$,
the function value at the optimal point. The authors demonstrate that SPS* is
simultaneously adaptive in various settings, including nonsmooth, smooth, and
strongly convex objectives. It achieves the same convergence rates as SGD with
well-tuned stepsizes without requiring knowledge of problem-specific constants
such as the distance to the solution or the Lipschitz constant. The paper also
extends SPS* by incorporating momentum, achieving nearly the same convergence
rates but valid for the last iterate instead of the average one. Computational
experiments illustrate that for some problems, $f_{\xi}(x^*)$ can be
well-approximated, enabling efficient implementation of SPS* with good
performance.
Claims And Evidence: The claims made in the paper are well-supported, with each theorem and lemma
accompanied by a corresponding proof.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at
hand.
Theoretical Claims: I have reviewed the main theoretical claims and assessed the general idea behind
each proof. However, I did not verify all details rigorously.
Experimental Designs Or Analyses: Since the paper is primarily theoretical, I did not conduct an in-depth review
of the experimental designs.
Supplementary Material: I reviewed the supplementary material, including the appendix, and inspected
each proof.
Relation To Broader Scientific Literature: This work builds on the SPS* method, previously studied in (Gower et al., 2021;
Garrigos et al., 2023). Some of the results may have been known in special cases
(e.g., for Lipschitz functions or smooth functions under the interpolation
assumption), but to my knowledge, they have not been established under the more
general assumptions considered in this paper.
Essential References Not Discussed: I have not identified any essential references missing from the discussion.
Other Strengths And Weaknesses: **Strengths:**
1. The paper presents compelling arguments for the adaptivity of the stochastic
Polyak method across key optimization settings.
1. The proofs are generally simple and easy to follow, allowing interested
readers to verify the details.
1. The authors acknowledge the primary limitation of SPS*—the requirement
for $f_{\xi}(x^*)$—and discuss scenarios where this information is
available.
**Weaknesses:**
1. Several mistakes appear in the reported convergence rates, particularly in
Table 1. For example:
1. The convergence rate of SGD (and SPS*) for $L$-smooth problems should be
$\frac{L D^2}{t} + \sqrt{\frac{L D^2 \sigma_\*^2}{t}}$,
not $\frac{L D^2}{\sqrt{t}} + \frac{\sigma_\*^2 \log t}{L \sqrt{t}}$.
In particular, for $\sigma_* = 0$, this should recover the standard rate
$\frac{L D^2}{t}$ of gradient descent.
1. For $L$-smooth strongly convex problems, the complexity to reach an
$\epsilon$-solution (in terms of function value) should be
$\frac{L}{\mu} \log \frac{f(x_0) - f^*}{\epsilon} + \frac{\sigma_\*^2}{\mu \epsilon}$,
not $\frac{L^2 D^2}{\mu^2 t^2} + \frac{\sigma_\*^2}{\mu^2 t}$.
In particular, for $\sigma_* = 0$, this should recover the standard
estimate for gradient descent.
1. The reported rates for SPSmax and NGN appear incorrect, particularly as
they are not invariant under function scaling. Also, $\sigma_{pos}$ is
likely equivalent to $\sigma_*^2$.
1. The novelty of this work compared to previous SPS* results (e.g., Gower et
al., 2021; Garrigos et al., 2023) is not entirely clear. While the results
seem more general, it would help to explicitly clarify their differences.
1. The impact of mini-batching on method performance is unclear. Clarification
would be useful, especially given that for SGD, condition (9) is usually
written in terms of variance rather than second moments, showing that both
$A$ and $B$ decrease with increased mini-batch size.
Other Comments Or Suggestions: 1. There are inconsistent definitions of $\sigma_*$ throughout the paper (e.g.,
in Table 1, Corollary 2.3, and Theorem E.1).
1. Some rates in Table 1 are stated in terms of function suboptimality, while
others use distance to the solution. A consistent presentation using function
suboptimality would be preferable.
1. Line 142 is missing references to recent works on the Polyak stepsize for
$(L_0, L_1)$-smooth optimization:
- Gorbunov et al. *Methods for Convex $(L_0, L_1)$-Smooth Optimization:
Clipping, Acceleration, and Adaptivity*, 2024.
- Vankov et al. *Optimizing $(L_0, L_1)$-Smooth Functions by Gradient
Methods*, 2024.
1. Line 301: Likely meant to reference (13) instead of (14).
1. Line 308: The statement "Note that SGD ..." should be checked for accuracy.
1. Theorem E.1: The given estimate is incorrect. The correct bound should be
$O(\frac{L D^2}{T} + \frac{\sigma_* D}{\sqrt{T}})$.
Questions For Authors: 1. Please address the weaknesses noted above, particularly the incorrect
convergence rates throughout the paper.
1. Why does Table 1 mark SGD with an "X" in the first column? SGD can also be
constrained to the $D$-ball if necessary.
1. What is the motivation for defining $\sigma_*^2 = \inf f - \mathbb{E}[\inf f_{\xi}]$
in Corollary 2.3? Why not define it using inequality (13)? The current
definition excludes important cases, such as when
$L \sigma_*^2 = \mathbb{E}[\\| g_{\xi}(x^\*) \\|^2]$.
1. What is the purpose of introducing Bregman divergence terms in (20)?
1. What happens in Theorem G.1 when $B = 0$? This case is not currently covered.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. It is a pleasure to have an expert reviewer. Below we address each of your questions.
>**Q1** mistakes …in Table 1
The rates we report throughout are the anytime rates of each method. We do not report the *fixed horizon* rate. We will now include the following exert in our revision:
We say $x_t$ has an anytime convergence rate if, once parameters are fixed, there exists a real function $r:[0,+\infty) \to [0,+\infty)$ such that $\mathbb{E}[f(x_t) - f(x)] \leq r(t)$ and $r(t) \to 0$ as $t \to + \infty$.
We say an algorithm has a finite horizon rate if, for every tolerance $\epsilon>0$, there exists a choice of parameters and there exists $T \geq 1$ such that $\mathbb{E}[f(x_T) - f(x)] \leq \epsilon$.
We focus on anytime rates because these are the rates that best translate into practical performance. Where-as fixed horizon rates depend on very small step sizes, and are completely impractical when implemented.
For example, SGD with a constant stepsize does not have an anytime convergence rate, but it has a finite horizon rate.
The anytime rate for SGD on convex smooth problems is $O(\log(t)/\sqrt{t})$ (Theorem 5.7 in [Garrigos & Gower, 2023]) and the fixed horizon rate is $O(1/\sqrt{t})$ (Theorem 5.5 in [Garrigos & Gower, 2023]) which is the rate you pointed out.
>**Q2** Rates for SPSmax and NGN … not invariant under function scaling
SPSmax and NGN do depend on the function scaling. If we turn to eq (4) for SPSmax, it is not invariant because the constant $\gamma_b$ does not scale with the function, whereas the stepsize of NGN is given by
$$\gamma_k = \frac{\sigma f(x_k)}{f(x_k) + \sigma \| \nabla f(x_k)\|^2} $$
The $\| \nabla f(x_k)\|^2$ scales quadratically, whereas $f(x_k) $ scales linearly with $f$. Thus NGN cannot be invariant to scaling the function.
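A quick numerical check of this scaling argument (an illustrative sketch of our own; the 1D test function $f(x) = c\,x^2$ and the constant $\sigma = 0.5$ are made-up choices, and the NGN stepsize formula is the one quoted above): rescaling $f \mapsto c f$ leaves the Polyak update unchanged but changes the NGN update.

```python
# Polyak-type update with known f* = 0, on f(x) = c * x^2 (1D).
def polyak_update(x, c):
    f, g = c * x**2, 2 * c * x
    return x - (f / g**2) * g             # stepsize (f(x) - f*) / ||grad f(x)||^2

# NGN-type update with the stepsize quoted in the rebuttal above.
def ngn_update(x, c, sigma=0.5):
    f, g = c * x**2, 2 * c * x
    gamma = sigma * f / (f + sigma * g**2)
    return x - gamma * g

# Scaling f by c = 10 leaves the Polyak update unchanged, but changes NGN's.
print(polyak_update(1.0, 1), polyak_update(1.0, 10))  # 0.5 and 0.5
print(ngn_update(1.0, 1), ngn_update(1.0, 10))        # ~0.667 vs ~0.524
```

The Polyak stepsize scales as $1/c$ while the gradient scales as $c$, so the product is invariant; in NGN the numerator scales linearly and the denominator mixes linear and quadratic terms, breaking invariance.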
> **Q3** $\sigma_{pos}$ is likely equal to $\sigma_{*}^2$
These two constants are not equivalent; instead
$\sigma_{*}^2 = f(x_*) - \sigma_{pos}$. In the setting where $\sigma_{pos} = 0$ (regression models with no weight decay), we could still have $\sigma_*^2 \neq 0$.
> **Q4** novelty compared to previous SPS* results
The novelty of our analysis of SPS* is in the smooth setting: outside of interpolation, we give the first convergence rate for SPS* in this setting. Previous results in Gower et al., 2021 and Garrigos et al., 2023 only establish that SPS* converges to a fixed neighborhood of the solution.
> **Q5** impact of mini-batching
To see the dependency on mini-batching, we change the expectation in eq (9) to be over the mini-batching, instead of directly over $\xi$. Then $A$ and $B$ will now depend on the batch size $b$. For example, if we use sampling without replacement in the smooth $n$--finite sum setting, then according to Proposition 3.8, item (iv) [Gower et al 2019] the constants $A$ and $B$ in eq (9) have the following form
$$ A = \frac{n}{b}\frac{b-1}{n-1} L_{full} + \frac{n-b}{b(n-1)} L_{\max}$$
$$ B = \frac{1}{b}\frac{n-b}{n-1} \sigma^2_*$$
Where $L_{full} $ and $L_{\max}$ are the full batch and max over stochastic functions smoothness constant, respectively.
For the non-smooth setting, the right hand side in (11) would be $G^2/b$ if we used $b$--sampling with replacement.
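The smooth-setting formulas above can be sanity-checked numerically (an illustrative sketch with made-up constants $n$, $L_{full}$, $L_{\max}$, $\sigma_*^2$, not values from the paper): $A(b)$ interpolates from $L_{\max}$ at $b=1$ down to $L_{full}$ at $b=n$, and $B(b)$ from $\sigma_*^2$ down to $0$.

```python
# Sanity check of the smooth-setting constants A(b) and B(b) quoted above,
# using illustrative values n = 100, L_full = 1, L_max = 10, sigma2 = 4.
n, L_full, L_max, sigma2 = 100, 1.0, 10.0, 4.0

def A(b):
    return (n / b) * (b - 1) / (n - 1) * L_full + (n - b) / (b * (n - 1)) * L_max

def B(b):
    return (1 / b) * (n - b) / (n - 1) * sigma2

print(A(1), A(n))  # endpoints: L_max (single sample) and L_full (full batch)
print(B(1), B(n))  # endpoints: sigma2 and 0
```

Since $L_{\max} \geq L_{full}$, both constants decrease monotonically in the batch size $b$, matching the claim that mini-batching shrinks both $A$ and $B$.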
>**Q6** inconsistent definitions of $\sigma_*$ throughout
Thank you, well spotted. We will now be consistent and use the *function noise* $\sigma_f^2 = f(x_*) - \mathbb{E}[\inf f_{\xi}]$ throughout.
>**Q7** Table 1 uses function suboptimality and distance to the solution.
We will convert all results to function suboptimality, at the cost of an additional fixed $1/\mu$ multiplicative factor for the strongly convex problems.
>**Q8** Missing references
Thank you for the two additional references, we will include them both, and comments about their relative smoothness results.
>**Q9** Table 1 SGD has "X" in the first column? Constrain to the ball
We agree, but this would require access to the ball $B_D$ that contains $x_*$, so that the iterates $x_k$ can be projected back onto $B_D$. But this is a different oracle access. Depending on the problem, we would not have access to this ball. But we can include a footnote on this.
>**Q10** Why not define $\sigma_*$ in (13)
We use Corollary 2.3 to give a precise setting, and Theorem 2.1 and eq (9) for the general setting. But we do agree that Corollary 2.3 also holds when using the *gradient noise* given by $L \sigma_*^2 = \mathbb{E}[\|g_{\xi}(x_*)\|^2]$.
>**Q11** Why introduce Bregman div in (20)
We introduce the Bregman divergence because it is a positive term that provides a minor speed-up of IAM over SPS*.
>**Q12** What if $B=0$ in Theorem G.1
Good question: the result in Theorem G.1 does not hold for $B=0$. Instead, when $B=0$, equation (49) implies that SPS* converges linearly according to
$$\mathbb{E}\\| x_{t+1}-x_*\\|^2 \leq (1-\mu/(2A))\mathbb{E}\\|x_{t}-x_*\\|^2 $$
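Unrolling this contraction (a standard consequence, stated here for completeness) yields a linear rate and the corresponding iteration complexity:

```latex
\mathbb{E}\|x_{t}-x_*\|^2 \leq \Big(1-\frac{\mu}{2A}\Big)^{t}\,\|x_0-x_*\|^2,
\qquad\text{so}\qquad
t \geq \frac{2A}{\mu}\,\log\frac{\|x_0-x_*\|^2}{\epsilon}
\;\implies\;
\mathbb{E}\|x_{t}-x_*\|^2 \leq \epsilon .
```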
We will edit the theorem to include this remark. | Summary: This paper presents a new anytime convergence result for stochastic polyak step-size which requires access to the loss for every training batch evaluated at a solution. They also provide a momentum variant of this method and prove convergence rates with respect to the final iterate. Lastly, they provide a small set of experiments which compare an approximation of their momentum optimization algorithm to other standard optimizers for toy experiments and a distillation problem.
Claims And Evidence: ## SPS* Convergence Rates
### Claim:
- SPS* achieves optimal/best-known convergence rates under a new set of assumptions not considered by existing optimization literature
### Evidence:
- Formal proofs in Theorem 2.1 and Corollaries 2.2, 2.3
- Shows O(1/√T) rate for Lipschitz functions which matching known lower bounds
- First O(1/√T) anytime convergence rate for smooth convex case
## IAM Last-Iterate Convergence
### Claim:
- IAM achieves same favorable rates as SPS* but for last iterate
### Evidence:
- Theoretical proofs in Theorems 3.2 and 3.3
- Avoids need for iterate averaging which is less practical
- Matches optimal rates in non-smooth case
## Adaptivity to Interpolation
### Claim:
- Methods automatically adapt to interpolation settings
### Evidence:
- Theoretical analysis showing rates improve when interpolation parameter σ* approaches 0
- Proofs show O(1/T) convergence under interpolation vs O(1/√T) otherwise
## Convergence Without Smoothness/Lipschitz Assumptions
### Claim:
- Methods converge on non-smooth, non-Lipschitz problems
### Evidence:
- Experiments on Poisson regression showing convergence comparable to L-BFGS
- Performance matches SGD with best manually-tuned learning rate
## Robustness to Misspecification
### Claim:
- Methods are somewhat robust to errors in optimal loss estimates
### Evidence:
- Experiments showing convergence (though slower) when using approximate/averaged values
- Detailed ablation studies with different levels of noise
## Effectiveness for Model Distillation
### Claim:
- Methods enable efficient distillation without hyperparameter tuning
### Evidence:
- Experiments distilling large GPT-2 models on three datasets
- Competitive performance vs carefully tuned SGD/Adam baselines
- Learning rate plots showing sensible adaptive behavior
Methods And Evaluation Criteria: ## Empirical
### The methods and evaluation criteria are appropriate for the stated goals, with evaluation chosen to:
- Demonstrate practical utility in distillation
- Test robustness to key assumptions
### Weaknesses
- Could include more diverse set of optimization problems
- Relatively small number of datasets
- Limited comparison to other adaptive methods (especially Adagrad / SLS variants)
- One SLS variant, SSO, seems designed for applications related to the distillation problem (online imitation learning) and may be a good variant to consider evaluation agains.
- No evaluation or discussion of computational overhead
- Some confusion (for me) around why distillation satisfies their assumption (outside of non-convexity issues)
Theoretical Claims: I did not have time to work through the entire appendix, however the results which they present seem reasonable given the assumptions that they make regarding access to the true function evaluation for every batch. I would like some additional discussion around exactly how this assumption relates to interpolation settings and deterministic settings. It just kinda seems like the assumption of knowing the true f_zeta seems to reduce the problem to a deterministic one, which from my understanding becomes effectively equivalent to an interpolation setting.
Experimental Designs Or Analyses: ## Poisson Regression Experiments:
### Summary
- Comparison methodology against L-BFGS and SGD
- Learning rate sweep for baselines (0.001 × {0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 20, 50})
- Dataset characteristics
### Sound aspects:
- Appropriate baseline methods
- Reasonable learning rate range
- Clear reporting of epochs/convergence
### Issues:
- No mention of number of repeated runs/statistical significance
## Misspecification Study:
### Summary:
- Experimental setup with synthetic quadratic problems
- Three versions of IAM tested (theoretical, averaged, lower-bound)
### Sound aspects:
- Well-controlled synthetic setup
- Clear parameter settings
- Systematic variation of noise (ν) and batch size (b)
### Issues:
- No error bars/variance reporting
- Single problem structure (quadratic) may limit generalizability
## Distillation Experiments:
### Summary:
- Dataset splits and sizes
- Model architectures
- Hyperparameter tuning procedure
- Baseline comparison methodology
### Sound aspects:
- Multiple datasets
- Clear documentation of model sizes
- Systematic hyperparameter search for baselines
### Issues:
- No reporting of variance across runs
- No discussion of computational costs
- Training durations not clearly specified
## Limitations in Experimental Design:
1. Lack of statistical significance testing
2. Missing error bars/variance reporting
3. Incomplete documentation of computational requirements
4. Limited ablation studies on algorithmic components
While the experimental designs are generally reasonable, the lack of statistical analysis and variance reporting makes it difficult to fully assess the robustness of the results (I did not catch a mention of seeds or multiple runs).
Supplementary Material: I was only able to briefly browse the material in the appendix, and largely focused on the experimental results.
Relation To Broader Scientific Literature: I cannot speak to the theoretical impacts to the larger scientific community but the experimental results seem at best incremental, and are currently missing quite a few comparisons.
Essential References Not Discussed: From my understanding of this area the authors seem to reference all of the folks which I am aware which work on these methods. There could however be some papers which are newer that I am not aware of.
Other Strengths And Weaknesses: # Strengths:
### Theoretical Depth & Rigor:
- Comprehensive convergence analysis
- Clear proofs and technical innovations
- Handles multiple function classes
- Adapts automatically to interpolation
### Practical Relevance:
- Demonstrates real application in model distillation
- Shows how to incorporate momentum and Adam-style preconditioning
- Seems to avoid hyperparameter tuning in practice
### Organization & Clarity:
- Clear connection between theory and practice
- Well-structured presentation
- Thorough appendices with detailed proofs
### Technical Innovation:
- Novel analysis techniques (e.g., explicit inversion of convex monotone function)
- First O(1/√T) anytime rate for smooth case
### Validation Strategy:
- Tests both theoretical claims and practical utility
- Uses multiple datasets and model sizes
- Includes ablation on misspecified optimal losses
# Weaknesses:
### Experimental Rigor:
- No reporting of random seeds
- Missing error bars/statistical significance
- Limited ablation studies
- No analysis of computational overhead
### Limited Comparisons:
- Insufficient theoretical comparison to AdaGrad
- Limited empirical comparison to other adaptive methods
- Relatively small number of datasets
### Practical Limitations:
- Core method (SPS*) requires access to optimal batch losses
- Theoretical results assume convexity
- Primary applications limited to special cases (interpolation/distillation)
### Missing Analysis:
- No discussion of computational complexity
- Limited analysis of robustness to hyper-parameters
- No investigation of failure modes
### Empirical Gaps:
- Limited diversity in optimization problems and algorithms tested
- No large-scale benchmarking
Other Comments Or Suggestions: Most of my issues are on the experimental side, so if you could add additional baselines to your comparisons I would consider increasing my score, subject the other reviewers substantiating the relevance of the theoretical claims.
Questions For Authors: ## Questions I had while reading the manuscript
- Can you explain this statement more formally: "By anytime, we mean a proof that the method converges to any predefined tolerance without prior knowledge of that tolerance."
- "The proof of convergence for SPS* in the Lipschitz convex and strongly convex setting was first given in (...)" - the main contribution is then your convergence result (for the same algorithm) is any-time vs not anytime.
- " ... but they either require bounded domain (Levy et al., 2018; Kavis et al., 2019) or are only studied in the deterministic setting (Li & Lan, 2023)." Can you clarify what the difference between this assumption and what you assume in line 270 is?
- The teacher student setting does not really explain why that would be any different from a general stochastic setting. Why would we expect a better approximation to zeta there over other settings. Is it just that the surrogate (from the teacher) should be close to the minimum provided by the original problem? If so this seems very related to the online imitation learning setting.
- You state that "assuming access to fξ(x∗) is not the same as assuming that interpolation holds", but again I am a bit unclear on the relationship between this assumption and the general stochastic / deterministic assumption.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive review. We have divided our answers in **M**ain **Q**uestions, and **Q**uestions
> **MQ1** anytime convergence formally?
We say $x_t$ has an anytime convergence rate if, once parameters are fixed, there exists a real function $r:[0,+\infty) \to [0,+\infty)$ such that $\mathbb{E}[f(x_t) - f(x_*)] \leq r(t)$ and $r(t) \to 0$ as $t \to + \infty$.
We say an algorithm has a finite horizon convergence if, for every tolerance $\epsilon>0$, there exists a choice of parameters and there exists $T \geq 1$ such that $\mathbb{E}[f(x_T) - f(x_*)] \leq \epsilon$.
If an algorithm has an anytime convergence rate then it has a finite horizon convergence, but the converse is not true in general. The counterexample is SGD with constant stepsize, which does not have an anytime convergence rate, but has a finite horizon convergence.
>**MQ2** main contribution is (SPS*) anytime rate?
Our main contributions are the anytime rates for SPS* in the smooth (convex+ str convex) setting, proposing the IAM method, all the last iterate analysis of IAM, and identifying a reasonable application (black box model distillation).
> **MQ3** Difference between bounded domain and line 270
The bounded domain assumes we know a bounded set $\mathcal{B}\subset \mathbb{R}^d$ such that $x_* \in \mathcal{B}$. If we know such a set, we can modify SGD to project $x_t$ onto $\mathcal{B}$.
Our assumption on line 270 and Eq (9), is very different and is almost for free. We require that the gradients verify some inequality on a certain ball, that we do not need to know. As we emphasize in Remark 2.4, for finite sum problems our assumption is always true, because convex functions always have bounded gradients on every bounded set.
> **MQ4** The teacher should be close to the minimum? Related to online imitation learning?
Yes, the teacher's loss is assumed to be close to the minimum student loss, and thus we can use the teacher loss to set the learning rate of the student using IAM. There is some relation to the online imitation learning setting: both involve a student learning from a teacher, but there are differences. In online imitation learning the student learns a policy and has to interact with an environment, whereas in our setting there is no environment or policy.
> **MQ5** Unclear "access to $f_{\xi}(x^*)$ not the same as interpolation"
This is a good question. In short, assuming access to $f_{\xi}(x^*)$ imposes no restriction on the problem itself. This could be any stochastic optimization problem. Rather, this is an assumption as to what we know regarding the problem, and what we have access to.
The interpolation assumption does impose a constraint on what type of problems we are considering. Interpolation restricts our problem to models that can perfectly fit every single data point. This can (approximately) occur in image classification tasks, where the neural network is sufficiently overparameterized with respect to the data set. This cannot occur when training a generative language model, because it is impossible to perfectly guess the next token in a sequence due to ambiguities in language. Because of this, the cross-entropy loss is never zero, and is always above the entropy of language (around 2.0). Thus the interpolation assumption cannot hold in generative language modelling. In contrast, assuming access to $f_{\xi}(x^*)$ imposes no restriction on the model itself, and it can occur in settings such as generative language modelling, for instance our black-box model distillation problem.
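To make this distinction concrete, here is a toy sketch (our own illustration, not from the paper): a two-function problem where interpolation fails, since $\mathbb{E}[\inf f_\xi] = 0 < f(x_*) = 1$, yet SPS*-style steps using the known values $f_\xi(x_*)$ still drive the iterate to $x_* = 0$. We clamp the stepsize at zero, a common safeguard in SPS variants.

```python
# Toy problem: f_1(x) = (x-1)^2, f_2(x) = (x+1)^2, f = (f_1 + f_2)/2.
# Minimizer x_* = 0 with f_i(x_*) = 1, while inf f_i = 0: interpolation fails,
# yet the per-batch optimal values f_i(x_*) are known and usable by SPS*.
centers = [1.0, -1.0]
f_star = [1.0, 1.0]                      # f_i(x_*), known in this toy setting

x = 3.0
for t in range(40):                      # cyclic "sampling" for determinism
    c = centers[t % 2]
    f, g = (x - c) ** 2, 2 * (x - c)
    gamma = max(0.0, (f - f_star[t % 2]) / g**2)   # SPS* step, clamped at 0
    x -= gamma * g

print(x)  # close to the true minimizer 0
```

Note that the function noise here is $\sigma_*^2 = f(x_*) - \mathbb{E}[\inf f_\xi] = 1$, so this run is genuinely outside the interpolation regime.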
>**Q1** Comparisons to Adagrad
In the non-smooth setting, Adagrad has a guaranteed convergence rate of $\sqrt{2}GD/\sqrt{T}$, which is a factor of $\sqrt{2}$ worse than our rates for SPS* and IAM in Eqs. (12) and (20). Moreover, our rate holds for the last iterate of IAM, whereas Adagrad's rate assumes access to a bounded domain of diameter $D$ that contains the solution.
We can include numerical comparisons to Adagrad, but AdamW is the SOTA for LLMs, not Adagrad, which performs poorly for LLMs.
> **Q2** Comparisons SLS variant, SSO
Neither SLS nor the SSO variant is applicable in our black-box model distillation setting. SLS assumes that we can use the teacher to generate a candidate sequence, whereas SSO assumes we have access to the logits of the teacher. Our black-box setting assumes we only have access to the final loss of the teacher, not its logits.
> **Q2** Discussion on computational overhead
Certainly, SPS* and IAM impose a small additional computational overhead as compared to SGD and SGD-M. Both SPS* and IAM need only store one additional scalar, and do two inner products with respect to the gradient. We will add this remark to the appendix.
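To make this overhead concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) of an SPS*-style update on a toy quadratic. Relative to plain SGD, it stores only the extra scalar `f_star` (an assumed optimal loss value) and computes the extra inner product $g_t^\top g_t$; the second inner product mentioned in the rebuttal would arise in IAM's momentum update, which is omitted here.

```python
import numpy as np

def sps_star_step(x, g, f_val, f_star):
    """One SPS*-style update x <- x - gamma * g with the Polyak step
    gamma = (f(x) - f(x*))_+ / ||g||^2, assuming access to f_star."""
    g_sq = g @ g                          # the one extra inner product
    if g_sq == 0.0:
        return x                          # zero gradient: stay put
    gamma = max(f_val - f_star, 0.0) / g_sq
    return x - gamma * g

# toy quadratic f(x) = 0.5 * ||x||^2, minimized at x* = 0 with f* = 0
x = np.array([3.0, -4.0])
for _ in range(50):
    x = sps_star_step(x, g=x, f_val=0.5 * (x @ x), f_star=0.0)
print(np.linalg.norm(x))  # tiny: the iterates contract toward x* = 0
```

On this quadratic the Polyak step evaluates to exactly $1/2$, so each update halves the distance to the minimizer.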
> **Q3** Random seeds
We use $42$ to set all random seeds.
> **Q4** No failure modes
We investigate failure modes regarding misspecification of the optimal loss values in Section L2. | Summary: This paper presents some optimization methods with idealized stochastic Polyak step (SPS) sizes and their convergence analyses under the convexity conditions of objective functions. The analyses are based on the following two methods:
[Section 2] Stochastic gradient descent (SGD) with SPS (Theorem 2.1) under
- Expected locally Lipschitz assumption (Corollary 2.2)
- local expected smoothness assumption (Corollary 2.3)
[Section 3] Iterate averaging adaptive method (IAM) with SPS under
- Expected locally Lipschitz assumption (Theorem 3.2)
- local expected smoothness assumption (Theorem 3.3)
It also provides some numerical results to support their analyses (Section 4).
## update after rebuttal
Thank you for your comments. I checked all of your replies.
I have discussed my concern with AC and other reviewers during AC and Reviewers discussion period. As a result, I am not satisfied with your recent response.
I believe that the assumption such that $f_i$ is convex on a neighborhood at the global minimizer $x_*$ of $f = (1/n) \sum_{i=1}^n f_i$ is strong, since I do not know practical examples of $f$ and $f_i$ satisfying the assumption and we have many counterexamples. I asked the authors for providing examples of the assumptions in Sections 2 and 3 and the authors said that "As for the Black-box Model Distillation in Section 4.1, the student models are decoder only transformers, as such, they are non-convex deep neural networks." Unfortunately, I am not satisfied with the authors' response, and hence, at this time, I conclude that the assumption such that $f_i$ is convex on a neighborhood at the global minimizer $x_*$ and the assumptions in Sections 2 and 3 are unrealistic.
I would be grateful if the authors could provide practical examples such that $f_i$ is convex on a neighborhood of the global minimizer $x_*$ of $f = (1/n) \sum_{i=1}^n f_i$. At least, I have no example of such $f_i$s. Hence, I believe the revised manuscript should be submitted to other venues. My opinion is that a convexity condition over the whole space is unrealistic, while a convexity condition over a neighborhood of a local minimizer is acceptable.
Claims And Evidence: I am an emergency reviewer of the paper ICML Submission 261. Hence, I could not check all the proofs of the theorems. However, I have theoretical and practical concerns for Claims And Evidence. Please see Questions For Authors.
Methods And Evaluation Criteria: Numerical comparisons in Section 4 would be insufficient to support the performances of the proposed methods. More numerical comparisons with large-scale datasets and networks would be needed.
Theoretical Claims: I have some theoretical concerns. Please see Questions For Authors.
Experimental Designs Or Analyses: I have some practical concerns. Please see Questions For Authors.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper compares the proposed analyses with the existing ones (Table 1). However, more comparisons and discussions for the existing analyses are needed. For example, it is needed to discuss not only convex setting but also non-convexity setting, and it is needed to compare SGD, momentum, and adaptive methods using mini-batch stochastic (sub)gradients with the proposed methods.
Essential References Not Discussed: I think that the references are sufficient.
Other Strengths And Weaknesses: I have theoretical and practical concerns. Please see Questions For Authors.
Other Comments Or Suggestions: I am an emergency reviewer. I am sorry to find only the following typos:
- Abstract: 022: a O --> an O
- (3): ``support" is not defined.
- Theorem 2.1: I guess that $\mathbb{E}\_\xi$ is an expectation with respect to $\xi$. Meanwhile, $\mathbb{E}$ in (10) is not defined.
Questions For Authors: **Theoretical Concerns:**
For simplicity, let us consider a finite sum minimization (Remark 2.4). I understand a consideration under the convexity condition is significant for ML community. However, I believe that the assumption in Theorem 2.1 such that $f\_\xi$ is convex over the whole space $\mathbb{R}^d$ is strong for ML problems. For example, since the cross-entropy loss used in Section 4 (Page 8) is non-convex over the whole space, the assumption is strong. Let $x^*$ be a local minimizer of $f_i$ (or $f$) . We must replace the assumption with (A1) ``$f\_i$ is convex over a $\delta_i$-neighborhood at $x^*$ (or $f$ is convex over a $\delta$-neighborhood at $x^*$), while $f$ and $f\_i$ are non-convex over the whole space $\mathbb{R}^d$." The $\delta$-neighborhood at $x^*$ is denoted by $N(x^*; \delta)$.
Here, I have the following concerns under (A1):
- Does $x_t$ $(t = 1,2,\cdots)$ generated by SGD (2) belong to $N(x^*; \delta)$? That is, is it guaranteed that the points generated by SGD are in a neighborhood at a local minimizer of $f$ with probability 1? If $x_t \notin N(x^*; \delta)$, then, since $f_i$ (or $f$) is not convex around $x_t$, the results in Theorem 2.1 (in particular, (8) and (10)) do not hold. Therefore, we must prove that, for all $t$, $x_t \in N(x^*; \delta)$ with probability 1.
- We assume that $f$ is continuously differentiable. Even if $x_t \in N(x^*; \delta)$, we do not know whether $x_{t+1} = x_t - \gamma_t g_t$ is in $N(x^*; \delta)$, since $g_t$ is a stochastic gradient of $f$ (i.e., there exists an upper bound of the variance $\sigma^2$ such that $\mathbb{E}\_\xi [\Vert g_t - \nabla f (x_t)\Vert^2] \leq \sigma^2$).
The above concerns are applied to the results of IAM (Section 3).
**Practical Concerns:**
Do the objective functions considered in Section 4 satisfy the assumptions in Sections 2 and 3 (for example, in Theorem 2.1, $f\_\xi$ is convex over $\mathbb{R}^d$ and (9))?
From the above discussion, my recommendation is Reject. However, I can change my score if the authors could address the above theoretical and practical concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Dear reviewer, the main issue you have raised is regarding our assumption in our theorems that the loss is convex. We will divide our answer into two parts. In our first part, we will show how to relax global convexity to a form of local convexity. Indeed, this extension follows straightforwardly by using that SPS* and IAM have the very unusual property of being monotonic (Eq (8) and Lemma 3.1). In our second part, we write a defense for convex optimization in machine learning. After which we answer a minor point below.
We hope the reviewer will reconsider their position in light of our response.
**Extending beyond convexity.**
Our theorems do not require global convexity. Instead, we need only assume that for the given starting iterate $ x_0 \in \mathbb{R}^d $, that the $f_\xi $ functions are almost surely convex within the closed ball
$$
B_D(x^*) = \lbrace x : \Vert x - x^* \Vert \leq \Vert x_0 - x^* \Vert \rbrace
$$
Using your notation, we need convexity in a $\delta$-neighborhood around $x^*$, where $\delta = \Vert x_0 - x^* \Vert .$
The reason we are able to relax the global convexity assumption is because SPS* and IAM have the very unusual property (amongst stochastic methods) of being *monotonic*. That is, for the iterates $ x_t $ (or $ z_t$ for IAM), we have that
$$
\Vert x_{t+1} - x_* \Vert \leq \Vert x_t - x_* \Vert \leq \cdots \leq \Vert x_0 - x_* \Vert.
$$
The same cannot be said about SGD or any variant of SGD that we are aware of, since the iterates may escape the closed ball with some probability. We will include a remark on this in our revision.
---
**Defense of convex optimization in machine learning.**
As a mere formality, we first point out that convex optimization is one of the official topics of interest listed in ICML’s 2025 call for papers:
[https://icml.cc/Conferences/2025/CallForPapers](https://icml.cc/Conferences/2025/CallForPapers)
As such, we do not believe it is a valid reason to recommend rejecting our paper that our analysis focuses on stochastic convex optimization. Moreover, stochastic non-smooth convex optimization has had, and continues to have, a surprising impact on deep learning practice.
For instance:
- **Adagrad** [1] was developed based on its favourable non-smooth convex anytime convergence. Adagrad went on to be widely used in deep learning, until the development of exponentially weighted variants of Adagrad such as RMSprop and Adam. Thus non-smooth convex analysis played a significant role in inspiring the most popular optimization method (Adam) for deep learning.
- The **Schedule-free method** [2], designed based on its fast last-iterate convergence in non-smooth convex settings, recently won first place in the self-tuning category of the AlgoPerf competition for deep learning ([https://mlcommons.org/2024/08/mlc-algoperf-benchmark-competition/](https://mlcommons.org/2024/08/mlc-algoperf-benchmark-competition/)).
The fact that non-smooth convex analysis has some predictive power on the performance in deep learning (including LLMs) is now a well-established conundrum [3]. In contrast, non-convex SGD analysis has had far less practical impact on deep learning. Thus, convex optimization remains relevant to deep learning practice, justifying its inclusion in ICML’s topics of interest.
**Q1.** Do the objective functions considered in Section 4 satisfy the assumptions?
**A1.** The Poisson regression experiment mentioned in Section 4 (and relegated to Appendix L.1 due to space constraints) is a globally convex problem. In fact, this is one problem for which SPS* and IAM provably converge (see Remark 2.4), and for which we are unaware of any other method that converges. This experiment highlights the predictive power of our theory. As for the Black-box Model Distillation in Section 4.1, the student models are decoder-only transformers and hence non-convex deep neural networks. Thus our theorems do not explicitly hold for these problems.
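As an illustrative aside (our addition, not part of the rebuttal), the global convexity of Poisson regression can be checked numerically on hypothetical data: the negative log-likelihood $f(x)=\frac{1}{n}\sum_i \left[\exp(a_i^\top x) - y_i\, a_i^\top x\right]$ has Hessian $\frac{1}{n}\sum_i \exp(a_i^\top x)\, a_i a_i^\top$, a sum of PSD rank-one terms, so its smallest eigenvalue is non-negative at every point:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))   # hypothetical design matrix
y = rng.poisson(1.0, size=20)  # hypothetical count targets

def hessian(x):
    # Hessian of the Poisson NLL: (1/n) sum_i exp(a_i^T x) a_i a_i^T
    w = np.exp(A @ x)
    return (A * w[:, None]).T @ A / len(y)

# smallest Hessian eigenvalue at several random points: never negative
min_eig = min(np.linalg.eigvalsh(hessian(rng.normal(size=5))).min()
              for _ in range(10))
print(min_eig >= -1e-10)  # True: the NLL is convex everywhere
```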
### References
[1]: Duchi, J., Hazan, E., & Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12, 2121–2159, 2011
[2]: Aaron Defazio, Xingyu Alice Yang, Ahmed Khaled, Konstantin Mishchenko, Harsh Mehta, Ashok Cutkosky. The Road Less Scheduled, *Oral at NeurIPS* 2024.
[3]: Fabian Schaipp, Alexander Hägele, Adrien Taylor, Umut Simsekli, Francis Bach.
The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training, *arXiv:2501.18965*, 2025
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments. However, I am not satisfied with your comments.
**Extending beyond convexity:**
I do not know why SPS* and IAM have the very unusual property (amongst stochastic methods) of being monotonic. Could you prove it without using the convexity condition over the whole space? (this is my question. Please check the above comments.)
**Defense of convex optimization in machine learning:**
I understand a consideration under the convexity condition is significant for ML community. (Please check the above comments.)
**A1:**
Thank you for your reply. However, your comment implies that the convexity condition is strong in practice.
Therefore, my score remains unchanged.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
thank you for engaging further, giving us a chance to address your concerns. We address all of your questions/concerns below.
**Q2.1** *Extending beyond convexity and the descent lemma.*
Yes, we can prove the descent lemma for SPS* and IAM using only local convexity. The argument is a simple induction. In words, our induction proof will establish that, if we assume $f_{\xi}$ is almost surely convex in
$$B_D(x_0) := \lbrace x : \Vert x - x_* \Vert \leq \Vert x_0 - x_* \Vert \rbrace,$$
then all the iterates $x_t$ are in $B_D(x_0) $, and consequently we have a descent lemma at every step.
Now for the proof: Suppose that $x_0$ is such that $f_0 := f_{\xi_0} $ is almost surely convex at $x_0,$ in other words for $t=0$ we assume
$$f_t(x) \geq f_t(x_t) + \langle g_t, x- x_t \rangle,$$
where $g_t$ is a stochastic subgradient.
Using the above and (6) in our paper, and for $t=0$, we then have that:
$$\\| x_{t+1} - x_* \\|^2 - \\| x_{t} - x_* \\|^2 \leq -2 \gamma_t (f_{t}(x_t) - f_t(x_*)) + \gamma_t^2 \\| g_t\\|^2$$
Minimizing the right hand side in $\gamma_t\geq 0$ gives
$$ \gamma_t = \frac{(f_{t}(x_t) - f_t(x_*))_+}{\\| g_t\\|^2}$$
Plugging this back into the above gives
$$\\| x_{t+1} - x_* \\|^2 - \\| x_{t} - x_* \\|^2 \leq -\frac{(f_{t}(x_t) - f_t(x_*))_+^2}{\\| g_t\\|^2}$$
This shows that we have descent at step $t=0$ and consequently $x_1$ must be closer to the solution $x_*$ as compared to $x_0$, that is $x_1 \in B_D(x_0).$
Now our induction argument is that if $x_t$ is in $B_D(x_0)$, then $x_{t+1}$ is in $B_D(x_0)$. The proof of the induction step follows the exact same steps above but for any $t$ (instead of just $t=0$). Because of this, we need only assume $f_{\xi}$ is convex in $B_D(x_0)$ and **not globally**.
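This monotonicity can be illustrated numerically (our sketch, not part of the rebuttal) on an interpolating least-squares problem, where $f_i(x_*) = 0$ for every $i$ and the SPS* step $\gamma_t = (f_t(x_t) - f_t(x_*))_+ / \Vert g_t \Vert^2$ is computable exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 5, 20
A = rng.normal(size=(n, dim))
x_star = rng.normal(size=dim)
b = A @ x_star                          # interpolation: f_i(x_star) = 0

x = x_star + rng.normal(size=dim)       # x_0 inside a ball around x_star
dists = [np.linalg.norm(x - x_star)]
for _ in range(200):
    i = rng.integers(n)
    r = A[i] @ x - b[i]                 # residual; f_i(x) = r^2 / 2
    g = r * A[i]                        # stochastic gradient
    if g @ g > 0:
        gamma = (0.5 * r**2) / (g @ g)  # SPS* step, using f_i(x_star) = 0
        x = x - gamma * g
    dists.append(np.linalg.norm(x - x_star))

# distances to x_star never increase (up to float tolerance)
monotone = all(d1 <= d0 + 1e-12 for d0, d1 in zip(dists, dists[1:]))
print(monotone)  # True
```

Every iterate thus stays inside the initial ball $B_D(x_0)$, exactly as the induction argument requires.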
**Q2.2** *your comment implies that the convexity condition is strong in practice*
The fact that Poisson regression is globally convex does not show that global convexity is a strong assumption. Poisson regression is one of many globally convex machine learning problems that enjoy countless applications. In terms of theory, assuming *only* global convexity is a very weak assumption; more assumptions (Lipschitz continuity or smoothness) are always added in order to achieve a convergence rate.
To understand further how global convexity is not a strong assumption, consider the fact that Poisson regression is applied to numerous tasks in operations and queueing systems (counting arrivals, customer calls, requests per second), in insurance and actuarial science (counting claims per policyholder, accidents, etc.), and in neuroscience (modeling spike counts, counting event-related potentials), and the list goes on. There are entire books dedicated to the applications of Poisson regression, such as
Cameron AC, Trivedi PK. *Regression Analysis of Count Data*. 2nd ed. Cambridge University Press; 2013.
Thus, despite Poisson regression being globally convex, it gives us access to all these applications.
Furthermore, there are countless training problems in machine learning that are globally convex such as
- Linear Regression
- Support Vector Machines
- Logistic Regression
- Boosting (for instance Adaboost with an exponential loss)
- Conditional Random Fields
- Linear Quadratic Regulator (LQR) (in Optimal control and Reinforcement learning)
- Kernel Ridge Regression
The list goes on, and is also the subject of several text books such as
Changho Suh, *Convex Optimization for Machine Learning*, Now Publishers (2023)
Shai Shalev-Shwartz and Shai Ben-David, *Understanding Machine Learning: From Theory to Algorithms*,
Cambridge University Press (2014)
In summary, (globally) convex problems in machine learning constitute an entire field. Though it excludes deep learning, it still enjoys many daily and commonly used applications. The fact that our method is both stochastic and relies only on convexity within $B_D(x_0)$ is exceptional within the field.
We hope the reviewer will reconsider their position in light of our response.
It is shown that SPS* achieves the optimal convergence bound for globally Lipschitz function, and also enjoys anytime convergence with rate $O(1/\sqrt{t})$ for smooth functions.
It is also shown that momentum can be incorporated to guarantee the convergence rate of the last iterate.
Acknowledging the impracticality of having access to the optimal loss value in general, the paper also identifies settings where it is possible to approximate the optimal loss value, including 1) the interpolation setting where the optimal loss value should be 0, and 2) the setting where the optimal loss value can be approximated, e.g., model distisllation where the optimal loss value can be approximated by the teacher model.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand.
Theoretical Claims: I did not check the correctness of the proofs in the appendix.
Experimental Designs Or Analyses: The experimental designs and analyses discussed in the main text seem sound.
Supplementary Material: I took a quick look at the supplementary details for the experiments in Appendix L.
The appendix provides additional details on the experimental setup and results, which are helpful for understanding the empirical evaluation.
Relation To Broader Scientific Literature: The key contributions of the paper are about the convergence properties of the method of Polyak step size, and the incorporation of momentum to guarantee the convergence rate of the last iterate.
This is related to the broader literature about learning rate schedules and momentum methods in optimization.
Essential References Not Discussed: The essential references have been properly discussed in the paper.
Other Strengths And Weaknesses: The paper is well written and presents a clear comparison between the convergence rates of SPS* and other existing methods.
Though SPS* is an idealized method, the authors provide a clear motivation for studying this method and show that it can be useful in practice in certain settings, where the example of model distillation is particularly interesting.
It would be helpful if further discussions are provided on how to approximate the optimal loss value in general settings, and how the convergence analysis would be affected by the approximation error.
Other Comments Or Suggestions: N/A
Questions For Authors: Why does the approximation of the optimal loss value have to be an underestimate?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review, and appreciating our work.
**Q1**. It would be helpful if further discussions are provided on how to approximate the optimal loss value in general settings, and how the convergence analysis would be affected by the approximation error.
**A1**. This is a good question. We are currently working on a follow-up on how the optimal loss value could be approximated on language models. The optimal cross-entropy loss over the training data is lower bounded by the average number of bits used to compress this data when using an optimal compression method. The final loss over a batch is then given by the average number of bits used to compress this batch. Though we do not have access to an optimal compressor, compressing is in general an easier task than predicting the next token, and thus, we do have access to very good compressors. This would suggest the following approximation: compress the training data using a good compressor $\implies$ use the average number of bits used to encode a batch of data as an approximation to the optimal batch loss $\implies$ run IAM with this approximation. We can consider adding this to potential future work, if the reviewer thinks it is insightful or partially answers their question. We are also considering how to extend our convergence analysis to allow for an approximation error. This has proved challenging, and we cannot guarantee that our revision will include results on this.
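The compression-based approximation can be illustrated with a toy sketch (ours, using stdlib `zlib` as a stand-in for the "good compressor"; a serious estimate would need a far stronger compressor). The compressed size in bits per character upper-bounds an optimal compressor's rate, and so serves as a rough proxy for the optimal per-character cross-entropy loss:

```python
import random
import zlib

def bits_per_char(text: str) -> float:
    """Bits per character after zlib compression: an upper bound on
    an optimal compressor's rate, i.e. a crude proxy for the optimal
    (per-character) cross-entropy loss in bits."""
    raw = text.encode("utf-8")
    return 8 * len(zlib.compress(raw, 9)) / len(raw)

repetitive = "the cat sat on the mat. " * 200
random.seed(0)
noisy = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                for _ in range(4800))

# highly predictable text compresses to far fewer bits/char than noise,
# mirroring the gap between easy and hard-to-predict training batches
print(bits_per_char(repetitive) < bits_per_char(noisy))  # True
```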
Open-Det: An Efficient Learning Framework for Open-Ended Detection | Accept (poster) | Summary: In this paper, the authors present a novel learning framework for open-ended detection. Open-ended detection consists in detecting objects that are not known a priori. The proposed framework is based on four main components: 1) an open object detector, 2) a prompt distiller, 3) an object name generator and 4) a vision-language alignment module. The model is trained on the Visual Genome dataset (about 77K images). The proposed model is compared to GenerateU, the SOTA model in open-ended detection, a new task in deep learning-based computer vision. The proposed model slightly outperforms GenerateU in terms of AP. Moreover, Open-Det is more efficient in terms of training convergence (5 fewer epochs than GenerateU) and training data. The ablation study shows the effectiveness of each component of the proposed model. The paper is well written and the experiments are well conducted. This paper is a good contribution.
Claims And Evidence: The authors claim that the proposed model is more efficient than GenerateU in terms of training convergence and size of training data. The authors provide evidence that the proposed model outperforms GenerateU in terms of AP. The ablation study shows the effectiveness of each contribution of the proposed model.
Methods And Evaluation Criteria: /
Theoretical Claims: /
Experimental Designs Or Analyses: /
Supplementary Material: Yes, interesting to get further information
Relation To Broader Scientific Literature: /
Essential References Not Discussed: To my point of view, OED is close to Open Word Object Detection (OWD). Joseph, K. J., Khan, S., Khan, F. S., and Balasubramanian, V. N. Towards open world object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5830–5840, June 2021.
It could be interesting to cite OWD in the related work section.
Other Strengths And Weaknesses: ++ Strong Points:
- The paper is well written and the experiments are well conducted.
- The proposed model is more efficient than GenerateU in terms of training convergence and size of training data.
- The ablation study shows the effectiveness of each component of the proposed model.
- There are many good insights in the paper: 1) a variable number of queries from encoder tokens in the object detector part, 2) the vision-language alignment module with the bidirectional alignment, 3) the joint loss function that merges binary score with IoU and alignment score. 3) improves the performance for rare objects by 4%.
-- Weak Points:
- Although the proposed model is more efficient than GenerateU, the performance gain is not significant. The authors should provide more insights on the limitations of the proposed model and how to improve it.
- In the GenerateU paper, the authors achieved a better performance by using a Swin-L model. Why the authors did not use this model in order to show if Open-Det can also benefit from this BB and outperform GenerateU?
- The training data is small (77K images). Did you try to train the model with a larger dataset and to see if the performance is improved?
Other Comments Or Suggestions: /
Questions For Authors: Did you try to train the model with a larger dataset and to see if the performance is improved?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer JNJc,
We sincerely appreciate your valuable feedback and have incorporated all suggestions into the manuscript for the next version.
**Q1: OED is close to OWD. It could ... section.**
We have carefully checked the OWD [1], which enables the detector to label unknown objects as ''unknown'' and incrementally learn these identified unknown categories without forgetting previously learned classes. In contrast, our method could directly detect novel/unseen categories without requiring any prior human knowledge. In other words, both OWD and OED are capable of detecting and recognizing new categories, demonstrating their relevance. Following your suggestion, we will include this work in the related work section in the new version.
**Q2: The authors ... insights ... improve it.**
Similarly to the existing OED framework of GenerateU, Open-Det's performance is primarily constrained by cross-modal semantic discrepancies between **visual regions** and image-like **textual embeddings**. These discrepancies arise from the *interactions* among the detector, the VLM, and the LLM. To mitigate this limitation, employing stronger foundation models, such as ALIGN and CogVLM-17B for the VLM and DeepSeek for the generative language model, is an efficient way to further improve performance. Additionally, training on supplementary datasets can serve as an effective approach to enhance performance.
Furthermore, integrating a segmentation module into the Open-Det framework would offer mask priors for more precise **regional** and semantic feature extraction. This enhancement can further address the semantic gap in cross-modal representations, transforming Open-Det into a unified **detection-segmentation** framework and boosting performance in both tasks. We will incorporate this discussion in the new version and propose it as a direction for future research.
**Q3: Why the ... Swin-L model ... GenerateU?**
Compared to Swin-T, the Swin-L model has a significantly larger number of parameters (**197M vs. 29M, a difference of 6.8$\times$**) and supports a higher input image size (384 $\times$ 384 vs. 224 $\times$ 224, with GenerateU using a pretrained Swin-L at a resolution of 384 $\times$ 384). This results in a substantial increase in GPU memory usage during model training (e.g., Swin-L requires approximately 4 $\sim$ 5 times more memory than Swin-T), making it **infeasible** for Open-Det to train on 4 V100 GPUs.
It is fair and effective to evaluate the performance of GenerateU and Open-Det models with the **same backbone** (such as Swin-T). Therefore, we did not use Swin-L for performance comparisons in the first instance.
However, as suggested, we managed to conduct **additional experiments** with two backbones: Swin-S (using 4 V100 GPUs) and Swin-L (using 4 A800 GPUs). The main experimental results are as follows:
| Model | Backbone | Train Data | Data Size | Epochs | AP$_{r}$ | AP$_{c}$ | AP$_f$ | AP |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| GenerateU | Swin-L | VG,GRIT5M | 5.077M | - | 22.3 | 25.2 | 31.4 | 27.9 |
| **Open-Det (ours)** | Swin-T | VG | 0.077M | 31 | 21.0 | 24.8 | 29.8 | 27.0 |
| **Open-Det (ours)** | Swin-S | VG | 0.077M | 31 | 26.0 (+3.7%) | 28.6 (+3.4%) | 32.8 (+1.4%) | 30.4 (+2.5%)|
| **Open-Det (ours)** | Swin-L | VG | **0.077M** | 31 | **31.2 (+8.9%)** | **32.1 (+6.9%)** | **34.3 (+2.9%)** | **33.1 (+5.2%)** |
**Results:**
* Open-Det-Swin-S achieves an improvement of **+5.0%** in AP$_r$ (26.0% vs. 21.0%) and **+3.4%** in AP (30.4% vs. 27.0%) than Open-Det-Swin-T, demonstrating its efficiency.
* Using only **1.5%** training data, **Open-Det-Swin-S** outperforms *GenerateU-Swin-L* by **+3.7%** in AP$_r$ and **+2.5%** in AP.
* When utilizing the larger backbone of Swin-L, **Open-Det-Swin-L** significantly outperforms *GenerateU-Swin-L* by ``+8.9%`` in AP$_r$(31.2% vs. 22.3%)and ``+5.2%`` in AP(33.1% vs. 27.9%), further confirming its superior effectiveness and efficiency.
We will include these results in the new version.
**Q4: Did you try to ... is improved?**
Intuitively, more training data would further improve the performance of Open-Det. Unfortunately, due to current limitations in available time (within the rebuttal days) and GPU resources, conducting experiments with the large-scale dataset (such as GRIT5M) presents significant challenges.
As an implementable scheme, we further extended Open-Det's training on the VG dataset from 31 to 50 epochs to investigate potential performance improvements under practical resource limitations. The results show that extended training further improves performance, with **+0.9%** (21.9% vs. 21.0%) in AP$_r$ and **+0.4%** (27.4% vs. 27.0%) in AP, confirming its effectiveness. However, we believe it remains valuable to investigate the impact of training data scale on performance in future studies when more GPU resources become available.
[1] Towards open world object detection. CVPR 2021. | Summary: This paper introduces **Open-Det**, which addresses the inefficiencies of previous open-ended methods, including slow training and high memory consumption. Open-Det achieves improved efficiency and performance through:
1. Enhancing the object detector,
2. A vision-language (VL) alignment module,
3. A VL distillation module,
4. LoRA for parameter-efficient fine-tuning,
5. A noise-adding and denoising strategy,
6. Masked Alignment Loss.
## update after rebuttal
During the rebuttal, the authors repeatedly used terms like “factual error” in an attempt to assert the originality of their method, which I find unconvincing. In my view, the proposed approach does not appear substantially different from prior work—though this may require further verification by other reviewers or the AC. That said, I believe there is nothing wrong with building upon existing methods; standing on the shoulders of giants is how we advance the community. However, in the method description section, the authors failed to acknowledge this connection. I trust that the proposed modules were likely inspired by previous work, and this omission is a notable weakness. I will maintain my original score and leave the final judgment to other reviewers and the AC.
Claims And Evidence: The analysis and claims regarding **GenerateU** in the paper are clear and convincing.
Methods And Evaluation Criteria: **My biggest concern is that the proposed acceleration methods for open-ended detection are adaptations of techniques already introduced in traditional object detection, yet the authors fail to cite any of them. This omission gives the impression of misleading novelty.**
3.2 Object Detector: **a threshold-based query selection** is essentially the common two-stage approach found in [1][2][3] and so on.
3.2 Object Detector: *divide the first layer and last two layer* is similar to the idea presented in [4].
3.4. Vision-to-Language Distillation: *using deformable cross-attention to attend encoder feature* is exactly the same as [1]
3.4. Vision-to-Language Distillation: *using text-embedding to supervise query feature* is a common trick used in open-vocabulary (Almost every method uses this approach.)
3.5. Object Name Generator: *Noisy Alignment.* The noise-adding and denoising strategy is the same as in [2].
3.6. Masked Alignment Loss: *Mask text-classifier based on feature similarity* is the same as [5].
*The key issue is that the authors did not cite any of the aforementioned works, which raises serious concerns. Given that many of these methods are well-known, it is unlikely that the authors are unaware of them. Instead, it appears that they may have deliberately omitted these citations, which could be misleading.*
[1] Deformable Transformers for End-to-End Object Detection
[2] DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection
[3] Dynamic Anchor Boxes are Better Queries for DETR
[4] DAC-DETR: Divide the Attention Layers and Conquer
[5] Scaledet: A scalable multi-dataset object detector
Theoretical Claims: N/A
Experimental Designs Or Analyses: Missing results on COCO and Objects365, which are common open-ended datasets and are evaluated in GenerateU
Supplementary Material: Yes, I review all the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Please see method section. I firmly believe that the vast majority of contributions presented in this paper were originally proposed and validated in the object detection domain. However, the authors neither discuss nor cite these prior works, which raises concerns about the completeness and transparency of their literature review.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Dear reviewer AMke,
We thank you for your time and feedback. However, we respectfully disagree with the comments (Q1 to Q7) and the assertions (the omitted citations and misleading novelty) for the following reasons:
**Q1: The ..., yet ... cite ... novelty.**
(1) **Reviewer's Factual Error:** The reviewer claimed ***3 times*** that we **failed to cite any** of the 5 specified papers. In fact, we have explicitly **cited** ``3`` out of 5 in **multiple places** in the main paper: Deformable DETR [1] (*Line 338*), DINO [2] (*Lines 87, 132, 152, 295, 796*), and DAC-DETR [4] (*Lines 79, 155, 161, 295, 797*). The assertion that *we deliberately omitted citations - and thus misleading novelty* - has ``no grounds and constitutes a factual inaccuracy requiring correction``.
(2) **Exclusion of [3,5]:** DAB-DETR [3] differs from Open-Det in *task* and *method* (e.g., Q2). Similarly, ScaleDet [5] tackles label unification for multi-dataset training, differing from our focus in **Q7**. We initially excluded them but may include them in Related Works if space permits.
(3) Open-Det enhances detection across object granularities (coarse-to-fine) and scales (large-to-small), not solely training acceleration. Notably, it demonstrates that *additional vocabulary priors are not necessary at inference*, achieving **+6.8%** AP$_r$ and **+8.5%** AP over GLIP(A) of OVD with only ***11.7%*** of the training data. We believe our work establishes a **foundational framework** for future OED research.
**Q2: Object is ... found in [1][2][3].**
**Reviewer's Factual Error:**
* **[1]**: Obtains Content Query (CQ)/Positional Query (PQ) via Top-K selected sinusoidal-encoded anchors (**no threshold**);
* **[2]**: Initializes CQ statically, derives PQ from Top-K boxes (**no threshold**);
* **[3]**: Uses static initialization for both CQ/PQ (**no threshold**)
**Innovation of Open-Det:** Different from [1-3], Open-Det introduces a NOVEL single adaptive query type (rather than separate CQ and PQ), transforming the prediction of a fixed number of objects into a flexible one.
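To make the distinction at issue concrete, here is a minimal illustrative sketch (ours, not code from either paper): Top-K selection always keeps a fixed number of queries, while threshold-based selection keeps a variable number depending on score quality.

```python
# Illustrative sketch (not from either paper): Top-K selection returns a fixed
# number of queries; threshold-based selection returns a flexible number.
def topk_select(scores, k):
    # Returns exactly k indices with the highest scores.
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def threshold_select(scores, tau):
    # Returns a variable number of indices: only those scoring above tau.
    return [i for i, s in enumerate(scores) if s > tau]

scores = [0.9, 0.2, 0.75, 0.1, 0.6]
print(topk_select(scores, 3))         # -> [0, 2, 4]
print(threshold_select(scores, 0.7))  # -> [0, 2]
```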
**Q3: Object ... DAC-DETR.**
* **Reviewer's Factual Error:** DAC-DETR [4] employs a dual-decoder architecture (standard O-Decoder and auxiliary C-Decoder) to enhance training efficacy, rather than dividing the first and last two decoder layers.
* **Advantages of Open-Det:** In contrast, our Open-Det directly optimizes the architecture of standard single-decoder (Sec.``3.2``), enabling lower training costs and superior efficiency.
**Q4: Vision ... deformable attention (DA).**
This claim **misinterprets** our contribution. We did **NOT** claim DA as our contribution; it is no surprise that DA is widely used, and we also follow this practice. However, one of our novelties is the design of **VLD-M** (Fig.``3``), leveraging DA to construct **image-like** queries for visual-text alignment. This represents both a **novel architectural design** and a **distinct application**.
**Q5: Vision ... supervise ...**
* This claim is made without any references. **Using text embeddings to supervise query features is NOT a common trick but a principled paradigm for vision-text alignment.** Existing methods differ in how the supervision is done, *rather than in whether it is simply used*. The reviewer's claim is NOT professional.
* **Innovation of Proposed VLD-M:** Our VLD-M aims to adaptively construct **image-like** queries to reduce **region-text annotations** (Sec.``3.4``), improving AP$_c$ by **+3.9%** and AP by **+2.8%** (Table ``2``).
**Q6: ONG: Noisy ... [2].**
DINO adds *biased coordinates noise into box* to accelerate *box convergence*. However, Open-Det adds *Gaussian noise into text embeddings* of VLM to speed up *LLM*'s convergence. They **differ fundamentally** in both objectives (**box vs. LLM**) and noise types (**biased coordinates vs. Gaussian**).
**Q7: MAL: ... ScaleDet [5].**
* **Reviewer's Factual Error:** MAL is NOT used as a text-classifier but for query-text alignment.
* **Text Label Unification ≠ Multimodal Alignment:** They are **fundamentally different** in motivation and method. ScaleDet unifies text labels via semantic similarity (**as soft label in MSE**) to combine multi-dataset training, while MAL resolves query-text matching conflicts through **similarity-binarized BCE** updates (Sec.``3.6``).
**Q8: Missing ... .**
* We conducted tests on COCO. Open-Det still outperforms GenerateU by **+2.8%** AP (35.8% vs. 33.0%).
* Furthermore, additional experiment shows that Open-Det-SwinL surpasses GenerateU-SwinL by ``+8.9%`` in AP$_r$ on LVIS (see **Q3** responses to **JNJc**), demonstrating significant superiorities.
***
According to our responses to Q1~Q8, our method is **NOT** an adaptation of existing techniques, but rather represents a fundamentally different approach, innovating in framework, modules, loss functions, performance, and efficiency.
``Overall, we respectfully request the reviewer to cross-check the citations, methodologies, as well as all our responses, and we appreciate a fair judgement.``
---
Rebuttal Comment 1.1:
Comment: The authors' feedback has reinforced my initial judgment. According to their response to Q1 ("The ..., yet ... cite ... novelty."), the authors clearly acknowledged being aware of the five papers I mentioned. However, they failed to appropriately cite three of these papers in relevant contexts (which constitutes misleading referencing) and intentionally omitted the other two. Regarding their replies to Q2–Q7, I perceive the authors as attempting to exaggerate minor differences. For instance, in Q2, does using a threshold selection method truly differ fundamentally from top-k selection? Similarly, in Q6, is replacing uniform noise sampling in DINO with Gaussian noise sampling genuinely a substantial distinction? Nevertheless, after carefully reviewing other reviewers' opinions, I have decided to maintain my current score and attribute my primary reason for rejection entirely to the paper's misleading claims regarding novelty and referencing. In my view, the paper still contains significant issues and does not meet the acceptance bar.
---
Reply to Comment 1.1.1:
Comment: We believe our paper has **NOT** been read carefully and has been subjected to unfounded speculative accusations regarding our intentions.
Specifically, we have explicitly cited **3 of 5 papers** listed by reviewer AMke and referenced those papers in **multiple places** in our ``initial manuscript`` (e.g., Deformable DETR [1] at *Line 338*, DINO [2] at *Lines 87, 132, 152, 295, 796*, and DAC-DETR [4] at *Lines 79, 155, 161, 295, 797*, totaling **11** citations). If reviewer AMke is really familiar with our work, *it is unclear why he failed to notice that we, in fact, cited three of the five referenced papers a total of 11 times* (the initial comment ''*fail to cite any of them*'' has now been reinterpreted by reviewer AMke as ''*failed to appropriately cite three of these papers*''; this new comment **contradicts his initial comments** (which were repeated 3 times) and **lacks any specific supporting facts**, which is **unconvincing** to us). Are many essential contributions of our work also missed by reviewer AMke?
In the new feedback, reviewer AMke did **NOT** directly acknowledge these factual errors but presented a new comment that we *constitute misleading referencing and intentionally omit the other two*. It is irresponsible to conclude that *we intentionally hid these popular references* based on **these factual errors**.
Additionally, we would like to highlight that ``some factual errors and subjective ill-judged remarks appear once again in reviewer AMke's new feedback``. For example:
* The new comment "*In Q2, does using a threshold selection method truly differ fundamentally from top-k selection?*" — In our response to Q2, we did NOT claim that "*the threshold selection method fundamentally differs from Top-K selection*". Reviewer AMke initially claimed that ''*threshold-based query selection is found in [1][2][3] (Q2)*''. ``We just pointed out reviewer AMke's factual error`` that the threshold selection method is **NOT** used in Deformable DETR [1], DINO [2], or DAB-DETR [3]. Notably, one of the novelties of our method is the proposed new single query type (differing from the existing positional and content queries), which achieves flexible object prediction (differing from existing fixed queries); the ''threshold'' is NOT the key point. This novelty is also highlighted by reviewer JNJc in Strong Points (''good insights 1)...'').
* The new comment "*In Q6, is replacing uniform noise sampling in DINO with Gaussian noise sampling genuinely a substantial distinction?*" — In our response to Q6, we did NOT claim that "*uniform noise sampling is substantially distinct from Gaussian noise sampling*" (this is the reviewer's subjective ill-judgment). Denoising training is a common and effective training approach in deep learning. **The key is NOT which kind of noise sampling (*uniform* or *Gaussian*) is used**, but rather the *different problems being addressed* (closed-set detection vs. OED), *motivations* (box convergence vs. the LLM's convergence), and *noise types in the method implementation* (biased coordinates (x, y, w, h) vs. text embedding vectors with Gaussian noise). These core differences are overlooked by reviewer AMke once again.
We believe we have adequately addressed reviewer AMke's questions (with factual support, detailed analysis, experimental results, etc., in our responses). Our Open-Det framework is novel in its *module design (e.g., detector, BVLA-M, VLD-M, ONG), loss functions (MAL and Joint Loss), significantly improved performance (e.g., Open-Det-SwinL achieves an improvement of* **+8.9%** AP$_r$ *and* **+5.2%** AP *compared to GenerateU-SwinL), high efficiency* (using only **1.5%** of the training data and **20.8%** of the training epochs of GenerateU), and resource superiority (**4 V100 GPUs** vs. **16 A100 GPUs** for training).
Overall, ``we believe our paper has NOT been read carefully or reviewed convincingly``, and the questions raised by reviewer AMke contain many factual errors in our opinion. At the same time, we do NOT think that reviewer AMke's concerns (e.g., erroneous comments on citations) weaken the core contributions of our paper. Given the current discussion, we find it difficult to reach a consensus. We will leave it for others to comment on publicly.
Thank you for your efforts in reviewing the paper. | Summary: This paper presents Open-Det, a novel and efficient framework for Open-Ended Object Detection (OED), addressing the issues of slow convergence, low efficiency, and reliance on large-scale datasets found in existing models like GenerateU. Open-Det consists of four core components: the Object Detector (ODR), the Prompts Distiller, the Object Name Generator, and the Vision-Language Aligner. By introducing the Bidirectional Vision-Language Alignment module (BVLA-M) and the Vision-to-Language Distillation Module (VLD-M), Open-Det effectively bridges the semantic gap between vision and language. Additionally, the Masked Alignment Loss and Joint Loss further optimize training efficiency and classification consistency. Compared to existing models, Open-Det achieves superior performance using only 1.5% of the training data and 20.8% of the training epochs, while also detecting a broader range of objects, including smaller ones. Open-Det provides a highly efficient and scalable solution for OED tasks, with significantly reduced resource requirements.
Claims And Evidence: - The claims made in the submission are generally supported by clear and convincing evidence. The paper provides experimental results that demonstrate the superiority of the Open-Det framework over existing models such as GenerateU and GLIP in various aspects, including performance, training data requirements, and convergence speed. Specifically, the evidence shows that Open-Det achieves higher performance with significantly fewer resources, requiring only 1.5% of the training data, 20.8% of the training epochs, and fewer GPUs, which is supported by quantitative results in the tables and performance curves.
- I have found a work named VL-SAM; it achieves 23.4 AP$_r$ on LVIS and is training-free, so its resource usage and performance are better than yours.
Methods And Evaluation Criteria: - In the Open-Ended Detection task, objects can belong to multiple categories or have ambiguous semantics (e.g., "dog" vs. "pet"). How does Open-Det handle ambiguous or overlapping categories during inference, especially when there are multiple plausible names for a detected object? What is the impact of this ambiguity on the accuracy of object name generation, and how does the model ensure consistent results under these circumstances?
- The proposed Masked Alignment Loss (MAL) aims to address contradictory supervision, but how does this loss behave in practice when applied to images with multiple objects of the same category (e.g., several "cars" in a street scene)? Does the MAL genuinely prevent conflicting gradients, or could there still be scenarios where the supervision conflicts due to the inherent ambiguity in object detection tasks (e.g., detecting small objects in cluttered scenes)?
- The use of average precision (AP) and related metrics is standard, but how well do these metrics capture the true performance of Open-Det in open-ended object detection, where the focus is not just on detection but also on generating the correct category names? For example, how would Open-Det fare in tasks that require fine-grained distinctions between objects or the generation of complex object descriptions (e.g., detecting "a small red ball on the table" vs. "a ball")?
Theoretical Claims: This work does not involve too much new mathematical theory or proofs.
Experimental Designs Or Analyses: In the comparison experiments, the authors chose GLIP, which performs relatively poorly in OV detection, as a baseline. This seems somewhat unfair, as there have been many OV detectors that have achieved higher accuracy on LVIS after GLIP, such as the computationally intensive GroundingDINO or the more efficient CoDET, among others.
Supplementary Material: - **Appendix for Method**: The simplified pipeline of the Open-Det framework (Fig. 5) is discussed, which helps clarify the workflow of how different components (e.g., Object Detector, Prompts Distiller, LLM) interact during training.
- **LoRa Head in the LLM**: The effectiveness of the LoRa Head in reducing the number of trainable parameters and maintaining model performance is detailed, with an explanation of how it contributes to improved computational efficiency.
- **Contradictory Alignment Loss**: The section explaining the causes of contradictory losses in the query-text alignment process, along with the proposed solution (Masked Alignment Loss), provides further insights into why the proposed approach works and how it addresses the issues.
- **Joint Loss**: The section includes an effectiveness analysis of the Joint Loss and compares it with other loss functions (e.g., Focal Loss and BCE Loss), along with visualizations to demonstrate its impact on training stability.
Relation To Broader Scientific Literature: - The generative nature of Open-Det (which generates object names in addition to bounding boxes) relates to the work done on generative approaches in vision-language tasks, such as **image captioning** and **visual question answering (VQA)**, where models generate descriptive text based on visual inputs. The paper’s approach of using a **Large Language Model (LLM)** for name generation is in line with recent advancements in **transformer-based generative models** for language, like **T5** (Raffel et al., 2020).
- The paper incorporates a **Vision-to-Language Distillation Module (VLD-M)** for transferring knowledge from pre-trained Vision-Language Models (VLMs) into VL-prompts. where knowledge from a teacher model is transferred to a smaller or more efficient student model. In this case, the VLD-M distills alignment knowledge from powerful pre-trained VLMs (like CLIP) to improve the training of the Object Name Generator.
Essential References Not Discussed: Essential references are discussed.
Other Strengths And Weaknesses: While Open-Det demonstrates impressive results with significantly reduced data, could the approach still face limitations in very low-data regimes, such as when there is an extremely low number of labeled examples for certain object categories? Does the framework struggle to generalize in scenarios where only a few instances of a new category are available, or when the object categories are underrepresented in the training set?
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer E2VT,
We deeply appreciate your suggestions and will include them into our revisions.
**Q1: Training free VL-SAM .. better.**
A direct performance comparison requires careful consideration of **model scales**:
* **VL-SAM**: VLM: 17B, LLM: 7B, SAM: 0.632B, totaling ``24.632B``
* **Open-Det**: VLM: 0.307B, LLM: 0.250B, Detector: 0.049B, totaling ``0.606B`` (ONLY **2.46%**)
With significantly **fewer** parameters (**40.6$\times$** ↓), Open-Det is still on par with VL-SAM in AP$_c$ (24.8% vs. 25.3%) and even **higher** in AP$_f$ (30.1% vs. 30.0%), indicating a better trade-off between resources and performance.
**Q2: (1) How ... ambiguous? (2) What's ... accuracy? (3) How ... consistent results?**
Language (text) inherently exhibits ambiguity, a characteristic **independent of the design of OED models**.
(1) Similar to GenerateU, Open-Det also adopts a ''*beam search*'' strategy to generate ``multi-level object names``, effectively addressing the ambiguity issue and enhancing accuracy.
(2) For fair evaluation, we adopt GenerateU's **standardized protocol**: comparing latent-space similarity scores between generated and annotated texts, *eliminating evaluation biases* from textual variability.
(3) The adopted *beam search* strategy ensures consistent results, and Open-Det achieves higher text similarity scores (Appendix Sec. ``C.1``) than GenerateU.
**Q3: (1) How ... same category? (2) Does ... conflicting gradients, or (3) could ... detection tasks?**
(1) MAL transforms one-to-one matching (query to category, Fig. ``6``) into one-to-many matching (single query to multiple same-category instances) via:
* **Computes text similarities** between all object-name pairs, offering a quantitative basis for dynamic label matching correction.
* **Corrects dynamic label matching** (from 0 to 1) by transforming unmatched same-category pairs into matched ones (one-to-many) based on text similarities, resolving contradictory loss terms during optimization.
(2) Yes, MAL efficiently addresses contradictory loss generation at its source.
(3) MAL depends **solely** on textual ground truth, making it invariant to scene variations, though it may be affected by annotation accuracy and the text encoder.
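The label-correction step described above can be sketched as follows (a toy illustration; the function names, data layout, and similarity threshold are assumptions for exposition, not the actual implementation): unmatched query-text pairs whose ground-truth texts are near-identical are promoted from label 0 to label 1, turning one-to-one matching into one-to-many.

```python
# Toy sketch of the MAL label correction (names, layout, and threshold are
# illustrative assumptions, not the actual code).
def correct_labels(match_labels, text_sim, sim_threshold=0.95):
    # match_labels[q][t]: 1 if query q was matched to ground-truth text t
    # text_sim[a][b]: similarity between ground-truth texts a and b
    corrected = [row[:] for row in match_labels]
    for q, row in enumerate(match_labels):
        for t, m in enumerate(row):
            if m != 1:
                continue
            for t2 in range(len(row)):
                # Promote unmatched pairs whose texts are near-identical
                # (same category) from label 0 to label 1 (one-to-many).
                if text_sim[t][t2] >= sim_threshold:
                    corrected[q][t2] = 1
    return corrected

labels = [[1, 0, 0]]                # query 0 matched only to text 0
sim = [[1.0, 0.98, 0.1],            # texts 0 and 1 share the same category
       [0.98, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
print(correct_labels(labels, sim))  # -> [[1, 1, 0]]
```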
**Q4: (1) The use of AP ... category names? (2) how ... descriptions (e.g., ...)?**
(1) The evaluation assesses **both** box accuracy (IoU) and object naming performance (via cosine similarity (CS) score between **generated** and **annotated** texts). If object name is wrongly predicted, the CS score will be lower, thereby reducing the performance of AP.
(2) Generated name granularity varies by training data annotations. Similar to GenerateU, Open-Det focuses on OED task, using **detection annotations** (i.e., object names and boxes) without utilizing *complex object descriptions*. Thus, Open-Det mainly predicts object categories (Fig.``4``, ``10``, ``11``), rather than fine-grained descriptions (i.e., obtaining relations between objects). We view the *task* of finer-grained name generation as an important area for future research.
**Q5: The ... Grounding DINO or ... CoDET.**
The comparison between Grounding DINO and Open-Det is as follows:
| Model | Train Data | Data Size | AP$_{r}$ | AP$_{c}$ | AP$_f$ | AP |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Grounding-DINO-SwinT | O365,GoldG | 1.460M | 14.4 | 19.6 | 32.2 | 25.6 |
| **Open-Det-SwinT(ours)** | VG | 0.077M | 21.0 | 24.8 | 30.1 | 27.0 |
Using only **5.3%** training data, Open-Det-SwinT still significantly **outperforms** Grounding-DINO-SwinT by ``+6.6%`` in AP$_r$, ``+5.2%`` in AP$_c$, and ``+1.4%`` in AP, further confirming its effectiveness and efficiency.
CoDET [1] is designed for programming problems and has NOT been evaluated on object detection tasks. Due to fundamentally different objectives (**Code Generation** vs. **Object Detection**), a direct detection performance comparison is invalid. We also cross-check Co-DETR [2], which is designed for closed-set detection and can't be directly compared to our OED model.
**Q6: (1) While ..., ... low-data regimes, such as ...? (2) Does ... training set?**
(1) Like most OVD and OED models, Open-Det also faces common challenges related to low-data regimes and few-instance learning.
(2) However, Open-Det demonstrates superiorities in both **efficiency** and **generalization** via VLM knowledge distillation into the proposed VL-prompts. Notably, using significantly **less** training data, it obtains superior performance on ``rare`` categories (i.e., new categories with only a few instances): **+6.8%** AP$_r$ over GLIP(A), **+6.6%** AP$_r$ over Grounding-DINO-SwinT, and **+3.6%** AP$_r$ over GenerateU. Using ``1.5%`` of the training data, Open-Det-SwinL outperforms GenerateU-SwinL by ``+8.9%`` in AP$_r$ (please see our responses to **JNJc** in **Q3**).
[1] Codet: Code generation with generated tests. ICLR 2023.
[2] Detrs with collaborative hybrid assignments training. ICCV 2023. | Summary: This paper proposes a new framework for open-ended detection. To address the problem of alignment between vision queries and text embeddings, the authors adopts a Bidirectional Vision-L;anguage Alignment module to obtain alignment score and uses a masked alignment loss to supervise the alignment beween queires and text embeddings. Additionally, a Vision-to-Language Distillation module is used to enhance the query feature to make more align to text embedding. The author conducts experiments on LVIS MiniVal dataset and show the proposed method can achieve better performances to GenerateU with the same pre-train data.
## update after rebuttal
The authors' rebuttal addresses my concerns about the comparison with GenerateU. I keep my rating at Weak Accept.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. Zero-shot domain transfer on LVIS MiniVal dataset is a popular choice for open set detection.
Theoretical Claims: No. There are not theoretical claims.
Experimental Designs Or Analyses: Yes. I check the main experiments and ablation studies.
Supplementary Material: I check the A.1 and A.2.
Relation To Broader Scientific Literature: The whole pipeline is similar to GenerateU which follows a pipeline of region proposal generator with a language model to generate object names for visual regions.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
1. This paper proposes several reasonable modules to improve the performance of open-ended detection and conducts detailed ablation studies to validate their effectiveness. These modules contribute consistent performance improvements.
2. The idea of adopting IoU and alignment scores to enhance the loss function is interesting, and the proposed approach improves the performance significantly.
3. The proposed framework achieves better performance with lower training cost compared to recent SOTA methods.
Weaknesses
1. Can the proposed method achieve better performance with more training data? Compared to GenerateU trained with the VG and GRIT5M datasets, the proposed method trained with VG alone achieves only a modest performance improvement. It would be interesting to see the performance with VG and GRIT5M.
Other Comments Or Suggestions: 1. Figure 2 is too complex, and it is not easy to understand the whole pipeline from it.
Questions For Authors: Please refer to the Weaknesses in Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer EV8Z,
We sincerely appreciate your time and constructive suggestions. We have carefully integrated your suggestions into the manuscript and will include these updates in the next version.
**Q1: (1) Can the proposed ... more training data? (2) Comparing to ... improvement. It would be interesting to see the performance with VG and GRIT5M.**
Thank you for your valuable suggestion.
(1) Intuitively and empirically, it is commonly observed that larger datasets can enhance model accuracy in deep learning, as demonstrated in studies such as [1,2,3]. This principle is applicable to the Open-Det model as well.
(2) There are two key reasons why Open-Det was not trained using the combined VG and GRIT5M datasets. **Firstly**, Open-Det's design prioritizes model efficiency, minimizing reliance on large datasets and training resources. Remarkably, when trained on the single VG dataset for only **20.8%** of GenerateU's training epochs, our model *outperforms* GenerateU which used both datasets. This result effectively demonstrates the effectiveness of our approach. **Secondly**, the GRIT5M dataset is **64.6$\times$** larger than VG dataset, which would require either:
* **64.6$\times$** more GPU resources for the same training time (for instance, GenerateU was trained using *16 A100 GPUs*), or
* **64.6$\times$** longer training time on our *4 V100 GPUs*
These demands would result in substantial resource requirements and costs for model training. As a result, we excluded GRIT5M training from our initial submitted manuscript.
Unfortunately, due to current limitations in available time (within the rebuttal days) and GPU resources, conducting experiments with the large-scale GRIT5M dataset presents significant challenges. However, following the reviewer's suggestion, we implemented a more computationally efficient alternative by extending Open-Det's training on the VG dataset from 31 to 50 epochs. This modified experimental design serves to:
* Investigate potential performance improvements under practical resource limitations;
* Validate the model's effectiveness without requiring additional combined dataset training;
The experimental results are as follows:
| Model | Backbone | Pre-Train Data | Data Size | Epochs | AP$_{r}$ | AP$_{c}$ | AP$_f$ | AP |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| GenerateU | Swin-T | VG | 0.077M | 149 | 17.4 | 22.4 | 29.6 | 25.4 |
| GenerateU | Swin-T | VG,GRIT5M | 5.077M | - | 20.0 | 24.9 | 29.8 | 26.8 |
| **Open-Det (ours)** | Swin-T | VG | 0.077M | 31 | 21.0 | 24.8 | 30.1 | 27.0 |
| **Open-Det (ours)** | Swin-T | VG | 0.077M | 50 | **21.9** | **25.1** | **30.4** | **27.4** |
Our extended training experiments yielded *consistent improvements* across all evaluation metrics. Compared to the 31-epoch baseline, additional training achieved: **+0.9%** improvement in AP$_r$ and **+0.4%** improvement in AP. Notably, Open-Det trained **solely** on VG dataset outperforms GenerateU (trained on both VG and GRIT5M), demonstrating **+1.9%** higher AP$_r$ and **+0.6%** higher AP. These comparative results provide further evidence of Open-Det's superior efficiency and effectiveness in the OED task.
**Experiments on Larger Backbones:** In addition, when using larger backbones, such as *Swin-S* and *Swin-L*, Open-Det achieves significant improvements across all metrics. For instance, **Open-Det-Swin-S** obtains an improvement of **+5.0%** in AP$_r$ and **+3.4%** in AP over Open-Det-Swin-T. Using only **1.5%** of the training data, **Open-Det-Swin-L** *significantly outperforms GenerateU-Swin-L* by ``+8.9%`` in AP$_r$ (31.2% vs. 22.3%) and ``+5.2%`` in AP (33.1% vs. 27.9%), further confirming Open-Det's superior effectiveness and efficiency. For more information, please refer to our responses to Reviewer **JNJc** in **Q3**.
**Q2: Figure 2 is complex and it is not easy to understand the entire pipeline.**
Thank you for your valuable suggestion. Figure ``2`` may appear complex due to the *extensive detailed information* it contains. In Appendix ``A.1`` (page 12) of the submitted manuscript, we have presented a ``simplified version`` of the Open-Det pipeline (Figure ``5``), which highlights the core workflow and provides a clearer illustration of the entire process for better clarity. Additionally, the caption of Figure ``2`` has explicitly indicated the relationships between the two figures, contributing to a better understanding of the main pipeline. We deeply acknowledge the reviewer's suggestion and will further modify Figure ``2`` by reorganizing the figure layout to make it more understandable.
[1] Grounded language-image pre-training. CVPR 2022.
[2] Grounding dino: Marrying dino with grounded pre-training for open-set object detection. ECCV 2024.
[3] Detclipv3: Towards versatile generative open-vocabulary object detection. CVPR 2024. | null | null | null | null | null | null |
Reducing Tool Hallucination via Reliability Alignment | Accept (poster) | Summary: This paper discusses how LLMs suffer tool hallucinations, which can cause task failures and higher costs. It defines these errors as two main types: picking the wrong tool or using a tool incorrectly. The paper introduces RelyToolBench, a set of specialized tests and new metrics to measure and reduce such issues. The paper introduces Relign, a framework that helps LLMs make better tool-related decisions by allowing them to pause, ask for clarification, or switch tools when unsure. The experiments show that Relign effectively decreases tool hallucinations, making LLMs more reliable and efficient in using tools.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: The main result uses specific training data sizes and model scales, which might not generalize to all LLMs or tool-use scenarios. Please show detailed results at multiple scales.
Supplementary Material: Yes, all appendix
Relation To Broader Scientific Literature: The contributions of this paper build on and extend existing works by providing detailed evaluation metrics, a specialized benchmark, and a new alignment framework. These advancements aim to enhance the reliability of LLMs in their interactions with external tools.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
The paper presents a novel framework for addressing tool hallucinations in LLMs. The categorization of tool hallucinations into selection and usage errors, with further subtypes, is a good approach to systematically addressing this issue.
Weaknesses:
While the framework is novel, it builds on existing ideas from reinforcement learning and preference optimization. The paper could benefit from a more detailed comparison with related methods/benchmarks to highlight the specific advancements it makes.
Other Comments Or Suggestions: See Questions
Questions For Authors: 1. The concept of "indecisive actions" in the Relign framework seems crucial for reducing tool hallucinations. Could you clarify the specific mechanism or threshold the model uses to decide between executing a tool call, deferring action, or seeking clarification?
2. The evaluation metrics are central to demonstrating the effectiveness of Relign. Beyond the limited human evaluation mentioned in Appendix A, have these metrics been validated against more extensive human judgments of task success and system reliability?
3. The paper shows that larger models generally perform better and hallucinate less. Have you explored how different model architectures (beyond just scale or model family) might affect hallucination rates? Any insightful analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your detailed review and valuable feedback. We provide our responses below:
# Response about Weaknesses
Thank you for recognizing the novelty of our framework. We clarify our key contributions as follows:
### (1) A systematic evaluation framework for tool hallucinations.
Our framework integrates rule-based and LLM-based evaluations. While rule-based methods ensure syntactic correctness in tool invocation, they overlook whether **a valid invocation is actually non-hallucinatory**. This issue has been largely ignored in prior work, which often neglects tool content hallucination. Additionally, we introduce the **Reliable Pass Rate (RePR)** metric to address limitations in traditional pass rate evaluations by explicitly considering tool hallucinations.
### (2) Joint alignment of helpfulness and reliability.
Unlike previous work that focuses solely on task completion, our Relign framework also emphasizes **the reliability of tool usage**. This reduces both hallucinated and ineffective tool calls while maintaining high task success rates. Our approach jointly optimizes these factors using a comprehensive **utility** metric to ensure balanced performance.
### (3) Differences from conventional SFT+DPO training.
While our training builds on preference optimization, we redefine the optimization objectives by **introducing an indecisive action space**, teaching the model when to execute indecisive actions instead of hallucinating. Unlike general preference alignment methods that optimize multiple factors, we explicitly focus on helpfulness and reliability, leveraging **structured preference data** to refine tool invocation decisions. Rather than modifying the core algorithm, our contribution lies in refining **training objectives and data synthesis**.
# Response about Q1
In our work, indecisive actions are implemented as special termination functions, such as direct termination, tool switching, and TalkToUser. When the model invokes these functions, the tool invocation process ends immediately, as the model determines the task to be infeasible. The model learns to use these functions through SFT and DPO training.
- **SFT stage**: We introduce the indecisive action space by modifying tasks to be unsolvable (e.g., changing toolsets) and training the model to invoke appropriate termination functions instead of hallucinating.
- **DPO stage**: The model refines its decision-making through preference learning. At each tool invocation step, multiple outputs are sampled.
- If the task is feasible, the preference order is: correct tool invocation > hallucinated tool invocation > indecisive action.
- If the task is infeasible, the order is: indecisive action > hallucinated tool invocation (as correct tool use is impossible).
This data-driven approach enables the model to assess the task's feasibility and determine whether to continue tool invocation or execute an indecisive action, effectively reducing hallucinations.
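The preference orderings above could be encoded roughly as follows. This is a hypothetical sketch, not the authors' implementation: the `build_preference_pairs` helper, the category labels, and the record format are all illustrative assumptions.

```python
# Hypothetical sketch of the stated preference ordering for DPO data.
# Categories: "correct" (valid tool call), "hallucinated" (tool
# hallucination), "indecisive" (termination functions such as
# ChangeTool / TalkToUser).

FEASIBLE_ORDER = ["correct", "hallucinated", "indecisive"]   # best -> worst
INFEASIBLE_ORDER = ["indecisive", "hallucinated"]            # correct call impossible

def build_preference_pairs(samples, task_feasible):
    """samples: list of (category, response_text) tuples sampled at one
    tool-invocation step. Returns (chosen, rejected) pairs that respect
    the ordering for feasible vs. infeasible tasks."""
    order = FEASIBLE_ORDER if task_feasible else INFEASIBLE_ORDER
    rank = {category: i for i, category in enumerate(order)}
    pairs = []
    for cat_a, resp_a in samples:
        for cat_b, resp_b in samples:
            # Only categories present in the ordering participate;
            # a lower rank index is preferred (chosen over rejected).
            if cat_a in rank and cat_b in rank and rank[cat_a] < rank[cat_b]:
                pairs.append((resp_a, resp_b))
    return pairs
```

For an infeasible task, a "correct" sample never appears in the ordering, so only indecisive-over-hallucinated pairs are produced, matching the second bullet above.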
# Response about Q2
### (1) Is there additional human evaluation for the metrics?
Our evaluation metrics are improvements upon the **Pass Rate** introduced in ToolBench, which has been validated through human evaluation in its original work. While our study introduces several new evaluation metrics, they primarily rely on tool hallucination evaluation. Thus, we conducted targeted human evaluation on the LLM-based components to ensure accuracy. Given the alignment with prior metrics, we believe this evaluation is sufficient. However, if specific aspects require further assessment, **we are open to conducting additional evaluations**.
### (2) Is LLM-based automatic evaluation feasible and reasonable? How do we ensure evaluation accuracy?
Our evaluation is divided into two specific tasks: assessing tool-task relevance and checking parameter-user input consistency. These tasks are **simpler than full tool invocation**, as critique-based judgments are easier than generative actions. We further enhance reliability by **using specialized prompts to guide assessments** and instructing the LLM to **return “unsure” when uncertain** (see Table 3 in the appendix). This approach minimizes errors and ensures accurate evaluations. Our human validation further confirms its effectiveness.
# Response about Q3
Apologies, but we have not conducted additional exploration into model architectures. To the best of our knowledge, existing research on hallucinations has not discussed the impact of different model architectures (e.g., MoE, Mamba) on hallucinations. Our work primarily focuses on proposing an evaluation framework for tool hallucinations and improving tool hallucination mitigation based on this framework. We believe investigating the relationship between model architectures and tool hallucinations is beyond the scope of this paper.
We greatly appreciate your valuable feedback and look forward to your response. | Summary: LLMs can solve diverse tasks by using external tools. Investigating tool hallucination is crucial because hallucination can cause problems that are hardly recognized. This paper introduces RelyToolBench to evaluate tool hallucinations, including four types of hallucinations. Authors also propose Relign to reduce tool hallucination by SFT and DPO. As shown in the experiments section of the paper, Relign generally outperforms other baselines. In addition, the large data size increases both tool and task hallucinations.
## update after rebuttal
I would like to acknowledge the contributions of this work in terms of both the benchmark and the proposed method. I have read other reviews and rebuttal comments. However, the scope and positioning of the paper are somewhat ambiguous. Thus, I keep my original score; please see the reasons below.
The benchmark proposed in this paper is not the first to tackle tool hallucinations, which may raise questions in terms of originality. The paper needs a detailed discussion of the pros and cons of the previous work. I am not convinced by the rebuttal comments, such as the claimed lack of a taxonomy: the previous work has proposed a well-organized benchmark in terms of both depth and breadth. Also, the paper should have included this discussion before submission, since the previous work is easily found by topic alone and was published before the ICML deadline. The authors highlight alignment and training as one of the key differences, but more analysis and investigation are needed. Based on the conducted experiments and the hyperparameters used, it is difficult to understand why the proposed method is better than others.
Claims And Evidence: Please refer to the weaknesses.
Methods And Evaluation Criteria: - The proposed method, Relign, is reasonable to mitigate the tool hallucination. SFT and DPO are widely used algorithms.
- The evaluation criteria used in the paper are tool hallucination rate, task hallucination rate, RePR, and Utility. The used criteria are clear.
- Tool hallucination rate considers the number of hallucination tool calls.
- Task hallucination rate considers the sample-level hallucination.
- RePR is the pass rate without hallucination.
- Utility measures the quality in terms of success, the number of calls, and hallucinations.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: - The hyper-parameters used in other baselines, such as SFT, DPO, and SFT & DPO, are ambiguous. For example, does Line 325, "the learning rate was set to 1e-5", mean the learning rate is used in SFT & DPO or the proposed method? The experimental details should be clear.
- I wonder if the training steps of SFT, DPO, SFT & DPO, and Relign are the same or not.
- If authors use the grid search for finding hyper-parameters, the details should be included.
- Relign is the sequential training of SFT and DPO. It is unclear why the proposed method outperforms single training of SFT and DPO and the joint training of SFT & DPO.
- In Lines 381-382, the authors said that "This could be attributed to overfitting". However, considering (1) the number of tool calls is reduced as the data size increases and (2) hallucination is reduced as the model size increases, overfitting might not cause the degradation of performance. Including additional analysis can make the claim clear.
- The authors discussed "a noticeable decline in both Tool and Task Hallucination rates" in the discussion of "Does a Larger Model Reduce Hallucination?". However, task hallucination increases as the model size increases.
Supplementary Material: Supplementary material generally includes the used prompts in the paper.
Relation To Broader Scientific Literature: The paper is related to large language models with tools. The key contribution is the investigation of tool hallucination. It is related to the finding of prior work that LLMs have hallucinations.
Essential References Not Discussed: Prior work [R1] has proposed ToolBH to evaluate the hallucination in LLMs with tools. Since one of the main contributions is the evaluation of tool hallucination, discussion and comparison are needed.
[R1] Toolbehonest: A multi-level hallucination diagnostic benchmark for tool-augmented large language models, EMNLP'24
Other Strengths And Weaknesses: **Strength**
- Hallucination is an important research topic to improve the responsibility.
- The paper categorizes the types of tool hallucinations and defines the metrics to evaluate the hallucination.
- Sequential training with SFT and DPO shows the higher performance than others.
**Weakness**
- The major concerns regarding experiments are mentioned in **Experimental Designs Or Analyses**, such as experiment details and analysis. Please see **Experimental Designs Or Analyses**.
- I respectfully disagree with "pioneer a comprehensive evaluation framework for Tool Reliability" because prior work [R1] has proposed ToolBH to evaluate the hallucination in LLMs with tools. Since one of the main contributions is the evaluation of tool hallucination, discussion and comparison are needed, as mentioned in **Essential References Not Discussed**.
- Does Relign also outperform others on the prior benchmarks? Reducing hallucination is important, but the performance on the existing benchmarks is questionable.
[R1] Toolbehonest: A multi-level hallucination diagnostic benchmark for tool-augmented large language models, EMNLP'24
Other Comments Or Suggestions: Please see weaknesses.
Questions For Authors: - Why does the paper not report a naive pass rate?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your detailed review and valuable feedback. We provide our responses below:
# Re. to Experimental Designs or Analyses
- (1) All experiments share the same hyperparameters unless stated otherwise. In all methods, the learning rate is set to 1e-5 for SFT and 1e-5 for DPO to ensure consistency.
- (2) The SFT dataset contains 10,000 fixed samples, so the number of training steps remains the same. However, DPO training steps vary across methods and models. We select 10,000 tasks and allow multi-round interactions with the environment. In each round, we sample ten outputs at a temperature of 0.7 to construct the DPO dataset. This results in a theoretical maximum of 10,000 tasks × number of interaction rounds, but not all rounds yield valid DPO samples. In practice, the number of DPO preference pairs ranges from 10,000 to 20,000. For example, in Relign training, Toolllama, Llama3.1, and Qwen2.5 yield 17,000, 19,000, and 14,000 pairs, respectively. The final training steps depend on the number of DPO pairs, using a batch size of 32 for two epochs.
- (3) We did not perform a grid search but used standard values (e.g., 1e-5 for SFT, 1e-6 for DPO) to ensure stable fine-tuning, focusing on training data synthesis rather than hyperparameter optimization.
- (4) Relign follows a two-stage training: SFT first, then DPO on the SFT model. The key distinction between Relign and other methods lies in the base model used for DPO data sampling. In Relign, the DPO training data is sampled from a model that has undergone SFT training, whereas in other methods, it is not. We believe SFT introduces an indecisive action space, which is then refined during DPO. Without SFT, the model generates fewer indecisive actions, leading to suboptimal DPO training. This explains Relign’s superior performance.
- (5) We apologize for any confusion caused. Larger models generally reduce tool hallucinations due to improved general capabilities. However, increasing training data slightly raises hallucinations, likely due to overfitting on ToolBench data, which consists of well-structured tool interaction trajectories and lacks samples that handle edge cases (e.g., failure scenarios where a task cannot be completed). Thus the model learns correct tool calls but struggles with failure scenarios, defaulting to excessive hallucinated tool calls instead of invoking indecisive actions—precisely the issue Relign aims to address.
- (6) We apologize for the typo. Our intended meaning was that larger models significantly reduce both tool hallucinations and excessive tool calls, leading to more reliable and efficient tool use.
# Re. to Weakness #2
Thank you for the references. We highlight the key differences below:
### (1) Systematic Definition and Real-World Evaluation
The cited work lacks a **comprehensive taxonomy** of tool hallucinations and **omits key types**, such as excessive tool calls (tool timing hallucinations) and fabricated tool parameters (tool content hallucinations). Additionally, its evaluation relies on sub-tasks like classification and description, which **do not reflect real-world tool use**. A model’s competence in these tasks does not necessarily prevent hallucinations during actual tool interactions. In contrast, our evaluation occurs within real-world tool-use scenarios, ensuring direct assessment of hallucinations in execution. We also introduce an **LLM-based automated evaluation** to systematically assess these hallucination types.
### (2) Beyond Evaluation: Alignment and Training
While the cited work focuses on dataset construction, **it does not address model training or alignment**. Our work goes further by proposing a framework and data synthesis approach that effectively reduces tool hallucinations and improves tool-use efficiency and reliability.
# Re. to Weakness #3
As shown in the table below, we also evaluate our method on the out-of-domain test set APIBench. We follow the evaluation setup described in [1] (for more details, please refer to [1]). Rather than training on APIBench, we treat each API in the prompt as a function call to assess how well our trained model generalizes to OOD datasets. Experimental results with two types of tool retrieval indicate that Relign improves model performance on other tool-use tasks as well, suggesting that reducing tool hallucinations helps models learn better tool-use strategies (AST represents tool-calling accuracy).
|Method| HuggingFace Hallu. | HuggingFace AST | TorchHub Hallu. | TorchHub AST | Tensorhub Hallu. | Tensorhub AST|
|-|-|-|-|-|-|-|
|Llama3.1 + BM25 |9.7|14.3|11.7|47.8|10.5|40.3|
| + Relign + BM25 |**6.4**|**15.7**|**7.9**|**50.1**|**5.5**|**42.5**|
|Llama3.1 + Oracle |9.3|87.8|10.4|86.1|7.8|89.6|
| + Relign + Oracle |**6.5**|**89.5**|**7.7**|**88.9**|**3.2**|**91.3**|
[1] Toolllm: Facilitating Large Language Models To Master 16000+ Real-World Apis.
# Re. to Q1
Please see the response to Reviewer fZsh’s Existing Metrics for more details. | Summary: This paper addresses tool hallucination in LLMs, where models incorrectly select or misuse tools. It introduces RelyToolBench for evaluation and Relign, a reliability alignment framework that enables LLMs to defer, clarify, or adjust tool use. Using SFT and DPO, Relign reduces hallucinations, improves task reliability, and enhances efficiency. Experiments show Relign outperforms baselines and achieves competitive performance with GPT-4o while reducing unnecessary tool calls.
## update after rebuttal
I’m convinced by the authors' argument for including LLMs in the core framework. However, I don’t agree that Tool Num can serve as a sufficient proxy for efficiency. I also agree with the rest of the authors’ responses and will raise my score by one point.
Claims And Evidence: It is an obvious fact that LLMs can use tools, and the associated issues are well known. This paper categorizes these issues in detail, proposes evaluation metrics, and suggests improvements using SFT and DPO.
Methods And Evaluation Criteria: A significant portion of the evaluation process relies on the judgment of the LLM itself, which makes it somewhat contradictory to assess LLM hallucinations using another LLM.
Theoretical Claims: There are no theoretically advanced descriptions, and no major issues seem to be present.
Experimental Designs Or Analyses: All evaluation metrics (RePR, Tool Hallu, Tool Num, Utility) were newly proposed, which raises the question of whether none of the existing metrics were useful.
Supplementary Material: Human Evaluation results would be better to be in the main section because they suggest new evaluation metric. And the other parts were only about prompts.
Relation To Broader Scientific Literature: There would be various tools that can be used by LLMs in the near future (space travel, airplane, military, etc.).
Essential References Not Discussed: If there had been related works, I would have referred to them to explore recent trends, but unfortunately, there were none.
Other Strengths And Weaknesses: I like papers that define new problems and develop benchmarks or evaluation metrics for them. However, the problem itself seems to have already been widely discussed. Additionally, the introduction of the Indecisive Action Space is likely to reduce time or computational efficiency, but there is no analysis of this aspect.
Other Comments Or Suggestions: The figures and experimental results were well-organized, making them easy to understand.
Questions For Authors: 1. The repeated use of a tool was portrayed negatively, but aren’t there fields where repeated use is necessary to improve accuracy?
2. In the Indecisive Action Space, how is it verified that the newly selected tool is actually correct when the tool is switched?
3. Asking the user for input is not essentially an automated methodology, is it?
4. As mentioned earlier, isn’t there an excessive reliance on LLMs to prevent incorrect generations by LLMs?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your detailed review and valuable feedback. We provide our responses below:
# Re. to Evaluation Criteria & Q4
### (1) Why did we introduce LLM-based evaluation?
Our evaluation process for tool hallucinations consists of both rule-based evaluation and LLM-based evaluation. Rule-based evaluation can determine whether the LLM’s tool calls follow correct syntax. However, a key question arises: **Is a syntactically valid invocation actually non-hallucinatory?** This is why we introduced LLM-based evaluation—only an LLM can assess aspects beyond syntactic correctness and evaluate the content itself. LLM-based evaluation has also been widely used in prior research on hallucination assessment [1-2].
### (2)How do we ensure evaluation accuracy?
In our LLM-based evaluation process, we decompose the assessment into specific evaluation tasks. On the one hand, **these evaluation tasks are significantly simpler than the full tool-calling task**, as making a critical judgment is easier than generating a valid action. On the other hand, we design **dedicated evaluation prompts** tailored to each evaluation task, guiding the LLM to assess various aspects in a structured manner. Furthermore, we explicitly **instruct the LLM to output unsure when it is uncertain**, rather than forcing it to provide a definitive evaluation result (as shown in Table 3 in the appendix).
In summary, we believe that using LLMs for automated evaluation is reasonable, and our human evaluation results further validate its accuracy.
[1] FACTSCORE: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation.
[2] SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models.
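The "output unsure when uncertain" instruction described above implies that the judge's free-form verdict must be mapped to labels defensively. The sketch below is a hypothetical illustration of that parsing step; the label strings and the `parse_judgement` helper are assumptions, not the paper's actual prompt or code.

```python
def parse_judgement(raw: str) -> str:
    """Map a judge model's free-form verdict to one of three labels.
    Defaults to 'unsure' so an ambiguous output never counts as a
    definitive pass/fail verdict."""
    text = raw.strip().lower()
    # Check the negated label first: "no_hallucination" contains
    # "hallucination" as a substring, so order matters here.
    for label in ("no_hallucination", "hallucination", "unsure"):
        if label in text:
            return label
    return "unsure"
```

The defensive default means evaluation errors from malformed judge outputs degrade toward "unsure" rather than toward a false verdict, which is the behavior the rebuttal argues keeps the LLM-based evaluation accurate.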
# Re. to Computational Efficiency
While we did not conduct direct tests on the actual computational efficiency or time reduction, we actually use **Tool Num as an indicator for computational efficiency**. Since the model calls a tool once per interaction round, the average number of tool calls effectively corresponds to inference time and output length. Compared to dynamic and less stable metrics, the number of tool calls remains relatively fixed. Therefore, we primarily use the Tool Num metric to reflect the efficiency gains achieved by our method. As shown in Table 2, our model significantly reduces the average number of tool calls per task (ToolLlama reduces from 3.7 to 0.9, Llama3.1 from 2.2 to 1.5, and Qwen2.5 from 2.4 to 1.7).
# Re. to Existing Metrics
Our RePR metric is a refined version of Pass Rate, as we found that some tasks in the original pass rate metric exhibited result hallucinations. Therefore, we believe the pass rate has certain inaccuracies and did not include its results in our paper. As shown in the table below, we have supplemented some of the original Pass Rate results. The results indicate that RePR is lower than Pass Rate, and the gap is more pronounced in base models without Relign alignment. This suggests that unaligned models tend to produce more tool hallucinations, which in turn mislead the final results. Moreover, whether using Pass Rate or RePR, our Relign framework consistently improves task success rates. We will include the full results table in future versions of our paper for reference.
|Method|I1 PR/RePR|I2 PR/RePR|I3 PR/RePR| Overall PR/RePR|
|-|-|-|-|-|
|Toolllama |76.8/64.2|77.0/64.4|72.3/58.9|75.4/62.5|
| + Relign |77.1/68.2|77.3/68.3|76.2/71.0|76.9/69.1|
|Llama3.1 |83.0/68.5|84.8/66.1|80.1/61.3|82.6/65.3|
| + Relign |87.0/80.5|89.2/75.8|86.9/75.4|87.7/77.2|
|Qwen2.5 |86.3/71.5|84.3/66.2|80.6/57.7|83.7/65.1|
| + Relign |87.6/77.0|87.5/69.0|85.5/72.9|86.9/73.0|
# Re. to Q1
In some special cases, repeated tool calls may be beneficial (e.g., when a tool fails due to high latency). However, our evaluation environment is based on StableToolBench, which **ensures that the tool environment remains stable**. In this scenario, repeated tool calls are typically an **abnormal behavior** of the model. Besides, in our specific evaluation of repeated tool calls, a tool call is only considered a hallucination if both the tool call itself and its environmental feedback are identical. This approach can partially address your concern.
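The stated rule — a repeated call counts as a hallucination only when both the tool call and its environmental feedback are identical — could be sketched as follows. This is a hypothetical helper under that stated rule, not the benchmark's actual code.

```python
def count_repeated_call_hallucinations(trace):
    """trace: list of (tool_call_str, env_feedback_str) tuples, one per
    interaction round. A repeat is flagged only when BOTH the call and
    its feedback match an earlier round exactly."""
    seen = set()
    hallucinated = 0
    for call, feedback in trace:
        key = (call, feedback)
        if key in seen:
            hallucinated += 1
        else:
            seen.add(key)
    return hallucinated
```

Under this rule, re-issuing the same call after different feedback (e.g. a transient failure) is not flagged, which addresses the reviewer's point about legitimately repeated tool use.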
# Re. to Q2 and Q3
In the Indecisive Action Space, ChangeTool and TalkToUser are designed as **two special termination functions**. When the model invokes either of these functions, the tool invocation process is immediately terminated, as the model has determined that the task cannot be completed in the current environment. As a result, we did not design specific pipelines for tool switching (which would require a tool retriever) or for user interaction (which would require a user simulator). Instead, we directly exit the process upon encountering these exceptional actions. We will explore how to better design task execution workflows following exceptional actions in future work.
Claims And Evidence: The paper dives deeper into source of hallucinations in language model outputs when using tools. They find that models can not only generate invalid tool invocations but also not realize how many or which tools to use. The authors validate these on RelyToolBench and have discussed the construction of RelyToolBench. A concern here is that RelyToolBench is artificially constructed by purposely introducing errors, such as partial tool information in the prompt, and performs evaluations on a distorted distribution. The real-world generalization of the findings are unclear.
The newly proposed training method, "Relign", follows the typical post-training recipe for large language models [1, 2], i.e., SFT followed by DPO. They report a baseline (SFT&DPO) which is not as widely adopted a strategy. So the proposed method is not really new as such, and its relation to this standard recipe has not been discussed.
[1] Grattafiori, Aaron, et al. "The llama 3 herd of models." *arXiv preprint arXiv:2407.21783* (2024).
[2] Mehta, Sachin, et al. "Openelm: An efficient language model family with open training and inference framework." *Workshop on Efficient Systems for Foundation Models II@ ICML2024*. 2024.
Methods And Evaluation Criteria: 1. RePR = Pass rate - task hallucination rate. If task hallucination occurs, I assume that the pass rate will also be zero in that scenario. So, the model is penalized multiple times for failures here. I’m not sure if it makes sense to combine these into a single metric. Why not just view them separately, as they are?
2. SFT&DPO is not a required baseline. Relign should be identified as SFT+DPO with the relevant references identified.
Theoretical Claims: N.A
Experimental Designs Or Analyses: 1. The paper identified hallucination issues by creating a new benchmark. This benchmark is synthetically generated, which is largely OK, but some concerns raised above need to be addressed.
2. See comments on RAG under various headings below.
Supplementary Material: I did not look at the Appendix closely.
Relation To Broader Scientific Literature: The paper is related to the body of work on tool usage with LLMs. It is also related to post-training strategies to reduce hallucinations. The paper dives deep into the mode of hallucinations [1, 2] for tool [3, 4] usage using a new benchmark and applies various post-training strategies [5, 6] to reduce hallucinations.
[1] Varshney, Neeraj, et al. "A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation." arXiv preprint arXiv:2307.03987 (2023).
[2] Jain, Nihal, et al. "On Mitigating Code LLM Hallucinations with API Documentation." arXiv preprint arXiv:2407.09726 (2024).
[3] Patil, Shishir G., et al. "Gorilla: Large language model connected with massive apis." Advances in Neural Information Processing Systems 37 (2024): 126544-126565.
[4] Schick, Timo, et al. "Toolformer: Language models can teach themselves to use tools." *Advances in Neural Information Processing Systems* 36 (2023): 68539-68551.
[5] Grattafiori, Aaron, et al. "The llama 3 herd of models." *arXiv preprint arXiv:2407.21783* (2024).
[6] Mehta, Sachin, et al. "Openelm: An efficient language model family with open training and inference framework." *Workshop on Efficient Systems for Foundation Models II@ ICML2024*. 2024.
Essential References Not Discussed: There is a large body of work concerning hallucinations and RAG which has not been adequately discussed in this paper. I think RAG serves as a simple baseline to reduce tool hallucinations and I did not follow the arguments to perform the RAG experiment for this paper. If the RAG experiment is not possible, at least the paper should recommend the reader about when to perform RAG and when to use their method.
Other Strengths And Weaknesses: The main novelty in the paper arises from the way the data is constructed and analyses around sources of hallucinations. These would serve as meaningful insights to the community.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does RAG compare with the methods discussed to improve the hallucination rates? I understand a paper cannot have all experiments but I think RAG is simple and popular enough to concretely justify the need for post-training over RAG to reduce hallucinations. There are pros/cons of each and I think the paper should highlight these for the application(s) being discussed. When should someone follow the research of this work and not RAG methods for their work? Questions like these remain unanswered in the paper.
2. RelyToolBench and the preference datasets were synthetically created. How do you justify the use of the training data and the evaluations to reflect real-world scenarios? Can you comment on this distribution shift?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your detailed review and valuable feedback. We provide our responses below:
# Response about RePR
We apologize for any confusion caused by the description of the metric. In fact, **the model is not penalized multiple times**. RePR represents the proportion of tasks that were originally considered as successfully passed but were later verified to be free of result hallucinations through our hallucination detection. The deducted portion consists of tasks that were initially deemed successful but were later identified as containing result hallucinations. Importantly, tasks with a pass rate of 0 are not considered as having result hallucinations.
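Under this clarified definition, RePR could be computed as in the minimal sketch below. The `reliable_pass_rate` helper and the result-record fields are hypothetical illustrations, not the authors' evaluation code.

```python
def reliable_pass_rate(results):
    """results: one dict per task with 'passed' (bool) and
    'result_hallucination' (bool, meaningful only for passed tasks).
    RePR = fraction of tasks that both passed AND were verified free
    of result hallucinations; failed tasks are counted once, never
    additionally penalized for hallucination."""
    if not results:
        return 0.0
    reliable = sum(1 for r in results
                   if r["passed"] and not r["result_hallucination"])
    return reliable / len(results)
```

Note that a task with `passed=False` lowers RePR exactly as it lowers the plain pass rate; the hallucination check only ever demotes tasks that would otherwise have counted as successes.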
# Response about RAG and Q1
### (1) Relevance of Our Work to RAG
In our tool-based tasks, we provide the model with relevant information about each tool through the api description field in the prompts, as shown in Table 7 in the appendix. This includes descriptions of tool functionalities, tool parameters, and example values of parameters. **This setup can be considered a special form of RAG tailored to tool documentation.**
Since the APIs in our dataset do not appear in the model’s training data, feeding tool API documentation into the model is essential; otherwise, the model would not be able to complete the tasks.
### (2) Why RAG is Not Suitable for Tool Hallucination Scenarios
As mentioned above, we already provide the model with tool API usage instructions in the prompt, yet the model still exhibits different types of tool hallucinations during task execution. This could be due to the relatively **limited knowledge** contained in RAG-based knowledge retrieval.
Another possible way to construct a RAG knowledge base is to generate and retrieve real usage examples for specific tool APIs, similar to in-context learning. However, in our inference setting, many of the APIs we encounter are not present in the training database. This makes it difficult to pre-prepare corresponding demonstration examples. A potential alternative would be to synthetically generate a large number of tool invocation examples and then filter out high-quality examples. However, this approach does not generalize well to large-scale real-world tool-use scenarios and is beyond the scope of this paper.
### (3) Discussion on RAG and Post-Training for Addressing Hallucination
**a. Challenges in Constructing RAG Knowledge Bases**
RAG relies on external knowledge bases, which can be difficult to construct. If the knowledge base from the training data does not sufficiently cover the inference-time data, applying RAG becomes challenging.
**b. Fundamental Differences Between RAG and Post-Training**
RAG is designed to mitigate hallucinations **by introducing external knowledge at inference time**. Unlike post-training methods, RAG does not update the model’s parameters, making it relatively lightweight. However, RAG typically retrieves specific pieces of information, which at most allows the model to mimic certain patterns, similar to in-context learning.
In contrast, post-training methods can deeply **alter the model’s cognition and capabilities through training**. When hallucinations stem from the model’s inherent weaknesses in understanding and interacting with the tool environment, RAG combined with prompting may be insufficient to mitigate these hallucinations. This is particularly relevant for the tool hallucinations observed in our experiments. In such cases, post-training is necessary to enable the model to better learn and model the tool interaction environment.
# Response to Q2
First, the RelyToolBench dataset includes both the original subset from StableToolBench and additional subsets specifically designed to evaluate particular hallucinations, namely the Unmatched Tools and Miss Parameter subsets. It is important to emphasize that the **original subset closely resembles real-world applications**. However, it also contains four different types of hallucinations as shown in Figure 6 in our paper, and both its tasks and APIs are relatively realistic. This demonstrates that **similar hallucinations occur in real-world scenarios**, not just in our two synthetically constructed subsets.
Additionally, as described in our response to Reviewer fZsh’s Weakness#3, we conducted **evaluations on an out-of-distribution test set APIBench**. We observed that models trained with our Relign method did not show a decline in tool-calling capabilities; in fact, they even exhibited some improvements. This provides further evidence **supporting the generalization ability** of our approach.
# Response to Scientific Literature
Thank you for providing the references. We will cite them in future versions of our paper and supplement our discussion accordingly.
We greatly appreciate your thoughtful feedback and look forward to your response. | null | null | null | null | null | null |
Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning | Accept (poster) | Summary: This paper studies how to optimize a single static Spectral Risk Measure (SRM) for an RL agent, leveraging a distributional RL framework. By extending the state to track discounted returns and employing quantile-based updates, the authors propose a two-tier algorithmic approach:
Inner Optimization: Fix an approximate function $h$ related to the SRM, then learn a greedy policy in the extended MDP.
Outer Optimization: Update $h$ using a closed-form solution to better match the (new) return distribution of the initial state.
The authors highlight that using a single SRM generalizes beyond basic CVaR-based methods, potentially offering more flexibility in risk profiles. Experiments on algorithmic trading tasks and a Windy Lunar Lander environment suggest that their approach can produce policies that better align with static SRMs compared to baselines such as standard CVaR-based methods or purely risk-neutral approaches.
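To make the two-tier description concrete, the static risk objective involved is a spectrum-weighted average of return quantiles, which QR-DQN-style quantile atoms approximate directly. The following sketch is illustrative only (names and setup are our own, not the paper's code):

```python
import numpy as np

def srm_from_atoms(atoms, spectrum):
    """Spectral risk measure of a return distribution represented by N
    sorted quantile atoms at midpoint levels tau_i = (i + 0.5) / N
    (the QR-DQN parameterization): the phi-weighted average of quantiles."""
    n = len(atoms)
    taus = (np.arange(n) + 0.5) / n
    w = spectrum(taus)                       # phi >= 0, non-increasing
    return float(np.dot(w / w.sum(), np.sort(atoms)))

atoms = np.sort(np.random.default_rng(0).normal(size=1000))

# CVaR_alpha is the SRM with spectrum phi(u) = (1/alpha) * 1[u <= alpha],
# i.e. the mean of the worst alpha-fraction of returns:
alpha = 0.25
cvar = srm_from_atoms(atoms, lambda u: (u <= alpha) / alpha)
tail_mean = atoms[: int(alpha * len(atoms))].mean()

# The risk-neutral expectation is the SRM with the flat spectrum phi == 1:
mean = srm_from_atoms(atoms, lambda u: np.ones_like(u))
```

Choosing a non-increasing spectrum puts more weight on the low-return quantiles, which is exactly what lets a single SRM interpolate between CVaR-like and risk-neutral behavior.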
Claims And Evidence: 1.
- Claim: The paper argued that it can directly optimize a spectral risk measure (broader than CVaR) through a distributional approach, thereby obtaining a single “best” policy according to that measure.
- Evidence: Empirical demonstrations on trading tasks show that the learned distribution of returns indeed improves under the selected SRM.
2.
- Claim: The method achieves a “monotonic” or “non-decreasing lower bound” style improvement in the outer loop.
- Evidence: The authors prove a type of asymptotic convergence result for the policy improvement step (inner optimization). However, they do not provide finite-step performance bounds or sample-complexity analysis.
Methods And Evaluation Criteria: Methods:
- The extended MDP approach (tracking discounted returns in the state) is standard for risk-based RL.
- The inner optimization uses quantile RL updates (in the style of QR-DQN).
- The outer optimization updates the SRM parameter $h$ in closed form (from the Kusuoka representation).
Evaluation:
Experiments focus on the resulting return distributions of the learned policies, both visually and via risk metrics (CVaR, exponential risk, etc.). The main shortcoming is that the tasks are relatively toy-scale or moderately sized, and the baselines are fairly limited (primarily QR-DQN or variations of CVaR). For a final measure of practical performance, more rigorous tasks and additional metrics (especially in trading, e.g., Sharpe ratio, drawdown) would strengthen credibility.
Theoretical Claims: 1. Asymptotic Convergence of Inner Loop: The paper shows that for the inner loop (policy learning under a fixed $h$), the linearization of the sub-risk measure allows for a standard policy-improvement argument in distributional RL. This is reminiscent of known results (e.g., Kim et al. 2023, which obtains a similar linearization in a constrained RL setting). Furthermore, the paper only proves a monotonic lower bound in the tabular or exact distribution scenario. Under function approximation, or in finite-sample contexts, rigorous finite-time or non-asymptotic analyses are not provided.
2. Monotonic Outer Updates: By iteratively re-estimating $h$ from the newly observed initial-state return distribution, the algorithm is said to climb toward an optimal measure. To the best of my knowledge, this part is novel in risk-aware RL with SRM.
Experimental Designs Or Analyses: 1. Tasks:
- Two trading-like environments (American option, mean-reversion) demonstrate the algorithm’s ability to favor certain parts of the distribution (e.g., different quantile levels).
- Windy Lunar Lander adds complexity but remains a somewhat “toy” environment compared to many large-scale or safety-critical domains.
2. Comparisons:
- Against risk-neutral QR-DQN, a fixed-CVaR version, and “iCVaR” from prior work. Results show each approach yields distinct return distributions, with the SRM-based approach often outdoing purely CVaR or risk-neutral baselines in the chosen metric.
3. Comments:
- The tasks could be considered simple or proof-of-concept. For a deep distributional RL method, it would be insightful to test more safety-critical or complex environments (e.g., from Safety Gymnasium: https://safety-gymnasium.readthedocs.io/en/latest/index.html).
- In the trading tasks, standard finance metrics (mean-variance, Sharpe ratio, maximum drawdown) are absent, making it less clear how these policies compare to typical trading strategy benchmarks.
Supplementary Material: The paper includes additional detail on: The decomposition theorem for coherent risk measures, mathematical proofs of the monotonic updates, and implementation details for the distributional critics.
Relation To Broader Scientific Literature: - Spectral Risk Measures: The approach is consistent with the Kusuoka or supremum representation of coherent risk measures.
- Distributional RL: The method leverages standard quantile approximation (QR-DQN) and is part of a growing body of risk-oriented distributional methods.
Essential References Not Discussed: 1. Kim, Dohyeong, et al. "Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees." Advances in neural information processing systems 2024.
- While Kim et al. use SRM in a constraint setting (safe RL) and this paper uses SRM as a single-objective optimization, the overall methods (bilevel structure, state augmentation) are very similar. Yet the paper does not explicitly discuss or compare its approach to Kim et al.
- Add a dedicated subsection contrasting this method with Kim et al.
- Emphasize where the approaches are methodologically parallel (e.g., the idea of rewriting the SRM using supremum representation; the linearization of risk measure in the inner optimization).
- Pinpoint novel aspects in this paper, such as the closed-form solution in Eq. (6) vs. the optimization-based approach for the supremum in Kim et al. (Eq. 5 in this work).
2. Advanced Distributional RL: If your approach relies on or extends any of the standard distributional algorithms (C51, QR-DQN, IQN, etc.), referencing these helps readers see how your design decisions compare or could be integrated with more sophisticated distributional frameworks (e.g., FQF).
2-1. Dabney, Will, et al. "Implicit quantile networks for distributional reinforcement learning." International conference on machine learning. PMLR, 2018.
- Extends quantile ideas via implicit quantile networks (IQN).
2-2. Yang, Derek, et al. "Fully parameterized quantile function for distributional reinforcement learning." Advances in neural information processing systems 32 (2019).
- Improves upon quantile-based DRL by parameterizing quantiles themselves as learnable functions.
Other Strengths And Weaknesses: 1. Strengths:
- Demonstrates clear expansions beyond pure CVaR to a flexible set of SRMs.
- Good interpretability argument for static risk measures, using a decomposition approach.
- Shows how distributional RL can incorporate these measures with (arguably) minimal overhead.
2. Weaknesses:
- Limited comparison to Kim et al. despite using very similar core techniques.
- Convergence analysis does not address finite-sample complexities.
- Single-objective viewpoint: The final converged measure is still a single SRM, which might not serve multiple stakeholder or multi-risk contexts.
- Experiments remain modest and do not thoroughly test the approach in safety-critical or large-scale RL scenarios.
Other Comments Or Suggestions: - Multi-risk or multi-constraint extension: Address the possibility of combining multiple SRMs or constraints to handle different aspects of risk.
- Deeper experimental demonstration (including standard trading metrics, more complicated tasks) would enhance confidence in the method’s real-world applicability.
Questions For Authors: 1. Could you explicitly discuss how your approach (bilevel with a closed-form outer step) differs in practice from Kim et al.’s primal-dual approach, which also uses state augmentation and spectral risk?
[Kim et al.] Kim, Dohyeong, et al. "Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees." Advances in neural information processing systems 2024.
2. Is there a straightforward way to bound approximation errors or obtain a sample-complexity result for the inner–outer loops?
3. You mention that the risk measure $h$ can “change” mid-training (mentioned in Introduction), but eventually converges to a single SRM. Could you clarify whether, at convergence, the agent is strictly tied to one final risk measure? In other words, do you expect any meaningful “variety” in the agent’s risk preference by the end of training, or is it effectively just a single risk measure?
3-1. If a policy is optimized for one particular SRM, it may perform poorly under different risk criteria (e.g., a strong CVaR policy but weak expected return). Have you considered how users might adapt your method if they wish to account for multiple forms of risk or multiple stakeholder objectives simultaneously?
4. How does this policy compare when measured by classical trading benchmarks such as mean–variance (Markowitz), the Sharpe ratio, or maximum drawdown? Could you include these metrics to show practical relevance?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for recognizing the strengths of our work. We address your questions below.
**Q1:**
We thank the reviewer for this insightful comment. We agree that an explicit comparison with [1] would improve the clarity of our contribution. While both works share methodological components, such as the use of SRMs, a bilevel structure, and state augmentation, they address fundamentally different problems and employ distinct technical approaches.
[1] focuses on risk-constrained RL, where SRMs are used to define constraints. Their method uses a primal-dual optimization approach, requiring gradient-based updates for both the policy and the dual variables associated with the risk constraint. In contrast, our paper addresses risk-sensitive RL, where the SRM is used as the objective function. Our method leverages the SRM’s supremum representation to derive a closed-form outer update (Eq. 6), avoiding a computationally expensive optimization step.
Practically, this leads to three key distinctions:
- **Computational Efficiency:** Our method eliminates costly outer-loop optimization over dual variables. Instead, the outer update step uses the learned return distribution in the closed-form solution.
- **Theoretical Differences:** Unlike the actor-critic framework in [1], our approach is value-based, requiring a different convergence analysis.
- **Interpretability:** We provide insight into the agent’s intermediate risk preferences through SRM decomposition, an aspect not addressed in [1].
We will include a dedicated subsection summarizing these differences and explicitly reference [1] in the revised manuscript.
**Q2:**
We appreciate this thoughtful comment. Our convergence analysis focuses on asymptotic behavior, demonstrating that QR-SRM monotonically improves a lower bound on the SRM objective (Theorem 4.1). While our work emphasizes practicality and interpretability, other studies have explored the theoretical foundations of spectral risk measures. For instance, [2,3] provide regret and sample complexity bounds for SRM in distributional RL. Additionally, the approximation error for quantile TD algorithms, discussed in Proposition 19 of [4], is directly relevant to our inner optimization step, as we employ the same distributional Bellman operator in the augmented MDP.
**Q3:**
Thank you for raising this important point. Our method indeed optimizes the policy with respect to a single, user-specified SRM, and the final policy at convergence is aligned with that specific risk preference. Thus, after convergence, the agent is strictly tied to that SRM. We do not claim that the learned policy generalizes across all risk criteria, and we agree that a policy optimized for one SRM (e.g., CVaR) may underperform under others (e.g., expected value).
However, the mention of changing risk preferences over time (not during training) refers to a key interpretability contribution of our work: through the decomposition of SRM (Section 5), we are able to analyze the agent's intermediate, state-dependent risk preferences as the return distribution evolves. While the objective remains fixed, the conditional risk levels and weights (e.g., combinations of CVaRs) change across states and over time, providing insight into how the policy’s effective behavior adapts over time. This decomposition does not imply that the algorithm optimizes for multiple SRMs, but rather enhances interpretability within a single SRM objective.
To address multiple risk criteria or stakeholder objectives, we agree this is an interesting and practically relevant direction. One possible extension would involve optimizing for a mixture of SRMs, for example, via multi-objective RL. Some relevant works, such as [5], have explored these directions. Another direction could involve conditioning the policy on the risk parameter. While we do not explore these extensions in this work, we believe our framework provides a strong foundation for such adaptations, and we appreciate the reviewer highlighting this point.
**Q4:**
Thank you for the suggestion. Our work focuses on developing a general RL algorithm based on SRMs, rather than a financial trading strategy. While metrics like mean-variance, Sharpe ratio, and maximum drawdown are important in finance, they are not directly applicable to our benchmark environments. Nonetheless, applying our method to financial data and evaluating it under these metrics is a promising direction for future work.
**References:**
[1] Kim et al. Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees
[2] Bastani et al. Regret Bounds for Risk-Sensitive Reinforcement Learning
[3] Chen et al. Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation
[4] Rowland et al. An Analysis of Quantile Temporal-Difference Learning
[5] Moffaert et al. Risk-sensitivity through multi-objective reinforcement learning | Summary: This paper proposes a distributional RL algorithm for optimizing the Spectral Risk Measures (SRM, with CVar as a special case) of return in discounted MDPs.
Specifically, this paper makes use of a variational representation of SRM proposed by Kusuoka, thus transforming the optimization of SRM into a two-layer optimization problem. The inner-layer optimization problem is almost the same as the one encountered when optimizing CVaR: it amounts to solving for the optimal risk-neutral policy in an MDP whose state is augmented with the current cumulative reward ("stock") and the discount rate (while also requiring estimation of the return distribution). As for the outer-layer optimization problem (i.e., the variational representation), it has an explicit solution expressed in terms of the quantiles of the return distribution.
The paper proves the theoretical convergence of the algorithm (assuming that each step can be solved precisely) and characterizes the temporal decomposition of SRMs for the interpretation of the learned policy.
This paper verifies through extensive experiments that this algorithm can optimize the SRM and outperform the previous risk-neutral and risk-sensitive algorithms.
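For reference, the spectral risk measure discussed here admits the standard quantile-integral form (a textbook definition under the return convention, not quoted from the paper under review):

\begin{equation}
\mathrm{SRM}_{\phi}(X) \;=\; \int_{0}^{1} \phi(u)\, F_X^{-1}(u)\, \mathrm{d}u, \qquad \phi \ge 0 \text{ non-increasing}, \quad \int_{0}^{1} \phi(u)\, \mathrm{d}u = 1,
\end{equation}

where CVaR$_\alpha$ is recovered by $\phi(u) = \frac{1}{\alpha}\mathbf{1}[u \le \alpha]$ and the risk-neutral expectation by $\phi \equiv 1$. The Kusuoka-type variational representation rewrites this integral as a supremum whose maximizer is expressible through the quantiles of $X$, which is what yields the explicit outer-layer solution mentioned above.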
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: This paper extends prior work on optimizing the CVaR [1] of the return of discounted MDPs via distributional RL to optimizing SRMs. (Optimizing SRMs in MDPs was also considered in [2], without using distributional RL.)
[1] Bellemare, M. G., Dabney, W., and Rowland, M. Distributional Reinforcement Learning
[2] Nicole Bauerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes
Essential References Not Discussed: As far as I know, no key references are missed.
Other Strengths And Weaknesses: Strength:
1. The generalization from CVaR to SRM is meaningful, and using distributional RL to obtain the explicit solution of the variational representation of SRM is a nice and natural idea.
2. The paper is well-written and the technical parts are not hard to follow.
3. The paper also has extensive experiments to verify the effectiveness of the proposed algorithm.
Weakness:
1. In the right part of lines 117-132, the paper introduces the class of non-stationary Markov policies $\Pi_M\subset\Pi_H$ (with stock augmentation), but does not state that $\sup_{\pi\in\Pi_H}J(\pi, h)=\sup_{\pi\in\Pi_M}J(\pi, h)$, and in what follows it focuses only on $\sup_{\pi\in\Pi_M}J(\pi, h)$. I know that $\sup_{\pi\in\Pi_H}J(\pi, h)=\sup_{\pi\in\Pi_M}J(\pi, h)$ certainly holds, but it would be better to clarify this in the paper.
2. Why is $h_0$ randomly initialized in Algorithm 1, while in the proof of convergence of the inner problem (Line 649) $h_l$ is assumed to be $\phi(0)$-Lipschitz, a condition not stated anywhere else? As far as I know, for such a distributional value iteration algorithm (the inner optimization problem), the Lipschitz property of the function is necessary to obtain the $\gamma^k$ convergence rate. I think this should be clarified in the paper, e.g., by further explaining Eqn. (6).
3. A minor point: this paper pays too much attention to the technical part. I think it would be better to clarify why generalizing from CVaR to SRM is significant, e.g., by providing and explaining simple, intuitive examples of SRMs other than CVaR (lines 330-337 present some examples of SRMs without enough explanation).
Other Comments Or Suggestions: Line 110 reward->return (or cumulative reward)
Line 165 rho_{\tilde{\xi}} is used before definition.
The notation is misleading: for example, in Eqn (7), the H in $\pi_H$ denotes history, but $h\in\mathcal{H}$ is a concave function, and $\pi_h$ is also used. It would be better to use different letters. Using $\mathcal{S}$ to denote $[G_{min}, G_{max}]$ is also unusual in the RL literature.
Questions For Authors: I have no question
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and for recognizing the strengths of our work. We are glad that you found our paper well-written. We address your questions below.
**Q1:**
Thank you for pointing this out. Theorem 2 of [1] and Theorem 3.1 of [2] provide a detailed discussion on the equivalence of these two supremums. We will clarify this in the revised manuscript.
**Q2:**
We appreciate the reviewer’s careful observation. Indeed, the convergence proof of the inner optimization (Line 649) assumes that $h_l$ is $\phi(0)$-Lipschitz. While Algorithm 1 states that $h_0$ is randomly initialized, we clarify that the required Lipschitz property holds from the beginning.
Specifically, in Algorithm 1, $h_0$ takes the form of Eqn. (6), which utilizes the quantiles of a return distribution and the risk spectrum $\phi$. As detailed in Appendix D, this equation is by construction $\phi(0)$-Lipschitz. Therefore, using the quantiles of a random return distribution ensures that the Lipschitz condition is satisfied for $h_0$.
We acknowledge that this point was not clearly stated in the current version of the paper. In the revision, we will explicitly clarify this in Section 4 and in the algorithm description to avoid confusion.
**Q3:**
As highlighted in our paper’s contributions, SRM provides valuable flexibility for practitioners. For example, Mean-CVaR is widely used in finance, particularly in portfolio management and insurance pricing. While CVaR, as a subclass of SRMs, offers some flexibility through the alpha parameter for adjusting policies, SRM provides even greater flexibility in defining objectives. For instance, WSCVaR allows practitioners to combine multiple CVaR objectives with arbitrary weights.
A key advantage of SRM over CVaR is its adaptability in cases where the ideal risk-sensitive objective is not immediately clear. In certain environments, incorporating risk sensitivity leads to policies that improve both worst-case return and average return, demonstrating that optimizing for risk-sensitive performance does not always come at the cost of lower average returns. By treating the parameters of the risk-sensitive objective as hyperparameters, users can tune them and compare the resulting policies. This flexibility is especially useful in environments where reward models are arbitrarily designed and lack a clear real-world interpretation. For example, while a portfolio manager may have a well-defined objective in a trading environment, such clarity is often absent in environments like Lunar Lander. In these cases, tuning risk objectives as hyperparameters provides a practical solution.
In the experimental section, Table 2 shows that our algorithm with a CVaR objective matches the performance of QR-CVaR from [3], which benefits from stronger convergence guarantees. Furthermore, our algorithm successfully optimizes various risk measures, such as the dual power risk measure and Mean-CVaR, consistently identifying top-performing policies (or policies within one standard deviation of the best-performing ones).
Another major advantage of SRM over CVaR was observed in the Windy Lunar Lander experiment, where our algorithm demonstrated significantly more stable training. Finding CVaR-optimal policies is known to be challenging in risk-sensitive RL literature ([4]), and we observed this difficulty in our experiments. However, our algorithm, especially with a Mean-CVaR objective, exhibited much more stable training and resulted in the top-performing policy with respect to risk-sensitive metrics.
Finally, as noted in our response to reviewer ymbN, this general risk measure was achieved without sacrificing the interpretability of the optimal policy, which was available for static CVaR-optimal policies. Our approach leverages the decomposition of static coherent risk and the distributional return distribution (Section 5), allowing the policy’s risk sensitivity to be continuously monitored when deployed in real-world settings.
The combination of SRM’s flexibility, the convergence guarantees established in our manuscript, and the interpretability tools presented in Section 5 makes SRM an excellent choice for risk-sensitive policy optimization.
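To make the flexibility point concrete: the Mean-CVaR and WSCVaR objectives mentioned above correspond to convex combinations of risk spectra, so they can be evaluated on the same quantile atoms as any single CVaR spectrum. The sketch below is our own illustration (names and setup are hypothetical, not the paper's code):

```python
import numpy as np

def cvar_spectrum(alpha):
    """Risk spectrum of CVaR_alpha: phi(u) = (1/alpha) * 1[u <= alpha]."""
    return lambda u: (u <= alpha) / alpha

def mix(spectra, weights):
    """A convex combination of risk spectra is again a valid risk spectrum."""
    return lambda u: sum(w * s(u) for w, s in zip(weights, spectra))

# Mean-CVaR: lam * (risk-neutral mean) + (1 - lam) * CVaR_alpha.
lam, alpha = 0.5, 0.1
mean_cvar = mix([lambda u: np.ones_like(u), cvar_spectrum(alpha)],
                [lam, 1.0 - lam])

# Evaluate it on quantile atoms of a return distribution (QR-style grid).
atoms = np.sort(np.random.default_rng(2).normal(size=1000))
taus = (np.arange(1000) + 0.5) / 1000
w = mean_cvar(taus)
value = float((w / w.sum()) @ atoms)  # = lam * mean + (1 - lam) * CVaR_alpha
```

Because the mixture remains a valid spectrum, the same inner/outer machinery applies unchanged; only the weighting of quantiles differs.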
**Other:**
We appreciate the reviewer's feedback on the clarity of our notation. To address these concerns, we will revise the notation in the revised manuscript as follows:
- We will use $\boldsymbol{\pi}$ instead of $\boldsymbol{\pi}_{\mathrm{H}}$ to represent the general class of history-dependent policies.
- We will use $\mathcal{Y}$ to denote the interval $[G_{\min}, G_{\max}]$, consistent with the notation used in [1].
**References:**
[1] Bäuerle and Rieder. More Risk-Sensitive Markov Decision Processes
[2] Bastani et al. Regret Bounds for Risk-Sensitive Reinforcement Learning
[3] Bellemare et al. Distributional Reinforcement Learning
[4] Greenberg et al. Efficient Risk-Averse Reinforcement Learning
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. You have addressed all the questions I raised.
I'm sorry that I have an additional question that I forgot to ask before. Since QR-SRM can be regarded as an alternating descent algorithm, and as stated in Theorem 4.1 and Conclusion (iii), QR-SRM does not necessarily converge to the global optimal solution.
Then, is it possible to characterize the optimality gap?
For example, in the simplest case of CVaR, the baseline algorithm QR-CVaR proposed in Bellemare's book can theoretically guarantee convergence to the optimal solution.
In the CVaR case, can we derive theoretical guarantee for QR-SRM?
Since this new question is raised during the discussion stage and it may be difficult to answer, I do not insist that the authors provide a perfect reply within the short time. However, I think it would be beneficial to understand this optimality gap theoretically. For example, you could include some references in the paper that discuss the optimality gap of similar problems (e.g. EM algorithm), and discuss the possible methods and conditions for establishing the optimality gap of QR-SRM.
Overall, I think this is a good paper, I will raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for this nuanced observation and insightful question.
A key property of CVaR that enables convergence to the optimal policy in QR-CVaR is that the information required to find the optimal solution can be summarized in a single variable. In the CVaR case, let $q_\alpha$ denote the $\alpha$-quantile in function $h$. Under this formulation, the greedy action selection (Equation 10) simplifies to the following equation, which aligns with the CVaR greedy policy discussed in [1].
\begin{equation}
a_{G,h}(x, s, c)=\underset{a \in \mathcal{A}}{\arg \max } \mathbb{E}\left[(G(x,s,c,a) - \frac{q_\alpha-s}{c})^{-}\right].
\end{equation}
Note that in this case, we only need to track a single variable, $\frac{q_\alpha-s}{c}$, rather than all three variables $q_\alpha$, $s$, and $c$, which simplifies action selection. In [1], this variable is denoted by $b$.
The significance of using a single variable becomes evident at the end of Lemma 7.26 of [1], where finding the initial $b$ requires searching over all possible values of $b$ in $[G_{\mathrm{MIN}}, G_{\mathrm{MAX}}]$. This step is crucial for proving convergence to the optimal solution for CVaR.
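The role of the scalar $b$ can be made concrete via the Rockafellar–Uryasev-type representation (return convention) $\mathrm{CVaR}_\alpha(X)=\sup_b\{b+\tfrac{1}{\alpha}\mathbb{E}[(X-b)^{-}]\}$, which is maximized at the $\alpha$-quantile; a brute-force search over $b$, as in the lemma's search over $[G_{\mathrm{MIN}}, G_{\mathrm{MAX}}]$, recovers it numerically. A minimal illustrative sketch (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
returns = np.sort(rng.normal(size=10_000))   # stand-in return samples
alpha = 0.1

# Objective of the search over b: f(b) = b + (1/alpha) * E[min(X - b, 0)].
# f is concave and maximized at the alpha-quantile, with max value CVaR_alpha.
grid = np.linspace(returns.min(), returns.max(), 2_001)
vals = np.array([b + np.minimum(returns - b, 0.0).mean() / alpha for b in grid])
b_star = grid[int(np.argmax(vals))]          # ~ the alpha-quantile of X
cvar_via_search = float(vals.max())          # ~ mean of the worst 10% of returns
tail_mean = returns[: int(alpha * len(returns))].mean()
```

For a single CVaR this one-dimensional search is cheap; the point made above is that a general SRM would require searching over one such variable per quantile defining $h$, which is what makes the analogous exhaustive procedure impractical.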
From a theoretical perspective, for SRM, if we extend the state space to include every quantile required to define h, we could perform a similar search. However, this approach is very computationally expensive and impractical. Therefore, we did not adopt this method in our work and instead focused on a more scalable approach that balances theoretical soundness with practical feasibility.
We agree that a more thorough comparison with the theoretical results in [1] would strengthen the foundation for future research on this topic. We will include a dedicated section in the Appendix to address this. In this section, we will also discuss the optimality gap explored in the literature, such as the one for Entropic Value-at-Risk (EVaR) discussed in [2], to provide a broader perspective on the theoretical properties of risk-sensitive objectives.
Thank you again for this excellent question. We also sincerely appreciate you raising your score.
**References:**
[1] Bellemare et al. Distributional Reinforcement Learning
[2] Hau et al. Entropic Risk Optimization in Discounted MDPs | Summary: This work focuses on the risk-sensitive RL, i.e., maximizing the return and managing worst-case scenarios. As distribution RL considers the return distribution, there are several works applying it to the risk-sensitive RL. Extending the widely used risk metric CVaR to Spectral Risk Measures (SRM), this work proposes a novel safe reinforcement learning method with convergence guarantee. Experiments show that the proposed method outperforms existing risk-neutral and risk-sensitive DRL models in various settings.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: roughly
Experimental Designs Or Analyses: yes
Supplementary Material: I roughly skim the appendix.
Relation To Broader Scientific Literature: This work is established on previous risk-sensitive RL methods based on CVaR and introduces Spectral Risk Measures (SRM) as a more general metric.
Essential References Not Discussed: As this work mainly focuses on risk-sensitive RL, several closely related works are not discussed, such as those on CVaR [1-4], EVaR [5], and so on.
Reference:
[1] Risk-sensitive and robust decision-making: a cvar optimization approach
[2] Towards safe reinforcement learning via constraining conditional value-at-risk
[3] CVaR-constrained policy optimization for safe reinforcement learning
[4] Risk-sensitive reward-free reinforcement learning with cvar
[5] Risk-sensitive reinforcement learning via Entropic-VaR optimization
Other Strengths And Weaknesses: Strength:
- Extending CVaR to the more general Spectral Risk Measures (SRM) is novel in risk-sensitive RL
- The proposed method is novel, with solid proofs.
Weakness:
- Different methods in Table 2 seem to perform better on different indicators. Are there any insights into this phenomenon?
- The experimental scenarios are relatively simple. Can the proposed method be applied to more complex tasks, such as continuous robot control?
- Minor: the spacing between rows looks very tight in some places and needs to be adjusted
Other Comments Or Suggestions: N/A
Questions For Authors: See weaknesses above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and for recognizing the strengths of our work. We are glad that you found our proposed method novel. We address your questions below.
**Missing References:**
Thank you for highlighting the missing references. The first reference is already discussed in the paper, but we will ensure the remaining references are added in the updated version.
**Strengths and Weaknesses:**
Thank you for recognizing the strengths of our work. We would like to emphasize that extending static CVaR to the more general static SRM is just one of our contributions. Another significant contribution is the interpretation of the learned policy through the decomposition of static coherent risk measures and the distribution of returns in our algorithm, as discussed in Section 5. Unlike existing works in the risk-sensitive RL literature, our approach introduces a decomposition of risk preference that enables tracking the agent's intermediate, state-dependent risk preferences. This added interpretability is a unique feature of our method, allowing for continuous monitoring of the policy's behavior. If the policy ever diverges from the user's preferences, a new policy can be trained to realign with those preferences.
**Q1:**
While different methods in Table 2 excel on different indicators, our proposed algorithm (QR-SRM) consistently ranks as the top-performing or within one standard deviation of the top-performing algorithms across all objectives. These minor variations can be attributed to factors such as the inherent stochasticity of the environment and the use of function approximation for value functions. We will highlight the performance of our model more clearly in the revised manuscript to further demonstrate its strong performance.
**Q2:**
As mentioned in the conclusion section, our value-based method is designed for environments with discrete action spaces. Extending our algorithm to actor-critic methods would enable its application in environments with continuous action spaces. In fact, addressing the continuous control problem is the focus of our upcoming work.
**Q3:**
Thank you for bringing this to our attention. We will address this issue in the revised draft.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and the other reviews, and I maintain my score: this paper is solid as well as novel.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal and the discussion. We truly appreciate your thoughtful engagement and for recognizing our work as solid and novel. | null | null | null | null | null | null | null | null |
Broadband Ground Motion Synthesis by Diffusion Model with Minimal Condition | Accept (poster) | Summary: This paper proposes a High-fidelity Earthquake Groundmotion Generation System (HEGGS), which is a new solution for generating realistic earthquake-induced ground motion waveforms. The major contributions are: 1) a new benchmark using openly available seismic datasets, with paired observed waveforms of the source earthquake, time-aligned to the earthquake origin time; 2) a new seismic waveform generation model based on LDM that requires a bare-minimum set of conditional information; 3) an end-to-end training architecture with supervision on spectrograms and an ACM module to improve the performance; 4) sufficient experiments on earthquake waveform generation and an ablation study.
Claims And Evidence: Yes, I think most of the claims in this submission are supported by the experiments.
Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria make sense for the earthquake waveform generation problem.
Theoretical Claims: No proofs for theoretical claims need to be checked as it is an application-driven submission.
Experimental Designs Or Analyses: This submission provides experiments on the phase arrival times, similarity measures, GMPE analysis, and Qualitative Evaluation to prove the effectiveness of the proposed methods. There are also sufficient ablation studies to prove the performance of each key module.
Supplementary Material: Yes, I have reviewed the major parts of the SM.
Relation To Broader Scientific Literature: Not related to the broader scientific literature.
Essential References Not Discussed: From my point of view, this submission has cited most related works that are essential to understanding the key contributions.
Other Strengths And Weaknesses: Some weaknesses:
1) The motivation for using a diffusion model is not clearly explained. The paper mentions that previous GAN-based models could not generate sufficiently realistic seismic waveforms with minimal conditional information (as they require seismological properties of the source earthquake, such as the focal mechanism or local geological properties of the observation point), but it does not provide a thorough analysis of why the diffusion model is able to address this limitation.
2) The diffusion process presented in this paper differs from traditional diffusion processes and requires further clarification. Based on Figure 1, it appears that the paper adds noise to $z^{src} $, then uses a U-Net to simultaneously perform the latent transform function and the denoising model to predict $z^{tgt}$. This differs from traditional diffusion models, which typically involve a single trajectory. In conventional diffusion models, noise is added to $z^{tgt}$ and then denoised, with $z^{src} $ serving as a condition to guide the denoising process of $z^{tgt}$. I believe the authors need to provide a more detailed explanation of why they add noise to $z^{src}$ and the potential advantages of this approach.
3) There is some ambiguity regarding the inference process in the scenario without $W^{src} $. From the Methods section, it is unclear how $x_T^{\text{tgt}} $ should be initialized in the absence of $W^{src} $. I would recommend that the authors provide a comprehensive description of the overall training and inference algorithms to clarify this point.
Other Comments Or Suggestions: 1. From Figure 4, it seems that the generated images are blurred compared to the ground truth. Could you please clarify why this happen?
2. As the ACM module helps to improve the quantitative results a lot, I wonder whether it is possible to add the ACM module to the VAE pretraining of the LDM, so that you may obtain a good pre-trained encoder and good performance from the classical LDM?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive feedback. Below, we address the key concerns raised.
> Motivation for using a diffusion model
In seismic synthesis, high fidelity is the most important and desirable property. However, waveforms in the high-frequency range (>1 Hz) are very hard to generate, especially without local geological features. Prior works, mostly GAN-based, failed to achieve high fidelity and were very hard to train on private data due to extremely unstable training.
Conversely, diffusion models have demonstrated superior stability and high-fidelity synthesis in vision-based conditional generation, attributes that we leverage for seismic applications.
Unfortunately, vanilla Latent Diffusion Models (LDMs) alone were insufficient for our task; our methodological innovations were crucial in making them effective for seismic waveform generation.
> Clarification of diffusion process in HEGGS
For better readability, we provide a rendered PDF containing algorithm blocks in [Anonymous Github](https://anonymous.4open.science/r/Seismic-waveforms-4630). The two-page PDF, showcasing what we will add to the appendix to address your feedback, also contains some motivations and considerations summarized as a remark. We provide a highlight of the PDF to address concerns you raised as follows:
- HEGGS training adds noise to $Z^{src}$ rather than $Z^{tgt}$, since the real observation $W^{src}$ itself is very noisy and we wanted to provide robustness against this noise. Note that this noise clearly differs from Gaussian noise, since it is often caused by human activity.
- Also, we emphasize that our neural network $m_\theta$ is a composition of $\eta$ and $x_\theta$. Since we defined $\eta$ as a transformation which maps the latent $z_t^{src}$, not $Z^{src}$, to the target latent $z_t^{tgt}$, adding noise to $Z^{src}$ is required for theoretical consistency.
- For the generation scenario without $W^{src}$, we set $z_T^{tgt}$ to Gaussian noise, as conventional diffusion models do. Due to the property of $\eta$ in Eq. (6), $m_\theta$ becomes equal to $x_\theta$, and hence the trained model works identically to a conventional diffusion model's reverse process. We also provide pseudocode for generation in LaTeX form:
```latex
\usepackage{algorithm,algorithmic}
\begin{algorithm}[H]
\caption{Generation}
\label{alg:inference}
\begin{algorithmic}
\STATE {\bfseries Input:} Diffusion steps $T$, condition vector $\vec{c}_{tgt}$, source waveform $W^{src}$ (optional)
\IF{$W^{src}$ is given}
\STATE convert $W^{src}$ to spectrogram $X^{src}$
\STATE $z_T=\mathcal{E}_{AE}(X^{src})$
\ELSE
\STATE sample $z_T\sim\mathcal{N}(0,1)$
\ENDIF
\FOR{$t=T,\cdots,1$}
\STATE sample $\mathbf{z}\sim\mathcal{N}(0,1)$
\STATE compute $\tilde{z} = \mathbf{m}_\theta(z_t,\vec{c}_{tgt},t)$
\STATE compute $z_{t-1}=\tilde{\mu}_t(z_{t},\tilde{z})+\sqrt{\tilde{\beta}_t}\mathbf{z}$ (Eq. \ref{eqn:diffusion_backward})
\ENDFOR
\STATE $X^{tgt} = \mathcal{D}_{AE}(z_0)$
\STATE Convert $X^{tgt}$ to waveform $W^{tgt}$
\STATE {\bfseries Return:} $W^{tgt}$
\end{algorithmic}
\end{algorithm}
```
> Generated images are blurred
The relative blurriness observed in Figure 4 arises from issues with both the real observations and the diffusion model.
Real seismic observations often contain local noise, which is hard to synthesize without specialized methods derived from long-term noise observations.
Seismologically, the most fundamental fidelity criteria for an earthquake "event" signal concern the P/S phase arrival times and the magnitude decay pattern over wave propagation distance, which are the most important aspects influencing earthquake location and intensity identification.
> Integrating ACM into VAE pretraining
| Model | P_MAE (s) | S_MAE (s)| _env.corr_|
|--|--|--|--|
|LDM*| 1.1142 | 1.7294 | 0.6932 |
|LDM + paired data* | 0.5633 | 0.7808 | 0.7726|
|LDM + paired data + end-to-end-train| 0.8014| 1.5367| 0.6239|
|LDM + paired data + end-to-end-train + ACM (HEGGS)| 0.4760| 0.5476| 0.8187|
|**LDM + ACM**| 1.1131 | 1.6372 | 0.6981|
|**LDM + paired data+ ACM**| 0.7748 | 0.9402 | 0.7965 |
(*): Results with normalized waveforms.
We conducted an additional experiment with two further ablation recipes, **LDM + ACM** and **LDM + paired data + ACM**, to address your concern. The results are listed in the last two rows of the table above, which extends Table 3 of the manuscript.
ACM makes the LDM trainable with unnormalized waveforms and yields much better fidelity than prior GAN-based methods, but a significant performance gap remains compared to HEGGS.
The pair-exploiting training shows a significant improvement, but is still not better than HEGGS. Possibly, the VAE pretraining fails to capture features that are important for correct synthesis.
Claims And Evidence: This reviewer is not deeply familiar with the literature on ground motion synthesis. However, the formulation of the problem using minimal conditioning information (e.g., meta-information about the target location and the observed signal from the source) appears reasonable and well-motivated. No apparent issues were identified with the claims presented in the paper.
Methods And Evaluation Criteria: I may not be the best candidate to assess this aspect in depth. However, after reviewing several recent works cited in the paper, I found the evaluation criteria to be consistent with existing approaches and saw no obvious issues.
Theoretical Claims: Please refer to the “Claims And Evidence” section.
Experimental Designs Or Analyses: The experimental designs appear valid overall. However, one minor concern is that while most of the comparison results show a notable gap between the proposed method and the baselines, the scale of the metrics can vary significantly depending on the magnitude of the ground truth, probably except for PSNR. This suggests that, in certain scenarios, the observed gap may be less meaningful than it appears, potentially making the improvement marginal in practice. In this regard, incorporating more qualitative comparisons could provide stronger support for the validity of the proposed method.
Supplementary Material: I checked additional comparison results in Section G.
Relation To Broader Scientific Literature: Diffusion models have become a widely adopted framework for generative modeling across various domains. This work extends their application to the domain of ground motion synthesis, exploring their potential in this context. By leveraging the strengths of diffusion models, the paper contributes to the growing body of research that applies generative techniques to scientific and engineering problems, particularly in seismology.
Essential References Not Discussed: I’m not aware of such references.
Other Strengths And Weaknesses: Seismological data analysis is both an interesting and crucial task. I agree that generative modeling of such data has the potential to contribute to disaster preparedness and mitigation, ultimately helping to reduce loss of life.
Other Comments Or Suggestions: Could the authors provide additional qualitative comparison results, including more examples from the baseline methods?
Questions For Authors: I believe the previous discussions have already sufficiently addressed my concern.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback. Below, we address the main concerns raised.
> the scale of the metrics can vary significantly depending on the magnitude
In our implementation, each metric includes a waveform normalization step by default. Here are the details:
- P/S phase arrival: We use the EQTransformer (EQT) model (Mousavi et al., 2020), which inherently includes normalization in its preprocessing steps.
- Envelope correlation: We utilize the ObsPy implementation (https://docs.obspy.org/packages/autogen/obspy.signal.cross_correlation.correlate.html), which applies normalization by default.
- SNR and PSNR: Both metrics are inherently scale-invariant.
- Spectrogram MSE: Our comparisons are performed at the log-spectrogram level, where normalization is applied prior to computing differences.
Given these normalization strategies, our reported improvements remain meaningful and are not artificially amplified by variations in signal magnitude.
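To make the scale-invariance argument concrete, here is a minimal, hypothetical sketch (not the authors' pipeline or ObsPy's implementation) showing that a zero-lag normalized cross-correlation is unaffected by amplitude scaling of the waveform:

```python
import numpy as np

def normalized_correlation(a, b):
    # Zero-lag normalized cross-correlation: both signals are divided by
    # their L2 norm, so any positive amplitude scaling cancels out.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)))

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

print(normalized_correlation(x, x))         # 1.0 (up to float rounding)
print(normalized_correlation(x, 10.0 * x))  # also 1.0: scale-invariant
```

Because the normalization divides out signal magnitude, a comparison under this metric reflects waveform shape rather than raw amplitude.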
> potentially making the improvement marginal in practice
Beyond metric scaling, we emphasize that our improvements are practically significant in seismological applications. In earthquake signal analysis, two of the most critical factors are arrival time accuracy and amplitude distribution:
- P/S arrival time accuracy: In the literature on P/S phase picking, even human expert-labeled data exhibit unavoidable errors, with deviations under 0.5 seconds typically considered acceptable. HEGGS is the first deep learning synthesis approach to achieve this level of accuracy, making it the most precise generative model for earthquake waveform synthesis to date.
- Amplitude Distribution: Due to unknown geological features and interference effects, real-world amplitude variations can differ by a factor of up to 10, even for well-calibrated models. As demonstrated in our GMPE analysis, the synthesized waveforms from HEGGS align closely with real observations, further validating the fidelity of our approach.
These aspects reinforce that our results are not only statistically significant but also practically valuable for seismic applications.
> Could the authors provide additional qualitative comparison results, including more examples from the baseline methods?
We acknowledge the reviewer’s request for more qualitative comparisons. Due to manuscript file size limitations (including the appendix), we had to limit the number of visual examples included.
However, we are more than happy to provide additional qualitative comparisons. To comply with the double-blind policy, we release an [anonymous GitHub repository](https://anonymous.4open.science/r/Seismic-waveforms-4630) containing supplementary qualitative results, allowing for more in-depth comparisons against baseline methods.
Please be patient: loading may take time, as the figures are fairly large due to their resolution. We recommend downloading the whole repository and browsing the files locally.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I will be keeping my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our paper and consider our clarifications. We appreciate your engagement with our work and respect your decision.
As no further concerns were raised, we take it as a positive sign that our clarifications were helpful in addressing the points you previously mentioned.
Your comments helped us better explain and refine key aspects of the paper, and we're grateful for the opportunity to do so. | Summary: They tackle the task of generating earthquake-caused ground motion waveforms (something I know nothing about). They propose an end-to-end generation method
Claims And Evidence: - novel method: HEGGS (I don't have the expertise to say)
- HEGGS demonstrates its superior performance using earthquakes from North America, East Asia, and Europe (accurate based on the results)
- HEGGS superior against benchmark models in seismology-inspired metrics such as GMPE analysis and phase arrival prediction (accurate based on the results)
Methods And Evaluation Criteria: I know nothing about seismology, but the diffusion process in spectrogram space seems reasonable. Diffusion models are quite good at generating high quality and diverse data.
Theoretical Claims: All theory relates to seismology which I know nothing about.
Experimental Designs Or Analyses: They seems good and diverse, but again I cannot assess the quality of the chosen metrics and dataset given my lack of expertise.
Supplementary Material: No
Relation To Broader Scientific Literature: I don't know the literature on seismology.
Essential References Not Discussed: I don't know the literature on seismology.
Other Strengths And Weaknesses: Interesting usecase.
Other Comments Or Suggestions: My recommendation doesn't mean much, I'll defer it to other experts reviewers, I'll change it later. Meta-reviewer, please take my review with a grain of salt because I don't know anything about the subject.
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s time and effort in evaluating our work. We acknowledge that the reviewer may not have direct expertise in seismology, and we appreciate their openness about this. Nevertheless, we are grateful that they found our use of diffusion models in this context interesting and that they recognized the validity of our experimental results.
Seismic waveform synthesis plays a critical role in earthquake engineering, hazard assessment, and seismological research. Accurate synthetic waveforms enable better modeling of earthquake effects in regions with limited observational data, improving preparedness and mitigation strategies. Traditional approaches rely on complex physical simulations or empirical models that require extensive geological and seismological information.
Our work introduces HEGGS, a data-driven generative model that synthesizes high-fidelity seismic waveforms with minimal conditioning information, making it more practical and scalable than existing methods. As our experimental results demonstrate, HEGGS achieves state-of-the-art fidelity both in ML-metrics and key seismological metrics, including phase arrival accuracy and GMPE consistency, further validating its potential impact.
We collaborate with seismologists to design, model, and validate our research with in-field professional opinions. We hope all readers of this work gain knowledge and insight on high-fidelity time series generation as well as earthquake research revolutionized by machine learning.
Thank you again for your review and consideration.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviewers comments and the rebuttals by the authors. The authors have made a strong model which is useful to the seismic community. I will leave my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your kind words and for taking the time to review our submission. We're especially grateful that you recognize the value of our model to the seismic community. Your encouraging feedback means a lot to us, and we’re glad that the strengths of our approach came through in your evaluation. | null | null | null | null | null | null | null | null |
Beyond Atoms: Enhancing Molecular Pretrained Representations with 3D Space Modeling | Accept (poster) | Summary: The paper introduces a novel framework called *SpaceFormer*, which models molecules as three-dimensional “images” by discretizing 3D space into grid units. This approach captures the spatial context of molecules more comprehensively than traditional methods that treat them as discrete collections of atoms. The authors demonstrate that *SpaceFormer* significantly outperforms existing 3D molecular pre-trained representation models across various tasks, especially in data-limited scenarios, by effectively leveraging additional spatial information. Their findings also reveal that randomly sampled virtual points can enhance performance, motivating further exploration of systematic 3D modeling in molecular representation.
## update after rebuttal
I thank the authors for their efforts in the rebuttal. After reading all comments and rebuttals, I think the authors have largely resolved my doubts about the motivation, which is crucial. So I'm willing to update the score 2->3
Claims And Evidence: Several of the paper’s claims are incorrect or not well-supported.
Methods And Evaluation Criteria: The description of the method is not sufficiently detailed, with some parts being confusing.
Theoretical Claims: I double-checked the full text, including the appendix, and the author did not provide a statement of the theory.
Experimental Designs Or Analyses: - The authors used a processed dataset but did not provide dataset details. The fairness of the comparative experiments needs to be further verified.
- The performance on the molecular experimental-properties datasets in Table 1 is not optimal, and the authors did not analyze this further.
Supplementary Material: I checked the appendix.
Relation To Broader Scientific Literature: Not involved.
Essential References Not Discussed: The main text cites much literature related to quantum mechanics, but the authors do not understand the data generation process of the MoleculeNet dataset; that is, they have a wrong understanding of the physical meaning of the data. Please refer to [1].
[1] Montavon G, Rupp M, Gobre V, et al. Machine learning of molecular electronic properties in chemical compound space[J]. New Journal of Physics, 2013, 15(9): 095003.
Other Strengths And Weaknesses: **Strengths**:
- SpaceFormer improves performance by making effective use of additional spatial information and utilizing randomly sampled virtual points.
- The authors perform a comprehensive evaluation against baseline models across a variety of datasets. The experiment section is solid.
- The method in this paper is innovative.
**Weaknesses**:
- The motivation of the paper is questionable. The authors suggest that existing molecular representations focus only on atomic coordinates and overlook electron density information, leading to the introduction of virtual points for electromagnetic field representation. However, the electromagnetic field is inherently linked to the atoms themselves, and the coordinates in the benchmark are the result of the optimization of the atomic structure and contain the atomic interaction information. Thus, the motivation is not well-founded.
- The writing requires improvement; the methodology is difficult to follow, and many details are not clear.
- The description of the dataset is not sufficiently detailed, with some parts being confusing. The authors should provide statistical details of the benchmark. At the same time, authors should compare models on existing benchmarks.
Other Comments Or Suggestions: 1. Functions or mathematical symbols should be distinguished by different fonts, e.g., Embed_t(·)
2. Table 2 extends beyond the boundaries of the double columns.
Questions For Authors: - The atomic type is generally represented by the corresponding atomic mass, for example, O is 16; how do you quantify the virtual point type as NULL?
- After adding the virtual points, does SE(3) equivariance still hold? What are the specific dimensions of each encoded molecule in the transformer?
- Relative positional encoding is generally used to represent the relative interaction between atoms; what is the specific physical meaning of the relative position representation between VPs and atoms in this paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Due to the word limit, we will not quote the original text but will refer to the headings of the reviewer's feedback.
**1. Regarding Claims And Evidence**
Thank you for your insightful feedback. In our paper, we have provided substantial experimental evidence to validate our claims. For instance, our results demonstrate that incorporating virtual points enhances model performance and techniques like grid sampling or merging improve efficiency without sacrificing effectiveness. Moreover, various components, like position encoding, contribute significantly to the model's performance improvements.
**2. Regarding Methods And Evaluation Criteria and Weaknesses 2**
Thank you for your valuable feedback. We will revise the manuscript to enhance the readability, clarify our methodology and add more details to the main text. We will also make our code open source to enhance understanding.
**3. Regarding Experimental Designs Or Analyses and Weaknesses 3**
- Thank you for your feedback. In Section 4.1, we detailed the dataset's origin, significance, processing, and splitting methods, and included units and sample numbers in Table 1. Additional data sizes, descriptions, task types, and evaluation metrics can be found in Table 5 in the Appendix. We will release the data processing script with the code. If any part of this explanation remains unclear, we would appreciate further clarification from the reviewer. Regarding fairness in the comparative experiments, we conducted additional tests on the QM9 dataset. The results, which show SpaceFormer's superior performance, are included in the first response to reviewer sLbT.
- Thanks for your suggestion. We would like to clarify that our results in experimental property-related tasks remain superior (with 4 out of 5 tasks ranked in the top two). We acknowledge that the enhancements are less pronounced than in computational property tasks. This may be due to the complexity and variability of the measurement environments for experimental properties, affected by factors like temperature and pH, which can impact measurement stability and accuracy.
**4. Regarding Essential References Not Discussed**
Thank you for your feedback. We apologize for any confusion and appreciate the chance to clarify. In our paper, we briefly mention the MoleculeNet dataset in the footnote of Section 4.1, focusing on challenges with pre-trained models, as discussed in previous work. We did not explore the data generation process.
If specific statements cause misunderstandings, please let us know so we can address them thoroughly. We value your insights and welcome continued dialogue to improve our manuscript.
**6. Regarding Reviewer's Weaknesses 1 and Question 3**
Thanks for your thoughtful feedback. We appreciate the opportunity to clarify our motivation. We acknowledge that atom coordinates encapsulate significant information, but relying solely on them necessitates extensive computation to derive electron densities or potential fields. Our approach introduces grid cells beyond atoms, offering a computationally efficient means to incorporate additional spatial information. Furthermore, many quantities in physical simulation, such as electron density distributions and potential fields, are functions of the entire 3D space, not just atom positions. This underscores our motivation to explore modeling beyond atoms. The interaction between atoms and virtual points with physical positions can be seen as a way for the model to efficiently learn intermediate representations of the space beyond atoms, approximating spatial field effects through learnable interactions and thereby enhancing its understanding of the overall spatial information. We acknowledge this is an intuitive understanding and plan to pursue more theoretical validation in the future.
**7. Regarding Reviewer's Other Comments Or Suggestions**
Thank you for your suggestion. We use bold fonts for vectors and text fonts for functions. Specifically, in Section 3.2, we denote the embedding function as "\text{Embed}_t(\cdot)", following LaTeX conventions. According to ICML guidelines, tables can span two columns if within page margins. We've ensured compliance and appreciate your feedback, hoping you'll reconsider.
**9. Regarding Question 1**
Thank you for your question. In our model, each atom type is assigned a unique identifier mapped to an embedding, independent of atomic mass. For the virtual point type, a 'NULL' type is introduced with a distinct embedding to differentiate it from other atom types, without any mass-related encoding.
**10. Regarding Question 2**
Thanks for your question. Since we divide the 3D space where the molecules are located into grids and obtain the final grid cells/virtual points by merging, this method is deterministic and therefore does not affect SE(3) equivariance. Regarding your question on dimensions, we have detailed these specifications for each layer in Table 5, located in Appendix A.
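As a purely illustrative sketch of such a deterministic discretization (hypothetical code, not the authors' implementation; the 0.49 Å cell size and the choice of anchoring the grid at the per-molecule coordinate minimum are assumptions for this example), atoms can be mapped to integer grid-cell indices as follows:

```python
import numpy as np

CELL = 0.49  # assumed grid resolution in angstroms

def grid_indices(coords, cell=CELL):
    """Deterministically map 3D atom coordinates to integer cell indices.

    Anchoring the grid at the per-molecule coordinate minimum makes the
    cell assignment invariant to rigid translations of the molecule.
    """
    coords = np.asarray(coords, dtype=float)
    origin = coords.min(axis=0)
    return np.floor((coords - origin) / cell).astype(int)

atoms = np.array([[0.00, 0.00, 0.00],
                  [1.00, 0.20, 0.00],
                  [0.10, 0.10, 0.05]])

cells = grid_indices(atoms)
# Translating the whole molecule leaves the cell assignment unchanged.
assert np.array_equal(cells, grid_indices(atoms + 7.3))
print(cells.tolist())  # [[0, 0, 0], [2, 0, 0], [0, 0, 0]]
```

Note that while this sketch handles translations exactly, rotations would still change the cell assignment; it only illustrates that the discretization itself is a deterministic function of the input coordinates.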
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the answer that solved my doubts. I think the author's response and the overall quality of the paper are enough for me to improve my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful review and increasing your score. We truly appreciate the time and effort you put into reading and evaluating our paper.
Your comments were insightful and helped us better understand how to improve our work.
Thanks again for your valuable feedback and support. | Summary: This paper introduces SpaceFormer, a Transformer-based molecular pretrained representation (MPR) model that explicitly encodes the 3D space surrounding molecules, going beyond traditional atom-only representations. The method discretizes the 3D space into grid cells, applying adaptive grid merging for efficiency, and employs masked autoencoder (MAE) pretraining. Extensive experiments demonstrate significant performance improvements over baselines (e.g., Uni-Mol [Zhou, 2023]) across 15 molecular property prediction tasks (Table 1). Ablations confirm the effectiveness of each component (Tables 2, 3, 4).
Claims And Evidence: The main claim—that modeling space beyond atoms enhances molecular representations—is strongly supported by experiments. Empirical results (e.g., ~20% error reduction on key quantum property tasks like HOMO/LUMO) convincingly validate this claim (Table 1). Ablation studies (Table 4) show improvements aren't just due to increased capacity or random noise points, but genuinely from modeling spatial context.
Methods And Evaluation Criteria: Methods and evaluations are appropriate and rigorous. The chosen tasks (quantum and experimental properties) test generalization in limited-data scenarios using OOD splits, effectively evaluating pretraining value. Baselines (Uni-Mol, GEM, 3D-Infomax) are strong, ensuring a fair assessment.
Theoretical Claims: No major theoretical issues. Their choice of grid resolution (0.49 Å) ensuring one atom per cell is logical. Positional encoding is reasonable and supported by experimental gains (Table 3). The theoretical arguments (e.g., empty space containing meaningful fields) are scientifically sound though qualitative.
Experimental Designs Or Analyses: Experiments and analyses are well-executed. Results averaged over multiple seeds and ablation studies thoroughly validate the method (Tables 2, 4). Minor weaknesses include no detailed runtime analysis and limited discussion on tasks where improvements were small (e.g., BBBP).
Supplementary Material: Yes, for most of the parts referenced in the main text.
Relation To Broader Scientific Literature: The paper clearly positions itself relative to prior 3D MPR models (Uni-Mol, 3D-Infomax) and makes a novel contribution by explicitly modeling empty space. However, it omits discussion of earlier 3D grid-based methods like AtomNet [Wallach, 2015] or equivariant models such as SchNet [Schütt, 2017], which could provide additional context.
Essential References Not Discussed: See the answer for "Relation To Broader Scientific Literature"
Other Strengths And Weaknesses: Strengths: Highly original idea; consistent improvements across diverse tasks; careful experimental design and thorough ablation studies.
Weaknesses: Computationally intensive due to large input size (though mitigated by merging); does not explicitly enforce equivariance; unclear why method underperformed on BBBP (Table 1).
Other Comments Or Suggestions: - Minor typos (e.g., “concergence” → convergence in second column of page 7) should be corrected.
- Clarify average token count per molecule after merging and positional encoding details for reproducibility.
Questions For Authors: 1. Why does SpaceFormer slightly underperform on BBBP compared to simpler baselines? Could it indicate an overfitting or a limitation in your method?
2. Have you considered equivariant architectures as baselines? Would these be complementary to your grid-based approach?
3. Could you briefly explain why random virtual points improved performance initially (Figure 2)? What might the model be capturing from these points?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **1. Regarding "Minor weaknesses include no detailed runtime analysis and limited discussion on tasks where improvements were small (e.g., BBBP)." and "Why does SpaceFormer slightly underperform on BBBP compared to simpler baselines? Could it indicate an overfitting or a limitation in your method?"**
Thank you for your insightful feedback. Regarding the runtime analysis on time cost, in Section 4.4, we provide a detailed analysis comparing SpaceFormer with Uni-Mol. The pretraining cost of Uni-Mol scales quadratically with the number of sampled cells, whereas SpaceFormer exhibits a more linear scaling pattern.
Regarding the smaller improvements on BBBP, one reason can be the complexity and variability of the measurement environments for experimental properties, which are easily influenced by external factors such as temperature and, for BBBP, the status of the blood-brain barrier biofilm. Such variability can affect the stability of the measurement results, so the labels may not accurately reflect the underlying molecular properties.
Additionally, as mentioned in the footnote of Section 4.1, the MoleculeNet dataset for the experimental task (including BBBP) has several limitations, including invalid structures and inconsistent chemical representations, which affect its ability to differentiate molecular pretraining models. We will incorporate this analysis into the main text of our paper.
**2. Regarding "it omits discussion of earlier 3D grid-based methods like AtomNet or equivariant models such as SchNet, which could provide additional context."**
Thank you for your insightful feedback. We appreciate your suggestion to discuss earlier 3D grid-based methods. AtomNet applies 3D convolutional neural networks over voxel grids, while SchNet applies continuous-filter convolutions to atom positions. In contrast, SpaceFormer employs a transformer-based architecture to capture interactions across global grid cells, extending beyond atom positions alone. We will incorporate this discussion in our paper.
**3. Regarding "Computationally intensive due to large input size (though mitigated by merging)"**
Thank you for raising this point. Firstly, the input size is smaller than expected due to grid merging or sampling strategies, which reduce the number of grid cells effectively while preserving model performance, as detailed in Table 2. Secondly, SpaceFormer efficiently handles computational complexity using FlashAttention and two 3D relative positional encodings, allowing our model's time to scale nearly linearly with sequence length, as shown in Figure 4.
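For intuition, the grid-assignment step can be sketched as follows. This is a minimal sketch: the 0.49 Å resolution matches the value stated in the paper, but the `occupied_grid_cells` helper and its deduplication-only "merging" are our illustrative simplifications, not the paper's actual merging/sampling strategy.

```python
import numpy as np

def occupied_grid_cells(coords, resolution=0.49):
    """Map 3D atom coordinates (in Angstroms) to integer grid-cell
    indices and keep each occupied cell once. A full pipeline would
    also enumerate and merge the empty cells around the molecule;
    this sketch only shows the deterministic cell assignment."""
    idx = np.floor(np.asarray(coords) / resolution).astype(int)
    return np.unique(idx, axis=0)

# Two atoms falling in the same cell plus one in a different cell
# collapse to 2 occupied cells.
cells = occupied_grid_cells([[0.1, 0.1, 0.1],
                             [0.2, 0.3, 0.2],
                             [1.0, 0.0, 0.0]])
```

Because the assignment depends only on the coordinates and a fixed resolution, it is deterministic, which is the property the authors invoke when discussing equivariance.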
**4. Regarding "does not explicitly enforce equivariance and have you considered equivariant architectures as baselines? Would these be complementary to your grid-based approach?"**
Thank you for your insightful suggestion. We would like to clarify that both Uni-Mol and Mol-AE, which are included in our baselines, consider equivariance. Initially, SpaceFormer was developed with reference to Uni-Mol. However, we observed that explicitly enforcing equivariance, as Uni-Mol does, leads to increased computational complexity. For instance, Uni-Mol's pair-wise Gaussian distance results in a complexity of $\mathcal{O}(n^2)$. Consequently, SpaceFormer encodes this information more efficiently through 3D distance PE using RFF to approximate Gaussian distance, and 3D directional PE using RoPE to capture direction, although strict equivariance is not guaranteed. Additionally, SpaceFormer also employs random rotation data augmentation to enhance robustness. Figure 4 illustrates SpaceFormer's superior efficiency compared to Uni-Mol, while Table 1 shows SpaceFormer outperforming other equivariant baselines, highlighting our model's robustness.
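The RFF approximation of the pairwise Gaussian distance mentioned above can be sketched as follows. This is a minimal sketch of random Fourier features (Rahimi & Recht); the feature dimension and bandwidth `sigma` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rff(x, n_features=4096, sigma=1.0, seed=0):
    """Random Fourier features: phi(x) . phi(y) approximates the
    Gaussian kernel exp(-||x - y||^2 / (2 * sigma^2)). The fixed
    seed ensures all inputs share the same random projection."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x.shape[-1], n_features)) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

x = np.array([0.3, -0.1, 0.5])
y = np.array([0.0, 0.2, 0.4])
approx = rff(x) @ rff(y)                      # feature dot product
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)   # true Gaussian kernel
```

Because each position gets its own feature vector, pairwise Gaussian distances reduce to plain dot products inside attention, avoiding an explicit $\mathcal{O}(n^2)$ pairwise computation.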
**5. Regarding "Clarify average token count per molecule after merging and positional encoding details for reproducibility" and "Minor typos should be corrected"**
Thank you for your insightful feedback. In Table 2, we present the average token count per molecule (avg. cells) after the merging/sampling process. We recognize the importance of clarity and reproducibility, and we will include additional details about the positional encoding in the next version. Additionally, we plan to open source our code to facilitate reproducibility. We will correct the typo in the main text, as well as any other minor errors.
**6. Regarding "Could you briefly explain why random virtual points improved performance initially (Figure 2)? What might the model be capturing from these points?"**
Thank you for your insightful question. The inclusion of virtual points with physical positions serves to provide additional spatial context, allowing the model to develop intermediate representations that enhance its understanding of the overall spatial information. In addition, during pre-training, the task of identifying atom locations in 3D grid cells becomes more challenging, which ultimately improves the model's performance in downstream tasks. This approach helps the model capture spatial and positional information more effectively. | Summary: This paper tackles molecular pretrained representation (MPR) by arguing that previous methods which focused solely on atom positions and types miss crucial physical context in the surrounding 3D space. Motivated by physical principles (e.g., the presence of electron densities and electromagnetic fields), the authors first demonstrate that simply adding randomly sampled virtual points can boost performance. Building on this observation, they propose SpaceFormer, a Transformer-based framework that discretizes the entire 3D space into grid cells, employs grid sampling and adaptive grid merging to manage computational cost, and integrates efficient 3D positional encoding (extending RoPE and using random Fourier features). The model is pre-trained via a Masked Autoencoder (MAE) strategy and is shown to outperform several baseline MPR models on diverse downstream tasks covering both computational and experimental molecular properties.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The experimental downstream tasks should also include widely used benchmarks such as QM9 and MD17, which are standard in molecular property prediction. Moreover, since grid/mesh-based methods are particularly adept at capturing long-range interactions [1, 2], the inclusion of the MD22 dataset would further validate the model's ability.
Since the grid-based methods are very time-consuming, what is the time consumption of Spaceformer and UniMol-AE (w/o grids)?
[1] Kosmala, Arthur, et al. "Ewald-based long-range message passing for molecular graphs." International Conference on Machine Learning. PMLR, 2023.
[2] Wang, Yusong, et al. "Neural P $^ 3$ M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Introduces a grid-based molecule pretraining method.
Essential References Not Discussed: More grid-based approaches should be discussed:
[1] Kosmala, Arthur, et al. "Ewald-based long-range message passing for molecular graphs." International Conference on Machine Learning. PMLR, 2023.
[2] Wang, Yusong, et al. "Neural P $^ 3$ M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **1. Regarding "The experimental downstream tasks should also include widely used benchmarks such as QM9 and MD17, which are standard in molecular property prediction. Moreover, since grid/mesh-based methods are particularly adept at capturing long-range interactions [1, 2], the inclusion of the MD22 dataset would further validate the model's ability."**
Thank you for your insightful feedback and for highlighting the importance of these benchmarks. Following your comments, we have expanded our evaluation to include the HOMO, LUMO, and GAP tasks on the QM9 dataset, where our model demonstrates better performance, as detailed in the following table.
| Model | HOMO | LUMO | GAP |
|---------------|---------------|---------------|---------------|
| 3D Infomax | 0.000952 | 0.000794 | 0.00155 |
| Uni-Mol | 0.000857 | 0.000763 | 0.00151 |
| SpaceFormer | 0.000594 | 0.000544 | 0.00106 |
We also attempted to evaluate our approach on the stachyose molecule from the MD22 dataset, following your suggestion and the references provided. This would further validate our model’s capability in capturing long-range interactions. However, the MD22 dataset primarily focuses on the prediction of energy and force, while our current model framework is geared towards property prediction. Adapting our model for force prediction requires additional development time. We kindly ask for your understanding and additional time to complete these experiments. We are committed to providing the results in the next discussion phase.
We greatly appreciate your constructive critique, which has strengthened our analysis. Thank you for your understanding and support.
**2. Regarding "Since the grid-based methods are very time-consuming, what is the time consumption of Spaceformer and UniMol-AE (w/o grids)?"**
Thanks very much for your question. We would like to clarify that the grid-based methods employed in this paper are optimized to be not very time-consuming. We have implemented two key strategies to enhance efficiency.
Firstly, by merging or sampling grid cells, we significantly reduce the number of grid cells without compromising the model's performance. This is detailed in Table 2, which presents the number of grid cells after merging or sampling alongside the corresponding model performance.
Secondly, our model framework is designed to handle long sequences efficiently. We employ FlashAttention and two effective 3D relative positional encodings, ensuring that the computational time increases almost linearly with sequence length, as illustrated in Figure 4.
Additionally, Mol-AE builds upon Uni-Mol with atom-based MAE pretraining, where only 15% of atoms are dropped for reconstruction during pretraining, resulting in negligible speed differences between Mol-AE and Uni-Mol. We compare the time consumption of Spaceformer and Uni-Mol in Figure 4, which demonstrates our model framework's superior efficiency for processing long sequences.
**3. Regarding "More grid-based approaches should be discussed:
[1] Ewald-based long-range message passing for molecular graphs." International Conference on Machine Learning.
[2] Neural P3M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs"**
Thank you for your valuable feedback. We appreciate your insightful suggestions and will incorporate a discussion on these methods in our paper.
This addition will help provide a more comprehensive analysis of long-range interaction modeling in molecular graphs. We sincerely appreciate your guidance in improving our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I would keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Sorry for the late reply regarding experiments on MD22 dataset.
Our current framework primarily focuses on molecular property prediction rather than force field modeling. As such, adapting the model to the MD22 benchmark, which emphasizes force field energy prediction, required non-trivial modifications. Initially, we explored incorporating force prediction via the gradient of the energy with respect to atomic coordinates, as is standard in many MD22 baselines. However, due to limitations in the current implementation of FlashAttention, particularly challenges with backpropagating through the first-order derivative (i.e., computing second-order gradients), this approach proved difficult to implement efficiently within the rebuttal period.
We then shifted to a direct force regression approach, which avoids gradient computations and is adopted by several force-specific models. Using this method, we performed initial evaluations and report the energy prediction results on the Stachyose molecule. Due to time and resource constraints, we were not able to fully tune hyperparameters or expand to the full MD22 benchmark. Nonetheless, these results serve as an encouraging starting point, and we see significant potential for further improvements with more focused development.
| Model | Energy MAE | Interaction |
|-----------------------|---------------|---------------|
| sGDML | 4.0497 | - |
| Equiformer | 0.1404 | two-bodies |
| MACE | 0.1244 | four-bodies |
| ViSNet | 0.1283 | triplet and quadruplet interactions |
| ViSNet(Neural P$^3$M) | 0.0856 | triplet and quadruplet interactions |
| Uni-Mol | 0.323 | two-bodies |
| SpaceFormer | 0.151 | two-bodies |
We would also like to clarify that our model, Spaceformer, is not tailored specifically for force field tasks. Unlike ViSNet or MACE, which are designed to capture higher-order atomic interactions (e.g., three-body or four-body terms), our framework approximates two-body interactions through an efficient transformer-based architecture with a complexity lower than $\mathcal{O}(n^2)$. Despite this, Spaceformer demonstrates performance comparable to strong baselines such as Equiformer that explicitly model two-body interactions.
We understand the reviewer’s interest in grid/mesh-based methods such as Neural P$^3$M, which are effective at capturing long-range interactions. While Neural P$^3$M builds upon ViSNet and benefits from a force-specific design, our approach is built on Uni-Mol and demonstrates that incorporating grid cells beyond atoms can enhance energy prediction, aligning with the motivation behind Neural P$^3$M.
We sincerely appreciate the reviewer for highlighting this important research direction. We are committed to continuing this line of investigation, and in the final version of the paper, we will expand our discussion on MD22, the treatment of long-range interactions, and connections to Neural P$^3$M.
We kindly ask the reviewers to consider the substantial effort we invested during the rebuttal period to adapt our framework and generate additional results, despite the limited time and computational resources available.
---
Optimal and Practical Batched Linear Bandit Algorithm | Accept (poster)
Summary: The paper introduces BLAE, a new algorithm for batched linear bandit problems. The central challenge in batched learning is balancing exploration and exploitation within discrete batches, as frequent updates may not be feasible in real-world applications. BLAE combines arm elimination and regularized G-optimal design to achieve near-optimal regret bounds while maintaining practical computational complexity.
Claims And Evidence: Yes, most claims made in the submission are supported.
However,
1. The authors argue that prior methods require excessive computation, but no formal computational complexity analysis is presented.
2. Theoretical bounds support the claim, but there is no empirical comparison between "batch-wise estimation" vs. "using all past data".
Methods And Evaluation Criteria: The proposed methods and evaluation criteria (e.g., regret, batch complexity) are well-chosen for the problem of batched linear bandit learning.
However,
1. The paper does not discuss how regularization parameters or elimination thresholds are selected.
2. No comparison with fully adaptive (non-batched) methods like LinUCB. While non-batched methods are not direct competitors, it would be useful to show how much performance is sacrificed by batching.
3. No real-world dataset tested.
Theoretical Claims: The theoretical claims in the paper are largely well-supported by the proofs provided.
1. The proof assumes batch-wise least squares estimation is independent across batches due to regularization. However, this assumption lacks explicit justification, and a more rigorous bound on the impact of past information loss on regret would be helpful.
2. The batch complexity proof seems correct under typical conditions, but it would benefit from a more explicit discussion of elimination efficiency in large-K cases.
Experimental Designs Or Analyses: 1. The paper does not provide extensive evidence of BLAE's performance on real-world datasets with more complex dynamics and noise, particularly when the author mentions that other batch linear Bandit algorithms perform poorly in practical applications, especially in large-scale scenarios.
2. No comparison with fully adaptive methods (e.g., LinUCB).
3. Batch complexity is explicitly measured; but no runtime/memory usage comparisons.
4. Needs ablation study on batch size, regularization, and elimination strategy
Supplementary Material: The supplementary material consists of codes, but I did not thoroughly check it.
Relation To Broader Scientific Literature: The problem of linear bandits has been widely studied in the literature. The BLAE algorithm builds on this foundational work by considering the batched learning scenario, where updates are made periodically (rather than after each round). This shift to a batched setting allows for more efficient exploration and the possibility of reducing computational overhead while still maintaining a low regret bound.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other Strengths:
1. The comparison with E4 and RS-OFUL shows that BLAE consistently achieves lower regret and fewer updates per batch.
2. Develop batch-wise optimal design strategies that efficiently allocate exploration efforts and can be generalized to both small-K and large-K settings.
3. Introduces a novel concentration bound accommodating both batching and regularization.
Other Weaknesses:
1. I believe this paper makes an incremental contribution. The combination of G-optimal design and arm elimination is a classic approach, and even the use of regularized G-optimal design is not particularly novel. Achieving $O(\log\log T)$ batch complexity in batched learning is also quite a standard result. It would be helpful if the authors could highlight the difficulties in the proof, particularly the challenges associated with unifying the large-$K$ and small-$K$ cases.
2. The computational cost of solving G-optimal design in batched settings is not analyzed.
Other Comments Or Suggestions: 1. In small-K cases, eliminating suboptimal arms is straightforward. However, in large-K cases, elimination could lead to misidentification of the best arm if not enough data is collected. The paper could further clarify why earlier methods, e.g., (Ren et al., 2024), failed to achieve this bound in the large-K setting.
2. A brief discussion of why batch-wise estimation does not significantly degrade performance (or an empirical comparison with full-history least squares) would be useful.
3. While the theoretical proof is rigorous, an intuitive explanation of why batch complexity reduces from $O(\log T)$ to $O(\log \log T)$ would help readers unfamiliar with batched bandit theory.
Questions For Authors: 1. The paper highlights that traditional regularized least squares methods rely on all past time steps for parameter updates, but early noise can affect future arm selection. To mitigate this, BLAE uses a batch-wise approach with regularization to ensure parameter updates within each batch are independent. My concern is the feasibility of this approach—relying on a single batch may discard valuable information from earlier batches. Is there theoretical or experimental support for this batch-wise update method, and how does the algorithm balance using only one batch of data while potentially losing useful information from earlier steps?
2. Does BLAE sometimes eliminate the optimal arm too early? Reporting how often the best arm is mistakenly eliminated would provide insights into the method’s robustness.
3. How does BLAE scale computationally for large-K settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: Thank you for your time. However, with all due respect, we are very concerned by the current assessment. We sincerely hope to establish some common grounds with our response.
---
### **On Computational Cost**
To the best of our knowledge, no prior work on batched linear bandits has provided a theoretical complexity analysis, as exact complexity comparisons are challenging due to the various optimization subroutines in each algorithm.
For fair comparisons of computational cost, we provided empirical runtime experiments in Appendix E and conducted additional extensive evaluations (Table 2-4 in https://tinyurl.com/3m6f4v5n). Our experiments show that **our method significantly improves computational efficiency compared to existing optimal algorithms.**
---
### **Batch-wise vs. Fully Sequential**
Obviously, even optimal *batched* algorithms will underperform *fully sequential* ones due to limited information in batched settings. Thus, direct comparison is inherently unfair, as the problem settings are fundamentally different.
Nevertheless, per your request, we conducted additional experiments comparing batched algorithms with the fully sequential LinUCB. Results (Figure 6 in https://tinyurl.com/3m6f4v5n) clearly show that the regret difference between BLAE and LinUCB is the smallest among existing batched algorithms, with competing batched methods exhibiting significantly larger performance gaps.
---
### **Regularization Parameters and Elimination Thresholds**
The reviewer's comment is incorrect. The regularization parameter $\lambda_\ell$ is explicitly stated in Theorem 1 as $O(1)$ (set to $\lambda_\ell = 1$ in experiments). The elimination threshold $\varepsilon_\ell$ is explicitly stated in Theorem 1 and precisely defined in Lemma 2.
---
### **Real-world Datasets?**
Regret evaluation in the bandit literature typically relies on simulations, since offline datasets may not contain feedback for all counterfactual actions that an algorithm can take. Thus, the absence of real-world datasets is typically not considered a weakness, particularly for a theoretical work.
Nevertheless, we conducted extra experiments based on real-world dataset *MovieLens*. The results (Figure 7 in https://tinyurl.com/3m6f4v5n) confirm that BLAE consistently outperforms existing methods, validating its practical effectiveness.
---
### **Assume Batch-wise Independence?**
No, our proof **does not assume** "batch-wise least squares estimation independence across batches," irrespective of regularization. Independence is achieved (rather than assumed) solely through our batch-wise design.
There appears to be a clear misunderstanding. This is serious because it may present challenges to the reviewer in evaluating our main results. We sincerely would like to ensure we are on the same page.
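To illustrate the point, here is a minimal sketch of a batch-wise regularized least squares update (our notation and helper name, not the paper's code): the estimate is computed from the current batch's data alone, so it does not depend on reward noise from earlier batches by construction.

```python
import numpy as np

def batch_ridge(X, y, lam=1.0):
    """Ridge estimate (X'X + lam I)^{-1} X'y from a single batch.
    Because only this batch's (X, y) enters the computation, the
    estimator is independent of earlier batches' noise; the
    regularizer lam keeps the system well-posed without pooling
    past data."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic check: recover a known parameter from one batch.
rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((500, 3))
y = X @ theta + 0.1 * rng.standard_normal(500)
theta_hat = batch_ridge(X, y)
```

With enough samples in the batch, the batch-wise estimate is already accurate, which is the intuition behind why discarding earlier batches need not degrade performance.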
---
**Elimination efficiency?** We are unaware of any standard notion of "elimination efficiency." Could the reviewer rigorously define what is meant by this term?
---
**Ablation?**
Altering the batch size is unnecessary, as the batch schedule is proven to achieve the optimal batch complexity. Additional evaluations on varying regularization $\lambda_\ell$ are provided (Figure 8 in https://tinyurl.com/3m6f4v5n).
---
### **Novelty**
We strongly rebut the "incremental contribution" comment. The use of G-optimal design or arm elimination in itself should not diminish novelty. By that standard, many impactful papers using optimal design would have been rejected.
To highlight a few of our contributions briefly: our refined analysis (Lemma 3) significantly improves exploration strategies beyond standard G-optimal design. We derived a new bound for this. Existing concentration inequalities in arm elimination inherently depend on $K$, making them suboptimal in large-$K$ scenarios. In contrast, our novel analytical framework utilizes efficient coverings of the unit sphere, providing sharper, more general regret bounds. Crucially, our proposed elimination strategy attains a unified optimal regret bound of $O(d\sqrt{T} \wedge \sqrt{dT\log K})$ using only $O(\log\log T)$ batches. Please refer to our paper for more contributions.
---
### **G-optimal Design**
A key advantage of our algorithm is that we compute the G-optimal design only $O(\log\log T)$ times across $T$ rounds. Thus, given that computing optimal design requires at most $O(Kd^3)$, the average computational complexity due to G-optimal design in our algorithm is merely
$O\left(\frac{K d^3 \log\log T}{T}\right)$, which tends to zero as $T$ increases. This highlights a crucial benefit of our batched update.
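As an illustration of this subroutine, the following is a minimal Frank-Wolfe/Fedorov-Wynn sketch for the G-optimal design, using the Kiefer-Wolfowitz equivalence between G- and D-optimality; the iteration count and the small ridge term are our illustrative choices, not part of BLAE.

```python
import numpy as np

def g_optimal_design(arms, n_iter=500, ridge=1e-9):
    """Frank-Wolfe sketch for the G-optimal design over K arms in R^d.
    By Kiefer-Wolfowitz, minimizing max_k ||a_k||_{V(pi)^{-1}}^2 is
    equivalent to D-optimality, and the optimal value equals d."""
    K, d = arms.shape
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        V = arms.T @ (pi[:, None] * arms) + ridge * np.eye(d)
        # g[k] = a_k^T V^{-1} a_k for every arm k
        g = np.einsum('kd,de,ke->k', arms, np.linalg.inv(V), arms)
        k = int(np.argmax(g))
        # Exact line-search step toward the worst-covered arm
        gamma = max((g[k] / d - 1.0) / (g[k] - 1.0), 0.0)
        pi *= 1.0 - gamma
        pi[k] += gamma
    return pi
```

Each call costs roughly $O(n_{\mathrm{iter}} \cdot (Kd^2 + d^3))$; since the design is recomputed only $O(\log\log T)$ times, the amortized per-round cost vanishes as $T$ grows.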
---
**Suboptimal in Large-$K$ Cases**
There might be a misunderstanding. Existing methods being suboptimal in the large-$K$ case is not about "misidentification of the best arm" for large $K$. Rather, these methods always yield $K$-dependent regret, even for large $K$, whereas ours does not (see Table 1).
---
We hope our responses clarified the key points and respectfully ask that the evaluation reflect our contributions.
---
Summary: This paper studies the batched linear bandit problem. The authors propose a new algorithm for this problem and show that this achieves near optimal regret. Compared to other algorithms which also achieve near optimal regret in this problem, the authors claim that their proposed algorithm is more computationally efficient and performs better in practice. To illustrate this, experimental results in simulated environments are provided.
Claims And Evidence: - From section 3, it is not clear if the batches are given to us or if they are something the learner can pick.
- The choice of epsilon seems very important but is only given in Theorem 1 in terms of its order. For the experiments, it seems a different value of epsilon is chosen, and it is not clear whether this choice satisfies the conditions of the theorem. Therefore, it is not clear if the exact same approach is optimal in both theory and practice.
Methods And Evaluation Criteria: - It would be good to see results for larger d – it seems that maximum d=10 is actually quite small.
- It would also have been good to show a run time comparison of the main algorithms used in the experiments.
- The run time experiments in Table 2 of the appendix are interesting, but it would be good to make them obvious from the main paper.
Theoretical Claims: The proof of Lemma 1 seems fairly standard (unless I am missing something). Discussion & references of similar proofs would be appreciated.
There were several steps in the proofs I did not understand:
- Not clear where the number of pairs comes from in line 715.
- In line 891, why can we pick pi this way?
- Where does the assumption T > Omega(K^5) come from? (line 1071)
- Why is there a second bound in line 1099? I think this is to get the different results, but the structure of the proof could make this clearer.
I also did not understand lines 912, 933-944, 1050, and 1077. More justification should be provided.
I think probably the proofs are correct (and I do not necessarily need detailed explanations of all these steps in the rebuttal, I just thought it might be helpful to point out the parts which were difficult to follow).
I don’t think the proofs of Lemmas 4,5,11,13 are necessary to include as they are standard results that have been proven before in the literature. References to existing proofs are fine to include instead.
Experimental Designs Or Analyses: - It would be good to see results for different choices of epsilon. In particular, how does choosing epsilon according to Theorem 1 alter the results?
- It seems there are hyperparameters of other algorithms as well, but it is not clear how these have been chosen. Was the same procedure used for BLAE as well as the other algorithms? It seems that for E^4 the parameters were configured according to theory, which does not seem to be a fair comparison.
Supplementary Material: I looked over most of it.
Relation To Broader Scientific Literature: There is a good discussion of related work, although the analysis should be related to similar analyses in related work.
Essential References Not Discussed: na
Other Strengths And Weaknesses: - One concern is the size of the contribution. I think the introduction does not do a sufficient job explaining why the work in this paper is solving an important question.
- Another concern is the difference in hyperparameters for theory & practice. The authors claim that their algorithm is effective for both, but seemingly requires different tuning for each (unless I have missed something).
Other Comments Or Suggestions: It would be helpful in the table to explain whether each algorithm works in the ‘large’ or ‘small’ K regime in Table 1. Since one contribution of this paper seems to be an algorithm which works in both regimes, it would be good to easily be able to understand what regiemes existing works cover.
Questions For Authors: 1) Fairness of experiments in terms of choice of hyperparameters of prior work
2) Choice of hyperparameters – are the same ones used in theory and practice?
3) Importance of contribution
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate your time and review. However, with all due respect, there appear to be some misunderstandings, which we sincerely hope to clarify in our response.
---
### **Batched Learning Setting**
In the batched linear bandit setting, the learner chooses both the number of batches and when to update parameters. This is **the standard setup in the literature** (Esfandiari et al. (2021), Ruan et al. (2021), Hanna et al. (2023), Ren et al. (2024), Zhang et al. (2025)). This is the key aspect of the batched linear bandit problem because one wishes to design an algorithm that can update parameters as infrequently as possible while still achieving provably optimal regret.
---
### **Consistency of Parameter Choice in Theory and Experiments**
We confirm that **the choice of $\varepsilon_\ell$ is consistent across theory and experiments**. Its order is defined in Theorem 1 and rigorously maintained throughout the paper, including experiments. The precise definition of $\varepsilon_\ell$ is given in Lemma 2, which depends on parameter $\varepsilon$ (distinct from $\varepsilon_\ell$). $\varepsilon$ is an arbitrary constant in (0,1) (chosen as 0.5 in all experiments which is theoretically valid). Our algorithm BLAE employs identical parameter choices in both theory and practice, ensuring optimality in both domains. **There is no "difference in hyperparameters for theory & practice."**
---
### **Experimental Results for Larger Dimension $d$**
We conducted additional experiments with larger $d$ values. Results appear in Figure 4 of Sec. 5 in https://tinyurl.com/3m6f4v5n. These demonstrate that the regret gap between BLAE and the baselines widens as $d$ increases.
---
### **Runtime Comparison of the Main Algorithms**
We conducted additional runtime experiments with larger arm counts $K$. Results in Table 2-4 of Sec. 3 in https://tinyurl.com/3m6f4v5n demonstrate that BLAE is the fastest among optimal algorithms, with runtime comparable to suboptimal algorithms even in very large-$K$ settings.
---
### **Lemma 1**
Our Lemma 1 adaptation explicitly incorporates the regularization term to achieve our specific bound, which is crucial to our analysis. While sharing structural similarities with standard results, its subtle differences merit attention. We will include the relevant previous result related to Lemma 1 in the revision (e.g., Chapter 20 in Lattimore & Szepesvári (2020)).
---
### **Effect of Different Choices of Epsilon on Results**
As mentioned, the elimination threshold $\varepsilon_\ell$ is defined based on parameter $\varepsilon$ (distinct from $\varepsilon_\ell$). In all experiments, we followed the theoretical guideline for $\varepsilon_\ell$ and $\varepsilon$.
Since theory allows $\varepsilon \in (0,1)$ to be chosen arbitrarily, we tested sensitivity to different $\varepsilon$ values. Results in Figure 5 in https://tinyurl.com/3m6f4v5n show BLAE remains robust across various $\varepsilon$ values. Note again that **in all of our experiments the parameter choices satisfy the conditions of the theoretical results.**
---
### **Clarification on Hyperparameter Selection for Other Algorithms**
All experiments were conducted fairly. For $E^4$ (Ren et al., 2024), we used their exact public implementation. All baseline algorithms used theoretically suggested hyperparameter values from previous literature, while BLAE’s hyperparameters followed our theoretical results, ensuring fair and transparent comparisons.
---
### **Significance of Contributions**
To understand our contributions, it's essential to recognize the limitations in existing batched linear bandit literature:
- **Ren et al. (2024):** Optimal regret only in the small-$K$ regime, and exhibits poor practical performance (see Figure 1 of our paper).
- **Abbasi-Yadkori et al. (2011) and Esfandiari et al. (2021):** Higher batch complexity, making them theoretically suboptimal.
- **Ruan et al. (2021), Hanna et al. (2023), and Zhang et al. (2025):** Impractical due to excessive runtimes (see Table 2 of our paper).
The key motivation for batched linear bandits is computational efficiency without sacrificing regret or batch complexity. In contrast to previous methods, **BLAE is the first algorithm that simultaneously achieves provable optimality in both small-$K$ and large-$K$ regimes, while maintaining practical efficiency in both regret and batch complexity**.
Therefore, our contribution addresses a critical gap in existing literature, aligning with the fundamental motivation of batched linear bandits.
---
### **Clarifying small-$K$ and large-$K$**
It is NOT a matter of whether existing algorithms work or not. The existing algorithms "work" in both small-$K$ and large-$K$ regimes. But, their optimality was **proven only in one of the regimes**—never in both. In contrast, **BLAE uniquely achieves optimality simultaneously in both regimes**.
---
We hope our responses clarified the key points and respectfully ask that the evaluation reflect our contributions. | Summary: In this paper, the authors study the linear bandit problem under limited adaptivity, known as the batched linear bandit. They propose a novel batched algorithm that integrates arm elimination with regularized G-optimal design, achieving the minimax optimal regret in both large-K and small-K regimes for the first time, while using only loglogT batches. Finally, the algorithm demonstrates low computational overhead and strong empirical performance, outperforming state-of-the-art methods in extensive numerical evaluations.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proof sketch part is convincing, the proof looks correct to me.
Experimental Designs Or Analyses: The experimental result is good in general, while I have several concerns.
1. All the choices of $K$ are not very large. I am wondering what will be the result (especially the runtime) if $K$ is chosen to be very large or infinity (continuous action space).
2. I am not very convinced of the experimental result of $E^4$. In Ren et al, a sub-linear regret upper bound is derived. However, in the experiment in this submission, it seems to have linear regret. Is there any contradiction? Will tuning the failure probability improve the result?
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper studies the batched linear bandits. The algorithm in this paper is the first to match the lower bounds for both small K and large K at the same time.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: Strengths:
1. Matching the two lower bounds at the same time is a very nice result.
2. The algorithm is simple and computationally efficient in experiments.
3. The presentation is clear in general.
Weakness:
1. Is there any theoretical analysis on the computational complexity? How does it scale if $K$ is very large or even infinity?
2. Please refer to the points mentioned in the experimental result part.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for reviewing our paper and providing valuable feedback. We appreciate your recognition of our work and your constructive comments. Below, we address each of your comments and questions in detail:
---
### **Performance in the Large $K$ Regime**
We thank the reviewer for raising this important point. We conducted additional experiments with much larger $K$ values, with detailed results available in Table 2-4, and Figure 2 of Section 3 in [link](https://tinyurl.com/3m6f4v5n).
As shown in Table 2-4, BLAE is the fastest among optimal algorithms, with runtime comparable to that of suboptimal algorithms. Moreover, Figure 2 demonstrates that BLAE consistently achieves significantly lower regret and maintains stable performance even as $K$ becomes very large, confirming the scalability of our approach in large, finite action spaces.
Regarding infinite (continuous) action spaces, we clarify that our theoretical guarantees (and many others) and algorithmic design assume a finite action set. Thus, extending the current method to a continuous action domain is outside the scope of this paper.
To our knowledge, prior studies have not provided experimental evaluations for very large values of $K$ (e.g., $K = 1000, 2000, 5000$). Thus, we strongly believe that our extensive experiments sufficiently demonstrate that our algorithm offers superior performance and robustness, positioning it as the most effective solution for large-$K$ regimes. We would be more than happy to perform further experiments if needed.
---
### **Experimental Performance of $E^4$**
We thank the reviewer for noting the discrepancy in $E^4$'s results. While Ren et al. (2024) theoretically established a sub-linear regret bound, our experiments exhibited (near-)linear regret behavior.
We emphasize that we used the exact implementation officially provided by Ren et al. (2024).
We identified two primary reasons for this empirical performance discrepancy:
First, $E^4$ occasionally eliminates the optimal arm prematurely. Table 1 of Section 2 in [link](https://tinyurl.com/3m6f4v5n) shows how frequently false elimination of the optimal arm occurs over 20 independent experimental runs.
Second, the batch size in $E^4$ is highly sensitive to hyperparameter $\gamma$. In particular, the third batch size scales as $O((\log T)^{1+\gamma})$. When choosing a relatively large value of $\gamma$ (e.g., $\gamma = 10$, as in the Ren et al. (2024) setup), the third batch size often exhausts the entire horizon, leaving practically no time to effectively exploit information learned in the third batch.
Consequently, the algorithm relies only on information from the first two, relatively short batches. If suboptimal arm identification occurs during these early batches (as frequently observed), this leads directly to linear regret. In contrast, BLAE's $\ell$-th batch size has order $O\left(T^{\frac{2^{\ell}-1}{2^{\ell}}}\right)$, avoiding this issue.
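As a rough numerical illustration of this contrast (constants omitted; the helper names below are ours, not from either paper), one can compare the two batch-length orders at a horizon of $T = 10^6$:

```python
import math

def blae_batch_order(T, ell):
    """Order of BLAE's ell-th batch length: T^((2^ell - 1) / 2^ell)."""
    return T ** ((2 ** ell - 1) / 2 ** ell)

def e4_third_batch_order(T, gamma):
    """Order of E^4's third batch length: (log T)^(1 + gamma)."""
    return math.log(T) ** (1 + gamma)

T = 10 ** 6
# With gamma = 10 as in the Ren et al. (2024) setup, the third batch alone
# exceeds the whole horizon, so later batches effectively never run.
print(e4_third_batch_order(T, 10) > T)  # True
# BLAE's schedule grows geometrically, but every batch stays within the horizon.
print([round(blae_batch_order(T, ell)) for ell in range(1, 5)])
```

Under this back-of-the-envelope comparison, $(\log T)^{11} \approx 3.5 \times 10^{12}$ dwarfs $T = 10^6$, while BLAE's batch lengths all remain below $T$.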
We conducted additional experiments by carefully tuning the hyperparameter $\gamma$ for $E^4$ from 10 down to 0.99, to their advantage. The results of these additional experiments, presented in Figure 3 of Section 4 in [link](https://tinyurl.com/3m6f4v5n), confirm that careful tuning of $\gamma$ leads to sub-linear regret behavior. However, we also observe that reducing $\gamma$ for the sake of regret performance **significantly increases the number of batches**.
Hence, there is a clear tradeoff with regard to the control of $\gamma$ in $E^4$.
Moreover, even after tuning $\gamma$ in $E^4$ for the sake of regret performance, $E^4$ still attains noticeably worse regret than BLAE (Figure 3 in [link](https://tinyurl.com/3m6f4v5n)), and this tuning comes at the cost of increased batch complexity. In short, even with careful tuning, $E^4$'s overall numerical performance in both regret and batch complexity remains clearly inferior to BLAE's.
---
### **Computational Complexity Analysis and Scalability**
We thank the reviewer for this insightful question. To the best of our knowledge, prior works on batched linear bandits have not provided theoretical complexity analyses: exact complexity bounds are difficult to derive and compare because each algorithm relies on different optimization subroutines. Any derived bounds would therefore likely be overly crude and a poor proxy for true computational performance in practice.
To evaluate computational cost, we provided empirical runtime experiments in Appendix E and conducted additional extensive evaluations (see Table 2-4 of Section 3 in [link](https://tinyurl.com/3m6f4v5n)).
Our empirical results address scalability when $K$ is very large. Even at large $K$, our experiments show that our method significantly improves computational efficiency compared to existing optimal algorithms. | Summary: The paper tackles the well-studied batched linear bandits problem and provided tight regret bounds in the small and large arms (K) closing the gap in the upper and lower bounds. Experiments show that they perform well on simulated datasets over the existing algorithms.
Overall: The paper is clearly laid out with both the theoretical contributions and the experiments showcasing the benefits. In particular, care has been taken to highlight the issues with existing approaches and how the current approach solves them.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, lemma 1, 2, 3 and Theorem 1
Experimental Designs Or Analyses: Yes, section 7.1
Supplementary Material: I skimmed over the material
Relation To Broader Scientific Literature: Summarized in Table 1, pretty comprehensive.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: Pros:
(i) The provided regret bounds are tight and match the established lower bounds for the problem. Both the number of communication rounds, $O(\log\log T)$, and the regret bounds, $O(d\sqrt{T})$ and $O(\sqrt{dT\log K})$, are optimal.
(ii) The main technical challenge addressed is the refined analysis of the batched least squares regression problem and the lack of independence across batches.
(iii) The experiments show that the regret is much better for the BLAE algorithm, not only in terms of the mean but also the variance, in sharp contrast with E4. This is further explained by E4's elimination of the optimal arm.
Cons:
(a) It would have been interesting to show results in the small-$K$ regime as well, and to verify that Ren et al. is suboptimal only in the large-$K$ regime.
Other Comments Or Suggestions: NA
Questions For Authors: (1) Are there experiments showing that Ren et al. is competitive with the BLAE algorithm in the small-$K$ setting? The explanation that the best arms get eliminated (Ren et al.) could be further explored to show whether this happens more often as a function of $K$.
(2) What are the other settings where the refined analysis could be of interest as alluded to in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for taking the time to review our paper and for providing thoughtful and valuable feedback. We greatly appreciate your recognition of our work and your constructive comments. Below, we address each of your comments and questions in detail:
---
### **Experimental Results in the Small-$K$ Regime**
We thank the reviewer for this suggestion. We conducted experiments in the small-$K$ regime with $K = 2, 4, 8, 16$, and the detailed results are in Figure 1 of Section 1 in [this link](https://tinyurl.com/3m6f4v5n).
As shown in Figure 1, BLAE consistently outperforms existing methods in the small‑$K$ setting. While $E^4$ (Ren et al., 2024) performs competitively for $K=2$, this trivial case is straightforward for any algorithm. Therefore, apart from this trivial case, our results confirm that BLAE remains superior across all small-$K$ regimes considered.
---
### **False Elimination of Optimal Arms in $E^4$ (Ren et al., 2024) as a Function of $K$**
We appreciate the reviewer’s insightful question. To investigate whether the frequency of mistakenly eliminating the optimal arm in $E^4$ (Ren et al., 2024) increases with $K$, we conducted experiments across a range of $K$ values—from small to large regimes. Each experiment was repeated 20 times, and we tracked the rate at which the optimal arm was falsely eliminated. Detailed results can be found in Table 1 of Section 2 in [this link](https://tinyurl.com/3m6f4v5n).
The data in Table 1 clearly indicate an increasing trend in the elimination rate as $K$ grows. This trend helps explain why $E^4$ (Ren et al., 2024) performs worse than other batched linear bandit algorithms, particularly in large-$K$ regimes.
---
### **Potential Applications of Refined Analysis in Other Settings**
We appreciate the reviewer raising this valuable point. Our refined analysis indeed applies naturally to broader settings, including the **pure exploration** scenario. In pure exploration, the primary goal is to accurately identify optimal arms within a given budget, without explicitly focusing on cumulative regret. Our approach, which combines a regularized G-optimal design with an arm elimination strategy, efficiently discards suboptimal arms with high probability, significantly improving the accuracy and efficiency of exploration.
Furthermore, since **best arm identification** is a widely-studied and applied setting—where exactly one optimal arm must be correctly identified—our refined analytical methods directly apply here as well. Specifically, our algorithm’s ability to consistently maintain the optimal arm with high probability demonstrates its practical utility for reliable best-arm selection.
Thus, our theoretical refinements meaningfully contribute both to general pure exploration and specifically to the best arm identification problem.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed rebuttal comments to mine and the other reviewers questions. I will keep my score for now based on all the responses. | null | null | null | null | null | null |
TabNAT: A Continuous-Discrete Joint Generative Framework for Tabular Data | Accept (poster) | Summary: This paper proposes a model named TabNAT for tabular data synthesis (and imputation). The core idea is to use a diffusion model to generate all columns (of a row) with continuous values together, and a transformer to generate the categorical columns. Experimental results on multiple benchmark datasets show the effectiveness of the proposed model.
Claims And Evidence: Yes. The claimed advantages of the proposed model in terms of statistical fidelity, data utility, and privacy protection are shown with experimental results on 10 benchmark datasets.
Methods And Evaluation Criteria: The proposed model is intuitive given the setup of the problem.
The chosen evaluation metrics have been used in the literature and seem suitable for the claims.
Theoretical Claims: There are no theorem of proofs in the paper.
Experimental Designs Or Analyses: An issue with the experiments is the choice of datasets. While the proposed model is quite relevant to DP-TBART/Tab-MT, many of the datasets used in the DP-TBART/Tab-MT papers are not used in the experiments of this paper. It is unclear why this has been the case.
For the other closely related work TabSyn, the datasets used in the TabSyn paper are used in this one which is good for comparison. However, the proposed model TabNAT is often outperformed by (or very close to) TabSyn over (some of) these datasets, as shown in Tables 8 to 14.
Instead of claiming an overall better model, perhaps the paper could focus on the datasets where TabNAT shows better performance and analyzes why TabNAT achieved strong results on those datasets.
Supplementary Material: Yes. I have gone through the appendix.
Relation To Broader Scientific Literature: This paper builds upon the idea of tabular data generation models using transfomer (auto-regressive; DP-TBART/Tab-MT) and diffusion models (TabDDPM/TabSyn). Instead of using either model, it combines the two to achieve a different design while showing that it is also as effective.
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: Overall this paper proposes a new heuristic structure of tabular data synthesis model based on diffusion models and transformer. Both the technical novelty and depth are somewhat limited (there is no theoretical analysis on the effectiveness of the model design). On the other hand, the proposed model has shown strong performance on some of the experimental datasets. It would make a more interesting paper if the strong performance could be supported by a theoretical analysis.
TabNAT can use a fixed set of hyperparameters for all datasets is an advantage.
=========================
Update after rebuttal: I'm happy with the additional experimental results and am raising my score from "Weak reject" to "Weak accept".
Other Comments Or Suggestions: Typo: "After TabNAT is well-trained" => "After TabNAT is well trained"; "TabNAT ’s training speed" (extra whitespace)
Questions For Authors: The author(s) may want to comment on the choice of experimental datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback on our manuscript. We appreciate the time and effort invested in providing these valuable comments, which will help improve the quality of our paper. Below, we address the points raised:
### Choice of Experimental Datasets
The reviewer raised concerns about our dataset selection, particularly noting differences from those used in DP-TBART/Tab-MT papers. We would like to clarify that our experimental design primarily followed TabSyn's pipeline, as this was our initial baseline for comparison. Accordingly, we incorporated the six heterogeneous datasets used in the TabSyn paper to enable direct performance benchmarking.
To demonstrate TabNAT's versatility across different data types, we deliberately expanded our evaluation by adding two datasets containing only continuous columns and two datasets with only discrete columns. This comprehensive approach allows us to assess TabNAT's generalizability across diverse tabular data structures.
We acknowledge the reviewer's valid point regarding datasets used in TabMT and DP-Tbart. We have conducted additional experiments using five additional datasets used in these papers, with results presented in rebuttal.pdf in the anonymous link: https://anonymous.4open.science/r/ICML-TabNAT. These supplementary experiments further validate TabNAT's effectiveness across a broader range of benchmark datasets.
### Performance Comparison with TabSyn
We appreciate the reviewer's insightful observation regarding TabNAT's performance relative to TabSyn across different datasets. Our detailed analysis reveals several important patterns in performance differences:
First, TabNAT consistently outperforms TabSyn on both continuous-only and discrete-only datasets. This advantage stems from TabNAT's architecture, which employs specialized generation models optimally suited for each data type. Unlike TabSyn's approach of using a VAE encoder to create a latent space, TabNAT processes different feature types more directly and naturally, minimizing information loss that typically occurs during VAE encoding.
For mixed-type datasets, our analysis reveals that TabNAT demonstrates clear advantages when the proportion of discrete features is higher, as evidenced by superior performance on the Adult and Beijing datasets. This performance differential can be attributed to TabNAT's transformer-based categorical generation component, which better captures the complex dependencies between categorical variables through its self-attention mechanism. Additionally, TabNAT's architecture enables more effective modeling of interactions between continuous and discrete features, particularly when discrete features play a dominant role in the underlying data distribution.
Importantly, even in datasets where TabNAT and TabSyn demonstrate comparable statistical performance, TabNAT offers significant practical advantages that TabSyn cannot match. Specifically, TabSyn's VAE-based architecture fundamentally limits its ability to perform flexible conditional generation. In contrast, TabNAT's arbitrary-order autoregressive generation capability supports any form of conditional generation, including missing data imputation. This flexibility stems from TabNAT's bidirectional masking strategy and specialized components for different data types, allowing users to specify any subset of features as conditions while generating the remaining features—a crucial capability for many real-world applications.
### Theoretical Analysis
We thank the reviewer for suggesting a theoretical analysis to support TabNAT's empirical performance. Our approach is grounded in fundamental probability theory, specifically the decomposition of joint distributions as expressed in Equations (1) and (2). TabNAT's key theoretical contribution lies in transforming the tabular data generation problem into a series of conditional distribution modeling tasks through our bidirectional Masked Transformer architecture.
This decomposition allows us to leverage the most appropriate generative modeling techniques for different variable types: diffusion models for continuous variables and autoregressive models for discrete variables. Both approaches have strong theoretical foundations in distribution modeling. Diffusion models offer provable convergence to the target distribution through a gradual denoising process, while autoregressive models directly maximize the exact likelihood of discrete variables through the chain-rule factorization of the joint distribution.
### Minor Corrections
We have corrected the noted typographical errors:
- "After TabNAT is well-trained" → "After TabNAT is well trained"
- Removed the extra whitespace in "TabNAT 's training speed"
Additionally, we have thoroughly reviewed the manuscript to ensure consistency in terminology and formatting throughout. | Summary: The paper proposes a Non-Autoregressive transformer-based generative model for tabular data (TabNAT). It can handle both continuous and discrete columns. It uses a (non-causal) transformer to encode the input and masks to perform any-order training.
The overall modeling paradigm is quite similar to TabDDPM (Kotelnikov et al. (2023)) and TabMT (Gulati et al. (2023)), with the key distinctions being how the continuous values are modeled (using continuous diffusion) and the use of the transformer encoder.
The proposed approach shows promising results for the three standard evaluation criteria: statistical fidelity, data utility, and privacy protection. Ablations are also performed to demonstrate the impact of using the diffusion model for continuous values and the impact of any-order training.
Claims And Evidence: While the claims of superior statistical fidelity, utility, and privacy preservation are well supported. The imputation experiments look a bit weak. Please see the "Questions section" for the details.
Methods And Evaluation Criteria: The evaluation criteria used in the paper follow the established metrics in the literature.
Theoretical Claims: NA
Experimental Designs Or Analyses: Please see the "Questions section".
Supplementary Material: I just reviewed the definitions of all the metrics used in the evaluation (Appendix C).
Relation To Broader Scientific Literature: The paper generally follows a line of works on neural generative models for tabular data (Assefa et al., 2021; Zheng &
Charoenphakdee, 2022; Hernandez et al., 2022), and is most closely related to the most recent works that use discrete and continuous diffusion as well as masking based training (Castellon et al., 2023; Gulati & Roysdon, 2023; Kotelnikov et al., 2023; Kim et al.,
2023; Lee et al., 2023; Zhang et al., 2024b). The overall modeling paradigm is quite similar to TabDDPM (Kotelnikov et al. (2023)) and TabMT (Gulati et al. (2023)), with the key distinctions being how the continuous values are modeled (using continuous diffusion) and the use of the transformer encoder.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper is well written and easy to follow, especially the model description and Figure 3. The reported results also look promising, except for the imputation task (see the questions section below).
Other Comments Or Suggestions: I don't know if Figures 1 and 2 add much value to the paper. In fact, looking at these figures before Figure 3 caused some confusion for me. You might consider moving these figures to the appendix or placing them after Figure 3, which sets up the context much better.
Questions For Authors: 1. Why do you need position-specific mask embedding when separate position embeddings are applied?
2. **MLE task**: What is the performance of the classifiers and regressors trained on the original data? Can you describe, quantitatively, how many synthetic samples were used to get the results reported in the last two columns of Table 1?
3. What is the exact definition of the DCR metric? I could not find an expression for it in the paper or in the reference provided (Zhang et al. (2024b)).
4. Why is the missing data imputation task set to predict missing values in the training set? Previous works (Du et al. (2024), Jarrett et al. (2022)) seem to use the standard held-out set for imputation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful assessment of our manuscript. We appreciate the recognition of our paper's strengths, including its clear writing, promising results, and well-supported claims regarding statistical fidelity, utility, and privacy preservation. Below, we address each of the reviewer's questions and concerns.
### Response to Comments on Figures
We thank the reviewer for the suggestion regarding Figures 1 and 2. We agree that Figure 3 provides better context and will consider restructuring the presentation of figures in the revised manuscript to improve clarity.
### Position-specific mask embedding and position embeddings
Intuitively, the position-specific mask embedding serves a different purpose than the position embeddings. While position embeddings encode the absolute position of each token in the sequence, the position-specific mask embedding is designed to capture information about which specific attributes are masked during the any-order training process. This allows the model to better understand the relationship between masked and unmasked attributes in different positions, enhancing its ability to handle partial information scenarios. In fact, we found that the empirical performance difference between the two designs is not substantial (with position-specific mask embedding performing slightly better). Considering that the additional parameters introduced by this design are negligible, we adopted this approach in our final model.
### MLE task performance clarification
Thank you for raising this important point. We apologize for not including these results in Table 1. The complete results for classifiers and regressors trained on the original data can be found in the Appendix, page 20, Table 14. We commit to revising Table 1 to include this information in the main text for better clarity.
Regarding the number of synthetic samples used, the results reported in the last two columns of Table 1 were obtained using the same number of synthetic examples as in the training set, which varies according to the size of each dataset. This 1:1 ratio ensures a fair comparison across different methods. This information is provided in Appendix C.7.5.
### DCR metric definition
We sincerely apologize for only providing the experimental setup for DCR without explaining how this metric is calculated.
DCR (Distance to Closest Record) is a commonly used metric for evaluating the privacy protection performance of synthetic data. For each synthetic example, we calculate its minimum distance to any example in the training set.
Formally, let's assume we have $N$ training examples {$r_1$, $r_2$, ..., $r_N$} and $n$ synthetic examples {$s_1$, $s_2$, ..., $s_n$}. For each synthetic example $s_i$, we compute:
$$\text{DCR}^{train}(s_i) = \min_{j \in \{1,2,...,N\}} d(s_i, r_j)$$
where $d(\cdot,\cdot)$ represents the normalized Euclidean distance between two records after appropriate feature normalization.
Similarly, for $N$ holdout examples {$h_1$, $h_2$, ..., $h_N$}, we can compute their DCRs:
$$\text{DCR}^{holdout}(s_i) = \min_{j \in \{1,2,...,N\}} d(s_i, h_j)$$
Since the training set and holdout set are i.i.d. samples from the same distribution, if the synthetic samples learn the underlying distribution from the training set correctly, $\text{DCR}^{train}$ and $\text{DCR}^{holdout}$ should be similar. Otherwise, if the synthetic examples are copied from the training data, the $\text{DCR}^{train}$ of each synthetic example $s_i$ should be close to zero.
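The DCR computation above can be sketched in a few lines of NumPy (a minimal illustration, not our actual evaluation code; the `dcr` helper name and the assumption that features are already normalized are ours):

```python
import numpy as np

def dcr(synthetic, reference):
    """For each synthetic row, the minimum Euclidean distance to any row
    of the reference set (training or holdout), i.e. min_j d(s_i, r_j)."""
    # Broadcast to pairwise differences: (n, 1, d) - (1, N, d) -> (n, N, d)
    diffs = synthetic[:, None, :] - reference[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

# If the generator merely copies training rows, DCR^train collapses to zero.
train = np.array([[0.0, 0.0], [1.0, 1.0]])
synth = np.array([[0.0, 0.0], [0.5, 0.5]])
print(dcr(synth, train))  # first entry is 0.0 (exact copy of a training row)
```

In practice one compares the distributions of $\text{DCR}^{train}$ and $\text{DCR}^{holdout}$ computed this way; similar distributions indicate the generator learned the underlying distribution rather than memorizing training records.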
### Setting of missing data imputation
Thank you for raising this point about our imputation task setup. We would like to clarify that in real-world scenarios, we often face in-sample missing data imputation tasks, such as during the data preprocessing phase of large data science projects.
Both in-sample and out-of-sample imputation approaches have been explored in previous research [1,2]. Methods like HyperImpute adopt out-of-sample evaluation primarily because, as a hybrid approach, it requires supervised labels for model selection. Our method does not have this requirement. We also note that works like Remasker seem to adopt the in-sample setting as well.
That said, we agree that out-of-sample imputation is equally important. Our model is naturally suitable for out-of-sample settings, and we have conducted additional experiments to demonstrate its effectiveness in this context. We have included these results in rebuttal.pdf at the anonymous link https://anonymous.4open.science/r/ICML-TabNAT to provide a more comprehensive evaluation of our method's imputation capabilities across both settings.
Claims And Evidence: The paper provides a detailed analysis of diffusion modeling and autoregressive modeling, employing a diffusion model with a Diffusion Loss for continuous columns and next-token prediction with a Cross-Entropy Loss for discrete columns. While this choice appears reasonable at first glance, the continuous data in a table consists of individual continuous values, whose dependencies are not as strong as those in images, where contextual tokens exhibit high continuity. From this perspective, diffusion modeling may not necessarily be more suitable than autoregressive prediction for modeling continuous columns in tables. Therefore, the paper's proposition of "modeling continuous columns in tables using the diffusion process" warrants further scrutiny.
Methods And Evaluation Criteria: The paper evaluates the effectiveness of the proposed model on multiple tabular datasets, demonstrating that the proposed methods outperform other approaches. For the task of Synthetic Tabular Data Generation, the results are assessed using metrics such as Statistical Fidelity and Machine Learning Efficiency (MLE). In the Missing Data Imputation task, evaluation is conducted using Average MAE for continuous features and Average Accuracy for discrete features. The chosen evaluation metrics are appropriate.
Theoretical Claims: The paper's approach to modeling and model utilization is well-founded. Continuous data is normalized to the [0,1] range and encoded using an MLP-based encoder, while discrete data is label-encoded and embedded. To account for the column invariance of tabular data, position encoding is applied at the column level. A Bi-directional Transformer is employed to simultaneously predict both continuous and discrete data, utilizing Diffusion Loss and Cross Entropy Loss, respectively.
Experimental Designs Or Analyses: • For the task of Synthetic Tabular Data Generation, the model is evaluated on ten datasets, including two with only continuous features, two with only discrete features, and six heterogeneous datasets. It is compared against six major categories of methods, including VAE, GAN, LLM, and Diffusion-based approaches. The results are assessed using metrics such as Statistical Fidelity and Machine Learning Efficiency (MLE), which are appropriate for this task. Additionally, corresponding ablation studies are conducted.
• For the Missing Data Imputation task, experiments are performed on five heterogeneous datasets. However, no ablation study is conducted for this task, and the compared baselines appear to be relatively weak.
Supplementary Material: The Supplementary Material includes pseudocode for key components, detailed experimental setups, descriptions of the model architectures, evaluation metrics, and additional experimental results.
Relation To Broader Scientific Literature: • This paper explores modeling tabular data by applying the diffusion process to continuous columns and using masked generative modeling for discrete columns. However, the applicability of diffusion loss to continuous tabular data requires further in-depth analysis.
• The paper provides an overview of various existing approaches for tabular data generation, including VAE, GAN, and autoregressive modeling. However, its analysis of diffusion loss remains insufficient. In image modeling, diffusion loss involves multiple iterative predictions of the overall image distribution, effectively capturing correlations between tokens. In contrast, discrete tabular data differs from images in that image data exhibits strong global continuity beyond individual tokens. In comparison, individual values in tabular data do not exhibit the same level of continuity. Therefore, the paper should further analyze and discuss related work on diffusion models in this context.
Essential References Not Discussed: • The paper lacks sufficient citations on Diffusion Loss, including its application to autoregressive modeling for continuous data.[Tian K et al.'24], [Li T et al.'24]
• Tian K, Jiang Y, Yuan Z, et al. Visual autoregressive modeling: Scalable image generation via next-scale prediction[J]. Advances in neural information processing systems, 2024, 37: 84839-84865.
• Li T, Tian Y, Li H, et al. Autoregressive image generation without vector quantization[J]. Advances in Neural Information Processing Systems, 2024, 37: 56424-56445.
Other Strengths And Weaknesses: Strengths:
• The method and model architecture are clearly presented.
• The analysis of diffusion modeling and autoregressive modeling is reasonable.
• A substantial number of experiments have been conducted, yielding promising results.
Weaknesses:
• The differences between continuous columns in tabular data and images are not thoroughly analyzed; diffusion loss is simply applied to continuous columns without deeper justification.
• The second experiment lacks ablation studies, and the chosen baselines appear to be relatively weak.
Other Comments Or Suggestions: The overall work is complete, and the method is explained clearly.
Questions For Authors: Although both continuous columns in tabular data and images consist of continuous values, their overall continuity differs, which may warrant a more in-depth analysis.
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough review and constructive feedback on our manuscript. We have addressed the raised concerns as follows:
### The suitability of diffusion modeling for continuous columns in tabular data
We agree that this is an important consideration that deserves clarification. Our choice of diffusion modeling for continuous columns is supported by empirical evidence from recent SOTA work:
- TabDDPM was the first model to successfully apply diffusion models to tabular data synthesis, using traditional DDPM for continuous columns and discrete diffusion with multinomial distribution for discrete columns. Its strong performance validated the effectiveness of diffusion models for capturing the complex distributions in continuous (multi-column) tabular data.
- TabSyn further advanced this approach by mapping both discrete and continuous columns to a continuous embedding space, enabling the application of standard diffusion models to the entire table. TabSyn has achieved state-of-the-art results in tabular data generation, demonstrating the exceptional capability of diffusion models to capture the distributions of continuous tabular data.
- Our empirical experiments align with these findings. It's important to note that direct autoregressive generative modeling of continuous values is not feasible without discretization. Baseline methods such as Tab-MT and DP-TBART both employ this discretization-plus-autoregressive approach, and our experiments demonstrate that they consistently underperform compared to our method across all metrics, particularly in Statistical Fidelity scores.
### On the differences between continuous columns in tabular data and images
We agree with the reviewer that a deeper analysis of the differences between continuous values in tabular data and images would strengthen our paper. These differences are fundamental to our methodological choices:
- Tabular data columns are fundamentally more independent than image pixels, with each column typically representing a distinct feature with its own semantic meaning. In contrast, image data pixels have strong spatial continuity, with neighboring pixels highly correlated and collectively representing coherent visual elements.
- This structural difference necessitates different denoising neural network architectures. For images, CNN-based UNet models have become the standard architecture for diffusion models because they effectively leverage the local spatial correlations through convolutional operations and capture multi-scale features through their encoder-decoder structure with skip connections.
- For tabular data, where columns have distinct meanings without inherent spatial relationships, traditional MLP architectures are more appropriate as denoising neural networks. In our approach, we use an MLP for denoising continuous data, while the Transformer is employed solely to generate conditional vectors. This design choice aligns with other successful tabular diffusion models, such as TabDDPM and TabSyn, which also utilize MLPs as their denoising neural networks.
### On the missing data imputation experiment
We would like to clarify that the baselines we compared against (Remasker, HyperImpute, and GRAPE) are indeed SOTA methods for tabular missing data imputation. These same methods were used as the top-performing baselines in a recent ICLR 2025 Spotlight paper on tabular data imputation [1]. Our selection of baselines is therefore aligned with the current standards in the field. We will also include the new paper as an additional baseline in our revised manuscript to ensure our comparisons remain comprehensive and up-to-date.
Due to page limitations, we were unable to include ablation studies for the missing data imputation task in the original submission. We have now conducted these additional experiments, and the results are presented in Figure 15 and Figure 16 in rebuttal.pdf at the anonymous link https://anonymous.4open.science/r/ICML-TabNAT. The findings are summarized as follows:
- A simple MSE loss is as effective as the Diffusion Loss in missing data imputation, since we care more about the expectation of the missing entry than about its full distribution. The imputation of discrete columns is sensitive to the order, and the random-order sampling used in our paper is beneficial.
- The proposed TabNAT is very robust to the model depth and width in the missing data imputation task.
These results further validate our design choices and demonstrate that each component of our model contributes significantly to its superior performance in the missing data imputation task.
### Missing references
We thank the reviewer for pointing out the related works. We will cite these works and add a discussion about the application of diffusion losses in autoregressive image modeling.
References:
[1] Zhang et al. DiffPuter: Empowering Diffusion Models for Missing Data Imputation. In ICLR 2025. | Summary: The paper tackles the task of generating tabular data. It claims that the application of current autoregressive models for generating tabular data is limited due to two challenges:
1) tabular data contains heterogeneous types, whereas autoregressive next-token (distribution) prediction is designed for discrete data
2) tabular data is column permutation-invariant, requiring flexible generation orders.
In order to alleviate these issues, it proposes using a transformer which encodes the continuous data in each row (each row is a single data sample) into a vector at position 1 of the sequence, and the rest of the data (discrete), in that row is seen as the rest of the sequence, where each entry (corresponding to one column) is represented as a vector with learnable entries. As such, each row can be seen as a sequence of (L+1) vectors, where each vector has dimension $d$. Uniform random masking is applied before feeding the sequence to the transformer, and the outputs are then used to predict the masked (missing) discrete data, and also used to condition to generate the continuous data.
The proposed method is then compared to other methods on multiple datasets and assessed using multiple metrics. In addition, ablation studies are performed to see how the two main claimed challenges above influence the performance.
Claims And Evidence: The paper does not make any theoretical claims. Regarding performance claims, based on the provided results, the proposed method appears to outperform the other methods in most cases. However, it should be noted that my main areas of expertise are diffusion models and autoregressive models (mostly applied to language); therefore, I cannot fully evaluate the significance of these results. My review will mostly focus on the merits of the design of modelling and the application of diffusion and other discrete modelling approaches, rather than on performance.
Methods And Evaluation Criteria: It should be noted that my main areas of expertise are diffusion models and autoregressive models (mostly applied to language); therefore, I cannot fully evaluate the significance of these results. My review will mostly focus on the merits of the design of modelling and the application of diffusion and other discrete modelling approaches, rather than on performance.
That being said, the criteria appear to be sound and cover multiple aspects.
Theoretical Claims: The paper makes no novel theoretical claims.
Experimental Designs Or Analyses: The experiment designs appear to be sound and cover multiple aspects.
Supplementary Material: I reviewed Appendix A.
Relation To Broader Scientific Literature: I am not familiar with other tabular data generation methods.
The paper makes use of continuous diffusion models and a BERT-like masking architecture. However, it does not include categorical diffusion models (broadly known as discrete diffusion models).
Essential References Not Discussed: The main essential references missing, in my opinion, are discrete diffusion model papers (for example, [1,2,3,4,5]). The authors use a masking schedule similar to BERT, but do not attempt to model discrete data with discrete diffusion, which has been shown to perform well at modelling discrete data (such as language). In the context of this paper, it could be even more useful, as the data lacks the inherent left-to-right bias. The choice not to include these methods, on the grounds that they do not explicitly model dependencies between dimensions (I believe this means analytically, via the pdf), is not properly supported, considering that the goal of the paper is data generation, not density/probability modelling.
[1] Structured Denoising Diffusion Models in Discrete State-Spaces, Austin et al, 2021.
[2] A Continuous Time Framework for Discrete Denoising Models, Campbell et al 2022.
[3] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution, Lou et al 2024.
[4] Simple and Effective Masked Diffusion Language Models, Sahoo et al, 2024
[5] Discrete Flow Matching, Gat et al, 2024.
Other Strengths And Weaknesses: **Strengths**
1. The claims made in this paper are quite interesting, in particular the attempt not to model discrete tabular data with autoregressive models, but to utilize full attention instead. Results show the latter is superior.
2. The section which explains the method is very clearly written and illustrated.
3. Based on the provided results, the proposed method improves upon the existing ones in the vast majority of cases.
**Main weaknesses**
1. The main weakness of this paper is the modelling of continuous data. The paper suggests, in Equations (1) and (2), modelling the continuous data separately and then learning the discrete data conditioned on the continuous data. I believe this is the right approach, and the inference explanation in lines 270-274 reflects it. On the other hand, during the actual training, the model does not learn the whole continuous distribution, but a family of conditional ones, conditioned on a masked sequence. As such, a natural order (a rule for choosing unmasking positions) is missing, which discrete diffusion mitigates. The main missing part of this paper is a comparison of this approach against the following:
Encode the perturbed continuous dimensions into a higher-dimensional vector and then split it into multiple vectors, which are placed at positions 1 to K of the sequence. If L is the number of discrete columns and C the number of continuous columns, then K/L should be roughly C/L. The discrete tokens come after, so the sequence so far has length K+L. Finally, the last K positions are reserved for the continuous data so that the discrete tokens can attend to it, giving a (K, L, K) sequence. That is, the first K tokens attend only to themselves: (K,·,·) attends to (K,·,·). The L tokens (·,L,·) attend to (·,L,K), and finally (·,·,K) attends to (·,·,K). The last K tokens do not produce an output at the end; they are there only so that the L tokens can condition on them. In this way the transformer could model both the continuous and the (discrete|continuous) distributions simultaneously. This would fit the objective proposed in Equations (1) and (2) of the paper. Otherwise, the section containing these equations should be modified to more properly set the tone for the proposed method.
2. The work does not include advances in modelling discrete distributions, which have been developing since at least 2021. More recently, discrete diffusion models with absorbing (masked) dynamics have shown very competitive results in text modelling. It is likely that they would perform very well on data without a left-to-right bias. In addition, they offer a natural unmasking schedule. Applying this approach to the (·,L,·) discrete tokens is straightforward.
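To illustrate the attention pattern sketched in the first main weakness above, the three attention rules can be written as a boolean mask, where `allowed[i][j]` means position `i` may attend to position `j`. The function and its arguments are a hypothetical sketch of the reviewer's proposal, not something from the paper:

```python
def build_attention_mask(K, L):
    """Boolean attention mask for a (K, L, K) sequence:
    - first K tokens (encoded continuous data) attend only to themselves,
    - L discrete tokens attend to themselves and the trailing K tokens,
    - trailing K conditioning tokens attend only to themselves."""
    n = 2 * K + L
    allowed = [[False] * n for _ in range(n)]
    for i in range(K):                # (K,.,.) attends to (K,.,.)
        for j in range(K):
            allowed[i][j] = True
    for i in range(K, K + L):         # (.,L,.) attends to (.,L,K)
        for j in range(K, n):
            allowed[i][j] = True
    for i in range(K + L, n):         # (.,.,K) attends to (.,.,K)
        for j in range(K + L, n):
            allowed[i][j] = True
    return allowed

mask = build_attention_mask(2, 3)  # K=2 continuous slots, L=3 discrete columns
```

Outputs would then be read only from the first K+L positions, since the trailing K tokens exist purely as conditioning context for the discrete tokens.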
**Minor Weaknesses**
1. There are some small issues with notation and grammar:
a. Line 110, right column, I believe it should be $\mathbb{R}^{D_{cts}}$.
b. Line 111, right column, I believe it should be $\in \mathbb{N}$
c. Line 200, left column, I believe it should be $\in \\{0,...,C_i-1\\}$
d. Line 303, left column. *For each model*.
2. Figures 1 and 2 are unclear. The rectangles have varying sizes and no labeling, which makes it difficult to interpret the figures.
3. While not a weakness, the results of comparing MSE against diffusion are not surprising.
Other Comments Or Suggestions: The main 2 weaknesses are simultaneously suggestions, which I believe would improve the paper. Overall, I am on the borderline regarding acceptance. I am looking forward to reading the responses of authors, and am willing to reevalute the paper based on their replies, as well as based on other reviews which could highlight something I might have missed.
Questions For Authors: 1. Why was a stochastic generating approach used instead of the ODE formulation, or flow matching, which could speed up generation?
2. Why are masks encoded differently at each position, when we already add positional encodings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback on our manuscript. We appreciate the time and effort spent reviewing our work, and we believe the suggestions will significantly improve the quality of our paper. Below, we address each point raised by the reviewer.
### Modeling of Continuous Data
Your observation about our approach to modeling continuous data is insightful. We agree with your assessment that our method doesn't directly model the full continuous distribution $p(X)$, but rather learns a conditional distribution $p(X|c)$, where the constant vector $c$ represents the case where all discrete columns are masked.
While this is conceptually different from modeling the complete joint distribution, in practice, the two approaches yield equivalent results for generation purposes. This is because when all discrete columns are masked, the model effectively sees only a generic placeholder with no specific information, forcing it to generate continuous values based solely on the learned marginal distribution of those values. The mask tokens in this case serve merely as positional indicators without conveying any actual discrete data information.
Your suggested approach of encoding perturbed continuous dimensions into vectors at positions 1 to K is elegant and theoretically sound. Following your suggestion, we implemented and tested this approach. Our experiments confirmed your intuition—the results were indeed comparable to our original method in terms of generation quality.
However, we identified a practical limitation: this approach significantly extends the sequence length, approximately tripling the training computational cost. Given the similar performance outcomes but increased resource requirements, we opted for our original approach as a more efficient solution. Nevertheless, we appreciate this valuable suggestion and will discuss this alternative formulation and its trade-offs in our revised paper.
### Discrete Diffusion Models
Thank you for highlighting the recent advances in discrete diffusion models that we overlooked. While we did include basic approaches like multinomial diffusion in our baselines (which showed limited effectiveness), we missed the opportunity to explore more recent developments in this area.
The papers you referenced indeed offer promising directions for modeling discrete data without left-to-right bias. In our revised paper, we'll discuss these advancements and explore how they might complement our approach for mixed-type tabular data generation. We're particularly intrigued by masked diffusion approaches and their potential in heterogenous tabular data generation.
### Choice of Stochastic Generation Approach
We thank the reviewer for this insightful suggestion. Our primary focus was to effectively integrate diffusion models with autoregressive approaches for tabular data generation. Regarding the choice of sampling method, we acknowledge that different approaches offer various trade-offs between quality and efficiency. Following the reviewer's suggestion, we conducted additional experiments with flow matching. While this approach did improve sampling speed (requiring only 20 steps), we observed a slight decrease in generation quality. It's worth noting that our current sampling method requires only 50 sampling steps—significantly fewer than the 1000 steps typically needed in traditional DDPM approaches—thus already providing a good balance between quality and efficiency.
### Position-Specific Masking Encoding
Intuitively, the position-specific mask embedding serves a different purpose than the position embeddings. While position embeddings encode the absolute position of each token in the sequence, the position-specific mask embedding is designed to capture information about which specific attributes are masked during the any-order training process. This allows the model to better understand the relationship between masked and unmasked attributes in different positions, enhancing its ability to handle partial information scenarios. In fact, we found that the empirical performance difference between the two designs is not substantial (with position-specific mask embedding performing slightly better). Considering that the additional parameters introduced by this design are negligible, we adopted this approach in our final model.
### Notation and Figure Clarity
We thank the reviewer for pointing out the notation issues and concerns about figure clarity. We will correct all notation errors and improve the clarity of Figures 1 and 2 in the revised version of our manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the additional information, and for perfoming the requested experiment. While I would have preferred to see the precise architecture and experimental setting you implemented, as well as the results (via an anonymized link), based on the explanation, I now lean accept regarding the submission. This evaluation now matches the official score I gave in the review. | null | null | null | null | null | null |
HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation | Accept (spotlight poster) | Summary: The paper introduces HealthGPT, a medical vision-language model that unifies visual comprehension and generation through heterogeneous knowledge adaptation. Key contributions include H-LoRA, a parameter-efficient fine-tuning method that decouples task-specific knowledge via independent low-rank plugins, a hierarchical visual perception strategy to handle abstract and concrete visual features, and the VL-Health dataset for medical multi-modal tasks. Experiments demonstrate HealthGPT outperforms state-of-the-art models in tasks like medical visual QA, image reconstruction, and modality conversion.
Claims And Evidence: - In the Introduction, the authors claim there are conflicts between comprehension and generation and refer to Figure 2. What is the detailed experimental setting of this figure? Also, as mentioned by the authors, e.g. MetaMorph (Tong et al.) finds these two tasks are mutually beneficial. What do authors mean by "improvements still exhibit diminishing returns, with performance degradation remaining a significant issue" - line 71/72 (right col.)? I did not find corresponding evidence from this paper.
- Other claims are supported by experiments.
Methods And Evaluation Criteria: - There is no explanation of the fusion embedding layer in Figure 3, except a brief mention in line 120. What exactly is the fusion embedding layer? From Figure 3, it is after the text tokenizer and VQ tokenizer, so is it just the regular embedding layer for the conversion of the tokens?
- The evaluations are extensive and demonstrated the effectiveness of the proposed method.
Theoretical Claims: No theoretical claim.
Experimental Designs Or Analyses: - The authors have compared a series of models for medical visual comprehension. Can authors also compare with some stronger, state-of-the-art models such as GPT-4o and Qwen2-VL?
- There are analysis and ablation studies in section 5.3, which are helpful for understanding the proposed method.
Supplementary Material: I have reviewed all supplementary materials.
Relation To Broader Scientific Literature: It is related to the general unified vision-language models (e.g. unified-IO, Janus) but specifically focuses on medical applications.
Essential References Not Discussed: To my knowledge, most essential references are discussed. I found this paper relevant, can authors discuss and compare the performance with it: MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants. https://arxiv.org/abs/2412.12661
Other Strengths And Weaknesses: ### Strengths:
- The proposed method and dataset are meaningful explorations towards unified medical vision-language models. The proposed H-LoRA can efficiently enable the training.
- Experimental results in both comprehension and generation benchmarks indicate the effectiveness of the proposed model.
### Weaknesses:
Please address comments in other sections.
Other Comments Or Suggestions: NA
Questions For Authors: - A pre-trained VQGAN is used for generation. Does the natural image pre-trained VQGAN work well for medical images?
- What is the total training time and GPUs required?
- Will the model weights be open-sourced? It's "coming soon" in the repo.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive and insightful review, which greatly helps to refine the manuscript's quality and clarity. Below are our point-by-point responses.
## 1. Additional Details
We appreciate the reviewer’s insightful questions and welcome the opportunity to further elaborate on the relevant concepts. Below, we provide detailed clarifications.
**1.1 Fig.2 setting**
Regarding the experimental setup in Figure 2, its purpose is to illustrate conflicts that arise in traditional joint modeling methods when comprehension and generation tasks are trained on heterogeneous data. The experiment consists of two stages:
1. In the first stage, lightweight adapters align visual and textual features to build the visual-language pretraining base.
2. In the second stage, while preserving single-task data integrity, we gradually introduce heterogeneous data from the other task (25%/50%/100% generation or comprehension data).
We observe that increasing the amount of heterogeneous task data significantly degrades performance due to task interference. In contrast, our method introduces only 2% additional repeated data (used in the second phase of TLS), **greatly improving the performance of both tasks**.
**1.2 Diminishing returns clarified**
We would like to clarify a possible misunderstanding of the “diminishing returns” description. Our intent was not to undervalue methods like MetaMorph, but to highlight the difficulty of transferring joint training paradigms across tasks with distinct structures and imbalanced data scales.
In the medical domain, comprehension and generation differ significantly in objectives and data availability, making mutual gains through joint training challenging and sometimes prone to negative transfer (see Fig. 2). We will revise the phrasing in the next version to avoid broad descriptions:
> Joint training of comprehension and generation may face challenges such as task interference, data bias, or optimization saturation.
**1.3 Fusion embedding layer**
Additionally, we appreciate the reviewer’s attention to the fusion embedding layer. Our design introduces multimodal tokens into the original vocabulary to integrate discrete indices from the visual encoder (VQGAN). This includes 8,192 discrete tokens (from the VQ codebook) and two special tokens, `<START_IMG>` and `<END_IMG>`, forming the fusion embedding layer to seamlessly integrate visual information into the language model input. We will clarify and define this term in the method section.
## 2. Comparative Experiments
We understand the reviewer’s request to compare our method with more powerful LVLMs for better evaluation. To address this, we add experiments with the enhanced **HealthGPT-XL** version and additional comparisons:
Method|VQA-RAD|SLAKE|PathVQA|MMMU|OMVQA
-|-|-|-|-|-
Janus-Pro|62.9|51.3|48.9|54.1|32.7
Emu3|72.1|57.3|54.1|28.7|39.5
Qwen2-VL|73.3|61.9|62.6|46.0|59.5
GPT-4o|57.4|54.3|66.5|48.7|36.7
MedMax|74.9|86.8|91.6|27.3|95.1
HealthGPT-M3|73.7|74.6|78.7|43.3|68.5
HealthGPT-XL|79.1|85.7|92.4|58.0|77.2
We carefully evaluate the concurrent work MedMax, which performs well, especially in OmniMedVQA. We have updated the experimental comparisons content in the manuscript.
It is important to emphasize that HealthGPT aims to provide a unified, scalable framework adaptable to diverse tasks. It can seamlessly incorporate high-quality datasets like MedMax to boost downstream performance, and its design is **highly complementary to data-driven approaches** such as HuatuoGPT-Vision and MedMax.
## 3. Model Details
**3.1 VQGAN**
We appreciate the reviewer’s focus on the model's resource consumption and component compatibility, which are essential for the method's reproducibility and practicality. Below are our detailed responses:
The VQGAN model, pretrained on OpenImages, is validated on three medical modalities: X-ray, CT, and MRI. The results demonstrate its strong performance in preserving key structural and lesion-region information, with good fidelity and consistency:
Modality|SSIM|PSNR|MSE
-|-|-|-
CT|93.95|36.80|14.17
MRI|87.10|34.45|26.54
X-Ray|86.80|33.97|29.41
**3.2 Training details**
Our method demonstrates strong resource efficiency and reproducibility. In the default configuration, the **M3 model** completes a full training cycle in about **35 hours** using **8 A100 GPUs**, which we consider reasonable given its performance across multiple tasks and the modular design.
**3.3 Code Release Plan**
We recognize the importance of open sourcing for the community and have nearly completed organizing the code, data processing scripts, model weights (M3, L14, and XL versions), and inference workflows. We will release the full codebase publicly to ensure reproducibility and ease of extension.
Thank you again for your recognition and suggestions. We believe the response better highlights HealthGPT’s technical value and practical potential, and we hope it contributes to the development of medical multimodal models.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Most of my concerns are addressed. I raised my score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score. Your valuable suggestions greatly contribute to the quality of our manuscript. Thank you again for your precious time and valuable suggestions! | Summary: This paper introduces a medical vision and generation framework based on LVLM. This pipeline consists of autoregressive generation, hierarchical visual perception, and heterogenous knowledge adaption. The authors provide experiment results on different datasets and compare with other baselines.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no proofs or theoretical claims in this paper.
Experimental Designs Or Analyses: The experiment and analysis are fine.
Supplementary Material: Yes. They provide the codes via a link.
Relation To Broader Scientific Literature: The work has impact in the medical domain.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1, This paper provides a comprehensive study from the method to the results.
2, The writing and presentation help readers to understand and follow.
3, They discuss the main challenges in the medical domain from Line 88 to Line 97.
Weakness:
1, I am concerned about the novelty, especially in the training process. They are using a very common and typical approach including feature alignment, LoRA-based plugin, and fine-tuning.
2, For the key part, which is the heterogenous knowledge adaption, the MOE+LoRA is also a well-used approach.
3, I understand that applying LLMs in the medical domain might not be a traditional approach. With extremely extensive experiments in a new domain plus a lightly adjusted approach, I am concerned about the significance of the contribution and whether it is more like a benchmark work.
Other Comments Or Suggestions: 1, Figure 1 is overloaded and redundant, more distracting than helpful.
2, Many commonly used formulations and equations do not need to be re-phrased and presented in the paper.
Questions For Authors: 1, Could you please provide some insights of the efficiency of the proposed approach?
2, Could you please emphasize which KEY part of your work is ORIGINALLY proposed by you? I would appreciate it if you could answer in a simple and straightforward way. For example, "from Line xxx to Line xxx, I use xxx to solve the challenge of xxx".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review. Your insights have helped us clarify HealthGPT's contributions in unified architecture design, task modeling efficiency, and real-world application potential. Below are our point-by-point responses.
# 1. Contribution Statement
**1.1 Core Contribution**
We appreciate the reviewer’s detailed focus on our method and contributions. It is worth noting that `Reviewers wUUr, ugJa, and B8gX` have recognized the novelty of our work, describing it as `innovative` and `meaningful`. Building on this recognition, we would like to respectfully clarify a potential misunderstanding regarding the core contribution of our paper.
- **Paradigm Innovation.** Our primary innovation lies in **proposing the first unified Med-LVLM paradigm**, which goes beyond alignment or fine-tuning strategies. Specifically:
- At the **global-task level**, we enable the model to learn and handle heterogeneous characteristics and formats across both comprehension and generation.
- At the **local-task level**, we design mechanisms to efficiently learn and transfer shared knowledge across diverse sub-tasks.
- **Method Innovation.** We carefully design H-LoRA to align with our proposed paradigm, and further optimize TLS and HVP to support its integration. Thus, we emphasize that **alignment and fine-tuning are techniques we adopt to support this unified paradigm, not the main novelty itself**.
To further clarify this point, we briefly summarize the key contributions:
1. Lines 139–142 (left): We propose the first unified Med-LVLM to solve the challenge of integrating diverse medical tasks under a single LVLM framework.
2. Lines 74–100(right): We introduce H-LoRA to solve the significant conflict between comprehension and generation tasks, greatly improving both the performance and efficiency over MoE+LoRA-based methods.
3. Lines 104(right)–126(left): We propose HVP and TLS to solve the challenge of heterogeneous knowledge fusion and hierarchical visual feature requirements.
4. Lines 127–136 (left): We introduce VL-Health to solve the lack of high-quality datasets that jointly support unified tasks in the medical domain.
Thus, this work goes beyond benchmarking by establishing a unified Med-LVLM framework that expands the capabilities of current medical models, which is crucial for advancing practical medical solutions.
**1.2 H-LoRA better than MoE+LoRA**
Our results show that while MoE+LoRA improves performance, it violates the PEFT principle of efficiency. In contrast, H-LoRA significantly outperforms MoE+LoRA in both performance and training efficiency, as supported by theoretical analysis (`Reviewer wUUr (Section 2: H-LoRA)`) and experiments (Fig.5, Tab.5, Tab.10). H-LoRA achieves higher average scores (+3.6/+0.38 on comprehension/generation) with only 67% of the training time. We kindly hope these results clarify the necessity and advantages of H-LoRA.
**1.3 Med-LVLM Significance**
We appreciate the reviewer’s thoughtful concern regarding the application of Med-LLMs. As highlighted in prior studies [1,2], developing Med-LLMs is both a **timely scientific challenge and a transformative opportunity for healthcare**. These models can **bridge multimodal medical data, support informed decision-making, and ultimately improve clinical outcomes**.
# 2. Efficiency Analysis
We appreciate the reviewer’s focus on efficiency, vital in resource-limited medical scenarios. To validate efficiency, we compared H-LoRA with MoELoRA under identical settings. H-LoRA outperformed MoELoRA across tasks and reduced training time to 67% of MoELoRA’s:
PEFT Method|LoRA|MoELoRA|HydraLoRA|H-LoRA
-|-|-|-|-
Throughput (TFLOPS)|311.6|204.9|238.3|313.7
Besides, our three-stage learning strategy eliminates unnecessary padding caused by task sequence length mismatches, reducing training FLOPs by 22% (from 6.4e19 to 5.0e19) and lowering memory usage per GPU from 36GB to 30GB, saving 17%.
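As a quick sanity check on the quoted savings (our own arithmetic, derived only from the figures reported in this rebuttal), the percentages follow directly:

```python
# Reported figures: training FLOPs and per-GPU memory before/after the
# three-stage learning strategy removes padding overhead
flops_before, flops_after = 6.4e19, 5.0e19
mem_before, mem_after = 36.0, 30.0  # GB per GPU

# Relative savings
flop_saving = 1 - flops_after / flops_before  # 0.21875 -> ~22%
mem_saving = 1 - mem_after / mem_before       # 0.1667  -> ~17%

print(round(flop_saving * 100), round(mem_saving * 100))  # 22 17
```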
Finally, HealthGPT-M3 completed training on 8 A100 GPUs in 35 hours, significantly fewer GPU hours than most general LVLMs, demonstrating its potential for rapid deployment in medical scenarios.
# 3. Content Refinement
We thank the reviewer for their helpful suggestions. We will simplify Figure 1 by streamlining labels, standardizing colors, and improving the layout to focus on structural logic.
Regarding the formulas, we appreciate the reviewer pointing out redundancy. We have identified and will simplify or merge unnecessary functions and symbols (e.g., in Equations 4 and 5), ensuring the focus remains on our key innovations.
We sincerely thank you for your valuable suggestions. We believe this work addresses critical gaps in medical multimodal LVLMs and supports their deployment in real medical scenarios.
**Reference**
[1] A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics
[2] A Survey of Large Language Models in Medicine: Progress, Application, and Challenge | Summary: The paper introduces HealthGPT, a unified Medical Large Vision-Language Model (Med-LVLM) designed to integrate medical visual comprehension and generation. The proposed model employs innovative methods including Heterogeneous Low-Rank Adaptation (H-LoRA), hierarchical visual perception (HVP), and a three-stage training strategy (TLS). HealthGPT leverages a specially curated VL-Health dataset comprising multiple comprehension and generation tasks in medical imaging. Experimental results clearly demonstrate superior performance compared to state-of-the-art methods across diverse medical tasks such as modality conversion, super-resolution, and medical visual question answering (VQA).
Claims And Evidence: The claims presented are robustly supported by experimental evidence, including thorough comparisons to state-of-the-art models across multiple benchmark tasks (Tables 1, 2, and 3). Performance metrics such as SSIM, PSNR, and MSE clearly substantiate the advantages of HealthGPT in both comprehension and generation scenarios.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including specific benchmarks such as VQA-RAD, SLAKE, PathVQA, and IXI datasets, are appropriate and well-suited to the addressed problems. The experimental methodology is carefully designed and rigorously validated.
Theoretical Claims: The paper primarily addresses methodological advancements and does not involve explicit theoretical proofs.
Experimental Designs Or Analyses: The validity and soundness of the experimental designs are well-established, particularly for critical tasks such as modality conversion and super-resolution, which are backed by extensive benchmarks and standardized metrics.
Supplementary Material: yes, all
Relation To Broader Scientific Literature: The paper clearly contextualizes its contributions within the broader literature, showing significant advancement over previous models like Med-Flamingo, LLaVA-Med, and Unified-IO 2.
Essential References Not Discussed: While the paper adequately covers related works, including some additional emerging unified LVLM approaches or advanced MoE methods would further contextualize its contributions.
Other Strengths And Weaknesses: Strengths:
Originality in combining comprehension and generation capabilities in a single Med-LVLM.
Significant methodological innovations (H-LoRA, hierarchical visual perception).
Strong empirical validation against existing baselines.
Weaknesses:
Limited exploration of generalization capabilities across diverse medical scenarios not directly addressed in benchmarks.
Other Comments Or Suggestions: The paper could improve clarity by explicitly discussing potential limitations and dataset biases.
Questions For Authors: Can HealthGPT generalize effectively to medical conditions or modalities that were not explicitly included in the training datasets?
How well does HealthGPT handle complex multi-modal medical tasks involving simultaneous inputs from multiple modalities?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s recognition of our work. Your recognition of the model’s originality, innovation, and effectiveness greatly encourages us and has helped improve HealthGPT. Below are our point-by-point responses.
## 1. Unified LVLM Comparison
Thank you for the suggestion. Below, we provide further comparisons with SoTA unified LVLMs:
Method|VQA-RAD|SLAKE|PathVQA|MMMU|OMVQA
-|-|-|-|-|-
chameleon|45.4|54.3|54.0|25.7|18.9
Janus-Pro|62.9|51.3|48.9|54.1|32.7
Emu3|72.1|57.3|54.1|28.7|39.5
HealthGPT-M3|73.7|74.6|78.7|43.3|68.5
HealthGPT-XL|79.1|85.7|92.4|58.0|77.2
Meanwhile, we present an improved **HealthGPT-XL with enhanced performance**. We believe this additional experiment demonstrates significant advantages of our method.
## 2. Generalization Capabilities
**2.1 Generalization ability**
To demonstrate HealthGPT’s generalization and practical value, we highlight two aspects:
1. **Benchmark generalization**: The model performs well on unseen tasks and modalities (e.g., OmniMedVQA, MMMU-Med), showing robustness beyond the training distribution.
2. **Clinical collaboration**: Ongoing partnerships with public hospitals on ocular disease diagnosis have yielded promising early results, with further evaluation in progress.
These results underscore HealthGPT’s potential for both benchmark-level generalization and practical clinical impact.
**2.2 Complex medical tasks**
Thank you for your thoughtful concerns. We recognize their importance and provide the following clarifications:
1. **Unseen medical conditions**: HealthGPT demonstrates strong generalization and medical knowledge coverage, enabling it to handle most previously unseen conditions.
2. **Unseen modalities**: In cases lacking prior modality-specific data, relying solely on a pre-trained LLM may introduce bias. However, our plugin-based learning approach allows for rapid adaptation using high-quality modality-specific data.
3. **Simultaneous multimodal inputs**: This engineering challenge can be effectively addressed through targeted data collection and training. We are actively collaborating with hospitals to tackle this issue in real-world clinical settings.
We will incorporate the clarifications in the next version.
## 3. Differences with MoE Mechanism
We appreciate the reviewer’s detailed attention to the model's structure, allowing us to clarify the **differences between H-LoRA and MoE architectures** and highlight the **necessity and advantages** of our design.
Existing works explore LoRA and MoE combinations, including symmetric [1] and asymmetric structures [2]. While performance benefits are evident, we observe that introducing LoRA experts significantly increases resource consumption, compromising the efficiency of PEFT.
To validate this, we compare the training efficiency of several LoRA+MoE structures with the same LoRA setting (r=64, k=4):
Method|LoRA|MoELoRA|HydraLoRA|DS-LoRA|H-LoRA
-|-|-|-|-|-
Throughput (TFLOPS)|311.6|204.9|238.3|224.1|313.7
DS-LoRA is a LoRA mechanism based on the powerful MoE model Deepseek-V3. We find that MoELoRA, HydraLoRA, and DS-LoRA significantly reduce training speed, while H-LoRA avoids this limitation.
Unlike traditional MoE scheduling, H-LoRA uses low-rank matrix subspace dynamic routing, preventing performance degradation and instability from issues like routing jitter and load imbalance. It also adaptively activates parameter subspaces, creating soft isolation and weak coupling to enhance adaptability to task heterogeneity.
Additionally, we emphasize that our approach is not a dense-to-MoE alternative for expanding FFN. While traditional MoE increases capacity by expanding parameters, our method focuses on **balancing performance and computational resources** in efficient fine-tuning.
## 4. Other Issues
**4.1 Potential Limitations**
We appreciate the reviewer’s attention to potential limitations. Our work explores the feasibility of efficiently handling unified comprehension and generation tasks in medical scenarios. As shown in Figure 2, joint training faces a significant bottleneck in data-scarce medical tasks. However, task cooperation enhancement is a long-term focus, and we aim to explore the complementary potential between the two tasks for synergistic effects.
**4.2 Dataset Bias**
We understand that bias in sample distribution and task labeling can impact model generalization. To improve chat-LVLM practicality, we balanced general instruction data with curated medical datasets during training, enhancing generalization and medical knowledge. A section on dataset bias has been added to the appendix.
Thank you again for your positive and encouraging feedback and insightful comments. We hope this response further clarifies our design, contributions, and the model's potential.
**Reference**
[1] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications
[2] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | Summary: This paper presents HealthGPT, a medical large vision-language model (Med-LVLM) that unifies both comprehension and generation of medical images. The key contributions include:
1. Heterogeneous Low-Rank Adaptation (H-LoRA);
2. Hierarchical Visual Perception ;
3. Three-Stage Learning Strategy;
4. A curated multi-modal dataset covering 7 comprehension tasks (VQA, medical reasoning, pathology) and 5 generation tasks (modality transformation, super-resolution, image reconstruction);
5. The model outperforms existing Med-LVLMs (e.g., LLaVA-Med, Med-Flamingo, HuatuoGPT-Vision) and achieves state-of-the-art (SOTA) results across multiple medical imaging benchmarks.
While the work is technically solid and introduces meaningful innovations, its improvements in VQA performance are marginal, and the evaluation of image generation lacks perceptual metrics.
Claims And Evidence: 1. HealthGPT is the first unified Med-LVLM integrating comprehension and generation: Supported, as existing works (e.g., LLaVA-Med) focus on comprehension only.
2. H-LoRA effectively separates comprehension and generation tasks: Partially supported. Ablation studies demonstrate benefits, but there is no clear comparison with other PEFT methods (e.g., MoELoRA) beyond training efficiency.
3. HealthGPT achieves SOTA performance on medical VQA and generation tasks: Not supported. The improvements in VQA over HuatuoGPT-Vision are minor, and some published works achieve higher VQA performance than this study.
4. Three-stage training improves performance: Well-supported, as the structured training prevents catastrophic forgetting.
Suggested Improvements:
1. Provide qualitative visualizations for VQA (e.g., heatmaps, model reasoning breakdown).
2. Add perceptual metrics (e.g., FID, LPIPS) for evaluating generation tasks.
3. For QA/VQA, please compare with more advanced methods such as LLaVA-Med++, etc.
Methods And Evaluation Criteria: The methodology is well-designed:
1. H-LoRA: Decouples learning across tasks using a Mixture-of-Experts (MoE)-like mechanism.
2. HVP: Dynamically selects abstract vs. detailed image features.
3. TLS: Ensures a structured alignment between vision-language modalities.
However, the evaluation metrics for generation tasks are too simplistic (SSIM, PSNR, MSE) and VQA performance shows only limited improvement over previous works.
Suggested Improvements:
1. Introduce qualitative evaluations (e.g., radiologist evaluations of generated images).
2. Add error analysis for VQA failures.
Another question is that although this approach takes both VQA and image generation into account, which is also a feature of this study, the two tasks do not seem to improve each other's quality.
Theoretical Claims: While the paper does not present new theoretical derivations, H-LoRA’s mathematical formulation could be further elaborated:
1. There is no direct theoretical comparison to standard LoRA and MoELoRA.
2. The routing mechanism in H-LoRA is not well justified.
Suggested Improvement: Provide a theoretical explanation for why H-LoRA is expected to generalize better than standard LoRA in medical contexts.
Experimental Designs Or Analyses: The experiments are well-structured but have notable limitations:
1. VQA evaluation lacks detailed error analysis and comparison with SOTA methods, like LLaVA-Med++.
2. No human evaluation for image generation quality.
Suggested Improvements:
1. Include failure case analysis for VQA errors.
2. Conduct blind radiologist assessments of generated medical images.
3. Please compare with more advanced methods such as LLaVA-Med++, etc.
Supplementary Material: The VL-Health dataset is well-documented. Additional hyperparameter settings for H-LoRA are provided. However, a dataset bias analysis is missing; the authors could provide one and discuss its impact on model performance.
Relation To Broader Scientific Literature: The paper builds on prior Med-LVLMs (LLaVA-Med, Med-Flamingo) but fails to compare against general vision-language models (e.g., SEED, Chameleon) and the other models like LLaVA-Med++.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. Strong multi-modal integration of comprehension and generation.
2. Innovative H-LoRA adaptation mechanism.
3. New VL-Health dataset, covering diverse medical tasks.
Weaknesses:
1. Limited VQA improvement over prior Med-LVLMs, and no comparison with LLaVA-Med++.
2. Image generation evaluation lacks perceptual metrics (e.g., FID, LPIPS).
3. Comparison to MoELoRA only focuses on training time, not task performance.
4. Although this method takes both text and image output into account, it seems that the two tasks cannot improve each other's quality.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Has HealthGPT been tested in real-world clinical settings?
2. Could you provide qualitative examples where VQA fails?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed and thoughtful feedback, which greatly helped us refine our experiments and methodology, and better highlight HealthGPT’s innovation and practical value. Below are our point-by-point responses.
## 1. VQA Analysis
**1.1 Comparison with LLaVA-Med++**
We wish to clarify that **LLaVA-Med++ is not a chat LVLM and is fine-tuned on individual downstream tasks**, which compromises its generalization ability. For fairness, we use LLaVA-Med++'s training setting and the results are below:
Method|VQA-RAD|SLAKE|PathVQA
-|-|-|-
LLaVA-Med++|86.0/77.1|85.3/80.8|98.9/58.7
HealthGPT-M3|85.6/71.9|85.9/81.7|99.1/67.2
**1.2 Comparison experiments**
Notably, we further release the **HealthGPT-XL** with a stronger LLM (Qwen2.5-32b). **We believe this model is the new SOTA Med-LVLM compared with baselines**:
Method|VQA-RAD|SLAKE|PathVQA|MMMU|OMVQA
-|-|-|-|-|-
chameleon|45.4|54.3|54.0|25.7|18.9
Janus-Pro|62.9|51.3|48.9|54.1|32.7
RadFM|58.3|34.4|58.4|31.3|36.2
LLaVA-Med++ (SFT)|64.7|87.1|55.1|x|x
Med-MoE|66.9|52.6|69.1|32.7|55.8
HealthGPT-M3|73.7|74.6|78.7|43.3|68.5
HealthGPT-XL|79.1|85.7|92.4|58.0|77.2
The aforementioned results further demonstrate HealthGPT's effectiveness.
**1.3 Failure case**
After analysis, the failure cases stem from two sources:
1. Broad questions (e.g., 'What is present?')
2. Same question, different answers (Q → A1 vs. Q → A2)
Addressing (1) and (2) typically requires fine-tuning on the corresponding training set. However, our model still achieves the best performance through strong generalization and instruction-following capabilities.
**1.4 VQA qualitative**
We visualize heatmaps for different questions and masked key regions to assess answer quality: https://anonymous.4open.science/r/HealthGPT-9533/visualization.png. Results confirm our model relies on critical regions for reasoning.
## 2. H-LoRA
**2.1 Performance of H-LoRA**
We believe the reviewer may have misunderstood—we did compare performance with MoELoRA. Fig.5, Tab.4, and Tab.10 in our paper systematically demonstrate the **performance and efficiency advantages of H-LoRA**.
**2.2 H-LoRA better than LoRA**
- **Efficiency.** Existing research confirms that LoRA with MoE and router mechanisms improves performance for medical tasks [1]. To enhance efficiency, we propose internalizing LoRA experts as matrix space separation: $\sum_{i=1}^{k} \frac{a}{r} w_i A_i B_i \rightarrow \frac{ka}{r} (A \odot \mathcal{W}) B$.
- **Effectiveness.** We find the conventional router mechanism reduces the expected scaling factor from $a/r$ to $a/(rk)$, leading to performance degradation [2]. We correct the scaling factor and experimentally validate its effectiveness in our paper. Additionally, we included an ablation study on the router module:
Method|VQA-RAD|SLAKE|PathVQA|MMMU|OMVQA
-|-|-|-|-|-
HealthGPT-M3 w/o router|72.2|70.7|76.2|38.0|66.4
HealthGPT-M3|73.7|74.6|78.7|43.3|68.5
We hope the above analyses and experiments further demonstrate the potential of H-LoRA in the medical domain.
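To make the scaling-factor argument above concrete, here is a minimal NumPy sketch (our own illustration under simplifying assumptions: uniform router weights and random expert matrices; it is not the authors' H-LoRA implementation). It shows how router weights that sum to 1 shrink each expert's effective LoRA scale from a/r to a/(rk), and how multiplying by k restores it:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k, a = 64, 8, 4, 16.0  # hidden size, LoRA rank, number of experts, scaling numerator

# Plain LoRA: one low-rank update scaled by a / r
A = rng.normal(size=(d, r))
B = rng.normal(size=(r, d))
lora_update = (a / r) * A @ B

# MoE-style LoRA: k expert updates mixed by router weights summing to 1.
# In expectation each weight is ~1/k, so the effective per-expert scale
# drops from a/r to a/(r*k).
w = np.full(k, 1.0 / k)  # the expected (uniform) router output
per_expert_scale = w[0] * a / r
assert np.isclose(per_expert_scale, a / (r * k))

experts = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(k)]
moe_update = sum(w_i * (a / r) * Ai @ Bi for w_i, (Ai, Bi) in zip(w, experts))

# Correcting the scaling factor to k*a/r restores the per-expert scale a/r
corrected_scale = k * per_expert_scale
assert np.isclose(corrected_scale, a / r)
```

This only illustrates the a/(rk) claim about expected per-expert scale; it does not reproduce H-LoRA's subspace routing itself.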
## 3. Image Metrics
**3.1 LPIPS & FID**
We appreciate the reviewer’s concern about perceptual consistency in images. As requested, we’ve added LPIPS and FID metrics (https://anonymous.4open.science/r/HealthGPT-9533/lpips_fid.png), showing HealthGPT’s strength in both pixel-level and perceptual quality.
**3.2 Human evaluation**
We also conduct human evaluation of the modality conversion task across five dimensions, comparing against the best-performing BBDM method. H-LoRA achieves **higher average scores** (4.30/4.52 vs. BBDM's 3.54/4.16):https://anonymous.4open.science/r/HealthGPT-9533/human_eval.png.
## 4. Other Issues
**4.1 Mutual Gains between Comprehension and Generation**
We would like to respectfully clarify that our work aims to **mitigate the significant conflicts between comprehension and generation tasks**, rather than assuming inherent complementarity. Given current modeling paradigms and data limitations in the medical domain, achieving mutual benefit is particularly challenging (see Fig. 2). To this end, we design dedicated mechanisms to reduce such conflicts and enable unified modeling.
**4.2 Clinical Potential**
We highly value clinical evaluation and are already collaborating with two public hospitals, which provide ocular disease reports and case data for validation. These efforts represent a concrete step toward assessing the clinical applicability of our approach, and preliminary evaluations on the provided clinical data show **promising advantages of our model in ocular disease understanding**.
**4.3 Dataset Bias**
Dataset bias is an important concern. Due to space limits, please see our response to `Reviewer ugJa (Section 4: Dataset Bias)`.
We hope our response clarifies the model’s rationale and its potential in medical applications.
**Reference**
[1] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications
[2] A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA
---
Rebuttal Comment 1.1:
Comment: OK, thank you for the author's response. In my view, this result is mainly driven by the capabilities of qwen2.5. Another point is that the model addresses the text-to-image and image-to-text problems, but it does not test interleaved data, which is significant. Based on the feedback from all reviewers, I will maintain this score for now.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the careful feedback.
**(1) Regarding the Source of Performance Improvement**
We greatly appreciate the reviewer’s attention to the foundation model issue. We clarify that **our three models of different scales (M3/L14/XL) all achieve SoTA performance across multiple medical benchmarks**, demonstrating that our framework effectively adapts to and extends models of varying scales. Notably, even our smallest 3.8B model outperforms LLaVA-Med++ in zero-shot evaluations and achieves leading results among current SoTA medical LVLMs. Therefore, **the use of the Qwen2.5 foundation aims to pursue better performance, while it is our systematic innovation that fundamentally drives the achievement of SoTA results**.
We have previously experimented with the same foundation model as LLaVA-Med++ (LLaMA3-8B) and **achieved superior performance**. However, as this version performs similarly to HealthGPT-M3 while using twice the parameters, we did not adopt it. The experimental results are as follows:
Method (zero-shot)|VQA-RAD|SLAKE|PathVQA|MMMU|OMVQA
-|-|-|-|-|-
LLaVA-Med++|64.7|87.1|55.1|x|x
HealthGPT-LLaMA3-8B|74.1|80.2|73.5|41.3|65.7
We further clarify the motivation for training HealthGPT-XL with Qwen2.5. In the medical domain, where tasks require deeper knowledge and higher reasoning precision, adopting a stronger foundation model is a natural and necessary response to domain-specific demands. Benefiting from the **plug-in design of the HealthGPT framework**, we flexibly adapt to foundations of different scales and capabilities without compromising pre-trained medical knowledge. The successful application of HealthGPT-XL not only meets the foundational requirements of medical tasks but also **validates the scalability and compatibility of our method**, demonstrating HealthGPT’s potential for continuous evolution and broad applicability across various pre-trained foundations.
We hope these clarifications fully address the reviewer’s concerns, highlight the innovation and application value of our work, and emphasize that **the performance gains stem from the proposed method rather than merely relying on a stronger foundation model**.
**(2) Regarding the evaluation of interleaved text-image inputs**
We thank the reviewer for highlighting the important issue of evaluating interleaved text-image inputs.
First, we would like to clarify that most existing medical LVLMs have not yet adapted to interleaved input capabilities, resulting in a lack of fair baselines. Moreover, no standardized benchmark currently exists in the medical domain for such evaluations, making it difficult to accurately assess potential performance improvements through interleaved modeling in the short term.
Nevertheless, we highly value the reviewer’s suggestion and **are actively extending our framework to support interleaved inputs**:
- Comprehension: we construct a multi-image dataset by inserting image tokens into multi-turn dialogues, enabling simple, efficient, and compatible interleaved inputs within our training paradigm.
- Generation: we organize multiple tasks into an interleaved format while ensuring compatibility and rigor in the reasoning process.
We sincerely appreciate the reviewer’s constructive feedback. We clarify that the limited demand for frequent interleaving in medical tasks stems from the domain’s high standards of rigor, stability, and accuracy, not from limitations of our method. Interleaving is common in general multimodal tasks designed for large-scale instruction-following, whereas medical applications have fundamentally different requirements.
We sincerely hope the reviewer **understands the unique modeling needs of the medical domain and does not lower the evaluation of our contributions due to differences in interaction formats or application objectives.** Meanwhile, we actively respond to the feedback, continue improving our interleaved modeling capabilities, and promptly update our experimental progress.
**(3) Final Remarks**
We fully respect the reviewer’s decision to maintain the current score at this stage.
At the same time, we emphasize that unified comprehension and generation for medical tasks—still highly challenging and underexplored compared to general domains—requires significant exploration and experimentation. Our work **proposes the first framework specifically tailored for medical applications, focusing on preserving pre-trained medical knowledge and enhancing LVLMs’ comprehension and generation through a plug-in design**. Despite challenges such as limited data and stringent application standards, we strive to deliver a **simple, scalable, and effective solution**.
**We sincerely hope that, based on our methodological innovations and the critical value of our work in the medical domain, the reviewer will recognize our contributions and consider a more positive evaluation.**
Once again, we sincerely thank the reviewer for the opportunity to improve our work. | null | null | null | null | null | null |
Curriculum Learning for Biological Sequence Prediction: The Case of De Novo Peptide Sequencing | Accept (poster) | Summary: This paper proposes an improved non-autoregressive peptide sequencing model incorporating a structured protein sequence curriculum learning strategy.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: This field has an open benchmark dataset, and the method should be tested on it.
[1] NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing Methods in Proteomics
Theoretical Claims: This paper has no theoretical claims.
Experimental Designs Or Analyses: The experimental section of this paper needs to incorporate more baseline methods [1][2][3] for comparison.
[1] De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments
[2] Bridging the Gap between Database Search and De Novo Peptide Sequencing with SearchNovo
[3] PowerNovo: de novo peptide sequencing via tandem mass spectrometry using an ensemble of transformer and BERT models
Supplementary Material: I have reviewed the supplementary materials in the appendix.
Relation To Broader Scientific Literature: This paper introduces the NAT decoding technology to the task of de novo peptide sequencing.
Essential References Not Discussed: This paper has no essential references not discussed.
Other Strengths And Weaknesses: This paper is well-written, and the proposed method is also novel.
Other Comments Or Suggestions: I have no further comments or suggestions.
Questions For Authors: 1. Test the proposed model on the latest benchmark datasets.
2. Add more baseline methods for comparison.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments. We have addressed all of your concerns with point-by-point responses below:
> This field has an open benchmark dataset, and the method should be tested on it.
Thank you for pointing this out. We apologize for missing this benchmark dataset earlier, as it is relatively new. In response, we have conducted additional experiments using the **NovoBench** dataset and evaluated our method, **RefineNovo**, alongside previously reported baselines.
Following the experimental setup from the NovoBench paper, we downloaded the test data (from multiple sources as specified by the paper's data source instruction) and compared our method against the results reported in the benchmark, including those of **PrimeNovo** and other baselines.
Consistent with NovoBench’s evaluation protocol, we used **yeast** as the test species. Due to time constraints during the rebuttal period, we were unable to retrain our model on the NovoBench training data. Instead, we directly evaluated our pretrained model on the benchmark test set.
To highlight the effectiveness of our method, we include a direct comparison between **RefineNovo** and **PrimeNovo**, the previous best-performing NAT-based model. Using the publicly available weights of PrimeNovo from GitHub, we evaluated both models under identical conditions. Our results show that **RefineNovo outperforms PrimeNovo across all test datasets**, as seen in the table below.
Additionally, to emphasize the performance advantages of NAT-based architectures, we referenced the **cross-validation (CV)** results from the PrimeNovo paper, where the model was trained on nine species (excluding yeast). These results demonstrated the strength of NAT designs relative to other architectures on the same task.
We noticed relatively low performance on the 7-species dataset when directly testing all pretrained models, such as **Casanovo**, **PrimeNovo**, and **RefineNovo**. Upon further investigation, we found that this dataset was generated using MS equipment with precision levels significantly different from those in the **MassiveKB** training data. This results in a notable distribution mismatch.
Nevertheless, despite this domain shift, the pretrained **RefineNovo** model still demonstrates clearly better performance compared to other models trained on MassiveKB. This highlights the robustness of our method under distributional variation.
We thank the reviewer again for bringing this benchmark to our attention. ```We will incorporate all experimental results, dataset references, and baseline comparisons into the final version of the manuscript and ensure all relevant works are properly cited.```
| **Model** | **9Species (yeast)** | **7Species (yeast)** | **HC-PT** |
|----------------------|----------------------|-----------------------|-----------|
| Casanovo * | 0.48 | 0.12 | 0.21 |
| InstaNovo * | 0.53 | – | 0.57 |
| AdaNovo * | 0.50 | 0.17 | 0.21 |
| HelixNovo * | 0.52 | 0.23 | 0.21 |
| SearchNovo * | 0.55 | 0.26 | 0.45 |
| PrimeNovo-CV * | 0.58 | – | – |
| Casanovo-pretrained | 0.60 | 0.05 | – |
| PrimeNovo | 0.70 | 0.09 | 0.85 |
| **RefineNovo** | 0.71 | 0.09 | 0.88 |
\* **marked numbers are quoted from the benchmark/original paper**
> The experimental section of this paper needs to incorporate more baseline methods [1][2][3] for comparison.
Thank you for the suggestion. We will incorporate the **full set of baseline comparisons using the NovoBench benchmark**, as shown in the table above. This includes **InstaNovo**, **AdaNovo**, **HelixNovo**, **SearchNovo**, and **PrimeNovo**. ```These methods will be discussed and properly referenced in both the <Related Work> and <Experiments> sections of the revised manuscript```.
Regarding **PowerNovo**, we have reviewed the published work and found that the dataset used appears to be inconsistent with the NovoBench benchmark. Nonetheless, we will discuss PowerNovo separately in both the *Related Work* and *Experiments* sections to ensure a comprehensive comparison and fair contextualization of our approach.
**We sincerely thank the reviewer again for highlighting these points—your feedback has helped us significantly improve the completeness and clarity of our experimental evaluation, let us know if you have further questions after reading our responses!!**
---
Rebuttal Comment 1.1:
Comment: Thanks to the author's answer, I have updated my score
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the effort and time again!
authors | Summary: The work proposes a non-autoregressive transformer (NAT)-based curriculum learning framework to deduce the animo acid sequence from tandem mass spectrometry signals. The input peak signals are encoded with an transformer encoder, which is then used to predict the probability of tokens on each position. The decoding is trained on the sampled decoding paths from the predicted probability with increasingly higher masked ratios. The predictions are iteratively refined during training. Results indicate higher performance across multiple species than the baseline methods.
Claims And Evidence: The authors provide comprehensive results to support their claims.
1. In the results section, ContraNovo is compared against several times and is noted to produce slightly higher performance. However, the actual values are not shown in the main text or the supplementary material.
Methods And Evaluation Criteria: The methods and evaluation make sense for the application, despite a few points that needs clarification:
1. Is the masking performed on $y'$ or $A$? Also, how is the CTC objective defined with masking?
2. As the path $y$ does not necessarily have the same length as the ground truth sequence, how is the masking performed? More specifically, how are ground truth tokens from $A$ incorporated into $y$?
3. In the earlier stages of the training, the predicted token probabilities are inaccurate and all paths that satisfy $\Gamma(y)=A$ would have low probability, which would lead to biases in the training. Does the proposed method take into account the probability of the sampled path $y'$?
Theoretical Claims: I checked the definition of the CTC objective and the PMC.
Experimental Designs Or Analyses: The experiments and analyses are sound and valid.
Supplementary Material: I checked all the supplementary materials.
Relation To Broader Scientific Literature: The proposed framework combines well-established techniques in the field as well as broader machine learning to enhance the effectiveness of the inference and address several technical challenges, such as the optimization of the CTC objective and the iterative refinement.
Essential References Not Discussed: No
Other Strengths And Weaknesses: See previous sections
Other Comments Or Suggestions: See previous sections
Questions For Authors: I only have some minor questions and concerns. See previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time and effort in providing thoughtful comments and feedback. Below, we provide a point-by-point response to your questions and concerns.
> Is the masking performed on $y'$ or $A$. Also, how is the CTC objective defined with masking?
The masking is performed on a selected CTC path $y'$, which is one of the possible alignments (CTC paths) derived from the true label sequence $A$.
For example, if the true label sequence $A$ is **ACTC** and the generation length is 5, there can be multiple valid CTC paths $y$, such as:
- `A C T T C`
- `A C T C C`
- `A C ε T C`
- *...and many others*
Our algorithm proceeds as follows:
**Forward Pass for CTC Path Selection:**
We first perform a forward pass to obtain the most likely CTC path $y'$ corresponding to the ground truth $A$. For instance, the model may choose `A C T T C` (all $y$ are ranked by their total probability for selection).
**Masking:**
We then apply masking to this specific path $y'$. For example, it may become:
`A <mask> T <mask> C`
after masking.
**Prediction Condition:**
This masked sequence is then used as a conditioning input. The model is tasked with predicting the full distribution over all valid CTC paths for $A$—such as `ACTTC`, `ACTCC`, `AC ε TC`, etc.—given the partially masked version of one such path.
The CTC objective **remains unchanged**. The model is still trained to maximize the total probability over ```all``` valid CTC paths corresponding to the true label $A$. Intuitively, the model sees ```one``` **partially** revealed CTC path and is asked to infer and generalize over the ```full``` space of valid CTC paths.
> As the path $y$ does not necessarily have the same length as the ground truth sequence $A$, how is the masking performed? More specifically, how are ground truth tokens from $A$ incorporated into $y$?
As illustrated in the example above, masking is applied to **one** single sampled CTC path $y'$ derived from the ground truth sequence $A$. Each $A$ can correspond to $\mathcal{O}(b^n)$ possible paths $y$, where $n$ is the generation length in NAT (40 in our case), and $b$ is the vocabulary size.
Regarding your question: while $A$ and $y$ may differ in length, all sampled $y$ from $\mathcal{O}(b^n)$ have the same fixed generation length (e.g., 40). We sample ```one``` such $y'$, apply masking to it, and use the result as a condition for predicting ```all``` valid paths. In this way, some of the tokens from $A$ naturally appear in the unmasked positions of $y$, providing partial supervision.
For example, let the ground truth be $A = \text{ACTC}$, and a few possible CTC paths $y$ could be:
- `A C T T C`
- `A C T C C`
- `A C ε T C`
- *...among others*
We might select $y'$ = ```ACTCC```, and mask it to get:
`A <mask> T <mask> C`
This masked sequence includes some ground truth tokens from $A$ and serves as a partial observation. The model is then trained, under the CTC loss, to recover the full output distribution over all valid paths $y$ corresponding to $A$. This encourages it to learn both how to complete the sequence and how to generalize across CTC paths (all of length 40).
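To make the procedure concrete, the path enumeration and masking described above can be sketched in a few lines of Python. This is our own brute-force illustration (the actual model samples a single path from its predicted probabilities rather than enumerating all $\mathcal{O}(b^n)$ of them, and `collapse`, `ctc_paths`, and `mask_path` are hypothetical names):

```python
from itertools import product

BLANK = "ε"  # the CTC blank token

def collapse(path):
    # CTC reduction: merge consecutive repeats, then drop blanks
    out, prev = [], None
    for t in path:
        if t != prev and t != BLANK:
            out.append(t)
        prev = t
    return "".join(out)

def ctc_paths(label, length, vocab):
    # brute-force enumeration of all valid CTC paths (tiny examples only);
    # in general there are O(b^n) of them, hence sampling in practice
    return ["".join(p)
            for p in product(list(vocab) + [BLANK], repeat=length)
            if collapse(p) == label]

def mask_path(path, positions, mask="<mask>"):
    # mask the selected positions of one chosen path y'
    return [mask if i in positions else t for i, t in enumerate(path)]

paths = ctc_paths("ACTC", 5, "ACT")  # includes "ACTTC", "ACTCC", "ACεTC", ...
masked = mask_path("ACTCC", {1, 3})  # ["A", "<mask>", "T", "<mask>", "C"]
```

Here the masked version of one path (`A <mask> T <mask> C`) conditions the model, while the CTC objective still sums over every path in `paths`, as described above.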
> In the earlier stages of training, the predicted token probabilities are inaccurate and all paths that satisfy the CTC constraints would have low probability, which could lead to biases in training. Does the proposed method take into account the probability of the sampled path?
Yes, this is a very insightful question—thank you for raising it.
What we observed is that, during the early stages of training, the model tends to assign disproportionately high probabilities to the $\epsilon$ (blank) token. As a result, the selected CTC path $y$ (via greedy decoding) often includes many $\epsilon$ tokens. These $\epsilon$-heavy paths are selected simply because they appear most probable under the model’s initial, untrained predictions.
To address this, we experimented with **top-$k$ sampling** during early training to encourage more diverse path selection. However, in practice, we observed that this made little difference. As training progresses, the model rapidly learns meaningful path patterns, and the issue of degenerate $\epsilon$-dominated paths diminishes.
Eventually, greedy selection (top-1) naturally yields well-formed CTC paths with fewer consecutive $\epsilon$ tokens. We found almost no performance difference between top-$k$ sampling and greedy choice, especially beyond the initial training phase.
Thank you again for highlighting this point!
**We thank the reviewer for the careful and detailed reading—masking in the context of CTC can indeed be tricky. We will include a more detailed explanation with the examples to avoid any confusion.**
**Please don’t hesitate to reach out with further questions!**
**Thank you again for your time!** | Summary: This paper proposed a new method, named RefineNovo, that improves non-autoregressive transformers (NATs) for peptide sequencing by introducing a curriculum learning strategy and a self-refinement module (including a "difficulty annealing" strategy). It reduces training failures by over 90% and enhances sequence accuracy. Authors evaluated on 9-species-v1 and 9-species-2 benchmarks that show improved performance over existing methods.
Claims And Evidence: - The idea of introducing a curriculum learning strategy to peptide sequencing is indeed novel.
- The method is properly explained, with clear depiction, discussion of the components, and mathematical derivations in the appendix.
Concerns:
- The self-refining module has been explored for peptide generation to some extent, e.g., with discrete or masked diffusion language modeling, conditional masked language modeling, etc. As far as I know, the difficulty annealing strategy has been explored in some studies on sequence generation tasks, e.g., conditional masked language modeling (Marjan et al., 2019) and LM-Design (Zaixiang et al., 2023). I wonder how the proposed method RefineNovo differs from these approaches.
Methods And Evaluation Criteria: - The proposed method, focusing on three components, is indeed relevant for the problem at hand.
- The authors show results on the 9-species-v1 (Tran et al., 2017) and 9-species-v2 (Yilmaz et al., 2024) benchmark datasets. The experimental results reported in Tables 1, 2, and 3 support their claim.
- The authors also show an ablation of the effect of the three different components in Table 4, which is useful for getting a clearer idea of how each component and their combinations affect the generation task.
- In Figure 4, they also show a case study on training failures, where RefineNovo is clearly showing promising improvements.
Theoretical Claims: - The main paper does not have any theoretical claims.
- The appendix has a discussion of the CTC loss computation and PMC methods, which looks sound. Although these are not contributions of this paper, I appreciate the authors including them in the appendix for a clearer picture.
- The appendix also include the algorithms for Curriculum Learning and Iterative Refinement, which also are properly written.
Minor concerns:
- In Equation 3 (page 4), there is a wrong math notation: sum over probabilities is written to be directly equal to the sum over log probabilities.
- There is notation mismatch between Equations 3 and 4. Equation 4 uses L as the likelihood, and Equation 3 used L as the negative log likelihood. I would suggest using different symbols for clarity.
Experimental Designs Or Analyses: The experimental design with the two dataset 9-species-v1 and 9-species-v2 as well as the analyses on training failures are clearly discussed.
Supplementary Material: The appendix includes
- the algorithms for Curriculum Learning and Iterative Refinement,
- derivations of the CTC loss computation and PMC methods,
- ablation on the impact of iterative steps
- a python snippet for the proposed our curriculum learning
All of these are useful for the manuscript and properly explained.
Relation To Broader Scientific Literature: The main contribution of this paper is related to research on bio-sequencing (especially for proteins), as well as machine learning research on bio-sequence generation. There are prior works on protein sequence generation, especially in the regime of inverse folding, where ideas closely related to what is proposed here have been explored to some extent.
Essential References Not Discussed: As mentioned before, the main contribution is introducing a curriculum learning strategy and a self-refinement module, which is closely related to discrete or masked diffusion language modeling (e.g., Subham et al., 2024) and conditional masked language modeling (Marjan et al., 2019), which the authors did not cite or discuss.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and for recognizing the novelty and effectiveness of our proposed method. We sincerely appreciate your effort and provide below point-by-point responses to your comments.
> The self-refining module has been explored for peptide generation to some extent
We thank the reviewer for their interest in comparing our "difficulty annealing" and "self-refinement" strategies with related prior work ```(Marjan et al., 2019; Zaixiang et al., 2023; Subham et al., 2024)```.
Regarding self-refinement, our method introduces a **post-training refining module** integrated into the main NAT architecture. As noted in our paper, it is **inspired by prior work such as ESM-3 and AlphaFold2**, which leverage multi-pass generation for more accurate predictions in protein-related tasks. We reviewed the works mentioned by the reviewer and **agree that masked modeling and diffusion-based approaches share a similar motivation**—improving generation quality through iterative refinement.
We acknowledge that we did not adequately cite these related works in the current draft and **will revise the manuscript to include and discuss them**. Specifically, our refinement method is designed as a CTC-path refinement, specific to **NAT models**. Unlike prior methods that directly refine the generated sequence, our approach refines the CTC path obtained from a previous forward pass. This CTC path represents _**one**_ possible reduction outcome under CTC alignment, and our refinement allows the model to iteratively adjust it in future passes. The final path is then reduced to produce the final sequence. **The core motivation remains the same as in much prior work**: to improve final predictions through successive rounds of error correction and adjustment. **We will make this clear in the final paper with proper references to all of the above-mentioned papers and others**!
> Difficulty annealing strategy has been explored is some studies for sequence generation task e.g., conditional masked language modeling
Regarding difficulty annealing, to the best of our knowledge, we are **among the first to apply _within-sequence_ difficulty annealing**. Most prior work using "easy-to-difficult" curriculum strategies does so at a **coarser granularity**—typically at the _task level_, where simpler tasks are learned before more challenging ones. In some protein-related work, including those mentioned by the reviewer, the approach is _inter-sequence_—e.g., learning shorter or simpler sequences before longer, more complex ones.
In contrast, our method defines difficulty **within each sequence** by modulating the **amount of CTC path information exposed during training**. Each sequence can start with an easier version—by revealing more information in its chosen CTC path—and gradually become harder by reducing path visibility. This increases difficulty exponentially due to the combinatorial number of valid CTC paths per label. Our design thus enables _within-sequence_ and _within-path_ difficulty annealing, made possible by our CTC-sampling mechanism tailored for this purpose. This fine-grained annealing plays a key role in our model’s training dynamics. We will further discuss this, along with the differences from previous work ```(Marjan et al., 2019; Zaixiang et al., 2023; Subham et al., 2024)```, in our final version!
> In Equation 3 (page 4), there is a wrong math notation: sum over probabilities is written to be directly equal to the sum over log probabilities.
Thank you for the careful reading! We will ensure this error is corrected in the final version. It should read: the log of the product of all token probabilities equals the sum of the log of each token’s probability. Thanks again for catching this!
> There is notation mismatch between Equations 3 and 4. Equation 4 uses L as the likelihood, and Equation 3 used L as the negative log likelihood. I would suggest using different symbols for clarity.
Apologies for the confusion—we inadvertently reused the same notation without realizing it. We will revise the notation to use distinct symbols to avoid ambiguity. Thank you again for the careful reading!
> a self-refinement module, which is much related to discrete or masked diffusion language modeling (e.g., Subham et al. 2024) and conditional masked language modeling (Marjan et al. 2019) that the authored did not cite or discuss.
We will thoroughly discuss the similarities and differences between these self-refinement modules (Marjan et al., 2019; Zaixiang et al., 2023; Subham et al., 2024) and ours, based on the discussion above, and ensure all mentioned and related works are properly cited in the Related Work section. Thank you for pointing this out!
**Please let us know if you have any further questions—we’d be happy to address them. Thanks again for your time and thoughtful review!**
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clear explanation and addressing the comments. I am keeping the previously assigned (high) score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their confirmation and time in reviewing our work!
~Authors | null | null | null | null | null | null | null | null |
Generating Hypotheses of Dynamic Causal Graphs in Neuroscience: Leveraging Generative Factor Models of Observed Time Series | Accept (poster) | Summary: The authors propose a factor model-based method to generate possible Granger causal hypotheses about time series data given by a time-dependent (i.e., non-static) data-generating process. The generated hypotheses are adjacency matrices and summarize possible causal effects among the observed system variables and their history. Intuitively, the proposed method learns a set of generative factor functions that predict the behavior of the time series/system given a history of $\tau$ values as well as a state model that predicts a weighting vector of factors given said history. The factors are parameterized as neural networks where the first layer is interpreted as an adjacency matrix showing Granger causal relationships among variables while the state model is a neural network predicting the relevance of each factor given a history (thus introducing dynamic behavior and allowing to "mix" adjacency based on the model state).
Empirically, the proposed method seems to improve upon existing baselines on synthetic and real-world tasks.
Claims And Evidence: The paper claims that the proposed method generates hypotheses (i.e., Granger causal graphs) for nonlinear and state-dependent time series systems, thereby improving upon existing baselines that have stricter assumptions such as linearity or static-ness of the system.
By design, the method should be able to accurately capture nonlinear dependencies among system variables due to the use of neural networks. Also, an explicit state model is learned, representing the state of the modeled time series system and weighting a set of learned factor functions which include a representation of the Granger causal adjacency matrix. Combining these two components, the proposed method's design allows the modeling of nonlinear dependencies **and** model state-dependent time series systems.
The claims are supported by empirical evidence in the experimental section. The authors first test their method on artificially generated data that exhibits nonlinear dependencies among variables and state-dependent behavior.
Methods And Evaluation Criteria: The authors explicitly aim to construct a model that can be used for hypothesis generation in neuroscience applications. Hence, their evaluation of a real-world brain data problem is appropriate.
Theoretical Claims: The authors claim and show that:
- nonlinear, dynamic causal graphs are unidentifiable in general (App. A1)
- assuming non-trivial noise, a window of size $\tau$ is not sufficient to distinguish two (or more) factors if the classifier is assumed to be unbiased (App. A2)
Although the authors show in A2 that for a time-window $T < \tau$ it is impossible to have an accurate and unbiased estimator to distinguish between $f_i$ and $f_j$, it is not shown that having $T > \tau$ is sufficient to achieve that goal.
Experimental Designs Or Analyses: The experimental design is valid and sound. However, I am not familiar with neuroscience literature. Therefore, it is hard to tell whether the DREAM4 and TST datasets are appropriate choices to test the capabilities of the proposed method.
Supplementary Material: Supplementary A-C was checked.
Relation To Broader Scientific Literature: As discussed by the authors, the work is mainly related to [1], [2], and [3], covering causal discovery and its application to neuroscience. In [1], the authors aim to discover causal models with nonlinear dependencies using local linear approximations. This restricts the expressiveness of the learned model. [2] and [3] discuss the necessity of using state-dependent models in modeling brain data and demonstrate the usefulness of modeling brain data causally.
**References**
[1] Fujiwara et al. Causal Discovery for Non-stationary Non-linear Time-series Data Using Just-In-Time Modeling. ICLR 2023.
[2] Mague et al. Brain-wide electrical dynamics encode individual appetitive social behavior. Neuron 2022.
[3] Talbot et al. Estimating a brain network predictive of stress and genotype with supervised autoencoders. Journal of the Royal Statistical Society. 2023.
Essential References Not Discussed: The approach is closely related to the method proposed in [1]. In [1], the authors propose to learn dynamic causal graphs from data using neural ordinary differential equations, thereby modeling temporal dependencies among variables.
**References**
[1] Cheng et al. DyCAST: Learning Dynamic Causal Structure from Time Series. ICLR 2025.
> Note: The work in [1] was published in 2025. Thus, there is no need to incorporate it as an additional baseline, in my opinion. Nevertheless, it should be cited.
Other Strengths And Weaknesses: **Strengths**
- the paper is generally clearly written
- leveraging factor-based time series models to extract adjacency matrices is novel and innovative
- the modeling choices are well-motivated
- the authors test their method on real-world brain data and show significant improvements
**Weaknesses**
- Fig. 1 is too large and should be placed on the top of the page to improve readability
- a rigorous definition of the "optimal f1-score" is missing (see questions) (Sec. 4.2)
- it is unclear what the authors mean by "pairwise improvement" in Fig. 2
Other Comments Or Suggestions: -
Questions For Authors: - with how many seeds were the experiments conducted?
- it is unclear what the authors mean by "pairwise improvement" in Fig. 2B. Which metric is considered here exactly? Why is lower better than higher?
- what is the "optimal f1-score"? Could the authors provide an exact definition?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are very grateful for the reviewer's thoughtful feedback and suggestions on how we can further improve our paper. We respond to their concerns below.
Question 1: "with how many seeds were the experiments conducted?"
Each dataset was curated using a fixed random seed ('9999' for Synthetic Systems, '1337' for TST, '5' for D4IC) to ensure data could be generated/replicated on other devices (see provided code). With regards to model training, we used a single random seed ('0') throughout our experiments to ensure reproducibility, which can be seen in the included code (esp. at the beginning of `__main__` in each of our Python-based training files). In concurrence with best practices, these seeds were chosen arbitrarily so as to avoid 'seed'-hacking.
Question 2: "it is unclear what the authors mean by 'pairwise improvement' in Fig. 2B. Which metric is considered here exactly? Why is lower better than higher?"
We thank the reviewer for giving us the opportunity to clarify our terminology. 'Pair-wise' here refers to the practice of matching predictions made by pairs of algorithms on the same graph/system and evaluating the differences. In the case of Fig. 2B, 'pair-wise improvement' is the difference between the optimal f1-score obtained by REDCLIFF-S (say, $f1_{redc}$) and the optimal f1-score obtained by another algorithm (say, $f1_{base}$) on a given graph $g$; that is, $\mathrm{PWImprovement} = f1_{redc}(g) - f1_{base}(g)$. Thus, the higher the value, the better REDCLIFF-S did relative to the given baseline. In theory, the pair-wise improvement could take on any value between -1 (if REDCLIFF-S had an f1-score of 0 and the baseline's was 1) and 1 (with REDCLIFF-S' f1-score being 1 and the baseline being at 0).
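As a minimal illustration of this definition (our own sketch; the function and variable names are hypothetical), the pair-wise improvement is simply a per-graph difference of optimal f1-scores:

```python
def pairwise_improvement(f1_redcliff, f1_baseline):
    """Per-graph difference in optimal f1-score between REDCLIFF-S and a
    baseline; positive values mean REDCLIFF-S did better on that graph.
    Both inputs: dict mapping graph id -> optimal f1-score on that graph."""
    return {g: f1_redcliff[g] - f1_baseline[g] for g in f1_redcliff}

# e.g. REDCLIFF-S scores 0.9 on graph g1 where a baseline scores 0.7
deltas = pairwise_improvement({"g1": 0.9, "g2": 0.4}, {"g1": 0.7, "g2": 0.6})
```

Since f1-scores lie in [0, 1], every per-graph improvement lies in [-1, 1], matching the range stated above.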
Question 3: "what is the 'optimal f1-score'? Could the authors provide an exact definition?"
The reviewer rightly points out an area which needs clearer discussion in our final draft. In essence, the f1-score compares positive predictions with positive labels, but this requires that a 'positive prediction' threshold be set for each algorithm. Rather than pick a threshold which may skew prediction results in our favor, we instead used a different threshold for each algorithm on each prediction such that the algorithm/baseline attained the highest possible f1-score performance on that prediction (i.e. the 'optimal' f1-score). Our current draft discusses the 'optimal f1-score' at the end of the second paragraph in Section 4 of our paper, in which we state that "we compute each ... algorithm's f1-score using a classification threshold ... which yields the highest possible f1-score for that algorithm", but we will absolutely expound on this in order to improve our paper. Are there any particular key points that the reviewer would like us to include in our final draft beyond what we have described above?
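The threshold sweep described above can be sketched in pure Python (our own illustration of the idea, not the authors' code): for each candidate threshold, binarize the predicted edge scores and keep the best resulting f1-score.

```python
def optimal_f1(scores, labels):
    """Best f1-score over all classification thresholds.
    scores: predicted edge weights (e.g. a flattened adjacency matrix);
    labels: 0/1 ground-truth edges."""
    best = 0.0
    for thr in sorted(set(scores)):  # each distinct score is a candidate threshold
        pred = [1 if s >= thr else 0 for s in scores]
        tp = sum(p == 1 and y == 1 for p, y in zip(pred, labels))
        fp = sum(p == 1 and y == 0 for p, y in zip(pred, labels))
        fn = sum(p == 0 and y == 1 for p, y in zip(pred, labels))
        if tp:  # f1 is 0 when there are no true positives
            best = max(best, 2 * tp / (2 * tp + fp + fn))
    return best
```

Any set of scores that perfectly separates true edges from non-edges yields an optimal f1-score of 1.0 regardless of the scores' scale, which is why evaluating each algorithm at its own best threshold avoids skewing the comparison.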
Concern 1: Location of Figure 1
We thank the reviewer for their suggestion to improve the readability of our paper, and will adjust Fig. 1 so that it is located at the top of the page.
Concern 2: Additional References
We thank the reviewer for calling our attention to the DyCAST paper, and will certainly include it in our discussion of related works.
Again, we thank the reviewer for their feedback and hope our response has been helpful. | Summary: This work introduces a deep generative factor model that uses weighted superposition of static graphs to achieve dynamic causal graph modeling and capture nonlinear relationships. For complex neural activity in the brain, the method does well in detecting time-varying interactions between neural variables. Through evaluation on synthetic datasets, this work shows that the proposed model achieves higher performance compared to leading methods.
Claims And Evidence: A large number of experimental results on synthetic datasets demonstrate the effectiveness and uniqueness of the approach in this work. However, there are few experiments on real neural datasets (only one), and a lack of comparisons with other baseline models. Evaluation and comparison on more real datasets (e.g., the dataset used in [1]) would make the model's contribution to the field of neuroscience more convincing.
[1] Mague, S. D., et al. Brain-wide electrical dynamics encode individual appetitive social behavior. Neuron. 2022.
Methods And Evaluation Criteria: The use of dynamic coefficients to control the efficacy of multiple static models is novel to the field, although this practice is already common in machine learning. The use of behavioral information to guide dynamic coefficients is constructive and helpful. The objective function of the model is well-designed and theoretically supported.
This work uses the f1-score as the primary metric. Although the authors have claimed that ROC-AUC is less appropriate in this situation, I think this work should also report ROC-AUC results, as this metric is widely used in related work.
Theoretical Claims: This work describes an accurate mathematical modeling of the method. A detailed discussion and derivation of the model design is also presented, which provides helpful theoretical support.
Experimental Designs Or Analyses: As discussed in the comment of **Claims And Evidence**, more experiments on real neural datasets may help to strengthen the claim and contribution of this work.
Supplementary Material: The appendix provides theoretical proofs, a detailed implementation of the method, details of the datasets and additional results. Besides, the authors provide reproducible code.
Relation To Broader Scientific Literature: The key contribution of this work is to capture causal relationships between neural variables in the brain. The authors discuss broad scientific literature, including linear and nonlinear causal discovery methods, dynamic graph modeling and some applications of causal discovery in neuroscience. Some works [2-3] have proposed dynamical systems to extract low-dimensional latent dynamics from neural activity, although they did not discover causal relationships. It would be better to include these related works for discussion.
[2] Linderman, S., et al. Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. 2017.
[3] Keshtkaran, M. R., et al. A large-scale neural network training framework for generalized estimation of single-trial population dynamics. Nature Methods. 2022.
Essential References Not Discussed: Please see the above comment of **Relation To Broader Scientific Literature**.
Other Strengths And Weaknesses: 1. There is a lack of ablation studies for the method, such as an unsupervised version of REDCLIFF ($\lambda$=0 in equation 9), a model without dynamic factors (coefficients are the same for all factors), a single-factor model, and a model without the cosine similarity penalty.
2. The organization of the sections needs fine-tuning. Section 5 is a discussion of Sections 4.1 to 4.3, which should be included as a subsection in Section 4. Section 6 is the experiment on a real-world dataset and is juxtaposed with Section 4. Therefore, it would be preferable to change the title of Section 4 to "Experiments on simulated datasets" and Section 6 to "Experiments on real-world datasets".
Other Comments Or Suggestions: N/A
Questions For Authors: 1. As there are many hyperparameters in the method, how do the authors perform the parameter search? Does it cause excessive training costs? How do these hyperparameters affect the model's performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Question 1 (Parameter selection and training cost):
Most of our hyperparameters were chosen via grid-search, though we did identify some simple equations relating the $\rho$ and $\eta$ parameters in Eq. 5 to other parameters in the model and/or dataset that we used to adapt those specific parameters to new settings (see Appendix E.2 and corresponding Table 13). We discuss how the number of parameters affects our analyses in the first paragraph of Appendix B.4 - essentially, the high number of parameters in the REDCLIFF-S model does increase training costs and did restrict us to tuning hyperparameters on a subset of the Synthetic Systems datasets. However, we emphasize that there were 51 different systems each with 5 different repeats that we sought to use in those experiments (48 of which made it into the paper and/or appendices, see Appendix Figure 9).
Question 2 (Ablation Studies):
Following the reviewer's suggestion, we performed ablations on several REDCLIFF-S parameters across multiple systems. We will include the full extent of these experiments in the final paper, and report the following results obtained on our D4IC HSNR system (Sec. 4.3) here - given as the average (Avg) Optimal F1 score across repeats and factors and corresponding standard error of the mean (SEM):
- Ablation: $\rho=0$ (Eq. 5 & 10) :: 0.304 (Avg) 0.006 (SEM)
- Ablation: $n_k=1$ (Eq. 2 equivalent) :: 0.316 (Avg) 0.011 (SEM)
- Ablation: $\alpha=1$ (Eq. 4) :: 0.338 (Avg) 0.009 (SEM)
- Ablation: $\lambda=0$ (Eq. 9) :: 0.332 (Avg) 0.006 (SEM)
Note each ablation resulted in significantly worse performance than that reported in Figure 4 of our paper, verifying the utility of each parameter.
Concern 1: Paper Organization
We thank the reviewer for raising valid feedback that will improve the readability of our paper; we will move Section 5 into Section 4 as a subsection and rename the section headings as suggested in the final paper.
Concern 2: Related Works
We thank the reviewer for suggesting several related works for us to include in our paper, and we will be sure to incorporate them in our final draft.
Concern 3: Additional Real-world (Neural) Datasets
As suggested by the reviewer, we ran an additional case study on the Social Preference dataset first used in [1]. We describe the results briefly here and will release a more detailed discussion of our findings in our final draft. We used a random subset of 20 out of the 28 mice for model cross validation, leaving the remaining 8 mice as holdouts. Regarding parameter selection, we borrowed the majority of parameters from our TST case study (see Appendix Table 13), with a few changes to reflect the Social Preference dataset. Specifically, we adjusted the number of supervised factors from 3 to 2, assigning one to the "Social Preference" label and the second to the "Object Preference" label. We then performed a 5-fold cross-validated grid search on the number of total factors, which resulted in the selection of the 18-factor model(s).
As with the TST case study, we averaged the predicted factors to arrive at our final predictions. We find the difference between the Social Preference and Object Preference factors resembles:
- VTA_L -> Hipp
- VTA_L -> NAc_Core
- NAc_Core <--> NAc_Shell
- NAc_Shell -> Amy_BLA
- NAc_Shell -> VTA_R
- VTA_R -> Hipp
- Hipp -> PrL_Cx_R
- PrL_Cx_R -> VTA_R
Interestingly, the Hippocampus (Hipp) features prominently in REDCLIFF-S' predicted factor difference, whereas it does not seem as relevant in [1]. Given the complexities of studying sociability in mice and the significant differences between REDCLIFF-S and the methods used in [1], the discrepancies in these findings may be highlighting entirely different aspects of social behavior.
- [1] Mague, S. D., et al. Brain-wide electrical dynamics encode individual appetitive social behavior. Neuron. 2022.
Concern 4: Inclusion of ROC-AUC
The reviewer points out that ROC-AUC is more popular in related works, and we will certainly include ROC-AUC performance as supplementary information in our final draft. Due to limited space, we simply note that our analyses of our paper's '6-2-2', '6-4-2', and '12-11-2' Synthetic Systems (Figure 2A) suggest the reported Opt. F1 score performances largely reflect the ROC-AUC scores obtained using the same thresholded predictions (with the exception of DCSFA-NMF in the 6-4-2 system, which seems to perform similarly to REDCLIFF-S in terms of ROC-AUC, despite having a lower mean Opt. F1 score). As discussed regarding Reviewer Qgsb's Question 2, our Synthetic Systems labels omit information regarding the presence of 'background' factor activity, which limits the examples of 'true negative'/inactive relationships between nodes in our dataset and may reduce the applicability of ROC-AUC scores generally, whereas the F1-score more accurately reflects our data generation process.
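The point about limited true negatives can be seen directly from the rank-based form of ROC-AUC, which is the fraction of positive-negative pairs ranked correctly; with few negatives, the estimate rests on few pairs. A minimal sketch (our names, not the authors' code):

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based ROC-AUC: the probability that a random positive
    outscores a random negative (ties get half credit). With few true
    negatives, the number of pairs -- and hence the metric's
    resolution -- is small."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0])
print(roc_auc(scores, labels))  # 5/6: only 6 positive-negative pairs exist
```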
We thank the reviewer again for their feedback, and hope they found our response helpful. | Summary: This paper proposes a novel hypothesis generation framework for dynamic causal graphs in neuroscience, aiming to improve the efficiency of causal discovery in time-dependent systems. The authors introduce REDCLIFF-S, a model designed to estimate state-dependent causal interactions by combining:
1. Factor-based models for dynamic causal graphs.
2. Supervised auxiliary variables (e.g., behavioral labels) to refine hypotheses.
3. A flexible generative framework that adapts causal interactions over time.
Claims And Evidence: The claims made in the submission are supported by convincing evidence in general.
Potential Weaknesses in Claims:
* No analysis on how the model generalizes beyond neuroscience.
* No formal guarantees on identifiability
Methods And Evaluation Criteria: Yes. The used datasets and evaluation metrics make sense for the problem. The used datasets include synthetic datasets (vector autoregressive models, nonlinear causal graphs), DREAM4 Insilico-Combined (D4IC) dataset (biological system benchmarks), and TST (Tail Suspension Test) neural recordings (real-world case study). Evaluation metrics include F1 score of causal graph estimation (Figures 2, 4), Computational efficiency (Table 1), and Qualitative validation with neuroscientific findings (Figure 5).
Limitations
* No evaluation on datasets outside neuroscience (e.g., economics, social systems).
* No analysis of model sensitivity to hyperparameters or noise.
Theoretical Claims: I did not check the proofs.
There is no formal guarantee on the identifiability.
Experimental Designs Or Analyses: Strengths
* Ablation study on synthetic systems shows clear performance trends (Figure 2).
* DREAM4 dataset adaptation demonstrates the model’s effectiveness across different complexity levels.
Weaknesses
* No ablation study on the impact of supervised auxiliary labels.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper extends causal discovery in time series (Runge, 2020; Gerhardus & Runge, 2020), generalizes dynamic graph models beyond PCMCI+ and DYNOTEARS, and introduces state-dependent causal inference relevant to neuroscience.
Essential References Not Discussed: No comparison to deep learning-based causal discovery approaches (e.g., DAG-GNN, Transformer-based models) and kernel-based method (e.g., CDNOD).
Other Strengths And Weaknesses: Strengths
* Novel hypothesis generation for dynamic causal graphs.
* Improves causal discovery accuracy in neuroscience applications.
* Empirical validation on synthetic and real-world data.
Weaknesses
* No discussion on generalizability beyond neuroscience.
* Limited theoretical discussion on identifiability.
Other Comments Or Suggestions: No
Questions For Authors: The paper introduces dynamic causal graphs via factor models, but does not discuss formal identifiability conditions. Are there any theoretical guarantees on when REDCLIFF-S produces unique causal structures? Does the factorization step introduce ambiguities in the inferred causal graphs?
The supervised extension (REDCLIFF-S) incorporates behavioral labels for refining causal inference. How much do behavioral labels contribute to improved performance? Would REDCLIFF-S still outperform unsupervised methods if auxiliary labels were unreliable or noisy?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are very grateful for the reviewer's thoughtful feedback. We respond to their concerns below.
Question 1: "Are there any theoretical guarantees on when REDCLIFF-S produces unique causal structures? Does the factorization step introduce ambiguities in the inferred causal graphs?"
As we allude to in our introduction and expound upon in Appendix A.1, it is very difficult to prove identifiability for even the simplest nonlinear, state dependent systems (our example consists of 2 nodes A and B where A drives B according to 2 different functions depending on the global state). Thus, we focus our efforts on rigorous empirical testing across numerous dynamical systems (48 unique configurations each with 5 different repeats in our Synthetic Systems experiments - see Figure 9 in Appendix C.3).
As the reviewer rightly points out, our focus on empirical evidence may mean we miss ambiguities in our proposed REDCLIFF-S algorithm. However, due to the structure of our algorithm, these ambiguities are mostly related to the relative importance of each unsupervised factor as opposed to the relationship between a given factor and the input and output data (put another way, each factor has clear linear relationships to the input and output data - see Equation 4 as well as Appendix B.1 - but the weighting of each factor does not). For this reason, we outline our selection criteria in Equation 10 and demonstrate how we use it to restrict the number of unsupervised factors to the minimal amount necessary in our TST Case study in Section 6 (see also Figure 25 of the Appendices).
Question 2: "How much do behavioral labels contribute to improved performance? Would REDCLIFF-S still outperform unsupervised methods if auxiliary labels were unreliable or noisy?"
This is an important question, and one which we should address more directly in our paper. In short, all of the Synthetic Systems experiments we present in this paper contain label noise, as do the D4IC MSNR and LSNR experiments presented in Appendix C.2. With regard to curating the labels for our Synthetic Systems experiments, we mention in the second paragraph of Section 4.1 that "Recordings obtained from each VAR model were then weighted over time by linearly interpolated weights (randomly selected between 0 and 1). These weighted recordings were added together along with a level of Gaussian noise. We augmented labels by marking with a one-hot vector which factor weight was largest at each time step". Thus, all of our labels in the Synthetic Systems experiments occluded information relating to how active each system was at a given time step. While we do not quantify how much information was occluded from the Synthetic Systems labels, we do provide a simpler exploration of this between our D4IC HSNR, MSNR, and LSNR datasets. As discussed in Section 4.3, the D4IC experiments each had "down-weighted recordings from all but one" system factor added on top of the recording from a "main" system factor (the identity of which determined the label for a given sample). This background noise was weighted with a scalar coefficient valued at either 0.0, 0.1, or 1.0, while the recording of the 'main' system factor was multiplied by 10. As can be seen in Appendix C.2 (please forgive the typo in the caption of Figure 8; it should read 'D4IC-MSNR' instead of 'D4IC-HSNR'), our proposed algorithm does not perform well under this particular form of noise. We believe that this is due to the fact that there was no variation/interpolation between the weight applied to each 'background' factor, making it difficult to ultimately distinguish them from the 'main' factor.
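As a rough illustration of the label-construction process quoted above (random endpoint weights, linear interpolation, one-hot argmax labels), here is a numpy sketch; the shapes and seed are ours, not the actual generator:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_factors = 200, 3

# Each factor's weight is linearly interpolated between random endpoints
# in [0, 1] over the recording.
endpoints = rng.uniform(0, 1, size=(2, n_factors))
weights = np.linspace(endpoints[0], endpoints[1], T)   # shape (T, n_factors)

# The label is a one-hot vector marking the largest factor weight at each
# time step -- it occludes how active each factor actually is.
labels = np.eye(n_factors)[weights.argmax(axis=1)]     # shape (T, n_factors)
```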
We thank the reviewer for mentioning additional literature that we can discuss in our paper, and we will certainly include it in our final draft. We hope our response has been helpful, and thank the reviewer again for their feedback. | Summary: The submission propose a method for causal discovery (at least this is what I understand).
Summarizing, it seems that the proposed approach is another implementation of a non-linear Granger causality model with a particular choice of architecture and loss function.
There are no theoretical results (identifiability, consistency, etc.); instead, the authors argue in the appendix that such systems are non-identifiable, and explain why the approach is still valuable.
The proposed approach is benchmarked against other approaches with a simulation experiment.
Claims And Evidence: Partially; in general there are no strong theoretical claims.
Nevertheless, I have concerns about:
(1) "The main strength of REDCLIFF-S is that its hypotheses can be easily tested" (first sentence in discussion): what does "tested" mean? Is it possible to do statistical inference? Why can't other methods' hypotheses be tested? To my understanding, the problem of post-selection inference in causal discovery is far from solved, except via naive data-splitting; I am only aware of a Greedy Equivalent Search (GES) variant which allows for valid inference https://arxiv.org/pdf/2208.05949 .
(2) In the conclusions: "attains state-of-the-art performance in estimating multiple causal graphs simultaneously in various settings, and in a way conducive to follow-up scientific inquiry": where is the evidence for that? What has been measured to prove it in the simulations or the real-world data? Moreover, I think multiple baselines are missing; see next.
Methods And Evaluation Criteria: Yes the proposed data and benchmark makes sense.
I would suggest the addition of some causality aware metrics between the learned graphs, apart from the F1 score.
for instance:
- "Structural intervention distance for evaluating causal graphs" https://dl.acm.org/doi/10.1162/NECO_a_00708
- "Adjustment Identification Distance: A gadjid for Causal Structure Learning" https://openreview.net/forum?id=jO5UNNrjJr
- " Separation-Based Distance Measures for Causal Graphs" https://openreview.net/forum?id=KO7fATqh2W
Theoretical Claims: none
Experimental Designs Or Analyses: correct
Supplementary Material: no
Relation To Broader Scientific Literature: I think numerous baselines are missing
for instance PCMCI variants are not considered in the experiments and in the Related Work they are partially dismissed because
"However, these methods tend to rely on assumptions that rule out hypotheses we would like to explore in neuroscience; for example, PCMCI+, LPCMCI, and DBNs all assume causal graphs are acyclical". In fact, they assume the complete graph (expanded in time) is acyclic; they do not assume, for instance, that there are no feedback loops in time.
Moreover some literature is not discussed (see next question).
Essential References Not Discussed: I think they main literature not discussed is the Dynamic Causal Model approach which was developed exactly for causal discovery in Neuroscience.
see for instance
- Friston, Karl J., Lee Harrison, and Will Penny. "Dynamic causal modelling." Neuroimage 19.4 (2003): 1273-1302.
- Friston, Karl J., et al. "Dynamic causal modelling of COVID-19." Wellcome open research 5 (2020): 89.
- Stephan, Klaas Enno, et al. "Ten simple rules for dynamic causal modeling." Neuroimage 49.4 (2010): 3099-3109.
- Friston, Karl, Rosalyn Moran, and Anil K. Seth. "Analysing connectivity with Granger causality and dynamic causal modelling." Current opinion in neurobiology 23.2 (2013): 172-178.
For comprehensive recent reviews, you can refer to:
- Discovering causal relations and equations from data
Gustau Camps-Valls, Andreas Gerhardus, Urmi Ninad, Gherardo Varando, Georg Martius, Emili Balaguer-Ballester, Ricardo Vinuesa, Emiliano Diaz, Laure Zanna, and Jakob Runge Physics Reports 1044 https://www.sciencedirect.com/science/article/pii/S0370157323003411
- Causal Discovery from Temporal Data: An Overview and New Perspectives https://dl.acm.org/doi/10.1145/3705297#sec-3
- Review of Causal Discovery Methods Based on Graphical Models https://www.frontiersin.org/journals/genetics/articles/10.3389/fgene.2019.00524/full
Other Strengths And Weaknesses: The problem is not clearly stated; for instance, in Sec. 3.1 (Problem Statement) I would expect a clear statement of the inference goal.
Other Comments Or Suggestions: see above
——-
Score updated after rebuttal
Questions For Authors: - What exactly is the goal: recovering a causal graph?
- What are the assumptions? Of course, different sets of assumptions would lead to different algorithms and methods.
- How the proposed approach compare to Dynamic Causal Models?
- Why were PCMCI-style algorithms not compared? What about simpler baselines such as tidybench https://github.com/sweichwald or a simple structural VAR?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Question 1: Our Goal
Our goal is to reduce the space of hypotheses that must be tested before a causal relationship is successfully identified. We allude to this in our abstract, but will make this more explicit in our final draft.
Question 2: Our Assumptions
We only assume our data consists of regularly sampled time series of scalar-valued nodes (see Section 3.1). These time series may be noisy, and we make no assumptions regarding the underlying generative processes / graphs. In Sec. 3.5, we assume the availability of "global state" labels for each time step that relate to the dominant generative process.
Question 3: Comparing REDCLIFF-S & Dynamic Causal Models
The REDCLIFF-S method should be viewed as a preparatory analysis technique preceding the implementation of a more sophisticated method such as Dynamic Causal Models (DCMs). As discussed in work cited by the reviewer [1,2], the Granger causal paradigm assumed by REDCLIFF-S precludes the availability of interventional inputs required to construct a DCM model [1], suggesting that Granger causal models (such as REDCLIFF-S) be employed as a complementary analysis prior to the implementation of DCMs [2].
- [1] pages 3101 & 3102 of Stephan, Klaas Enno, et al. "Ten simple rules for dynamic causal modeling." Neuroimage 49.4 (2010): 3099-3109.
- [2] page 175 of Friston, Karl, Rosalyn Moran, and Anil K. Seth. "Analysing connectivity with Granger causality and dynamic causal modelling." Current opinion in neurobiology 23.2 (2013): 172-178.
Question 4: PCMCI-style algorithms & simpler baselines (e.g. tidybench https://github.com/sweichwald)
We originally excluded PCMCI and similar methods over concerns with acyclicity constraints, particularly related to cross-frequency coupling phenomena we hoped to capture. Given the reviewer's feedback, we compared REDCLIFF-S, Regime-PCMCI [3], and supervised versions of 'slarac', 'qrbs', and 'lasar' from the tidybench repository ('selvar' did not compile locally) on our '12-11-2' Synthetic Systems (Figure 2A), with results shown below. Values are given as the average (Avg) of the mean statistic of system factors across repeats and the standard error of the mean (SEM) for continuous-valued metrics, or as the median (Med) statistic across repeats and factors for discrete metrics; 'Upper' and 'Lower' refer to the fact that we had to employ causal distance metrics [4] on the upper-triangular and lower-triangular portions of our adjacency matrices due to the presence of cycles (details in Sec. 4.1 & Appendix B.2).
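The triangular split mentioned above (needed because the 'Aid' distances assume acyclic graphs) can be sketched in a few lines of numpy; this is an illustrative split assuming zero diagonals, not the authors' evaluation code:

```python
import numpy as np

# A cyclic adjacency matrix (contains the 2-cycle 0 <-> 1) cannot be fed
# to DAG-only distance metrics directly, so it is split into two acyclic
# halves that are scored separately.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]])

upper = np.triu(A, k=1)   # strictly upper-triangular part: acyclic
lower = np.tril(A, k=-1)  # strictly lower-triangular part: acyclic
# Each half can now be scored with metrics such as parent-AID or
# ancestor-AID, and the errors reported separately.
```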
REDCLIFF-S
- Opt. F1: 0.373 (Avg) 0.028 (SEM)
- ROC-AUC: 0.687 (Avg) 0.017 (SEM)
- Upper Ancestor Aid Errors: 3.0 (Med)
- Lower Ancestor Aid Errors: 2.0 (Med)
- Upper Parent Aid Errors: 4.5 (Med)
- Lower Parent Aid Errors: 2.5 (Med)
- Upper Oset Aid Errors: 3.0 (Med)
- Lower Oset Aid Errors: 3.0 (Med)
Regime-PCMCI
- Opt. F1: 0.407 (Avg) 0.027 (SEM)
- ROC-AUC: 0.673 (Avg) 0.026 (SEM)
- Upper Ancestor Aid Errors: 4.0 (Med)
- Lower Ancestor Aid Errors: 4.0 (Med)
- Upper Parent Aid Errors: 6.0 (Med)
- Lower Parent Aid Errors: 5.5 (Med)
- Upper Oset Aid Errors: 4.0 (Med)
- Lower Oset Aid Errors: 4.0 (Med)
(Supervised) slarac
- Opt. F1: 0.417 (Avg) 0.034 (SEM)
- ROC-AUC: 0.598 (Avg) 0.015 (SEM)
- Upper Ancestor Aid Errors: 6.0 (Med)
- Lower Ancestor Aid Errors: 4.5 (Med)
- Upper Parent Aid Errors: 8.0 (Med)
- Lower Parent Aid Errors: 6.5 (Med)
- Upper Oset Aid Errors: 6.0 (Med)
- Lower Oset Aid Errors: 4.5 (Med)
(Supervised) qrbs
- Opt. F1: 0.451 (Avg) 0.029 (SEM)
- ROC-AUC: 0.614 (Avg) 0.014 (SEM)
- Upper Ancestor Aid Errors: 6.1 (Med)
- Lower Ancestor Aid Errors: 5.0 (Med)
- Upper Parent Aid Errors: 8.9 (Med)
- Lower Parent Aid Errors: 7.0 (Med)
- Upper Oset Aid Errors: 6.1 (Med)
- Lower Oset Aid Errors: 5.0 (Med)
(Supervised) lasar
- Opt. F1: 0.283 (Avg) 0.009 (SEM)
- ROC-AUC: 0.563 (Avg) 0.026 (SEM)
- Upper Ancestor Aid Errors: 5.3 (Med)
- Lower Ancestor Aid Errors: 4.7 (Med)
- Upper Parent Aid Errors: 8.5 (Med)
- Lower Parent Aid Errors: 6.7 (Med)
- Upper Oset Aid Errors: 5.7 (Med)
- Lower Oset Aid Errors: 4.7 (Med)
Note Regime-PCMCI and REDCLIFF-S attain significantly higher ROC-AUC scores than the other baselines. While the Opt. F1 scores for REDCLIFF-S are similar to several baselines', we highlight REDCLIFF-S' lower median error across all of the causal metrics shown as evidence that it offers improvements in identifying functional relations in this setting.
- [3] Elena Saggioro, Jana de Wiljes, Marlene Kretschmer, Jakob Runge; Reconstructing regime-dependent causal relationships from observational time series. Chaos 1 November 2020; 30 (11): 113115. https://doi.org/10.1063/5.0020538
- [4] "Adjustment Identification Distance: A gadjid for Causal Structure Learning" https://openreview.net/forum?id=jO5UNNrjJr
We thank the reviewer for their helpful feedback which we will certainly incorporate into our paper, and we hope they found our response helpful.
---
Rebuttal Comment 1.1:
Comment: I thank the author for the rebuttal.
I will raise my score accordingly. | null | null | null | null | null | null |
Non-Asymptotic Length Generalization | Accept (poster) | Summary:
The paper analyzes the minimum input length required for learning length-generalizable DFA, CFG, and C-RASP programs, and provides non-asymptotic bounds on the minimal-length inputs needed to differentiate distinct DFA and C-RASP programs.
Strengths:
The proof idea is quite interesting: the authors design a finite set of schemas that corresponds to the prefix behaviors of all possible inputs and maps them to convex polytopes. The analysis on binary sequences are then converted into geometric analysis in high-dimensional space, which provides an interesting perspective on how C-RASP programs process sequential inputs.
The proven C-RASP length generalization theorem and its proof idea can potentially serve as a critical step in showing how Transformers achieve length-generalization on certain classes of computational problems. This further strengthens the importance of the RASP-L conjecture.
Weaknesses:
The proof sketch in the main content may be quite confusing without reading the supplementary materials. In particular, the authors could provide more details on the "basis schemas" in the main content, which are a core contribution but quite confusing as currently written on page 7. In particular, it is unclear what $$Y_{\{(y_1^i,y_2^i)\}}$$ or $$m$$ refer to on page 7, since they are not properly explained. Figure 2 is also currently quite confusing, since 1) the average prefix sums should be discrete instead of continuous for finite input lengths, and 2) the "monotone" parts of the function are apparently not monotone. It is also not immediately clear why the number of basis schemas is finite. Although the exact meaning can be clarified by looking at the details of pages 19 and 20, it would be better for readers if these confusions could be resolved by looking at the main paper and Figure 2 alone, since these are critical parts of the paper's contribution and should be made as explicit as possible. As such, I would recommend the authors spend more space explaining this portion of the paper.
The following concerns are regarding scope limitations rather than technical weaknesses:
Although the authors showed an interesting perspective on length-generalization and C-RASP and provide a novel proof for a formal language theory result, there’s still a huge gap when viewed from the perspective of length-generalization of Transformers, which C-RASP is designed to model. Therefore, it is unclear whether the author’s main contributions can indeed lead to fundamental progress regarding Transformer models. In particular, according to my current understanding:
It is yet unclear whether there’s a training setting (i.e. loss function) for Transformers such that the optima correspond to the minimum C-RASP program interpolator on the training dataset.
For any given number of layers and heads, C-RASP programs have very restricted expressive capability compared to RASP or the full Transformer architecture
The proven results imply that, if ALL inputs of length at most O((KT)^{O(K^2)}) are given to a minimum-length C-RASP interpolator, then the resulting C-RASP program must be length-generalizable. However, since this requires the dataset size to be exponential, it is highly unrealistic, so a more interesting result for the ML community might be probabilistic (i.e., the probability that a dataset of size D drawn according to some distribution over {0,1}^* results in a length-generalizable C-RASP interpolator).
The exponential dependence on O(K^2) is also quite unrealistic for Transformer models, which would cause the length to explode even for K=5, for example.
As such, it is unclear whether the proven theorem can indeed be part of a larger theorem regarding Transformer length generalization, or whether it would be easier to pursue a path without C-RASP as the intermediate language.
Despite these limitations, the perspective introduced by the theorem and proof is quite interesting and may lead to further advancement in the general field of formal language theory and Transformers, and thus I recommend accepting the paper, especially after the proof clarification suggested earlier.
Claims And Evidence: There are 3 main theoretical claims of the paper, each of which is supported by corresponding theorems and proofs:
non-asymptotic upper bound for length generalization for DFAs
Impossibility result for a non-asymptotic upper bound for length generalization for CFGs
non-asymptotic upper bound for length generalization for C-RASP
Methods And Evaluation Criteria: The paper uses the minimum length differentiating 2 C-RASP programs to study length-generalization of Transformer-based LLMs. There are apparent limitations in this approach, since it requires all inputs up to a given length to be given as training data for length generalization to occur. As a result, the training data size is exponential w.r.t. the length bound, which is already exponential w.r.t. the number of heads in the C-RASP program $K$. The paper also considers C-RASP programs of depth 2, which is a very limited setting. The paper also assumes that the learning process is a minimum complexity interpolator, but whether this indeed represents Transformer learning is unknown both theoretically and empirically. As such, there's still a long way from this work towards solving the overarching problem.
Having the above said, since it’s the first work in this direction AFAIK, I believe the paper still provides decent contribution for the field.
For the proof, the authors design a finite set of schemas that corresponds to the prefix behaviors of all possible inputs and maps them to convex polytopes. The analysis on binary sequences are then converted into geometric analysis in high-dimensional space, which provides an interesting perspective on how C-RASP programs process sequential inputs.
Theoretical Claims: The full proof (40 pages long) is too long to be fully checked. I checked the main novel idea from page 19 to page 21, which makes sense assuming the correctness of the lemmas in the first 8 pages. However, I would greatly appreciate it if the authors could provide a better proof sketch regarding the definition of test functions and the proof of convexity in the main content.
Experimental Designs Or Analyses: N/A
Supplementary Material: The supplementary is just the proof, which I described in “Theoretical Claims”
Relation To Broader Scientific Literature: The paper shows a condition for length-generalization of Transformer LLMs on simple computational problems (expressible by 2-layer C-RASP) assuming that the RASP-L conjecture is also true for C-RASP.
Previous works have mostly focused on expressiveness upper bounds and lower bounds, while this paper leverages the ideas and results (e.g., C-RASP) from these previous works to approach the problem of length generalization.
Although there are many limitations in the current work, as discussed in “Methods”, the paper’s proof idea and perspective can contribute to future work in this direction.
Essential References Not Discussed: I’m not aware of closely related works not mentioned in the paper
Other Strengths And Weaknesses: The proof sketch in the main content may be quite confusing without reading the supplementary materials. In particular, the authors could provide more details on the “basis schemas” in the main content, which are a core contribution but quite confusing as currently written on page 7. In particular, it is unclear what $Y_{\{(y_1^i,y_2^i)\}}$ or $m$ refer to on page 7, since they are not properly explained. Figure 2 is also currently quite confusing, since 1) the average prefix sums should be discrete instead of continuous for finite input lengths and 2) the “monotone” parts of the function are apparently not monotone. It is also not immediately clear why the number of basis schemas is finite. Although the authors’ exact meaning can be clarified by looking at the details of pages 19 and 20, it would be better for readers if these confusions could be resolved by looking at the main paper and Figure 2 alone, since these are critical parts of the paper’s contribution and should be made as explicit as possible. As such, I would recommend the authors spend more space explaining this portion of the paper.
Other Comments Or Suggestions: I would suggest drastically shortening the abstract to at most half its size. It currently reads like an intro rather than a concise summary of results. I recommend only one sentence for motivation, one sentence for the DFA+CFG results (since they are not the main contribution), two sentences for the C-RASP result, and one sentence on the proof method.
Similarly, the introduction section also contains too much motivation that is very far from the actual results shown in the paper.
The saved space can be used to provide a better description of the proof sketch.
I believe that the paper would be good for acceptance with better organization and proof sketch
Questions For Authors: Is my following understanding of the current state of the problem and the practicality of the paper correct?
It is yet unclear whether there’s a training setting (i.e. loss function) for Transformers such that the optima correspond to the minimum C-RASP program interpolator on the training dataset.
For any given number of layers and heads, C-RASP programs have very restricted expressive capability compared to RASP or the full Transformer architecture. Even though C-RASP is an upper bound on the expressiveness, it results in a drastically larger number of heads and layers.
The proven results imply that, if ALL inputs of length at most $O((KT)^{O(K^2)})$ are given to a minimum-length C-RASP interpolator, then the resulting C-RASP program must be length-generalizable. However, since this requires the dataset size to be exponential, i.e. $2^{O((KT)^{O(K^2)})}$, this is highly unrealistic.
The exponential dependence on $O(K^2)$ would cause the length to explode even for a Transformer with 5 heads, for example.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer YXKG for their time, detailed comments, and constructive criticism.
**Author statement:** The submitted paper contained an error in the proofs, which we discovered only after submission. We’ve revised the proofs and sent the revised draft to the AC. Here is how the revised draft’s results differ from the submitted draft’s.
- The main result (Theorem 5.1) is unchanged except for the addition of a weak assumption to the function class $C-RASP^{2,K,T}$, that $\sum_{i \in [K]} \lambda_i > z$ for each function in $C-RASP^{2,K,T}$ (analogous to the existing assumption $z > 0$; see line 220, left col). Theorem 5.1 claims the same upper bound of $N_{A_{si}}(K,T) \leq O((KT)^{O(K^2)})$ for $C-RASP^{2,K,T}$. The proof is almost exactly the same. The significance of the result is unchanged.
- The revised draft no longer contains Theorems 5.4 and 5.6, which together were intended to extend Theorem 5.1 to prove length generalization guarantees for a class of functions which contains Dyck1 and its higher-precision variants. The absence of these side-results doesn’t affect the significance of the paper.
- Results for DFAs and CFGs unchanged.
Source of error:
- Lemma A.24 in line 1557
We apologize for the error and hope that either the reviewer can view the revised draft; or ignore Theorems 5.4 and 5.6 in their review. Thank you.
**Addressing Reviewer Concerns:**
> The proof sketch in the main content may be quite confusing
We apologize for the lack of clarity of the proof sketch and thank the reviewer for their patience in reading through it. We will re-write it and include more details of the basis schemas in the main content.
> It is yet unclear whether there’s a training setting for Transformers such that the optima correspond to the minimum C-RASP program interpolator on the training dataset.
This is true. However, there is circumstantial evidence which lends some credence to the idea of studying transformer training with the min-complexity interpolator, for some complexity measure.
- Abbe et. al. find empirically that transformers are biased towards learning functions of low degree-profile [1].
- Bhattamishra et. al. find empirically that transformers are biased towards learning functions of low sensitivity [2].
If one assumes that there is some complexity measure $C$ such that training transformers with SGD roughly corresponds to the min-complexity interpolator with $C$ (which is strong, granted), then the RASP-L paper [3] suggests that $C$ should be RASP-L program length, as they show empirically that when the ground-truth is also a short-RASP program, then the trained model achieves far better length generalization.
[1] Abbe et. al. arxiv.org/pdf/2301.13105
[2] Bhattamishra et. al. arxiv.org/pdf/2211.12316
[3] Zhou et. al. arxiv.org/abs/2310.16028
> The C-RASP program have very restricted expressive capability
We agree.
> The proven results imply that … since this require the dataset size to be exponential, this is highly unrealistic
Having all inputs of length at most $O((KT)^{O(K^2)})$ is a sufficient condition for length generalization, and indeed unrealistic. A weaker sufficient condition is that for every pair of unequal functions in the hypothesis class, there is at least one (short) input which distinguishes them. The cardinality of the training set need only be at most the square of the number of hypotheses (which is $O(T^{O(K)})$ instead of $2^{O(T^{O(K^2)})}$).
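To illustrate this weaker sufficient condition, one can construct such a training set explicitly for a small finite hypothesis class by adding, for each pair of unequal hypotheses, one input that distinguishes them. This is only a hypothetical sketch; the toy counting class and all names below are invented, not from the paper:

```python
from itertools import combinations, product

def all_strings_up_to(n, alphabet="01"):
    """Enumerate all binary strings of length 0..n, shortest first."""
    for length in range(n + 1):
        for tup in product(alphabet, repeat=length):
            yield "".join(tup)

def distinguishing_set(hypotheses, ground_truth, max_len):
    """For each pair of unequal hypotheses, add one (shortest) input on
    which they disagree. The resulting set has at most quadratically many
    strings in the number of hypotheses, far fewer than all strings up to
    the length bound."""
    train = set()
    for h1, h2 in combinations(hypotheses, 2):
        for s in all_strings_up_to(max_len):
            if h1(s) != h2(s):
                train.add(s)
                break
    return [(s, ground_truth(s)) for s in sorted(train)]

# Toy class: "at least t ones", t = 0..3; ground truth is t = 2.
hyps = [lambda s, t=t: s.count("1") >= t for t in range(4)]
data = distinguishing_set(hyps, hyps[2], max_len=3)

# Only the ground truth fits this small training set.
fits = [h for h in hyps if all(h(x) == y for x, y in data)]
assert len(fits) == 1 and fits[0] is hyps[2]
```

Any interpolator over this hypothesis class that fits `data` is forced to pick the ground truth, even though `data` contains only three strings.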
> A more interesting result might be probabilistic
This is an important question. If one introduces a test or train distribution (or both) and relax the definition of length generalization, one can perhaps get more palatable sample complexity.
If one keeps the current notion of length generalization (i.e. identification) but require that the training set be drawn i.i.d. from some training distribution, we suspect that the sample complexity will be no better than our doubly-exponential sample complexity, because perfect length generalization requires one to distinguish every pair of functions in the hypothesis class. Given two very similar functions $f, f’$, we suspect the strings of length $n$ which distinguish $f$ and $f’$ may have density which is order $1/\exp(n)$. Random sampling is no better than including all strings of length n.
One could also relax the definition of length generalization from identification. One could say that length generalization is achieved if the hypothesis achieves 99% accuracy on a particular test distribution. However, it is unclear what a realistic test distribution is, such that the theoretical setup models practical scenarios. Also, the 99%-accuracy length generalization notion may be too weak. The 1% of examples which a learned hypothesis gets wrong could be exclusively the long/interesting examples that we care about. If the test distribution is supported on {0,1}$^*$, the weight of all strings of length $N$ or larger will be less than 1% for sufficiently large $N$. | Summary: The paper provides new theoretical results related to criteria on a training set such that an idealized learner can learn a function that exhibits length generalization.
The paper considers an idealized learner, termed the Minimum-Complexity Interpolator. The learner is defined with respect to a hypothesis class (i.e. set of functions) and a complexity function defined over this set. The learner learns the lowest complexity function in the set that perfectly fits the training data.
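As a concrete illustration of this idealized learner, a minimum-complexity interpolator over a finite hypothesis class can be sketched in a few lines of Python. The toy threshold class and all names below are invented for illustration and are not from the paper:

```python
def min_complexity_interpolator(hypotheses, complexity, train_set):
    """Idealized learner: among all hypotheses that fit every
    (input, label) pair in train_set, return one of minimum complexity."""
    fitting = [h for h in hypotheses
               if all(h(x) == y for x, y in train_set)]
    return min(fitting, key=complexity) if fitting else None

# Toy hypothesis class: length-threshold functions; complexity = threshold t.
hyps = [lambda s, t=t: len(s) >= t for t in range(5)]
comp = lambda h: h.__defaults__[0]
data = [("aa", True), ("a", False)]

learned = min_complexity_interpolator(hyps, comp, data)
assert comp(learned) == 2
assert learned("aaaa") is True   # behaves correctly on longer inputs too
assert learned("") is False
```

Here the two training pairs pin down the threshold exactly, so the learned function agrees with the ground truth on all lengths; the paper's question is how long the training inputs must be for this to be guaranteed in general.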
The paper then studies, for several different tasks, what is the maximum input length required in the training set such that for any data generating function in the hypothesis class, the learner is guaranteed to learn a function that exhibits length generalization.
The paper derives results related to the upper bound on this maximum input length for three settings:
* n-state Deterministic Finite Automata (DFA) – deriving a bound of 2n-2
* Context-Free Grammars (CFGs) – deriving a negative result
* C-RASP: a class of functions mapping binary strings to binary labels, using the RASP variant defined by Yang & Chiang, 2024. Prior work has shown that this class of functions is related to those expressible by Transformers (with some caveats related to whether the Transformer is assumed to be finite vs. infinite precision). These results apply to the Dyck-1 language studied by prior work.
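As context for the Dyck-1 language mentioned above — the set of balanced-parenthesis strings — membership is decidable from prefix counts alone, the counting-over-prefixes primitive that C-RASP formalizes. A minimal plain-Python sketch (illustrative only, not the paper's C-RASP syntax):

```python
def dyck1(s: str) -> bool:
    """Toy prefix-count recognizer for Dyck-1 (balanced parentheses).
    Tracks opens minus closes over each prefix: every prefix count must
    stay non-negative, and the total must return to zero."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:        # a prefix with more ')' than '('
            return False
    return depth == 0        # totals must balance

assert dyck1("(()())")
assert not dyck1("())(")
assert not dyck1("(")
```

This counting structure is why Dyck-1 and its higher-precision variants are natural test cases for counting-based formalisms like C-RASP.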
## Update after rebuttal
Thank you for your response and for being transparent about the issue identified with theorems 5.4 and 5.6. As the main result of 5.1 still holds, this does not significantly affect my judgement.
Despite the limitations with respect to the potential gap between the theory and empirically observed behavior with real models and optimizers (which are clearly acknowledged by the authors and noted by the other reviewers) I still recommend acceptance for this paper. I think it brings not only a novel perspective, but also non-trivial theoretical results. I can see this work having implications for not only future theory work in this area, but also influencing how we design "fair" (i.e. that, minimally, there exists a plausible minimum complexity interpolator that exhibits the desired generalization behavior) splits to assess length generalization and other OOD generalizations, as well as potentially inspiring new methods that seek to add inductive biases to bridge the gap between the minimum complexity interpolator and empirically observed behavior.
Claims And Evidence: Yes, the limitations of the key claims are discussed, i.e. that they do not consider expressibility of real architectures (e.g. finite-precision Transformers) and the idealized learner may not closely relate to the functions learned in practice (e.g. by SGD).
Methods And Evaluation Criteria: N/A - Purely theoretical results.
Theoretical Claims: I did not check the correctness of the proofs in detail.
Experimental Designs Or Analyses: N/A - Purely theoretical results.
Supplementary Material: I did not review the supplementary material in detail.
Relation To Broader Scientific Literature: As far as I am aware, the new results are novel.
The paper not only offers new theoretical results, but also offers a new perspective on assessing the feasibility (albeit under idealized assumptions) of length generalization from finite length training sets.
Essential References Not Discussed: As far as I'm aware, there are no essential references missing that directly relate to the theoretical results.
However, I do think that the paper could be improved and made relevant to a wider audience by discussing a broader range of prior work for the camera ready, e.g.:
* The connection with the RASP-L conjecture was good to see discussed. (Concurrent work) Shaw et al. 2024 (https://arxiv.org/abs/2410.18077) also discuss the RASP-L conjecture and define the notion of program minimality with respect to a training set, which might be relevant in this context. They connect this notion to the existence of a local optimum for a Universal Transformer that exhibits length generalization. They then similarly compute the minimum input length required for such an optimum to exist, e.g. for addition. While clearly not the same as the current work, there are similarities in the ambition to characterize necessary conditions on a training set that enable length generalization.
* Empirical work that studies length generalization in Transformers related to Chomsky hierarchy complexity might be good to mention to provide context for the theoretical results in this paper: https://arxiv.org/abs/2207.02098, https://arxiv.org/abs/2305.16843, https://arxiv.org/abs/2009.11264
* Other theoretical work specific to Transformers: Hahn et al. 2020 (https://aclanthology.org/2020.tacl-1.11/) has negative theoretical findings for Transformers ability to express length invariant algorithms for parity and Dyck languages. Some of these results were subsequently discussed and extended in Chiang & Cholak, 2022 (https://arxiv.org/abs/2202.12172), providing a more positive result.
* To provide more context for the negative results for CFG induction, it would be useful to understand how this relates to positive empirical results for inducing grammars that exhibit length generalization (using description length priors). For example, https://arxiv.org/abs/2010.12725 induced synchronous CFGs that exhibit length generalization on the SCAN benchmark using a description length prior over the grammars that appears similar to the complexity measure of CFGs studied in this paper. I assume the apparent contradiction is resolved because while the paper's result holds for CFGs in general, many specific instances of CFGs may have a finite upper bound for identifying the underlying grammar? Or perhaps the difference between inducing CFGs and synchronous CFGs for transduction is critical here. Perhaps some discussion could help connect the results to the literature on linguistically-motivated models for length generalization.
Other Strengths And Weaknesses: (Note that my overall impression and score assume the correctness of the proofs, which I did not carefully verify)
Strengths:
* The paper offers a new perspective on the feasibility of inducing functions that exhibit length generalization from finite length training sets. Length-based splits have been popular to empirically probe the generalization capabilities of various models and learning algorithms, so it is useful to build a stronger theoretical foundation of when such generalization is even theoretically feasible (for an idealized learner).
* The results on C-RASP are quite general, i.e. it is a very broad class of functions. It is also nice because it directly relates to the computational model of Transformers.
* The negative CFG results could be of interest to the community working on linguistically-motivated methods for compositional generalization.
Weaknesses:
* Although clearly acknowledged, the results hold for a “minimum complexity interpolator”, and there is no theoretical or empirical evidence that this learner behaves similarly to, e.g. Transformers trained with SGD. However, while some empirical results could relate various complexity measures to the actual inductive biases of Transformers, I don't think this is necessary for this paper, which already makes theoretical contributions.
Other Comments Or Suggestions: Nits:
* Section 1 - “(Zhou et al., 2023) make” > Use \citet? This nit also applies to many other citations in the paper.
* Section 1 - “A theoretical guarantee is particularly crucial here, as empirically verifying a model’s ability to generalize to arbitrarily long sequences is impractical, if not impossible.” > Nit: This claim seemed a bit strong. If such guarantees are “crucial”, then it seems problematic that the theoretical results do not hold for models trained in practice, e.g. with SGD. In practice, testing models on inputs that are arbitrarily longer than those seen during training seems sufficient for most applications.
* It might be useful to clearly define what a “non-asymptotic” upper bound vs. an “asymptotic” upper bound is in this context, given that it is such a key claim. Some readers might be familiar, but it is good to make this explicit. Paragraph 2 of section 2 discusses this, but could maybe clarify this more directly and earlier.
Questions For Authors: None (see recommendations above)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer CEK9 for their time, detailed comments, and constructive criticism.
**Author statement:** The submitted paper contained an error in the proofs, which we discovered only after submission. We’ve revised the proofs and sent the revised draft to the AC. Here is how the revised draft’s results differ from the submitted draft’s.
- The main result (Theorem 5.1) is unchanged except for the addition of a weak assumption to the function class $C-RASP^{2,K,T}$, that $\sum_{i \in [K]} \lambda_i > z$ for each function in $C-RASP^{2,K,T}$ (analogous to the existing assumption $z > 0$; see line 220, left col). Theorem 5.1 claims the same upper bound of $N_{A_{si}}(K,T) \leq O((KT)^{O(K^2)})$ for $C-RASP^{2,K,T}$. The proof is almost exactly the same. The significance of the result is unchanged.
- The revised draft no longer contains Theorems 5.4 and 5.6, which together were intended to extend Theorem 5.1 to prove length generalization guarantees for a class of functions which contains Dyck1 and its higher-precision variants. The absence of these side-results doesn’t affect the significance of the paper.
- Results for DFAs and CFGs unchanged.
Source of error:
- Lemma A.24 in line 1557
We apologize for the error and hope that either the reviewer can view the revised draft; or ignore Theorems 5.4 and 5.6 in their review. Thank you.
**Addressing Reviewer Concerns:**
> the paper could be made relevant to a wider audience by discussing a broader range of prior work
We thank the reviewer for the very detailed list of references. We will add them in.
> Shaw et. al. induced synchronous CFGs... I assume the apparent contradiction is resolved because ...
Yes, specific instances of CFGs, $f \in \mathcal{F}$, have different ``identification times", $N_{A_{si}}(f)$ (line 126, left column). As such, some CFGs require much longer-length data than others in order to be identified by $A_{si}$. But this doesn’t contradict that there is a particular sequence of CFGs where the identification times as a function of the complexity of the CFGs (i.e. their description lengths) can grow faster than any computable function. All CFGs have finite $N_{A_{si}}(f)$, but this doesn’t preclude some CFGs from having very large $N_{A_{si}}(f)$ which is not bounded by any reasonable function of $C(f)$.
In addition, the empirical studies of CFG induction evaluate the learned CFG on a test-set, with finite support. Achieving good accuracy on the test-set is an easier task than identification.
> The results hold for a minimum complexity interpolator…
We acknowledge that it is largely unclear whether the min-complexity interpolator behaves similarly to the realistic transformer training setting. However, there is circumstantial evidence which lends some credence to the idea of studying transformer training with the min-complexity interpolator.
- Abbe et. al. find empirically that transformers are biased towards learning functions of low degree-profile [1].
- Bhattamishra et. al. find empirically that transformers are biased towards learning functions of low sensitivity [2].
- The RASP-L conjecture [3] suggests that transformers are biased towards learning functions of short RASP-L program length.
[1] Abbe et. al. arxiv.org/pdf/2301.13105
[2] Bhattamishra et. al. arxiv.org/pdf/2211.12316
[3] Zhou et. al. arxiv.org/abs/2310.16028
> (1) If such guarantees are crucial, then it is problematic that the theoretical results do not hold for models trained in practice. (2) In practice, testing models on inputs that are arbitrarily longer than those seen during training seems sufficient for most applications.
(1) We agree
(2) (i) It isn’t clear whether for most applications, one would have access to arbitrarily long data to test whether one’s model length generalized. Long proofs of difficult theorems, for instance, are very scarce.
(ii) Even if one could generate arbitrarily long inputs to test one’s model for length generalization, for large integer $n > 0$, it would be infeasible to test one’s model on all $\exp(n)$ inputs of length $n$. One would have to choose a relatively-small representative set of examples of length $n$ which could certify that one's model has length generalized, but it isn’t clear how to do this systematically. I.i.d. samples may be too easy for the model to cheat/hack (see Fig. 6; page 9 of [4]).
Due to (i) and (ii), we hope that theoretical guarantees could increase confidence that a model has length generalized, when checking length generalization empirically is hard.
[4] Liu et. al. arxiv.org/abs/2210.10749
> define “non-asymptotic” upper bound vs. “asymptotic” upper bound
Will do. A non-asymptotic upper bound of $N_{A_{si}}(c)$ is a computable upper bound of $N_{A_{si}}(c)$ in $c$, the complexity of the ground truth, only omitting universal constants with Big-O notation. Asymptotic upper bounds omit some dependencies on the complexity or they only guarantee that $N_{A_{si}}(c) < \infty$. | Summary: The paper considers the question of the theory of length generalization for abstract classes of models, with in mind applications to extending the theory underlying important questions about prevalent models such as transformers.
Here length generalization denotes the property that if the model is perfectly accurate on inputs of length $\le L$, then it will be perfectly accurate on inputs of any length. The questions addressed are
1) Does my model/hypothesis class allow for length-generalization?
2) If the answer to (1) is affirmative, then what is the value of L that guarantees this length-generalization?
For (1) the paper gives some negative results in line with work by Angluin on context-free grammars.
Importantly, the paper shows that (1) has positive answer for finite automata and for 2-layer C-RASP programs (modelling 2-layer transformers). The case of deeper (3 or more layer) C-RASP is left for future work.
When (1) has a positive answer, then the answer to (2) is formulated in terms of the complexity of models within the given class.
The guarantees are done in an abstract setting, using "minimum-complexity interpolator" algorithms, for which existence results are known, but which are in practice hard to find. Nevertheless, this is the first such result, and includes positive results for C-RASP, making this a possibly useful read for anyone interested in theoretical machine learning.
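As background for the positive result on finite automata: a classical fact (often attributed to Moore) is that two inequivalent DFAs with $m$ and $n$ states are distinguished by some string of length at most $m+n-2$, and a shortest witness can be found by breadth-first search over the product automaton. A toy sketch with an invented DFA encoding:

```python
from collections import deque

def shortest_distinguishing_string(dfa1, dfa2, alphabet):
    """BFS over the product automaton of two DFAs.
    Each dfa is (start_state, transition_dict, accepting_set), where
    transition_dict maps (state, symbol) -> state. Returns a shortest
    string the two DFAs classify differently, or None if equivalent."""
    (s1, d1, f1), (s2, d2, f2) = dfa1, dfa2
    queue = deque([(s1, s2, "")])
    seen = {(s1, s2)}
    while queue:
        q1, q2, w = queue.popleft()
        if (q1 in f1) != (q2 in f2):   # acceptance disagrees on w
            return w
        for a in alphabet:
            nxt = (d1[q1, a], d2[q2, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((*nxt, w + a))
    return None

# Toy example over {0,1}: "even number of 1s" vs. "accept everything".
parity = (0, {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}, {0})
allacc = (0, {(0, "0"): 0, (0, "1"): 0}, {0})
assert shortest_distinguishing_string(parity, allacc, "01") == "1"
assert shortest_distinguishing_string(parity, parity, "01") is None
```

The BFS visits at most one product state per pair, which is what bounds the length of the shortest witness and, in turn, the input length a training set must cover to identify the right automaton.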
Claims And Evidence: I think that the claims and limitations are well balanced.
It is sometimes hard to parse the actual presentation of the claims, and one has to read the paper at least twice in order to understand the claims.
Methods And Evaluation Criteria: The paper does not have experiments.
Theoretical Claims: I did not read the 50 pages of the paper in full, but I followed the proof sketches and they seem intuitively correct.
I fully read only the proof of Theorem 5.1, which seems well argued; it is actually much better written than the first 5-6 pages of the paper.
This gave me confidence that the remainder of the proofs should also be well written.
Experimental Designs Or Analyses: There are no experiments in the paper.
Supplementary Material: I reviewed only the first few pages of the appendix.
Relation To Broader Scientific Literature: I think that this is an important step forward in the theory of length-generalization.
Essential References Not Discussed: I am not aware of any.
Other Strengths And Weaknesses: I think the main weakness is the areas with lack of clarity in the explanation in the paper.
Somehow the results seem presented without much care, and this is surprising as it should be simpler to do than to produce proofs for the results.
I will detail the above in the "other comments" section.
Other Comments Or Suggestions: Line 109 second column: "a complexity" is supposed to have what properties? can you give the reader an idea about what you have in mind?
Line 113 first column: "takes in a training set" would have to be "takes as an input a training set"
Line 119 and rest of the paper: two functions are equal if they take the same value and have the same domain, so why don't you use that notion and instead rename it by saying "they are equivalent"? For functions this is just the notion of equality. For algorithms, two algorithms are equivalent if they induce equal functions. But here you introduce a new name for the usual equality of functions, and this is very confusing. Please be precise about this wording?
Line 130-131 "Identification is synonymous with exactly learning or recovering the ground truth" -- it is not, the concept is more complicated than this summary.. perhaps try being precise with the descriptions, so that the reader doesn't have to guess what you were thinking there.
Line 139 "will allow you to correctly predict" -- I guess it's not "you" or "me" or a specific person, so I'd remove the word "you" from this sentence
Line 155, add "where c is a natural number"
About algorithm 1 and in general: can you maybe spend a few words regarding what the term "interpolator" refers to? what is it interpolating?
Line 119 second column "are very nontrivial" -- what does that mean exactly? I think you can be more specific/precise rather than handwaving it like that.
Line 124 second column "the learner" -- what is the difference between that and the algorithm itself? Or how is $\mathcal{A}_{si}$ defined? In the paper this notion of "learner" is never defined, and to me it seems like there is no learning involved in your paper so I've been confused every time it talks about "learner" or "learning". Is it really necessary to use that kind of terminology?
Line 127 second column I'd add "in the following sense", since Lemma 3.1 is making precise the idea (the adjective "best") hinted at in the preceding paragraph.
Line 189 - 190 first column: Is $A_{si}$ the same as before? And what does it mean for it to "see inputs of length at least" some number?
About the definition of context-free grammars: Of course you know that this is not the correct definition. This is a definition of any grammar. For being context-free you have to put some restriction on the production rules, similarly to the case of linear CFG's that you wrote correctly.
Line 171-172 second column: "clean" means what? what detergent was it cleaned with?
In definition 3.6, it would be better to introduce the parameters before writing the formulas, not at the end of the statement.. So the reader doesn't have to read twice in order to parse the statements.
Line 268-269 first column "it undecidable" -- add the verb "is". Besides this typo, you should also be more precise with this statement.. do you mean that
a) there exist two CFGs for which it is undecidable whether they are equivalent, or
b) for any CFG there exists another one for which it is undecidable whether the two are equivalent or
c) there exists a pair of CFGs for which it is undecidable whether they are equivalent ?
Line 232-233 second column: the sentence about "Unfortunately [..] rather sudden" seems useless, consider removing it.
Line 311-316 second column: "This inability of the transformer architecture to even express the ground truth should be part of the reason why empirical length generalization results only show limited length generalization" -- what do you mean by "should be part of the reason why"? and why should it be part of that? this statement would be interesting if it were justified. But here it's stated a bit out of the blue, and without an actual argument it is empty. Please write the argument in some detail? (Also if you expand this a bit, it makes you careful about not mixing up cause with effect, a possible fallacy which is impossible to check with the current wording)
Definition 6.2 seems almost self-contradictory and a bit weird: the part about a tuple of rational numbers, when the tuple is a singleton, gives a different definition than the case of single numbers. For example, 37/5 has 37-precision (as the larger of 37 and 5 is 37), whereas { 37/5 } has 5-precision (as the common denominator is 5)? And does the fraction 370/50 have 370-precision, or 37-precision like 37/5?
Maybe check that this definition being weird does not have consequences in the places in which it is used?
Questions For Authors: A question I have is whether the authors can discuss a bit what extensions of the notion of length generalization exist for real-life cases in which we don't check for "perfect generalization" but rather for "generalization to accuracy 99%", i.e. how would the methods extend to probabilistic guarantees as in PAC learning? This is important to know for practical reasons.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 5Dit for their time, detailed comments, and constructive criticism.
**Author statement:** The submitted paper contained an error in the proofs, which we discovered only after submission. We’ve revised the proofs and sent the revised draft to the AC. Here is how the revised draft’s results differ from the submitted draft’s.
- The main result (Theorem 5.1) is unchanged except for the addition of a weak assumption to the function class $C-RASP^{2,K,T}$, that $\sum_{i \in [K]} \lambda_i > z$ for each function in $C-RASP^{2,K,T}$ (analogous to the existing assumption $z > 0$; see line 220, left col). Theorem 5.1 claims the same upper bound of $N_{A_{si}}(K,T) \leq O((KT)^{O(K^2)})$ for $C-RASP^{2,K,T}$. The proof is almost exactly the same. The significance of the result is unchanged.
- The revised draft no longer contains Theorems 5.4 and 5.6, which together were intended to extend Theorem 5.1 to prove length generalization guarantees for a class of functions which contains Dyck1 and its higher-precision variants. The absence of these side-results doesn’t affect the significance of the paper.
- Results for DFAs and CFGs unchanged.
Source of error:
- Lemma A.24 in line 1557
We apologize for the error and hope that either the reviewer can view the revised draft; or ignore Theorems 5.4 and 5.6 in their review. Thank you.
**Addressing Reviewer Concerns:**
> the main weakness is the areas with lack of clarity in the explanation in the paper.
We apologize for the lack of clarity of the explanations and thank the reviewer for their patience in reading through the paper and for very detailed edits. We will improve it.
> Line 109 second col: "a complexity" is supposed to have what properties?
The only property we require of a complexity measure is that it is a mapping from a function class to the natural numbers. We pick complexity measures which are "natural", such as the number of states in a DFA or the description size of a CFG. For C-RASP, we chose $T$ and $K$ as two natural parameters.
> Algorithm 1: what does "interpolator" refer to
We meant that the learning algorithm picks a hypothesis $h \in \mathcal{F}$ such that for each $(x,y)$ pair in the training set, $h(x) = y$.
> Line 119 second col; Explain "are very nontrivial"
We mean that it is difficult to characterize the solution learned by SGD in a non-convex landscape. The learned solution is not necessarily a local minimum; even if it is a local minimum, we don’t know which one it is due to non-convexity.
> Line 124 second col "the learner" - what is the difference
“Learner” and “learning algorithm” are synonymous. Learning is involved in the paper since the learning algorithm takes a training set as input and outputs a hypothesis.
> Line 189 - 190 first col: (1) Is A_{si} the same as before? (2) what does it mean for it to "see inputs of length at least" some number?
(1) Yes; $A_{si}$ always refers to Algorithm 1.
(2) The sentence containing the phrase was meant as a prose introduction to definition 3.2 on line 192 (left column). The phrase “the learner sees inputs of length at most N” means that it takes as input dataset $D_N$ of (string, label) pairs for all strings of length at most $N$.
> Line 268-269 first col: be more precise with this (undecidability) statement
The computational problem which we’re saying is undecidable is: Given two CFGs $G_1$ and $G_2$, output whether or not $L(G_1) = L(G_2)$.
> Line 311-316 second col: "This inability of the transformer … "
If no setting of the transformer's weights results in the function expressed by the overall transformer being equal to the ground truth, then the transformer cannot learn a function which is correct on all inputs. What’s more, for many ground truths investigated empirically, like PARITY, we believe that any function expressible by transformers will differ from the ground truth on infinitely many strings in $\{0,1\}^*$. Thus, the transformer cannot length-generalize perfectly to arbitrary lengths.
> Definition 6.2 seems contradictory
The definition of precision of rational numbers only applies in the context of rational numbers in [0,1], so the magnitude of the denominator is all that matters. Further, the precision of a rational number is calculated from the simplest form of the rational number, where its numerator and denominator are relatively prime.
> Length generalization to accuracy 99%
This is an important question. We can define train and test distributions, $D_{train}$ and $D_{test}$, over $\{0,1\}^*$ to make the setup more like PAC. E.g., for each natural number $N_{train}$ which is the maximum length of strings in the train distribution, let the test distribution be the uniform distribution over strings of length $N_{train} + 1$. However, it was unclear to us what choice of train and test distributions would make for a generally realistic model of the situations practitioners care about.
More discussion in rebuttal to Reviewer YXKG at the bottom.
---
Rebuttal Comment 1.1:
Comment: Thanks. I hope that if it gets accepted, you'll polish the presentation considerably. If you promise this, I'll raise the score to 4 (I don't think that the amendment on the theorems is ruining the paper, it's fine).
For the question replies, they are along the lines that I was expecting. Thanks for taking the time to reply, and I hope the best for this paper.
---
Reply to Comment 1.1.1:
Comment: We promise that we will polish the presentation considerably, especially the proof sketch. Thank you so much for your time and feedback. | Summary: Taking up recent interest in length generalization of transformers and similar models, this paper studies length generalization for three formal models: DFAs, CFGs, and two-layer C-RASP, with the latter being the main technical contribution. C-RASP has recently been proposed as a formal model of some computations performed by transformers. The present paper shows that programs in two-layer sub-fragments of C-RASP can be identified from inputs up to an explicit computable bound (polynomial in precision, super-exponential in the number of heads). This is shown using ingenious arguments based on convex geometry. As such, it provides a potential path towards proving non-asymptotic quantitative length generalization guarantees.
## update after rebuttal
I maintain the stated concerns after the rebuttal. I believe that the authors and I are on the same page about the presence of these weaknesses.
Claims And Evidence: The claims as described in Abstract, Introduction, and Conclusion are supported by convincing evidence.
Methods And Evaluation Criteria: N/A. No empirical evaluation is conducted (nor is necessarily needed for this kind of work).
Theoretical Claims: I checked the proofs about C-RASP at a high level and they appear plausible to me. Within the reviewing timeframe, I could not check all details. While I did not check that the claimed scaling of input length is correct, I believe that a statement of this type can be obtained with the techniques used in this paper.
Experimental Designs Or Analyses: N/A (no experiments).
Supplementary Material: None.
Relation To Broader Scientific Literature: The paper follows up on recent work arguing that length generalization in transformers can be predicted based on RASP fragments (RASP-L in Zhou et al 2024, C-RASP in Huang et al 2024, both cited in the paper). The key contribution is to show that, for some small RASP fragment (two-layer C-RASP), there are computable upper bounds on the lengths needed to allow full identification (hence, generalization) within the given hypothesis class.
I think the paper could acknowledge the link to Huang et al 2024 [4] a bit more. That work is mentioned once in passing in Section 2, but it is actually quite related in that (1) it also studies length-generalization for C-RASP, (2) also aims to formalize the RASP-L conjecture, (3) also uses a minimum-complexity interpolator, albeit with a different kind of complexity measure that applies to transformers instead of automata or C-RASP programs. There is of course a clear distinguishing property of the current paper, namely that it provides a non-asymptotic guarantee, and mentioning the relation to that prior work where relevant could help contextualize the paper better.
[4] Huang et al, A Formal Framework for Understanding Length Generalization in Transformers, ICLR 2025
Essential References Not Discussed: I believe all essential references are discussed.
Other Strengths And Weaknesses: Strengths
- The paper tackles a very important theoretical problem of current interest: providing non-asymptotic length generalization guarantees, and links to a recent line of work on RASP variants.
- The paper is upfront about some key limitations of the proposed setup.
Weaknesses
- I believe the biggest weakness of the paper is that the guarantees apply not to machine learning models themselves, but to the C-RASP formalism. As such, the paper proves an (interesting) result about that logical formalism, not about machine learning methods. It remains open (or at least isn't made explicit) how minimum-complexity inference based on C-RASP complexity can be linked at all to training transformers or other machine learning models. The authors are correct in stating that understanding SGD training is extremely intractable in this context, but at least linking to minimum-complexity inference based on transformers' complexities (e.g., parameter weight norms) would help make the relevance to machine learning more convincing. Besides proving theoretical links, experiments could also be helpful.
- Section 6 is very hard to read (some concrete questions below under "Questions for Authors"). I'd suggest a major rewrite of this section to reduce the technical burden on the reader, and minimally avoid any undefined formal objects.
Other Comments Or Suggestions: - around line 308 (right column): it is argued that PARITY is not expressible by transformers due to the AC0 bound. However, many C-RASP functions, such as Dyck-1, are also not in AC0. Relevant references re the difficulty of PARITY include [1,2]. I think state tracking problems believed to be outside of TC0 [3] might be a more apt example here.
[1] Hahn and Rofin, Why are Sensitive Functions Hard for Transformers?, ACL 2024
[2] Chiang and Cholak, Overcoming a theoretical limitation of self-attention, ACL 2022
[3] Merrill and Sabharwal, The Parallelism Tradeoff: Limitations of Log-Precision Transformers, TACL 2023
- what does the "si" in A_{si} in Lemma 3.1 stand for?
Questions For Authors: line 333, left column: why "s_k" not "s_N"?
line 341, left column: "they're in [0,1]": does this mean "both are subsets of [0,1]"?
line 356, right column: what does "B" range over?
line 361, right column: the completeness equation is hard to grasp at this point, e.g. "valid" has not been introduced. A prose statement could be more comprehensible.
line 413, left column: "the linear constraint for the i-th face" -- a "constraint" would be an equation, whereas here a number is referenced -- does "constraint" refer to the linear coefficient of the face?
line 427, right column: "and its higher-precision variants" -- what does this refer to?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer qTut for their time, detailed comments, and constructive criticism.
**Author statement:**
The submitted paper contained an error in the proofs, which we discovered only after submission. We’ve revised the proofs and sent the revised draft to the AC. Here is how the revised draft’s results differ from the submitted draft’s.
- The main result (Theorem 5.1) is unchanged except for the addition of a weak assumption to the function class $C-RASP^{2,K,T}$, that $\sum_{i \in [K]} \lambda_i > z$ for each function in $C-RASP^{2,K,T}$ (analogous to the existing assumption $z > 0$; see line 220, left col). Theorem 5.1 claims the same upper bound of $N_{A_{si}}(K,T) \leq O((KT)^{O(K^2)})$ for $C-RASP^{2,K,T}$. The proof is almost exactly the same. The significance of the result is unchanged.
- The revised draft no longer contains Theorems 5.4 and 5.6, which together were intended to extend Theorem 5.1 to prove length generalization guarantees for a class of functions which contains Dyck1 and its higher-precision variants. The absence of these side-results doesn’t affect the significance of the paper.
- Results for DFAs and CFGs unchanged.
Source of error:
- Lemma A.24 in line 1557
We apologize for the error and hope that either the reviewer can view the revised draft; or ignore Theorems 5.4 and 5.6 in their review. Thank you.
**Addressing Reviewer Concerns:**
> it remains open how minimum-complexity inference based on C-RASP complexity is linked to training transformers
This is a valid point. However, there is some circumstantial evidence which lends some credence to the idea of studying transformer training with the min-complexity interpolator.
- Abbe et. al. find empirically that transformers are biased towards learning functions of low degree-profile [1].
- Bhattamishra et. al. find empirically that transformers are biased towards learning functions of low sensitivity [2].
- The RASP-L conjecture [3] suggests that transformers are biased towards learning functions of short RASP-L program length.
[1] Abbe et. al. arxiv.org/pdf/2301.13105
[2] Bhattamishra et. al. arxiv.org/pdf/2211.12316
[3] Zhou et. al. arxiv.org/abs/2310.16028
> Section 6 is very hard to read.
We apologize for the lack of clarity in Section 6 and thank the reviewer for their patience in reading through it. We will rewrite it.
> line 333, left col: why "s_k" not "s_N"?
$k$ is the number of 2D lines of the form $Y = s_i \cdot X$, with slope $s_i \in (0,1)$. The slope of each 2D line corresponds to the parameters of a head in the first layer of either $f$ or $f’$.
$N$ is the length of the discrete test-function (equivalently, of the string which corresponds to the discrete test-function).
The interplay of $k$ and $N$ is in the definition of the activations induced by the discrete test-function, where for $i \in [k]$, the $i$th activation is the proportion of "time-steps" from 1 to N in which the test-function lies above line $i$.
$\forall i \in[k], B_i := \frac{1}{N} \sum_{j = 1}^N 1[ps(x)_j > s_i \cdot j]$
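The activation definition above can be illustrated with a small sketch. This is our own hypothetical example, not code from the paper: for a binary string $x$, we take $ps(x)_j$ to be the prefix sum of the first $j$ bits, and $B_i$ to be the fraction of time-steps at which the prefix sum lies strictly above the line $Y = s_i \cdot X$. The slopes used below are illustrative placeholders.

```python
def activations(x, slopes):
    """B_i = (1/N) * sum_j 1[ps(x)_j > s_i * j], for each slope s_i in (0,1)."""
    n = len(x)
    ps = 0  # running prefix sum ps(x)_j
    counts = [0] * len(slopes)
    for j, bit in enumerate(x, start=1):
        ps += bit
        for i, s in enumerate(slopes):
            if ps > s * j:  # test-function lies above line i at step j
                counts[i] += 1
    return [c / n for c in counts]

# String 1011 against two illustrative lines with slopes 0.25 and 0.75
print(activations([1, 0, 1, 1], [0.25, 0.75]))  # → [1.0, 0.25]
```

The prefix sum stays above the shallow line at every step but exceeds the steep line only at the first step, giving the two activations.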
> line 341, left col: "they're in [0,1]": does this mean "both are subsets of [0,1]"?
Yes
> line 356, right col: what does "B" range over?
$(B_1(x), \ldots, B_k(x))$ ranges over all possible binary strings $x \in \{0,1\}^*$. Each binary string corresponds to a discrete test-function, which induces a tuple of activations over the fixed $k$ lines of the form $Y = s_i \cdot X$.
> line 413, left col: "the linear constraint for the i-th face" -- a "constraint" would be an equation, whereas here a number is referenced
A constraint here is a linear inequality, e.g. $x_1 + ... + x_{M} \geq 1$. In line 413, we mean that the $i$th face of the polytope is parameterized by the boundary of the half-space given by some linear inequality $L_i$. Evaluating a point $x \in R^M$ on the constraint $L_i$ is denoted as $L_i(x)$ and returns a scalar, equal to the margin that the point $x$ has on the constraint. The variable $C$ in the Lemma 6.4 is the centroid defined in the preceding paragraph (line 405, left column), which is a point. $L_i(C)$ is the margin of the centroid on the $i$th face.
> line 427, right col: what does "higher-precision variants" refer to
This referred to Definition A.43 on line 2255 on page 42. However, the Dyck1 results are no longer valid.
> what does the "si" in A_{si} stand for
simplest interpolator
> line 308 (right column): it is argued that PARITY is not expressible … I think state tracking problems believed to be outside of TC0 are a more apt example.
We understand the reviewer’s point as that in order to argue that there are ground-truth functions which are experimentally tested for length generalization but that are theoretically intractable (due to not having a short C-RASP program), we should pick a function outside TC0 since C-RASP is upper bounded by TC0. We thank the reviewer for the insightful comment.
> could acknowledge link to Huang et al more
Will do.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It appears the authors and I are generally on the same page about these questions (which is great), and I believe that the original assessment from my review continues to be valid. As my concern is largely about (subjective) significance, not rigor, I do not want to stand in the way of the paper being accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time and feedback. | null | null | null | null | null | null |
Hybrid Spiking Vision Transformer for Object Detection with Event Cameras | Accept (poster) | Summary: This paper proposes a novel hybrid spiking vision Transformer model (HsVT) for event-driven object detection. Combining the advantages of ANN and SNN, the multi-stage spatial-temporal feature extraction module is designed, and LSTM and STFE are used to process temporal information respectively, reducing the number of parameters and improving efficiency. By converting traditional video data set into event stream data through event camera simulator, Fall Detection data set is constructed, taking into account privacy protection and storage efficiency.
## update after rebuttal
After reading the rebuttal, I decide to keep my original score.
Claims And Evidence: The paper claims that HsVT achieves high efficiency and precision in event detection tasks through a hybrid ANN-SNN architecture, a claim supported by experimental data
Methods And Evaluation Criteria: yes
Theoretical Claims: The paper does not involve rigorous theoretical proofs (such as convergence of STFE modules), but the design choice is supported by ablation experiments (Tab.5-6), which are acceptable for engineering oriented studies.
Experimental Designs Or Analyses: The critical ablation experiments (Tab.5-6) were validated on AIR datasets, and comparisons experiments on GEN1/FALL datasets are conducted to enhance generality.
The aircraft detection dataset lacks information ( such as sample size, event flow density, class balance) and need to supplement information.
Supplementary Material: The paper does not provide supplementary material.
Relation To Broader Scientific Literature: The spiking vision transformer may used in other vision tasks such as tracking, reconstruction, and depth estimation.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: The actual energy consumption of the model is not analyzed, which weakens the demonstration of low power consumption advantage of SNN.
Other Comments Or Suggestions: Supplementary network architecture diagram (e.g. Block internal data flow, SNN-ANN interface).
Add a list of hyperparameters (such as LSTM hidden layer dimensions, STFE time window length) to the appendix.
The paper title in the PDF is incorrect.
Questions For Authors: Why choose 4 blocks? Have you tried other hierarchies (such as 3 or 5 blocks)? How does the model performance change if the number of blocks is increased?
What is the attention head number and dimension allocation strategy of Block-SA and Grid-SA? Have attention visualizations been performed to verify the validity of feature focusing?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Q1: Why choose 4 blocks? Have you tried other hierarchies (such as 3 or 5 blocks)?How does the model performance change if the number of blocks is increased?
Re: Thank you for your insightful question. In our experimental design, we chose to use 4 blocks based on the following considerations:
(1) Hierarchical Feature Learning: Deep neural networks typically learn hierarchical features layer by layer. In the context of event data and spatiotemporal modeling, using 4 blocks allows the model to capture sufficiently deep features while maintaining a manageable computational complexity.
(2) Computational Cost Trade-off: When using only 3 blocks, the model struggles to capture high-level semantic information adequately, which limits its performance. On the other hand, increasing the number of blocks to 5 results in a substantial increase in computational cost without a significant improvement in performance.
---
# Q2: [What is the attention head number and dimension allocation strategy of Block-SA and Grid-SA?]
Re: The attention head number and dimension allocation strategy for Block-SA and Grid-SA are as follows:
- Tiny:
||embed_dim|num_head|dim_head|
|--|-|-|-|
|B1|32|1|32|
|B2|64|2|32|
|B3|128|4|32|
|B4|256|8|32|
- Small:
||embed_dim|num_head|dim_head|
|--|-|-|-|
|B1|48|2|24|
|B2|96|4|24|
|B3|192|8|24|
|B4|384|16|24|
- Base:
||embed_dim|num_head|dim_head|
|--|-|-|-|
|B1|64|2|32|
|B2|128|4|32|
|B3|256|8|32|
|B4|512|16|32|
---
# Q3: Have attention visualizations been performed to verify the validity of feature focusing?
Re: Thank you for your valuable question. We have indeed performed attention visualizations to verify the validity of feature focusing in our model. Specifically, we conducted attention visualization experiments on two datasets: Gen1 and Air. However, we encountered some unexpected results during these visualizations, and the patterns observed were not as interpretable as anticipated.
This discrepancy may be due to the complexity of the attention mechanism and the way it interacts with the features in the model. It's possible that the attention heads are focusing on a combination of spatial and temporal information that is not immediately obvious in the visualization, or that the attention mechanism is implicitly capturing features at different hierarchical levels.We will attach our visualization results to the paper. | Summary: This paper introduces Hybrid Spiking Vision Transformer (HsVT), a novel architecture combining Artificial Neural Networks (ANNs) and Spiking Neural Networks (SNNs) for event-based object detection. The key contributions include: A hybrid spatial-temporal framework integrating ANN-based self-attention modules (e.g., Block-SA, Grid-SA) and SNN-based temporal feature extractors (e.g., SpikingMLP, STFE) to capture both local/global spatial features and long-term temporal dependencies. The creation of the Fall Detection dataset, a privacy-preserving event-based dataset generated via an event camera simulator, addressing gaps in fall detection benchmarks. Comprehensive experiments demonstrating HsVT’s superiority over state-of-the-art methods (e.g., RVT) on GEN1, Fall Detection, and AIR datasets, achieving higher mAP with fewer parameters.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no specific theoretical claims.
Experimental Designs Or Analyses: Yes
Supplementary Material: There is no Supplementary Material. But I suggest to provide the new proposed fall event dataset links in the supplementary material.
Relation To Broader Scientific Literature: The paper related to event-based vision and brain inspired computing.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. HsVT effectively combines the spatial modeling strengths of Transformers (via self-attention) with the energy-efficient temporal dynamics of SNNs, addressing the limitations of pure ANN or SNN approaches.
2. The Fall Detection dataset fills a critical gap in privacy-sensitive event-based detection tasks, leveraging event cameras’ advantages (e.g., low latency, privacy preservation).
3. The ablation studies (Tables 3–6) validate design choices (e.g., LIF neurons, STFE placement), while comparisons with RVT (Tables 7–8) demonstrate HsVT’s efficiency and robustness.
Weakness:
1. I suggest to provide the new collected event dataset public links in the context to provide more supports to the SNN community.
2. The formal title in the PDF file is missing.
3. While energy efficiency advantage is claimed, practical implementation challenges (e.g., spike timing synchronization, latency constraints on neuromorphic hardware) are not discussed.
4. Meanwhile, there lacks more detailed energy estimation to show the energy efficiency of the proposed model.
Other Comments Or Suggestions: No
Questions For Authors: 1. Could the proposed hybrid SNN architecture adapts to other computer vision task? Such as object tracking?
2. The proposed model seems to have specific advantage by combining temporal property of SNNs, could the SNNs provide spatial feature extraction advantage under the proposed hybrid ANN-SNN architecture?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Q1: provide the new collected event dataset public links
Re: Thank you for your suggestion. To support the SNN community, we have made our event dataset publicly available: [Dropbox Link (Anonymous)](https://www.dropbox.com/scl/fo/1bnsydo3yj5922tlsquo9/AE77sOynJY0-GAeSNBSqFJk?rlkey=1mqpsm658ou26bd4jzmbs46rk&st=h1mhxsyo&dl=0).
---
# Q2: The formal title in the PDF file is missing.
Re: Thank you for pointing this out. We apologize for the oversight. We will ensure that the formal title is included in the PDF file and submit an updated version.
---
# Q3: While energy efficiency advantage is claimed, practical implementation challenges (e.g., spike timing synchronization, latency constraints on neuromorphic hardware) are not discussed.
Re: Thank you for your feedback. While our paper emphasizes SNNs’ energy efficiency, we now discuss practical implementation challenges:
(1) Spike Timing Synchronization: Our event-driven model updates neuron states asynchronously, eliminating the need for global synchronization.
(2) Latency Constraints: Neuromorphic hardware (e.g., Intel Loihi, SpiNNaker) may experience spike transmission delays, impacting inference speed.
(3) Solutions: Asynchronous architectures reduce synaptic access delays, while spatiotemporal multithreading optimizes spike processing.
(4) Deployment Challenges: Hardware limitations (e.g., unsupported LIF neurons) and quantization requirements pose constraints.
In future work, we plan to explore migrating our method to actual neuromorphic hardware and further optimize its computational efficiency.
---
# Q4: energy estimation to show the energy efficiency.
Re: We understand the importance of providing detailed energy consumption estimates to demonstrate the energy efficiency of our proposed model. In our work, we have calculated the theoretical energy consumption of HsVT using the following methodology:
The energy consumption is primarily estimated by calculating the synaptic operations (SOPs), which is given by the formula:
$\mathrm{SOPs}(l)=fr\times T\times\mathrm{FLOPs}(l)$
Where:
- $l$ refers to a specific block or layer in the model (e.g., ANN or SNN components),
- $fr$ is the firing rate of the input spike train to the layer,
- $T$ is the simulation time step of the spiking neurons,
- $\mathrm{FLOPs}(l)$ is the number of floating-point operations of the layer, i.e., its multiply-and-accumulate (MAC) operations.
The spike-based accumulate (AC) operations are also taken into account to estimate the overall energy usage. For this, we refer to the work of Kundu et al. [1], Yao et al. [2], Zhou et al. [3], and others, and we assume that the MAC and AC operations are implemented on 45nm hardware, where:
Energy per MAC operation: $E_{MAC} = 4.6\,\mathrm{pJ}$,
Energy per AC operation: $E_{AC} = 0.9\,\mathrm{pJ}$.
The energy consumption of our model (tiny) is then calculated as $E_{HsVT}=E_{ANN}+E_{SNN}$, where $E_{ANN}=4.6\,\mathrm{pJ}\times\mathrm{FLOPs}(b)$ and $E_{SNN}=0.9\,\mathrm{pJ}\times\mathrm{SOPs}(b)$.
This results in $E_{HsVT\text{-}backbone}=5.5\,\mathrm{mJ}$. For the RVT model (tiny), the backbone energy consumption is $E_{RVT\text{-}backbone}=6.6\,\mathrm{mJ}$.
| Module|HsVT(tiny)|RVT(tiny)|SFOD[4]|Spiking YOLOX-S[5]|
|---|---|---|---|---|
| | ANN-SNN|ANN|SNN|SNN|
|Energy(backbone)|5.5mJ|6.6mJ|---|7.52mJ(embedding)+5.64 mJ|
|Energy(Fpn+head)|2.9mJ|2.9mJ|---|2.75mJ|
|Energy(Total)|8.4mJ|9.5mJ|7.26mJ|15.91mJ|
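The estimation methodology above can be sketched as a short script. This is a minimal sketch of the formulas in the rebuttal ($\mathrm{SOPs}(l) = fr \times T \times \mathrm{FLOPs}(l)$, $E_{MAC} = 4.6\,\mathrm{pJ}$, $E_{AC} = 0.9\,\mathrm{pJ}$ on 45nm hardware); the operation counts used in the example call are illustrative placeholders, not the paper's actual profiled numbers.

```python
E_MAC_PJ = 4.6  # energy per multiply-accumulate (ANN side), picojoules, 45nm
E_AC_PJ = 0.9   # energy per spike-based accumulate (SNN side), picojoules, 45nm

def sops(firing_rate, time_steps, flops):
    """Synaptic operations of an SNN layer: SOPs = fr * T * FLOPs."""
    return firing_rate * time_steps * flops

def hybrid_energy_mj(ann_flops, snn_flops, firing_rate, time_steps):
    """Total energy of an ANN-SNN hybrid: E = E_MAC*FLOPs_ANN + E_AC*SOPs_SNN, in mJ."""
    e_ann_pj = E_MAC_PJ * ann_flops
    e_snn_pj = E_AC_PJ * sops(firing_rate, time_steps, snn_flops)
    return (e_ann_pj + e_snn_pj) * 1e-9  # picojoules -> millijoules

# Illustrative counts: 1e9 ANN FLOPs, 5e8 SNN FLOPs, firing rate 0.2, T = 4
print(round(hybrid_energy_mj(1e9, 5e8, 0.2, 4), 3))  # → 4.96
```

The sparse, accumulate-only SNN term contributes far less than the dense MAC term here, which is the source of the claimed efficiency advantage.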
References:
- [1] Kundu S, et al. Hire-snn ICCV, 2021.
- [2] Yao M, et al. Attention Spiking Neural Networks. IEEE TPAMI, 2023.
- [3] Zhou Z, et al. Spikformer: When SNN Meets Transformer. arXiv preprint, 2022.
- [4] Fan Y, et al. SFOD: Spiking Fusion Object Detector. CVPR, 2024.
- [5] Wang Z, et al. EAS-SNN: End-to-End Adaptive Sampling for Event-Based Detection. ECCV, 2024.
---
# Q5: Could the proposed hybrid SNN architecture adapts to other computer vision task? Such as object tracking?
Re: The proposed hybrid SNN architecture is adaptable to other vision tasks, including object tracking. While our focus is event-based detection, SNNs excel at processing spatiotemporal patterns, making them well-suited for tracking tasks that require both appearance and motion cues.
A relevant example is SCTN (Spiking Convolutional Tracking Network), which utilizes energy-efficient deep SNNs for event-based tracking. Inspired by this, our hybrid approach could integrate recurrent SNN structures or memory mechanisms to enhance tracking robustness, particularly in high dynamic range and fast-motion scenarios.
---
# Q6: could the SNNs provide spatial feature extraction advantage under the proposed hybrid ANN-SNN architecture.
Re: In our proposed STFE module, the convolution and batch normalization layers perform spatial feature extraction and contribute to spatiotemporal fusion. While SNN handles temporal dynamics, STFE combines the strengths of SNN and ANN, enabling efficient capture of both temporal and spatial features, enhancing overall performance. | Summary: This paper introduces a Hybrid Spiking Vision System, combining Spiking Neural Networks (SNNs) with deep learning architectures to enhance computational efficiency and reduce power consumption. The study explores the effectiveness of this hybrid approach in visual tasks and presents experimental results demonstrating its advantages in inference efficiency.
Claims And Evidence: ++Empirical results on multiple benchmark datasets, demonstrate that HSVT maintains comparable accuracy while requiring significantly fewer operations. The hybrid model effectively leverages spiking neurons to reduce redundancy and improve efficiency.
++Ablation studies show that selectively replacing ViT components with SNN modules leads to computational savings while preserving feature representation quality.
Theoretical justification supports the effectiveness of hybridization in spatiotemporal processing.
++The paper presents an analysis of the role of spike-based representations in attention mechanisms and their potential benefits in terms of sparsity and latency.
Methods And Evaluation Criteria: Metrics such as accuracy, energy consumption, and latency are considered.
Comparisons with both fully spiking and deep networks provide insights into trade-offs.
Theoretical Claims: The paper argues that integrating SNNs with deep learning enhances computational efficiency without significantly sacrificing accuracy.
It theoretically justifies why hybrid architectures can outperform purely spiking or non-spiking networks in real-world applications.
Experimental Designs Or Analyses: The experiments include training and testing on vision benchmarks.
Ablation studies evaluate different configurations of the hybrid model.
Performance is analyzed in terms of accuracy, power efficiency, and computational cost.
The choice of spiking neuron models and their integration into deep learning pipelines is examined.
Supplementary Material: None.
Relation To Broader Scientific Literature: The paper is relevant to research in neuromorphic computing, efficient deep learning, and hybrid neural architectures.
It builds upon previous work in SNNs, biologically inspired computing, and deep learning acceleration.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
++Novel hybrid approach leveraging both deep learning and SNN advantages.
++Well-motivated discussion on energy efficiency.
++Clear experimental setup and benchmarking.
Weaknesses:
--Some theoretical justifications could be made stronger.
--Ablation studies could be extended to analyze different types of hybridization.
--More discussion on hardware feasibility and real-world deployment would be useful.
Other Comments Or Suggestions: None.
Questions For Authors: I am not an expert in this field, I hope the authors can answer the following questions:
1. How does the hybrid model compare to recent advances in neuromorphic vision systems?
2. What are the trade-offs between different configurations of hybrid models (e.g., varying the proportion of SNN vs. deep learning components)?
3. Have you considered real-world deployment scenarios, and how would the model adapt to edge computing environments?
Depending on the author's answer results, I may raise or lower my rating.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Q1: Some theoretical justifications
Re: We have implemented a hybrid SNN + LSTM structure, where Leaky Integrate-and-Fire (LIF) neurons process temporal information, while convolutional layers and Batch Normalization are used for spatial feature extraction.
The input $x$ is concatenated with the previous hidden state $h_{t-1}$ and passed through a 1×1 convolution to reduce the channel dimension: $X_{H}=\mathrm{concat}(x,h_{t-1})$, $M=W_{m}*X_{H}+b_{m}$
The processed input then undergoes a convolutional layer followed by Batch Normalization:
$B=W_{c}*M+b_{c}$, $C=\gamma\frac{B-\mu_{B}}{\sigma_{B}}+\beta$
The output from the convolutional layer is fed into LIF neurons, where the membrane potential evolves over time:$\tau\frac{dV(t)}{dt}=-V(t)+C$
When the membrane potential $V(t)$ exceeds the threshold $V_{\mathrm{th}}$, the neuron fires: if $V(t)\geq V_{\mathrm{th}}$, $H_t=1$; if $V(t) < V_{\mathrm{th}}$, $H_t=0$.
After firing, the membrane potential resets: $V(t)=V(t)-V_{\mathrm{th}}$
The membrane potential $V$ at the current time step acts as the cell state, while the previous step's state $C_{t-1}$ contributes to temporal dynamics: $C_{t}=V+C_{t-1}$
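The dynamics above can be sanity-checked with a minimal NumPy sketch of the LIF update (Euler-discretized, soft reset). All parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def lif_step(v, c, v_th=1.0, tau=2.0):
    """One Euler step of the LIF dynamics tau * dV/dt = -V + C.

    Returns the spike output H_t (1 where V crosses v_th, else 0)
    and the soft-reset membrane potential.
    """
    v = v + (1.0 / tau) * (-v + c)          # Euler integration of the ODE
    h = (v >= v_th).astype(np.float32)      # fire only where threshold is crossed
    v = v - h * v_th                        # soft reset: subtract the threshold
    return h, v

v = np.zeros(4)
for _ in range(3):                          # drive with a constant input C = 1.2
    h, v = lif_step(v, np.full(4, 1.2))
# after three steps every neuron has fired once and keeps its residual potential
```

The soft reset (subtracting $V_{\mathrm{th}}$ instead of zeroing) preserves the residual potential, which is what lets the membrane state act as a cell state across time steps.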
---
# Q2: Ablation studies could be extended to analyze different types of hybridization.
Re: We conducted ablation studies to analyze different types of hybridization by testing various SNN variants:
- Different spiking neurons (Tab. 3, Tab. 5): We compared IF and LIF neurons to assess their influence on temporal encoding. The results show that LIF neurons achieve better accuracy due to their better robustness and generalization.
- Different surrogate gradient functions (Tab. 4): We experimented with ATan and Sigmoid surrogate gradients. Our findings indicate that the ATan function provides better accuracy.
- Different SNN components (Tab. 5): We replaced key SNN modules to evaluate their performance. The results confirm that the Conv+BN+LIF module works best.
- Different placements of SNN modules (Tab. 6): We replaced LSTM with STFE modules at different locations and varied how many LSTM layers were replaced, choosing the configuration with the highest mAP.
---
# Q3: More discussion on hardware feasibility and real-world deployment would be useful.
Re: The Tianjic neuromorphic chip supports both ANN and SNN processing. However, it does not support attention mechanisms, making it challenging to deploy our model directly on such hardware.
Despite this challenge, our event-camera-based fall detection dataset has significant real-world applicability. Many existing fall detection datasets rely on conventional cameras, but due to privacy concerns, they are often not publicly available, limiting progress in this field. In contrast, event cameras capture only motion-triggered data, inherently protecting patient privacy.
---
# Q4: How does the hybrid model compare to recent advances in neuromorphic vision systems?
Re: Recent event-based detection models include SFOD (Fan et al., 2024), which fuses multi-scale features within SNNs, and EAS-SNN (Wang et al., 2024), which employs adaptive sampling and recurrent SNNs. While both achieve notable performance, our hybrid model outperforms them:
| Model | mAP@0.5:0.95 |
|---|---|
| SFOD | 0.321 |
| EAS-SNN | 0.354 |
| Ours | 0.478 |
By integrating ANN-based spatial and SNN-based temporal processing, our model achieves superior spatial-temporal fusion. These results will be included in our manuscript.
References:
- Fan Y, et al. SFOD: Spiking Fusion Object Detector. CVPR 2024.
- Wang Z, et al. EAS-SNN: End-to-End Adaptive Sampling and Representation for Event-Based Detection. ECCV 2024.
---
# Q5: What are the trade-offs between different configurations of hybrid models
Re: The balance between SNN and ANN components impacts accuracy, efficiency, and power consumption. Key trade-offs include:
(1) More ANN layers enhance spatial feature extraction and accuracy but increase computational cost, while more SNN layers improve temporal processing and energy efficiency but risk gradient vanishing and lower feature expressiveness.
(2) Event-driven SNN computation reduces power usage but may introduce processing latency due to spike accumulation.
(3) Our experiments (Tab 6) show that sufficient ANN depth ensures feature extraction, while well-placed SNN modules capture temporal dependencies with minimal accuracy loss.
---
# Q6: Have you considered real-world deployment scenarios, and how would the model adapt to edge computing environments?
Re: Our work focuses on event-camera-based fall detection, a practical real-world application. Event cameras offer low latency, high dynamic range, and energy efficiency.
(1) Efficiency: Leveraging SNNs, our model reduces redundant computations and triggers processing only on motion detection, enhancing energy efficiency.
(2) Robustness: Event cameras ensure reliable performance across varied lighting and backgrounds, making them ideal for deployment. | null | null | null | null | null | null | null | null |
Approximating Latent Manifolds in Neural Networks via Vanishing Ideals | Accept (poster) | Summary: The authors propose neural networks motivated by the vanishing ideals, called VI-nets. They also investigate to obtain parameter-efficient representations of the vanishing ideal generators. They investigate the performance of the VI-nets theoretically and empirically.
Claims And Evidence: The proposed method is based on the observation that the low-dimensional manifolds underlying data are characterized by their vanishing ideals. The authors construct a neural network with polynomial activation functions based on this observation. The methodology seems reasonable.
Methods And Evaluation Criteria: In practical situations, how to set hyperparameters such as $L'$ and $S$ is not clear to me. Do you have any ideas about this?
Theoretical Claims: In section 4.2, the authors provide a theoretical analysis of the generalization error based on existing results regarding spectral complexity. However, how the geometric aspect of the proposed method helps alleviate the generalization error is not clear to me. Indeed, the bound of $\kappa$ depends on $2^d$, which becomes large if $d$ is large.
Experimental Designs Or Analyses: Although the authors empirically investigate the accuracy of the proposed method, I think investigating how the proposed method captures the underlying manifold for each class $k$ would be interesting. Is it possible to visualize or quantify how the learned polynomials separate the classes of the input data?
Supplementary Material: I checked the empirical results and briefly checked the proof of the statements.
Relation To Broader Scientific Literature: N / A
Essential References Not Discussed: N / A
Other Strengths And Weaknesses: The authors encode the structure of data manifolds into neural networks using activation functions based on polynomials. The idea is interesting and relevant to the community.
Other Comments Or Suggestions: N / A
Questions For Authors: Please see the questions and concerns in Methods And Evaluation Criteria, Theoretical Claims, and Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate your feedback and will address your comments in detail.
> In pratical situations, how to set the hyperparameters such as L' and S is not clear for me. Do you have any ideas about this?
Thank you for your question. Our paper addresses this from two perspectives, one theoretical and the other one practical. From the theoretical perspective, we can compute for each $L'$ with Theorem 4.5 a "budget" in the form of a maximal number of monomials $S$, maximal degree $d$ and the other hyperparameters of our pipeline. This yields a provably lower spectral complexity and thus a better bound. From the practical perspective, our experiments indicate that we can remove about 40-50% of all convolutional layers before we observe a significant drop in performance. With the choice of $S$ you raise another interesting question, which we already addressed in the appendix (cf. Figure 6). There we analyse the monomial count ($S$). We found that it is not advisable to choose $S$ to be significantly lower than the latent dimensionality, otherwise we observe a large performance drop. We hope that this answers your question - if not, please let us know.
> In section 4.2, the authors investigate the theoretical analysis of generalization error based on the existing results regarding spectral complexity. However, how the geometric aspect of the proposed method helps alleviating the generalization error is not clear for me. Indeed, the bound of $\kappa$ dependes on $2^d$, which becomes large if $d$ is large.
Yes, you are right, the bound of $\kappa$ can become very loose if $d$ is large. However, in practice, we have always found it entirely sufficient to use values of $d$ not larger than $5$. To be more precise, we enforce the bound of $d$ as follows: we let the OAVI algorithm run until it terminates, and then we only take terms with maximum degree $d$. Despite setting $d$ to $5$, we never actually removed any terms from the ideal. In our computational setting, enforcing $d=5$ does in consequence not lead to any performance loss.
> Although the authors empirically investigate the accuracy of the proposed method, I think investigating how the proposed method captures the underlying manifold for each class would be interesting. Is it possible to visualize or quantify how the learned polynomials separate the classes of the input data?
We agree that it would be very interesting to investigate how the learned polynomials capture the underlying manifold. Since visualizations in this high-dimensional setting are generally not feasible, we provide the following alternative to see how two classes in particular are being separated. For this, one can choose for each of the class a vanishing polynomial according to our scoring function, say $p_1$ and $p_2$. Then, a simple way to visualize how $p_1$ and $p_2$ separate classes is by using a scatter plot with $x$-axis the absolute value of $p_1$ and the $y$-axis of $p_2$. We provide such a scatter plot here, showing how a training batch is becoming linearly separable: [https://imgur.com/o3VlqZl](https://imgur.com/o3VlqZl)
As for a quantification of separability, this was the reason we introduced our scoring function. It essentially measures separability in the following sense: If the score is of the same order as the vanishing hyperparameter $\psi$, then a polynomial is not only vanishing on its own class but also on other classes. If the score is higher then $\psi$ then this indicates that the polynomial is not as approximately vanishing on other classes. Therefore the score introduced in line 195-197 in the second column is a direct way to quantify how well the learned polynomials separate the data.
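The scatter-plot idea can be reproduced in a self-contained toy setting. Below, two concentric circles stand in for two latent classes, and the two vanishing polynomials are chosen by hand rather than found by OAVI, so everything in this sketch is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=100)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)
class1 = ring          # points on the unit circle
class2 = 2.0 * ring    # points on the circle of radius 2

def p1(z):             # vanishes on class 1 (x^2 + y^2 - 1)
    return z[:, 0] ** 2 + z[:, 1] ** 2 - 1.0

def p2(z):             # vanishes on class 2 (x^2 + y^2 - 4)
    return z[:, 0] ** 2 + z[:, 1] ** 2 - 4.0

# In the feature space (|p1(x)|, |p2(x)|) the classes become linearly
# separable: class 1 hugs the |p2| axis, class 2 hugs the |p1| axis.
sep1 = np.abs(p1(class1)) < np.abs(p2(class1))
sep2 = np.abs(p1(class2)) > np.abs(p2(class2))
assert sep1.all() and sep2.all()
```

On class 1, $|p_1|$ is numerically zero while $|p_2| = 3$, and vice versa on class 2, so the line $|p_1| = |p_2|$ separates the two clouds exactly as in the linked scatter plot.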
Thank you again for your review, we hope to have addressed your concerns. If you have any further questions, please let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would like to keep my score.
Claims And Evidence: The main claim of the paper is that, by replacing the final layers of a pretrained neural network with generators of vanishing ideals, VI-Net can achieve comparable prediction performance with fewer parameters. This is clearly supported by experimental evidence, particularly those from Table 2. It should be noted, however, that the improvement in computational efficiency is moderate in order to maintain the prediction accuracy (e.g. ~1.3x throughput on CIFAR-100).
Methods And Evaluation Criteria: The proposed method builds upon existing vanishing ideal algorithms such as OAVI and ABM. To improve the computational tractability of these algorithms, the authors first perform a PCA to reduce the latent space dimensionality. Then, the generators found by these algorithms are pruned based on how distinctive they are across samples from different classes. The monomials with zero coefficients in all remaining polynomial generators are also pruned to reduce the number of function evaluations. Overall, the proposed method is straightforward and it makes sense for the purpose of this paper -- to improve parameter efficiency of classification networks.
The evaluation criteria include the classification accuracy and computational efficiency (in terms of parameter count and inference throughput), which make sense for the proposed method. However, the evaluation is performed only on the CIFAR dataset. It would be better if the authors could also include other commonly used benchmarks for image classification.
Theoretical Claims: Section 4 of the paper claims that VI-Net can achieve lower spectral complexity and thus better generalization ability than untruncated networks. Theorem 4.5 and Corollary 4.6 provide explicit formulas for the spectral complexity and the generalization error bound of VI-Net. The proof is provided in the Appendix and it looks correct to me. The only issue is that there seem to be no direct comparison between VI-Net and the base model. For example, under what condition does $\kappa < 1$, i.e. VI-Net has a lower spectral complexity?
Experimental Designs Or Analyses: While only restricted to one dataset, the experiments are comprehensive and demonstrate the effectiveness of different components of the proposed method. The effects of various factors such as truncation proportion, pruning ratio, etc, are investigated through various controlled experiments.
Supplementary Material: The codebase in the supplementary material seems not runnable. The `src` directory, which seems to contain the entire VI-Net implementation judging from the import statements, is not included.
Relation To Broader Scientific Literature: The paper mainly applies existing vanishing ideal algorithms, such as OAVI and ABS, to the NN latent space. It introduces some pre-/post-processing techniques to improve the computational tractability of these algorithms on high-dimensional spaces, but does not modify the vanishing ideal algorithms themselves.
Essential References Not Discussed: I am not quite familiar with the literature in vanishing ideal algorithms. I'd say that the paper has provided sufficient references throughout the paper for readers to understand the context, but I'm not sure if the paper missed any related works that should have been discussed. Also, a dedicated section to related works would be helpful.
Other Strengths And Weaknesses: The paper is well-written. Important concepts are explained clearly and the flow is smooth and coherent.
Other Comments Or Suggestions: Typos & potential confusions:
* L201: class $i$ --> class $j$
* L202: the $p$ in top $p\%$ could be confused with generator $p$
* L304: $\begin{pmatrix} i \\\\ d \end{pmatrix}$ should be the other way around?
* L323: What does $s \in \mathcal O(S)$ mean?
* L285: $\tilde \phi_{L'} \equiv \mathbf W_F\circ \mathbf W_C\ \circ\ ...$
* L392: Does elapsed time in Table 1 refer to the running time of the vanishing ideal algorithm?
Questions For Authors: * Can we apply the methodology of VI-Net to regression tasks?
* Moreover, apart from building new models for prediction as you have done in this paper, can we use vanishing ideal algorithms to analyze the structure of the latent manifold itself and provide some kind of interpretability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review, we appreciate the thorough feedback. Let us address your comments in detail.
> However, the evaluation is performed only on the CIFAR dataset. It would be better if the authors could also include other commonly used benchmarks for image classification.
In general, we agree that it would be good to include more benchmarks, albeit this was not the main focus of our work. Scaling vanishing ideal algorithms to larger datasets remains an open problem and is an active area of research [Wirth et al., 2023], our work is a step in this direction, with multiple contributions to that end. However, to directly address your concern, we have run preliminary experiments on TinyImageNet using ResNet-50. We observe that the performance is comparable to the CIFAR-100 and CIFAR-10 experiments, indicating that the method works for larger datasets. We will add these results to the paper and run further experiments on image classification benchmarks for the camera-ready version of the paper. Please find the results here https://imgur.com/a/Beh0DN7.
> The only issue is that there seems to be no direct comparison between VI-Net and the base model. For example, under what condition does $\kappa < 1$, i.e. VI-Net has a lower spectral complexity?
The main motivation for using the spectral complexity framework is exactly that it allows for comparison of generalization bounds of different neural network architectures and does more than simply providing a learning guarantee. We can employ Theorem 4.5 to derive for each layer $L'$ a bound on the norms of the newly introduced parameters that is strictly lower than that of the baseline network. This provides a theoretical justification for the use of VI-Net.
Example: suppose we remove the last 3 layers in a pretrained NN. We can compute the spectral norms of their weight matrices $W_1, W_2, W_3$ and compute the product of these terms, i.e., $\prod_{i = 1}^3 \|W_i\|_2$.
Similarly, we can compute $\sum_{i = 1}^3\|W_i^T\|_{2,1}^{2/3}\|W_i\|_2^{-2/3}$ explicitly. Next, we can *choose* the number of monomial terms $S$, the number of polynomials $N$, the highest degree $d$, and the bound on the norm of the coefficient vectors of the polynomials $\tau$ (note that these are all hyperparameters of our algorithm pipeline).
If we have further bounds $\lambda_1, \lambda_2$ (cf. Theorem 4.5) on the norms of the final linear weight matrix (either by explicit constraints or implicit regularization), then we can always ensure that
- $2^dd\tau \lambda_1 \sqrt{NS} \leq \prod_{i = 1}^3 \|W_i\|_2$
and
- $2^{2d/3}S^{2/3}+ N^{2/3}S^{1/3} + \lambda_2^{2/3} \leq \sum_{i = 1}^3\|W_i^T\|_{2,1}^{2/3}\|W_i\|_2^{-2/3}$.
Thus, the spectral complexity of VI-Net is lower compared to the original baseline NN by Theorem 4.5.
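As a toy numeric illustration of the first condition, one can plug concrete hyperparameter choices into the bound and compare it against the measured product of spectral norms. All norm values and hyperparameters below are hypothetical, not measured from any model:

```python
import math

# Hypothetical spectral norms of the three removed layers W_1, W_2, W_3.
spec_norms = [4.0, 3.5, 5.0]
baseline_product = math.prod(spec_norms)   # prod_i ||W_i||_2 = 70

# Chosen VI-Net hyperparameters (illustrative values only).
d, tau, lam1 = 2, 0.3, 1.0                 # max degree, coefficient bound, head bound
N, S = 10, 20                              # number of polynomials / monomial terms

vinet_bound = 2**d * d * tau * lam1 * math.sqrt(N * S)
# This hyperparameter budget satisfies the first condition above:
assert vinet_bound <= baseline_product
```

With these numbers the VI-Net term is roughly 33.9 against a baseline product of 70, so the condition holds; increasing $d$ or $\tau$ quickly tightens the budget, which matches the discussion of the $2^d$ dependence of $\kappa$.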
> The codebase in the supplementary material seems not runnable.
We apologize, this was an oversight on our side while anonymizing the code. We have now ensured that all necessary files are included in the repository, and you should be able to run the code without any problems. If the Area Chair permits, we would be happy to share the complete codebase with you (currently only links to images are allowed).
> Also, a dedicated section to related works would be helpful.
We are fairly confident to have included all relevant works with respect to vanishing ideal algorithms, but we appreciate the suggestion of adding a dedicated related work section. We think this will improve the presentation of our work and will add this section to the revision of the paper as soon as possible. Thank you for your suggestion.
> Can we apply the methodology of VI-Net to regression tasks?
This is an interesting idea. Indeed, one can apply the standard approach of discretizing the regression problem, transforming it into a classification problem (i.e. binning numerical outputs). Do you have any other suggestions? We believe this could be an interesting application and thank the reviewer for this idea.
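A one-line sketch of that discretization, using `numpy.digitize` to turn continuous targets into class labels (the bin count and edges are arbitrary choices for illustration):

```python
import numpy as np

y = np.array([0.12, 0.48, 0.95, 0.33])   # continuous regression targets in [0, 1]
edges = np.linspace(0.0, 1.0, num=5)     # 4 equal-width bins
labels = np.digitize(y, edges[1:-1])     # class id per target: [0, 1, 3, 1]
```

Each resulting class label could then be handled by the usual VI-Net classification pipeline, at the cost of the quantization error introduced by the bin width.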
> Moreover, apart from building new models for prediction as you have done in this paper, can we use vanishing ideal algorithms to analyze the structure of the latent manifold itself and provide some kind of interpretability?
We do observe a high correlation between the intrinsic dimensionality of the data and the complexity of the constructed generators (cf. Figure 12). This aligns well with the findings in the broader literature and we believe that this provides insights about the manifold, although relatively hard to interpret.
> L201, 202, 304, 285: [...]
Thank you, we fixed the typos and notational issues.
> L323: What does $s \in \mathcal{O}(S)$ mean?
The size of the intermediate weight matrix scales linearly with the number of monomials.
> L392: Does elapsed time in Table 1 refer to the running time of the vanishing ideal algorithm?
Yes.
Thanks again for your review, we hope to have addressed your concerns and questions. If you have any further remarks, please let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score. | Summary: This paper introduces efficient methods for approximating vanishing ideals of class "manifolds" in a latent space of a neural network classifier, and uses the resulting approximate vanishing ideals to replace later layers of a truncated classifier neural network with a linear combination of polynomial features, calling the resulting classifiers "VI Nets". The paper includes theoretical discussion suggesting that such VI Nets might enjoy improved generalization properties, and demonstrates empirically that for a given transition layer, they preserve more accuracy from the initial classifier network than simply replacing the later layers with a linear head. The paper shows that VI Nets can navigate an accuracy-throughput tradeoff.
# Update after rebuttal
I appreciate the authors' thorough response to my questions and comments. While they addressed a number of technical concerns, I maintain my reject recommendation.
Claims And Evidence: - "Such manifolds can often be described as the zero set of a system of polynomial equations" hinges on what is meant by often. I agree that many standard examples of manifolds are zero sets of polynomials, however in some theoretical senses "almost all" manifolds fail to be algebraic. See e.g. https://mathoverflow.net/questions/4895/the-relationship-between-complex-and-algebraic-geomety
- Similarly this is false: "A key observation, and central motivation for our work, is that if the latent space is a manifold, it can be algebraically characterized as the zero set of certain polynomials". In general, the zero set of the vanishing ideal will be a superset of the original manifold. There are similar issues elsewhere, such as "Unlike methods that approximate manifolds through indirect means like simplicial complexes (Khrulkov & Oseledets, 2018) or k-nearest neighbor graphs (Tsitsulin et al., 2020), our approach directly captures their algebraic structure through polynomial roots"
- "A key insight is that data points from different classes in the latent space, while not linearly separable, can be distinguished through polynomial separability" should say something like "while not necessarily linearly separable." Since they might be linearly separable, and in some theoretical frameworks asymptotically *are* linearly separable (for example, in the infinite width limit).
- the last paragraph of section 2.1 effectively assumes that the $Z^k$ are pairwise disjoint. This is probably always true in practice, but it should be stated as an assumption. On the topic of this section, squares would be preferable to absolute values since they are polynomials :).
Methods And Evaluation Criteria: I think the limitations section could mention something about experiments only covering ResNets and CIFAR datasets (in particular this restricts to the vision domain).
The experiments primarily focus on "end to end testing" of the entire VI Net package. I could imagine additional experiments that would more directly target the underlying hypotheses (e.g. empirically computing some of the terms appearing in the spectral complexity estimates ...)
Theoretical Claims: I did not read the proofs in the appendix. Regarding the theoretical claims, while the paper derives formulas that could be used to estimate spectral complexity of VI Nets (and relate their complexity to that of the network from which they were derived), since there are no theorems specifically proving that they have lower spectral complexity under certain assumptions, I think the introduction of the paper should be more cautious about suggesting improved generalization properties of VI Nets.
Experimental Designs Or Analyses: Significant concern: I do not understand why an accuracy difference is observed in figure 3, even with 0% of the layers removed. In particular, I do not understand why the Linear Head method drops all the way to 65% accuracy, which is quite bad for a resnet34 on CIFAR 100. There are publicly available training scripts that claim to achieve accuracy in the high 70s for resnet34s on CIFAR 100.
Regarding table 2: I think that including a resnet18 on CIFAR100 would be a useful baseline. More broadly, there are many methods which practitioners can use to achieve throughput improvements of the scale being demonstrated by VI nets (up to about 2x), with minimal impact on accuracy (hardware friendly inference frameworks, quantization ...). Additional comparison with/discussion of these would help contextualize table 2.
Supplementary Material: Fig 12: the number of generators of the vanishing ideal and the dimension of the variety are anti-correlated. So I'm not sure what to make of this plot.
Relation To Broader Scientific Literature: Things that come to mind are: empirical science of representations of data in neural network latent space, and application of computational algebraic, geometry to data science. These are both somewhat broad areas, I don't have specific references in mind.
Essential References Not Discussed: Not beyond those linked above and below.
Other Strengths And Weaknesses: My primary concerns with this paper:
- there's a variety of work showing that classes become increasingly linearly separable in later and later neural networks layers (https://arxiv.org/abs/1610.01644, https://arxiv.org/abs/2004.06093 to name a couple).
- In particular, the former paper shows that truncating a neural network at an intermediate layer, and simply immediately learning a linear layer can result in performance comparable to that of the original network, *without* any intermediate steps of dimension reduction, tanh rescaling and bit crushing, vanishing ideal computation ... NOTE: I realize this baseline appears in Fig. 3, and that the proposed method of VI nets is shown to result in higher accuracy, however the fact that linear probes are not mentioned until the penultimate page is an issue for me, since as a reader, I was thinking "what about linear probes" on page 1.
- The latter reference suggests that the process of neural network training is already encouraging class separation, one extreme example of this being "neural collapse" (https://arxiv.org/abs/2008.08186, https://proceedings.mlr.press/v145/e22b.html). So another baseline if one wanted a shallow or network that achieves linear class separation would be to try to train a shallow or network to the point of neural collapse. I do not see this baseline in the paper.
- I could be mistaken, but the baselines suggested above might be amenable to similar spectral complexity analysis (definitely the first one is, since it will share the feature that lots of terms coming from the common first L layers cancel).
Other Comments Or Suggestions: - Def. 4.2, should it be "d choose i"?
- Lem. 4.4, what are the subscripted $d_i$?
- " Specifically, if the product of the spectral norms of the truncated layers is smaller than the bound for the newly added layers ..." Do we not want the opposite to ensure the VI Net has lower spectral complexity? Regardless, using some notation here would make the sentence easier to parse.
- it would be helpful to have a companion to figure 3 that incorporates the parameter counts of the various models.
- In table 1 is elapsed time the time it took to compute (approximate) ideal generators?
Questions For Authors: Not beyond those above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your thorough and insightful comments. We have grouped related concerns and clarifications below and revised our paper accordingly.
**Overall Clarifications and Algebraic Geometry Context**
We used the manifold hypothesis to motivate our work, but we agree that currently, some of the wordings are not entirely mathematically precise. Let us make a few clarifications:
> in some theoretical senses "almost all" manifolds fail to be algebraic
In line 42, we cited [Nash, 1952] to reference Tognoli's Theorem that every *compact smooth* manifold is *diffeomorphic* to a nonsingular algebraic set. We revised the wording to clarify that some manifolds may not be described by polynomials.
> the zero set of the vanishing ideal will be super set of the original manifold.
Please note that in lines 162-167 we explicitly state that "recovering [the underlying manifold] $U_k$ exactly from a finite sample is impossible without structural information, as $U_k$ could range from being as minimal as $Z_k$ to as broad as $\mathbb{R}^n$" and provide further context in section 2.3.
> "while not necessarily linearly separable."
We agree and will modify the statement accordingly. Thanks!
> the last paragraph of section 2.1 effectively assumes that the $Z_k$ are pairwise disjoint
Correct, we implicitly state this in line 80-82, but will make it more explicit here.
**Experimental Evaluation and Baselines**
*Linear Probing:*
First of all, we agree that Linear Probing should be mentioned earlier in the paper and will revise the manuscript to make this more clear. Thank you for pointing out that this is not easily accessible. Secondly, we understand the confusion regarding the accuracy drop in Fig. 3, whose content was poorly communicated from our side. At 0% removed, we remove *0% of the convolutional layers* but still remove the avg.-pooling and linear classifier (lines 332-334, though badly worded there), which may result in a performance drop. We will make this clearer so the reader understands why "0% removal" actually changes the network. Last but not least, we retuned the hyperparameters of the linear probing baseline, improving its performance (https://imgur.com/a/SOpcrEJ). In general, the accuracy drop aligns with the paper you mentioned, which reports a 6% performance drop when using a linear head after the last convolution - a non-insignificant drop (cf. Figure 4 in arxiv.org/abs/1610.01644). Now, while VI-Net still outperforms, both methods perform similarly when removing the trailing convolutional layers.
We similarly observe the phenomenon that later layers lead to simpler, sometimes linear, polynomials (cf. Figure 11), aligning with the finding that classes become increasingly linearly separable in later layers; the neural collapse phenomenon is further supported by the good performance of the linear probes. VI-Nets could be seen as a natural extension of linear probing. Could you elaborate on what kind of additional baseline you would like to see?
*Further experiments:*
To broaden our experimental evaluation, we added TinyImageNet experiments (https://imgur.com/a/Beh0DN7). Further, we conducted additional experiments measuring excess risk to support our hypothesis (please cf. our response to reviewer Gw8X).
We will add the evaluation of ResNet-18 on CIFAR-100 to properly contextualize the throughput improvements, however noting the following: While VI-Net's speedup might not match quantization and hardware optimization, we still observed a notable speedup despite non-optimized hardware, indicating the usefulness of the found generators in describing data manifolds.
**Remarks regarding theory**
We do agree that not every VI-Net has a lower spectral complexity. Theorem 4.5 allows to choose the hyperparameters of our pipeline such that the bound we have on the newly added terms is lower than what can be measured from the baseline NN (see the answer we provided to reviewer 9he7 on the question on when VI-Net has a lower spectral complexity).
Further, you are correct that the baselines can be analyzed using the same spectral complexity framework. This enables us to compare the spectral complexities of the baseline NN, VI-Net, and linear probing or a shallow wide NN within a unified framework. We think this is a strength of our work.
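To illustrate the "unified framework" point: a Bartlett-style spectral complexity is dominated by the product of the layers' spectral norms, so a baseline NN and a truncated variant can be compared with the same quantity. A minimal numpy sketch (our own illustration with random stand-in weights, not the paper's code):

```python
import numpy as np

def spectral_norm_product(weights):
    # product of the largest singular values over all weight matrices;
    # the dominant factor in Bartlett-style spectral complexity bounds
    return float(np.prod([np.linalg.svd(W, compute_uv=False)[0] for W in weights]))

rng = np.random.default_rng(0)
baseline = [rng.normal(size=(64, 64)) / 8.0 for _ in range(6)]   # stand-in deep NN
truncated = baseline[:3] + [rng.normal(size=(10, 64)) / 8.0]     # stand-in truncated net + new head

print(spectral_norm_product(baseline), spectral_norm_product(truncated))
```

Dropping layers removes factors from the product, which is the intuition behind comparing the baseline NN, VI-Net, and linear probing within one framework.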
**Other remarks**
> Fig 12: the number of generators and the dimension of the variety are anti-correlated.
We think this is a result of the *approximate* VI algorithms. If the underlying data manifold has lower dimension (e.g. a line), fewer monomials are needed to construct polynomials that approximately vanish on the sample.
* We fixed the typos and will include your suggestions, thanks.
* Lem. 4.4: The $d_i$ are the degrees of the InEx activation functions.
* Yes, in Tab. 1 that is the elapsed time to compute the generators.
We hope to have addressed your concerns, please let us know if further clarification is needed. | Summary: Presents a method to analyse latent manifolds - object representations at intermediate layers of deep networks - by finding a small set of polynomials which vanish on one object but not on other objects. This effectively truncates the deep network and replaces the head with a single, simple, non-linear layer - with each feature stemming from a single polynomial of a single object. Such a construction is beneficial in two ways: it demonstrates a parameter-efficient network with classification performance comparable to the original network, and it characterises the latent manifold.
This construction pulls several different tricks to get those "vanishing" polynomials - dimension reduction by PCA, sparse ideal-learning algorithms, pruning approaches to allow for efficient inference, reduced-precision implementation, and numerical rescaling (if I could count them all).
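For concreteness, the core "vanishing polynomial" idea can be sketched in a few lines (our toy illustration with circle data, not the paper's algorithm): the coefficient vector of a polynomial that approximately vanishes on a sample is a near-null right-singular vector of the monomial feature matrix.

```python
import numpy as np

# Toy sketch: recover a polynomial that approximately vanishes on samples
# from the unit circle, via the smallest right-singular vector of the
# monomial feature matrix.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # points on x^2 + y^2 = 1

# Monomial features up to degree 2: [1, x, y, x^2, x*y, y^2]
x, y = X[:, 0], X[:, 1]
M = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)

# An (approximately) vanishing polynomial's coefficients form the
# right-singular vector with the smallest singular value.
_, s, Vt = np.linalg.svd(M, full_matrices=False)
coeffs = Vt[-1]

# Up to scale, coeffs ~ (-1, 0, 0, 1, 0, 1), i.e. the polynomial x^2 + y^2 - 1.
print(s[-1])                     # smallest singular value, near zero
print(np.abs(M @ coeffs).max())  # max |p(x_i)| over the sample, near zero
```

The approximate vanishing-ideal algorithms discussed in the paper are far more elaborate, but this shows why a handful of polynomials can pin down a low-dimensional latent manifold.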
Claims And Evidence: Most claims seem to be well-supported by evidence, with numerical experiments using ResNet on CIFAR-10/100 demonstrating the applicability to real-world data.
The theoretical analysis is an exception; while I do not doubt the validity of the analysis (demonstrating reduced "spectral complexity"), I have serious doubts about the implications. Those "learning guarantees" are usually vacuous bounds, and thus an improved upper bound on something may or may not lead to improved performance. As far as I could see, such improved performance was NOT demonstrated here.
Methods And Evaluation Criteria: Yes, the evaluation criteria look good to me. See some suggestions/improvements below.
Theoretical Claims: No, I do not have the capacity to verify the correctness of the theoretical claims.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: This is a novel approach to handling a well-researched area: understanding object representations in deep networks. The novel ideas presented show how to solve many practical problems (finding ideals seems at first impractical on computational grounds).
Essential References Not Discussed: As mentioned, this manuscript makes novel contributions to a well-researched area and thus naturally lacks some "essential references", e.g.:
* Separability and geometry of object manifolds in deep neural networks.
* Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
* Deep neural networks rival the representation of primate IT cortex for core visual object recognition
Other Strengths And Weaknesses: Strengths:
* The manuscript presents a novel idea of truncating a trained network and creating a new kind of learned, object-focused final layer.
* A large collection of technical solutions is needed to bring this idea close to real-world applicability.
* The very interesting results demonstrate both that this endeavour can be successful and that it may be useful to gain insight into the structure of latent representations.
Weaknesses:
* Not enough is done to build on this analysis to say something on the structure of the latent manifolds. See one suggestion below, perhaps for a future manuscript.
Other Comments Or Suggestions: The results are constructed such that the number of monomial terms is a product of the algorithm, while the maximal degree and representation dimension are fixed. I think this is a mistake - the correct comparison should have been:
* Using a fixed monomials budget (e.g., maximal degree of d=5, 128 dimensions, and the best 100 (or 200 or 1000) monomials), what is the resulting accuracy?
* Aiming for a fixed target accuracy (e.g., 90%), how many monomials are needed at each layer?
The above analysis would have demonstrated how and whether later layers of a deep network have superior representations, so that they need fewer monomials for a given accuracy level or achieve better accuracy for a given monomial budget.
Unrelated, the presentation of "Percentage of Layers Removed" seems unnatural to me: I would expect to look at "percent of layers used" with increasing accuracy.
Questions For Authors: * Can you provide some analysis on the effect of the maximal degree (keeping the number of monomials fixed)?
* Are you certain you need to "limit the maximal degree to avoid overfitting"? Assuming the number of monomials is fixed, I'm not sure about that.
* Can you analyse the accuracy using a fixed number of monomials constructed from different layers, or the number of monomials needed to achieve 90% accuracy?
* Can you demonstrate a clear benefit of the fact that "the spectral norm of the VI-Net is provably lower than that of the base NN"?
* Can you clarify which tricks (e.g., lower-precision arithmetic) are essential for the results and which just improve your running time slightly?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for acknowledging the contributions of our work.
> Those "learning guarantees" are usually vavous bounds, and thus, an improved upper bound on something may or may not lead to improved performance.
We agree that these bounds are often vacuous. In that sense, we agree that improved upper bounds do not necessarily imply improved performance; thank you for that remark, we will make this clearer in the paper. Our main theoretical contribution is embedding VI-Nets into the established framework of spectral complexity, with our main theorem providing a theoretical basis for selecting hyperparameters.
> As mentioned, this manuscript makes novel contributions to a well-researched area and thus naturally lack some "essential references"
Thank you for providing these references. We believe they provide valuable context and will add them to the paper.
> to say something on the structure of the latent manifolds. See one suggestion below, perhaps for a future manuscript.
We also believe this is an interesting future direction of our work. Having constructed these polynomials, we are now able to use the entire toolbox of computational algebra to analyse our found structures. We do believe our work is already quite dense and leave these useful applications to future work - we will address the individual suggestions below!
> * Using a fixed monomials budget what is the resulting accuracy?
> * Aiming for a fixed target accuracy (e.g., 90%), how many monomials are needed at each layer?
We ran experiments on CIFAR100, testing monomial counts of 50 and 100, then increasing in steps of 100, noting their maximal degree. We limit ourselves to the following sparsities, as higher sparsity results in reaching less than 70% accuracy.
| % of Layers Removed | Monomial Count | Max Degree among best Monomials |
|-|-|-|
| 7.1%|50| 1 |
| 12.2%|50| 1 |
| 17.5%|300| 2|
| 25.1%|500| 2|
| 30.4%|200| 2|
| 33.3%|300| 2|
| 38.4%|600| 2|
| 43.9%|900| 3|
We observe that deeper layers need fewer and lower-degree monomials. Deeper layers achieve higher accuracy with a fixed monomial count, but adding more monomials eventually stops improving performance.
| # Monomials | 17.5% layers removed | 33.3% layers removed | 75.6% layers removed|
|-|-|-|-|
| 50 | 68.47% | 59.42% | 43.42% |
| 100| 68.31% | 59.83% | 45.63% |
| 200| 69.66% | 65.43% | 48.63% |
| 400| 71.84% | 72.35% | 47.80% |
| 600| 72.54% | 72.65% | 47.26% |
| 800| 72.90% | 72.73% | 47.46% |
| 1000 | 72.96% | 72.91% | 48.81% |
> Can you provide some analysis on the effect of the maximal degree (keeping the number of monomials fixed)? Are you certain you need to "limit the maximal degree to avoid overfitting"? Assuming the number of monomials is fixed, I'm not sure about that.
Given that tanh-rescaling ensures that the output lies in the unit hypercube, a high-degree polynomial with a fixed number of monomials could overfit/vanish on the dataset. However, fixing the number of monomials also has a regularizing effect (cf. tables below). Our learning guarantee in Theorem 4.5 includes both the highest degree $d$ and the number of monomials $S$, indicating both are relevant. But we agree that our statement is a bit vague; in practice, we chose $d = 5$ as the maximal degree mainly because we observed that the algorithms never constructed polynomials of degree higher than $5$.
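The interplay between tanh-rescaling and the degree can be seen numerically (our toy sketch): after tanh-rescaling, every coordinate lies in (-1, 1), so higher-degree monomials shrink toward zero on the data, which is what lets a high-degree polynomial approximately vanish without capturing structure.

```python
import numpy as np

# After tanh-rescaling, coordinates lie in (-1, 1), so |x|^d decays with
# the degree d -- high-degree monomials are nearly zero on the sample.
x = np.tanh(np.random.default_rng(0).normal(size=1000))
mean_abs = {d: float(np.abs(x ** d).mean()) for d in (1, 3, 5, 9)}
print(mean_abs)  # strictly decreasing in d
```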
> Can you demonstrate a clear benefit of the fact that "the spectral norm of the VI-Net is provably lower than that of the base NN"?
The lower spectral norm of the VI-Net means it might generalize better, resulting in a **smaller difference between training and testing errors**. For example, below, we see that the VI-Net has a smaller gap between train and test errors in almost all configurations compared to the base neural network.
| % of Layers Removed | Gap Train vs Test Error | Δ from Baseline |
|-|-|-|
| 75.8%| 1.7843| +0.6643|
| 50.1%| 0.9639| −0.1561|
| 43.9%| 0.8502| −0.2698|
| 25.1%| 0.9005| −0.2195|
| 7.1%| 1.1024| −0.0176|
**Baseline (CIFAR-100, ResNet-34)** = 1.12
For the experiment you suggested, one can analyse the impact of the number of monomials $S$ on the generalization gap. Fewer monomials typically lead to a smaller generalization gap.
| # Monomials | 17.5% removed | 33.3% removed | 75.6% removed |
|-|-|-|-|
| 50| 20.27%| 11.10%| 0.87% |
| 100| 20.26%| 12.04%| 0.67% |
| 200| 26.64%| 19.80%| 4.45% |
| 400| 27.55%| 25.67%| 11.88%|
| 600| 27.17%| 26.08%| 18.18%|
| 800| 26.94%| 26.31%| 24.79%|
| 1000| 26.91%| 26.27%| 28.68%|
**Baseline (CIFAR-100, ResNet-34)** = 25.25
> Can you clarify which tricks are essential for the results and which just improve your running time slightly?
Tanh-rescaling ensures convergence. Stochastic algorithms speed up computations the most, while lower precision arithmetic offers mainly memory benefits.
We hope to have addressed your concerns. If you have any further questions, please let us know.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal and other reviewers' comments and would keep my current (high) score. Interestingly, I probably agree with most of cAHx's comments (leading them to reject), but still believe this is an interesting and valuable avenue which complements the linear separability literature, while still being somewhat shadowed by its success.
Interpreting the Repeated Token Phenomenon in Large Language Models | Accept (poster) | Summary: This paper attempts to explain the phenomenon of "repeated tokens" through the behaviour of "attention sinks" in LLMs. Firstly, the authors find that, besides the first token, repeated tokens also receive high attention scores and have large hidden-state norms. Then, the authors identify the neurons that contribute to the high norms. The authors explain this phenomenon via Equation 4 (which I think is the most important one): as the number of repeated tokens increases, the influence of the prefix (which may relate to the system or user instructions) diminishes. Finally, the authors explore how to use this explanation to understand the repeated token attack and mitigate it.
## update after rebuttal
I think most of my concerns have been addressed, so I raised my rating to 3. However, I understand that further investigation may be required here, which prevents me from rating higher.
Claims And Evidence: I think most of claims in this paper are supported by the empirical studies.
Regarding the attack mitigation, the authors propose a manual patch approach. In the appendix, they show that this editing method mitigates the attack by using several examples. I think more empirical studies may need to be conducted with some attack success metrics to further support the conclusion.
Methods And Evaluation Criteria: The authors first showcase the relationship between repeated tokens and attention sink with several case studies. Besides LLaMA2-7B model used in the main paper, the results for other models are also showcased in appendix.
Then, based on the explorations above, the authors propose a mitigation approach. Although the authors demonstrate that the mitigation approach won't harm the model's performance on unrelated tasks, they fail to show its effectiveness against the repeated token attack using systematic benchmarks or evaluations. Only two cases are shown in the appendix.
Theoretical Claims: Yes. Only claim 4.1 is a theoretical claim. The proof shown in this paper is correct.
Experimental Designs Or Analyses: I think most of the experimental designs and analyses are sound and valid. However, I feel there is a gap between "repeated tokens could diminish the influence of the preceding prefix" and "repeated tokens could lead to training data leakage". I am looking forward to the clarifications from the authors.
Supplementary Material: I reviewed all the supplementary, which is reflected in my review.
Relation To Broader Scientific Literature: This paper is related to both mechanistic interpretability and safety in LLMs. I think it contributes to understanding the circuits of Transformer and why repeated tokens could attack LLMs. The mitigation approach proposed in this paper, if justified with more evidence, I think it will also contribute to the community of memorization attack.
Essential References Not Discussed: I think this paper has cited essential references.
Other Strengths And Weaknesses: I have included most of the strengths and weaknesses in the above.
Another weakness is that most of the experiments are conducted as case studies. A systematic evaluation with metrics is lacking. Besides the attack success rate of the mitigation approach I mentioned before, I find that when building the relationship between repeated tokens and attention sinks, only some samples are provided. Why not include some statistical results to show the generalisation of this phenomenon?
Other Comments Or Suggestions: See the above.
Questions For Authors: These questions are less related to my current ratings.
1. Do you have any intuitions on why repeated tokens induce model memorization?
2. Do you have any intuitions on why certain repetitions cannot lead to attention sinks / large norms of activations? The paper about massive activations [1] also shows that besides the first token, several other tokens with low semantics could also induce the attention sink. Do you think this is related?
3. I am aware that theoretical analysis of equation 4 is intrinsically difficult, do you have any theoretical intuitions?
[1] Sun et al. Massive Activations in Large Language Models. COLM 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. Next, we address the reviewer's questions and concerns:
__“I feel there is a gap between "repeated tokens could diminish the influence of the preceding prefix" and "repeated tokens could lead to training data leakage". I am looking forward to the clarifications from the authors.__”
We provide a brief proof below to theoretically validate the diminishing prefix influence (a full proof will be in the Camera-Ready version). This effect leads to the described attention sink divergence.
We have revised the paper to include a theoretical analysis of equation 4. In short, the output of an attention head ($o$) for a sequence with a prefix (of size $k$) followed by a run of repeated tokens (of length $n$) is:
$ o = \sum_{i<k} \alpha_i v_i + \sum_{k \le i < n+k} \alpha_i v $
where the $\alpha_i$ are the coefficients of the attention matrix (after softmax), so that
$ \sum_{i}\alpha_i = 1, $
$v_i$ is the value of the token at index $i$, and $v$ is the value of the repeated token (from index $k$ to $n+k$).
RoPE affects keys and queries, thus affecting the coefficients $\alpha_i$, but the values of all repeats of the same token are the same fixed $v$.
For a long enough sequence, the values of the repeated token will dominate those of the prefix, and the output will converge to $v$.
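The convergence argument can also be checked numerically; a small sketch (our illustration, with random logits standing in for the RoPE-modulated attention scores):

```python
import numpy as np

# As the number n of repeated tokens grows, the attention output converges
# to the repeated token's value v, regardless of the prefix values.
rng = np.random.default_rng(0)
k, d = 4, 8
prefix_vals = rng.normal(size=(k, d))  # v_1 .. v_k
v = rng.normal(size=d)                 # value of the repeated token

def attn_output(n):
    logits = rng.normal(size=k + n)                 # stand-in attention scores
    alpha = np.exp(logits) / np.exp(logits).sum()   # softmax coefficients
    values = np.vstack([prefix_vals, np.tile(v, (n, 1))])
    return alpha @ values

for n in [10, 100, 10000]:
    print(n, np.linalg.norm(attn_output(n) - v))  # distance to v shrinks with n
```

With any bounded score distribution, the softmax mass on the prefix scales like $k/(k+n)$, so the prefix contribution washes out.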
While we acknowledge the need for further investigation, our work establishes a crucial _precursor_ to training data extraction. The identified mechanism of repeated token divergence leading to attention sinks creates the _conditions_ under which the model is more likely to retrieve and potentially output memorized content.
__“Do you have any intuitions on why certain repetition cannot lead to attention sinks / large norm of activations? The paper about massive activations [1] also show that besides the first token, several other tokens with low semantics could also induce the attention sink. Do you think this is related?__”
We suspect certain repetitions do not lead to attention sinks for several reasons:
1) the context window is too short for the convergence to take place.
2) the circuit identifying the first token does not generalize to all models.
We suspect the model doesn't learn to identify <bos> but rather the first token, due to "packing" and the fact that multiple <bos> tokens appeared in each context window; we believe that knowledge of these subtle training decisions will shed light on repetitions that do not lead to attention sinks.
We think that BoS is a zero-entropy token: since it is always the first token, it does not attend to the rest of the sequence and is updated in a fixed manner in each layer. The model leverages this to control and bias the attention function. We suspect that other tokens, especially those carrying little information about the context (function words such as "the", "and", "but"), will be used by some attention heads to implement a no-op. This has been studied in [1].
[1] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models, arxiv, 2024.
__“Another weaknesses is that most of experiments are conducted using case studies. A systematic evaluation with metrics is lacking. Besides the attack success rate of mitigation approach I mentioned before, I find that when building the relationships between repeated tokens and attention sink, only some samples are provided. Why not include some statistical results to show the generalisation of this phenomenon?”__
The causal relationship between the attention sink mechanism and the repeated token phenomenon is substantiated by our ablation experiments targeting specific 'sink neurons' (Section 3.2). Figure 3 provides empirical evidence that removing the contribution of these identified neurons effectively mitigates the high activation norms characteristic of repeated token divergence. This experimental approach, focusing on causal interventions within the identified circuit, is central to mechanistic interpretability and provides sufficient evidence to confirm the direct involvement of the attention sink mechanism in the observed phenomenon.
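The kind of intervention described above can be sketched as follows (our toy MLP with random weights and hypothetical neuron indices, not the paper's patch): ablating "sink neurons" means zeroing their activations so their high-magnitude contribution never reaches the residual stream.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64
W_in = rng.normal(size=(d_model, d_hidden))
W_out = rng.normal(size=(d_hidden, d_model))
sink_neurons = [3, 17]  # hypothetical indices of identified sink neurons

def mlp(x, ablate=()):
    h = np.maximum(x @ W_in, 0.0)  # ReLU hidden activations
    h[list(ablate)] = 0.0          # zero the selected neurons' activations
    return h @ W_out

x = rng.normal(size=d_model)
delta = mlp(x) - mlp(x, ablate=sink_neurons)
# delta is exactly the ablated neurons' contribution to the block's output
print(np.linalg.norm(delta))
```

In a real model the same effect is obtained with a forward hook on the MLP's hidden activations; the point is that the patch is surgical and leaves all other neurons untouched.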
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think most of my concerns have been addressed, and I understand that further investigation may be required here. I have updated my rating.
I have an assumption here: if the repeated tokens could diminish the influence of the preceding tokens, such as system prompt / user prompt, then the distribution of generated response is closer to the data distribution in the pre-training. What is your opinion about this, or what is your hypothesis on why "repeated tokens could diminish the influence of the preceding prefix" leads to "repeated tokens could lead to training data leakage"?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's acknowledgement of our rebuttal.
We also recognize the valuable point raised. Our current hypothesis suggests that reducing the influence of the system prompt (prefix) is a significant factor in circumventing model alignment. We further suspect this manipulation, along with interference in the Attention mechanism, contributes to the leakage of training data. A dedicated study exploring the causal link between these two factors (prefix diminution and Attention disruption) and training data exposure remains an area for future investigation. | Summary: This paper studies the "repeated token divergence" phenomenon in LLMs, where models fail to accurately repeat a single token when instructed to do so. The authors provide a mechanistic explanation linking this behavior to attention-sinks (where the initial token in a sequence receives disproportionately high attention).
The researchers identify a specific neural circuit responsible for attention-sinks consisting of two key stages:
1) the first attention layer "marks" the initial token;
2) specific MLP neurons (termed "sink neurons") add high-magnitude values to its hidden state, creating the attention-sink.
When processing sequences with repeated tokens, the model's first attention layer confuses these repetitions with the beginning-of-sequence (BoS) token, triggering the same neural circuit and causing the model to diverge from its intended output, sometimes revealing memorized training data.
They develop and validate a targeted patch that corrects the issue without significantly impacting model performance on standard benchmarks.
Claims And Evidence: The cluster attack's effectiveness is asserted but not quantitatively measured, making it difficult to assess its practical significance compared to direct token repetition.
Methods And Evaluation Criteria: The methodological approach is generally sound, but the paper lacks a systematic evaluation of how many tokens from the vocabulary can induce attention sinks when repeated, instead showing selected examples.
Theoretical Claims: In Claim 4.1 and its proof:
the proof makes logical sense but relies on simplifying assumptions. It correctly identifies that with identical tokens, the value vectors in self-attention are equal. However, the proof doesn't fully account for the positional information encoded by RoPE: while RoPE affects keys and queries, the interaction between position-encoded representations could theoretically still allow differentiation between positions.
The empirical observation in Equation 4 helps support this claim, but the theoretical argument could be more rigorous in addressing how RoPE specifically fails to distinguish positions in this context.
Experimental Designs Or Analyses: - The paper doesn't include control experiments with non-identical but similar tokens to test the specificity of the mechanism.
- The validation of the patch focuses on model performance on standard tasks but doesn't directly measure its effectiveness at preventing the extraction of training data, which is the primary security concern.
Supplementary Material: I read the appendix
Relation To Broader Scientific Literature: The paper presents convincing evidence for its core mechanistic explanation of repeated token divergence through attention sinks. The empirical work identifying specific neural circuits is particularly strong. However, the connection between this mechanism and training data extraction needs more thorough study.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: other strenghts:
- The authors use a systematic approach to identify the neural mechanisms involved, using causal interventions (neuron ablation) to verify their hypotheses. This provides strong evidence for their claims about the role of specific neurons in creating attention-sinks.
other weaknesses:
- The paper shows that not all tokens can effectively induce attention-sinks when repeated (Figure 5). This unexplained variability suggests additional factors at play that aren't fully captured by the current mechanistic account.
- While the paper identifies the mechanism behind repeated token divergence, it doesn't fully explain why this leads to training data extraction. The relationship between attention-sinks and the retrieval of memorized content remains unclear.
- The authors note that while there are shared motifs across LLMs, the first attention layer's specific behavior was unique to LLaMa2. This raises questions about how broadly applicable their exact mechanisms are across different model architectures.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. Next, we address the reviewer's questions and concerns:
__“The paper shows that not all tokens can effectively induce attention-sinks when repeated (Figure 5). This unexplained variability suggests additional factors at play that aren't fully captured by the current mechanistic account.__”
As we mentioned in the limitations section, some additional factors can indeed be at play. Nevertheless, our goal is to understand why and how the previously identified phenomenon of model divergence due to repeated tokens happens. Our current model is _useful_ - it allows an intervention that prevents the phenomenon, and therefore we believe that our analysis is valuable.
__“While the paper identifies the mechanism behind repeated token divergence, it doesn't fully explain why this leads to training data extraction. The relationship between attention-sinks and the retrieval of memorized content remains unclear.__”
While we acknowledge the need for further investigation, our work establishes a crucial _precursor_ to training data extraction. The identified mechanism of repeated token divergence leading to attention sinks creates the _conditions_ under which the model is more likely to retrieve and potentially output memorized content.
__“The authors note that while there are shared motifs across LLMs, the first attention layer's specific behavior was unique to LLaMa2. This raises questions about how broadly applicable their exact mechanisms are across different model architectures.__”
We suspect subtle differences in training, data or initialization induce different ways for the first attention layer to behave. However, it was important to us to show that by reverse-engineering one layer we can generate non-repeating sequences which also induce attention sinks.
__“The validation of the patch focuses on model performance on standard tasks but doesn't directly measure its effectiveness in preventing the extraction of training data, which is the primary security concern.__”
We have provided a few examples showing how coherency is maintained. It is rather difficult to measure training data leakage, as most models are open-weights but not fully open-source; therefore, there is very little information about their training data.
__“In Claim 4.1 and its proof: the proof makes logical sense but relies on simplifying assumptions. It correctly identifies that with identical tokens, the value vectors in self-attention are equal. However, the proof doesn't fully account for the positional information encoded by RoPE: while RoPE affects keys and queries, the interaction between position-encoded representations could theoretically still allow differentiation between positions. The empirical observation in Equation 4 helps support this claim, but the theoretical argument could be more rigorous in addressing how RoPE specifically fails to distinguish positions in this context.”__
We have revised the paper to include a theoretical analysis of equation 4. We argue that differentiation between positions of equivalent tokens is _impossible_ with RoPE. In short, the output of an attention head ($o$) for a sequence with a prefix (of size $k$) followed by a run of repeated tokens (of length $n$) is:
$ o = \sum_{i<k} \alpha_i v_i + \sum_{k \le i < n+k} \alpha_i v $
where the $\alpha_i$ are the coefficients of the attention matrix (after softmax), so that
$ \sum_{i}\alpha_i = 1, $
$v_i$ is the value of the token at index $i$, and $v$ is the value of the repeated token (from index $k$ to $n+k$).
RoPE affects keys and queries, thus affecting the coefficients $\alpha_i$, but the values of all repeats of the same token are the same fixed $v$.
For a long enough sequence, the values of the repeated token will dominate those of the prefix, and the output will converge to $v$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing these clarifications, and i'd like to keep the overall score to 4. | Summary: - This paper discusses the repeated token divergence issue, links it to a possible LLM phenomenon named attention sink and proposes a solution for it.
- This paper takes a Mechanistic Interpretability approach, analyzing the underlying mechanism of attention sinks in LLMs, and shows empirical evidence of this mechanism. Then the analysis is linked with the repeated token divergence issue.
- The authors introduce a "cluster attack" based on their findings, which extends the vulnerability beyond simple token repetitions. They also develop a targeted patch that mitigates the issue without harming the model's performance on other tasks.
Claims And Evidence: The claims are proved to some extent but not fully convincing.
- line 294: RoPE affects the queries and keys, not the keys and values; the claim "thus symmetry between all tokens is preserved" should be further clarified.
- line 299: RoPE affects the attention output according to relative positions, so even repeated tokens will have different outputs; the claim "all tokens in the repeated sequence will have the same output after the first attention layer." in line 299 is confusing.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense.
Theoretical Claims: Yes, there are not many theoretical claims in this paper; most results are obtained empirically.
Experimental Designs Or Analyses: As mentioned in the limitations, the first attention layer's behavior is unique to LLaMa2, yet this first-layer mechanism is a central finding of the paper, which hinders its generality. How well the method transfers to other models is unknown.
Supplementary Material: Yes, extensive experiment results and examples are provided in the supplementary material.
Relation To Broader Scientific Literature: This paper relates to prior literature in several aspects:
- repeated token divergence
- Attention sink in transformer network
- LLM attack
Essential References Not Discussed: None
Other Strengths And Weaknesses: weakness:
- The reasons behind the high attention scores of attention sinks and of repeated tokens seem different. Repeated tokens are likely to have high activations because of self-attention between similar embeddings, but attention sinks arise for different reasons. In addition, does the LLM distinguish attention sinks only by attention scores? Why can't the model distinguish them by position?
- Line 294: RoPE affects the queries and keys, not the keys and values; the claim "thus symmetry between all tokens is preserved" should be further clarified. In addition, RoPE affects the attention output according to relative positions, so even repeated tokens will have different outputs; the claim "all tokens in the repeated sequence will have the same output after the first attention layer." in line 299 is confusing.
Other Comments Or Suggestions: None
Questions For Authors: - As mentioned in the limitations, the first attention layer's behavior is unique to LLaMa2, yet this first-layer mechanism is a central finding of the paper, which hinders its generality. How well the method transfers to other models is unknown.
- I wonder how will the mitigation influence the decoding efficiency as all sink neurons need to be figured out.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. Next, we address the reviewer's questions and concerns:
__“RoPE affects the queries and keys, not keys and values, this claim: "thus symmetry between all tokens is preserved" shall be further clarified. RoPE will affect attention output according to the relative positions, so even the repeated tokens will have different outputs, the claim: "all tokens in the repeated sequence will have the same output after the first attention layer." in line 299 is confusing.”__
We agree and apologize for the confusion. We have revised the paper to include a proof of these claims. In short, the output of an attention head ($o$) for a sequence with a prefix (of size $k$) followed by a set of repeated tokens (of length $n$) is:
$ o = \sum_{i<k} \alpha_i v_i + \sum_{k \le i < n+k} \alpha_i v $
where $\alpha_i$ are the coefficients of the attention matrix (after softmax), thus:
$ \sum_{i}\alpha_i = 1 $
$v_i$ is the value of the token at index $i$, and $v$ is the value of the repeated token (from index $k$ to $n+k$).
RoPE affects the keys and queries, thus affecting the coefficients $\alpha_i$, while the value of every repeat of the same token is the fixed $v$.
For a long enough sequence, the values of the repeated token dominate those of the prefix, so the output converges to $v$.
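For intuition, this toy numpy sketch (the dimensions, logits, and value vectors are illustrative stand-ins, not taken from LLaMa2) checks numerically that a softmax-weighted output drifts toward $v$ as the repeated run grows:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4                       # head dimension, prefix length
v_rep = np.ones(d)                # value vector of the repeated token

def attention_output(n):
    """Output for a k-token prefix followed by n repeats of one token."""
    values = np.vstack([rng.standard_normal((k, d)),   # distinct prefix values v_i
                        np.tile(v_rep, (n, 1))])       # n copies of the same v
    logits = 0.1 * rng.standard_normal(k + n)          # toy attention logits
    alpha = np.exp(logits) / np.exp(logits).sum()      # softmax: sum_i alpha_i = 1
    return alpha @ values                              # o = sum_i alpha_i * v_i

# The longer the repeated run, the closer the output is to v.
err_short = np.linalg.norm(attention_output(10) - v_rep)
err_long = np.linalg.norm(attention_output(10_000) - v_rep)
assert err_long < err_short
```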
__“Although mentioned in the limitations, the first attention layer’s behavior was unique to LLaMa2, but the first attention layer mechanism poses an important finding in this paper, which hinders the generalization ability of this paper. How well this method can be used in other models is unknown.”__
We suspect subtle differences in training, data or initialization induce different ways for the first attention layer to behave. However, it was important to us to show that by reverse-engineering one layer we can generate non-repeating sequences which also induce attention sinks.
__“I wonder how will the mitigation influence the decoding efficiency as all sink neurons need to be figured out.”__
The decoding efficiency after the mitigation is not significantly different from the original decoding efficiency. As can be seen in the pseudo-code (Listing 1), our patch changes one neuron for LLaMa2. Even with more neurons, the patch can be applied to all of them in parallel, as the set of neurons can be precomputed.
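To illustrate why such a patch is cheap: overwriting a precomputed set of neurons is a single vectorized write on top of the normal forward pass. The neuron index and the toy ReLU layer below are hypothetical stand-ins, not LLaMa2's actual sink neuron or the paper's Listing 1:

```python
import numpy as np

SINK_NEURONS = [7]                     # hypothetical precomputed indices

def mlp(x, W, b):
    return np.maximum(x @ W + b, 0.0)  # toy MLP sub-layer (ReLU)

def patched_mlp(x, W, b):
    h = mlp(x, W, b)
    h[:, SINK_NEURONS] = 0.0           # one vectorized write over the chosen neurons
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
W = rng.standard_normal((16, 32))
b = np.zeros(32)
out = patched_mlp(x, W, b)
assert np.all(out[:, SINK_NEURONS] == 0.0)
```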
__“The reason behind the high attention scores of the attention sinks and repeated tokens seems different. Repeated tokens are likely to have high activations because of self-attention between similar embeddings, but attention sinks are due to other different reasons. In addition, does the LLM distinguish attention sinks only by attention scores? Why can't the model distinguish by positions?”__
Our analysis indicates that the high attention scores observed in both phenomena are not due to distinct causes but rather stem from the same underlying neural mechanism. Specifically, the mechanism responsible for creating attention sinks (high attention on the initial token, crucial for fluency) is inadvertently triggered by long sequences of repeated tokens.
The core issue lies in the model's first attention layer, which struggles to differentiate a sequence of identical tokens from a single, initial token, particularly in architectures using RoPE. This layer misidentifies the repeated sequence, activating the same "sink neurons" that amplify the hidden states of initial tokens. Consequently, repeated tokens acquire high hidden-state norms, attracting disproportionate attention similar to the BoS token. While the model utilizes positional information (e.g., via causal masking), this specific case of identical repetitions challenges its ability to distinguish position effectively, leading to the observed divergence.
Therefore, our work demonstrates that repeated token divergence is mechanistically linked to the attention sink phenomenon, arising from a failure in positional differentiation under specific repetitive conditions. We hope this clarifies the connection established in our findings.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal to clarify my questions, I will update my score accordingly. | Summary: This paper focuses on the repeat phenomenon in LLMs. The authors view this problem from the view of attention sink. They show that the first attention layer marks the initial token, and the later MLP neuron amplifies its hidden state, creating an attention sink. A patching method is proposed to mitigate this effect.
Claims And Evidence: The authors claim that
1. The repeated token divergence is caused by attention sinks.
2. Repeated tokens activate the same neural circuit responsible for attention sinks.
3. A simple patching can mitigate the issue.
These claims are supported by the evidence.
Methods And Evaluation Criteria: The authors propose the patching method to mitigate the sink issue. The experiments demonstrate the efficiency of the proposed method.
Theoretical Claims: No theoretical analysis is provided in the paper.
Experimental Designs Or Analyses: The experimental designs in the paper are plausible.
Supplementary Material: I review all the parts of the appendix.
Relation To Broader Scientific Literature: N.A.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: The main weakness of this work is that the task considered is very rare in the realistic application. Most users of LLMs will not let LLMs repeat some words. The lack of motivation discounts the importance of this work.
Other Comments Or Suggestions: N.A.
Questions For Authors: The main concern is the importance of the repeat task. It is hard to justify why people want the LLMs to repeat tokens.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. Next, we address the reviewer's questions and concerns:
__“The main weakness of this work is that the task considered is very rare in the realistic application. Most users of LLMs will not let LLMs repeat some words. The lack of motivation discounts the importance of this work. It is hard to justify why people want the LLMs to repeat the token.”__
The repeating-tokens “task” is indeed not a common task. Nevertheless, once this behavior is triggered, it can cause training data leakage – a serious security flaw [1, 2]. Like many software vulnerabilities, the behavior that triggers the presented vulnerability is not a typical user input. Nevertheless, we assume here an adversarial user who aims to attack the model, and we find a patch that can protect the model from such an attacker.
[1] Nasr et al. Scalable extraction of training data from (production) language models, ICLR 2025
[2] Bye Bye Bye...: Evolution of repeated token attacks on ChatGPT models, https://dropbox.tech/machine-learning, 2024 | null | null | null | null | null | null |
D-Fusion: Direct Preference Optimization for Aligning Diffusion Models with Visually Consistent Samples | Accept (poster) | Summary: This paper addresses the issue that the effectiveness of preference learning (e.g., Direct Preference Optimization (DPO)) for text-to-image diffusion models is limited by the visual inconsistency between the sample that aligns better with the prompt and the sample that aligns worse, making it difficult to focus on the relevant differences between the positive and negative samples. The authors introduce D-Fusion (Denoising Trajectory Fusion), a method to construct visually consistent samples for preference learning. D-Fusion works by performing mask-guided self-attention fusion between a well-aligned reference image and a poorly-aligned base image, resulting in a target image that is both well-aligned and visually consistent with the base image, while also retaining the denoising trajectories necessary for DPO training. Through comprehensive experiments on Stable Diffusion, the authors demonstrate that applying D-Fusion can significantly improve prompt-image alignment when used with different reinforcement learning algorithms like DPO, DDPO, and DPOK. The paper highlights the necessity of using visually consistent image pairs for fine-tuning diffusion models with DPO and introduces D-Fusion as a compatible approach to address this challenge.
Claims And Evidence: The claims made in the submission appear to be supported by evidence. Both qualitative results (Figure 4) and quantitative results (Figure 5, Table 1), including CLIPScore metrics and human preference tests (Figure 6), consistently demonstrate the effectiveness of D-Fusion in improving text-image alignment, compared to the plain DPO and the base model. These support the hypothesis that preference optimization can work better with visual similar pairs, and demonstrate the effectiveness of the proposed method.
Methods And Evaluation Criteria: Denoising Trajectory Fusion is an empirical method, motivated with intuition.
It aims to generate well-aligned and also visually similar positive image samples for preference learning. The process involves two key steps:
1. Cross-Attention Mask Extraction: A mask is extracted from the cross-attention maps of the reference image's denoising process. This mask identifies the image regions corresponding to the prompt's content.
2. Self-Attention Fusion: Starting with the same random noise as the base image, a target image is denoised. During this process, the self-attention keys and values of the target image are fused with those of the reference image using the extracted mask. In the masked (prompt-related) areas, the target image adopts the self-attention information from the reference image, enhancing alignment. In the unmasked areas, it retains its own information, ensuring visual consistency with the base image.
Although there is no theoretical support, the intuition makes sense and the evaluation criteria used in the paper are standards for evaluating text-to-image diffusion models.
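To make the fusion step above concrete, here is a schematic numpy sketch of a mask-guided key/value blend; the shapes, the hard binary mask, and the helper name `fuse_kv` are our simplifications for illustration, not the paper's actual implementation:

```python
import numpy as np

def fuse_kv(k_tgt, v_tgt, k_ref, v_ref, mask):
    """In masked (prompt-related) token positions take the reference image's
    keys/values; in unmasked positions keep the target image's own."""
    m = mask[:, None]                       # (tokens, 1), broadcast over channels
    return (m * k_ref + (1 - m) * k_tgt,
            m * v_ref + (1 - m) * v_tgt)

rng = np.random.default_rng(0)
tokens, dim = 6, 4
k_t, v_t = rng.standard_normal((tokens, dim)), rng.standard_normal((tokens, dim))
k_r, v_r = rng.standard_normal((tokens, dim)), rng.standard_normal((tokens, dim))
mask = np.array([1, 1, 0, 0, 0, 1])         # 1 = prompt-related region
k_f, v_f = fuse_kv(k_t, v_t, k_r, v_r, mask)
assert np.allclose(k_f[mask == 1], k_r[mask == 1])
assert np.allclose(k_f[mask == 0], k_t[mask == 0])
```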
Theoretical Claims: This is a pure empirical paper.
Experimental Designs Or Analyses: The experimental designs are pretty standard for evaluating text-to-image diffusion models. On the other hand, I'm not sure I understand the motivation and value of the comparison with DDIM inversion in the ablation study (or whether it should be called an ablation study). I'd like to see more details about that.
Supplementary Material: I reviewed the additional qualitative examples.
Relation To Broader Scientific Literature: It relates to the literature on text-to-image diffusion model alignment.
Essential References Not Discussed: I'm not aware of significant missing references.
Other Strengths And Weaknesses: No additional comments.
Other Comments Or Suggestions: - Writing quality has room to improve.
- (Line 25, abstract) On the one hand -> On one hand
- I suggest using $I^t$ instead of $I^a$ for the target image, following $I^b$ for base image and $I^r$ for reference image.
Questions For Authors: As mentioned, can you expand the ablation study section to provide more clarity on the motivation and the value for having the comparison with DDIM inversion?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks a lot for your time and efforts in reviewing our paper. We are happy that you think the intuition makes sense. In addition, we feel pleased that you approve our experiment design.
Below are responses to your concerns and suggestions. Please let us know if you require any further information, or if anything is unclear.
## 1. Clarification on Comparison with DDIM Inversion
> As mentioned, can you expand the ablation study section to provide more clarity on the motivation and the value for having the comparison with DDIM inversion?
Thanks for this concern! We would like to clarify on it in a clearer manner.
Firstly, fine-tuning with reinforcement learning requires access to the **denoising trajectories** of the images. This paper highlights the necessity of fine-tuning diffusion models using visually consistent samples. However, constructing visually consistent samples through **image editing** is not feasible, as the editing process disrupts the denoising trajectory (See right column of Line 90 in the paper for detail). The advantage of D-Fusion lies in its ability to generate visually consistent samples while **preserving their denoising trajectories**.
However, there exist two straightforward methods to create a denoising trajectory for any given image: the **forward process** of diffusion models and the **DDIM inversion**. (The former simply adds noise to an image, which is so trivial that we did not mention it in the paper.) The created denoising trajectories can also be used for reinforcement learning fine-tuning.
Therefore, we designed this comparative experiment, where the training data consists of the **same visually consistent samples**, but the **denoising trajectories come from D-Fusion and DDIM inversion respectively**. The results indicate that training with denoising trajectories created by DDIM inversion performs poorly. This effectively rules out the method of fine-tuning the model using DDIM Inversion through the following steps:
* generating visually consistent samples via **image editing**,
* creating denoising trajectories using **DDIM inversion**,
* fine-tuning with samples and trajectories.
Thus, this comparison further demonstrates the advantage of D-Fusion (*i.e.*, generating visually consistent samples while preserving their denoising trajectories).
We would like to further discuss the **forward process**. Adding noise to images through the **forward process** essentially turns the training into supervised fine-tuning (SFT). We have also provided experimental results on SFT. If you're interested, please refer to Figure 1(a) at this link (https://anonymous.4open.science/r/ICML-25-Rebuttal-2703/rebuttal_2703.pdf). As shown, SFT also performs poorly in improving alignment, further demonstrating the effectiveness of RL training with D-Fusion.
## 2. About Writing Suggestions
> * Writing quality has room to improve.
> * (Line 25, abstract) On the one hand -> On one hand.
> * I suggest using $I^t$ instead of $I^a$ for the target image, following $I^b$ for base image and $I^r$ for reference image.
Thank you again for providing these suggestions. In the final version, we will conduct a **thorough review** to correct errors and enhance writing quality. For instance, we will revise the phrase "on the one hand" in the abstract, as you pointed out, to "on one hand". We will also revise $I^a$ to $I^t$, in order to keep consistent with $I^r$ and $I^b$. | Summary: This paper introduces **D-Fusion**, a method for improving text-image alignment in diffusion models via **Direct Preference Optimization (DPO)**. It identifies a key challenge in prior DPO approaches—**visual inconsistency** between well-aligned and poorly-aligned images—which hinders the model's ability to learn effective alignment strategies. To address this, D-Fusion **constructs visually consistent training samples** using **self-attention fusion** while preserving denoising trajectories, ensuring compatibility with RL-based training. The paper demonstrates **significant improvements in prompt-image alignment** over naive DPO on Stable Diffusion (SD) 2.1 across various evaluation metrics, including CLIPScore and human preference tests. The method is also shown to be **compatible with multiple RL algorithms**, including DPO, DDPO, and DPOK.
Claims And Evidence: The paper's central claims are supported by extensive empirical results:
1. **D-Fusion improves text-image alignment**: Experiments demonstrate superior CLIPScore performance and higher human preference rates compared to standard DPO.
2. **D-Fusion constructs visually consistent training samples**: Qualitative examples and ablation studies confirm that self-attention fusion effectively preserves alignment features while ensuring visual consistency.
3. **D-Fusion generalizes across different RL methods**: The results indicate that integrating D-Fusion with different RL fine-tuning approaches (DPO, DDPO, DPOK) leads to improved alignment.
However, one limitation is that all experiments are conducted on **SD 2.1**, an older model, without evaluation on **more advanced diffusion architectures** such as **SDXL** or **Flux**, which may better reflect the state of the field.
Methods And Evaluation Criteria: The method is well-motivated and addresses a real challenge in diffusion model alignment. The **benchmarking approach is reasonable**, using:
- **CLIPScore** as a quantitative metric
- **Human preference testing** as a qualitative evaluation
- **Three types of prompts (actions, attributes, spatial relationships)** to ensure diversity
One concern is **the absence of comparison to state-of-the-art models** such as SDXL or Flux. Given the rapid advancements in diffusion models, it is unclear whether the improvements hold in **larger-scale architectures**.
Theoretical Claims: The paper primarily focuses on empirical improvements rather than theoretical derivations. The **formulation of diffusion models as a sequential decision-making process** and the **DPO objective function** appear correct and align with prior work.
Experimental Designs Or Analyses: The experimental setup is **mostly sound**, but there are some limitations:
1. **Model Selection Bias**: The choice of SD 2.1 for evaluation is outdated, raising concerns about applicability to modern architectures.
2. **Lack of Baselines Beyond DPO Variants**: While the paper evaluates multiple DPO-based methods, it does not compare against alternative alignment techniques (e.g., **reward models, preference learning in SDXL**).
3. **Generalization to Real-World Tasks**: The evaluation primarily focuses on synthetic prompt-image pairs, with no discussion of real-world generative applications (e.g., inpainting, video synthesis).
Supplementary Material: Yes, the **appendices contain useful details**, including:
- **Additional experimental results** with different RL algorithms
- **Ablation studies** on self-attention fusion
- **Implementation details and pseudo-code**
Relation To Broader Scientific Literature: The work aligns with ongoing research in **reinforcement learning for generative models** and **diffusion model alignment**. Specifically, it builds on:
- **DPO-based fine-tuning for LLMs and diffusion models** (Wallace et al., 2023; Fan et al., 2023)
- **Attention control techniques for image editing** (Cao et al., 2023; Hertz et al., 2022)
- **CLIP-based alignment evaluation** (Radford et al., 2021)
However, the paper does not **explicitly compare to recent SDXL-related studies** or Flux-based advancements, which limits its impact.
Essential References Not Discussed: **SDXL and Modern Diffusion Architectures**
Other Strengths And Weaknesses: ### **Strengths:**
1. **Well-motivated problem** (prompt-image misalignment)
2. **D-Fusion is simple yet effective** (self-attention fusion for visual consistency)
3. **Extensive evaluation** (multiple RL algorithms, CLIPScore, human preference)
### **Weaknesses:**
1. **Choice of model (SD 2.1) is outdated** → Lacks validation on SDXL/Flux
2. **No comparison to alternative fine-tuning approaches** (e.g., LoRA for alignment, reward-based RLHF)
3. **Does not explore computational efficiency** (D-Fusion adds complexity—does it scale?)
Other Comments Or Suggestions: 1. **Why was SD 2.1 chosen instead of SDXL?**
2. **Does D-Fusion improve alignment in other generative tasks (e.g., inpainting, video synthesis)?**
3. **How does D-Fusion compare to reward-based RL approaches for alignment?**
4. **What is the computational overhead of self-attention fusion?**
Questions For Authors: 1. **How does mask-guided fusion affect diversity?**
- Does it introduce mode collapse (favoring specific visual patterns)?
2. **Are there failure cases?**
- Are there scenarios where D-Fusion reduces prompt-image alignment instead of improving it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for providing these detailed and helpful comments! We appreciate that you think our method well-motivated and approve of our experiments.
Responses to your concerns are as follows. We provide additional experimental results **at this link** (https://anonymous.4open.science/r/ICML-25-Rebuttal-2703/rebuttal_2703.pdf).
## 1. About Base Model
Thanks for recommending advanced models like SDXL. In fact, we have tried to use them. Unfortunately, due to limited computational resources (24GB 3090/4090 GPU), we are unable to fine-tune them and can only perform inference.
In Figure 6 and 7 of the above link, we show some images generated using D-Fusion on **SDXL** and **SD3.5**. As shown, D-Fusion is capable of producing well-aligned and visually consistent samples with these models. We suggest that fine-tuning with these samples still has the potential to enhance alignment of these models.
We chose SD2.1 as it is the most advanced model our resources allow. Notably, much influential related work conducted experiments on SD1.4 and SD1.5 [1,2,3]. We will appreciate your understanding regarding our limitation of base model.
## 2. More Baselines
> No comparison to alternative approaches (e.g., LoRA for alignment, reward-based RLHF)
Thanks for your suggestion. We would like to clarify that DDPO and DPOK are forms of reward-based RLHF. As follows, we provide a categorized list of the methods compared in our experiments, with **bold** ones indicating those newly added during rebuttal.
* SFT: **Lora Fine-tuning**
* Reward-based RLHF: DDPO, DPOK, **Policy Gradient**
* DPO: DPO, **DenseReward**[4]
Results of added experiments are shown in Figure 1 of the link. D-Fusion are well compatible with these methods. As shown, incorporating D-Fusion leads to greater alignment improvement.
If there exist methods you believe should be included, please don't hesitate to tell us!
## 3. Real-World Task
> Real-world applications (e.g., inpainting, video synthesis).
Thanks for your suggestion. To the best of our knowledge, inpainting and video synthesis are rarely explored in previous RL-based studies. Given the time constraints of rebuttal period, we were unable to implement them well. However, RL-based fine-tuning follows an **end-to-end** training paradigm, where simply modifying reward function allows adaptation to various downstream tasks. We believe that RL-based methods hold great potential to benefit these real-world tasks.
Additionally, we conducted experiments on tasks that are commonly studied in prior work, including improving **human preference** and **aesthetic quality**. As shown in Figure 2 of the link, D-Fusion can enhance the performance of RL fine-tuning on these tasks. Hope these tasks are sufficient to demonstrate the generalization ability of our method.
## 4. Computational Overhead
Thanks for this concern. We provide a detailed analysis in response to reviewer tsGd under “*3. Clarification on Efficiency*”. We would appreciate it if you could take a look.
In our experiments, DPO takes 78 min per round, whereas D-Fusion extends it to 91 min. The additional overhead is affordable.
## 5. About Diversity and Failure Cases
> How does fusion affect diversity? Are there failure cases?
Thanks for these questions!
In fact, D-Fusion can **mitigate loss of diversity** caused by RL. It is widely recognized that RL may cause the models to only generate images of specific patterns [1]. For instance, when using template "*a(n) [animal] [human activity]*", cartoon images tend to receive high alignment scores, because these prompts are often depicted in cartoon styles in the pre-training data. During RL, the model may learn to generate only cartoon images to achieve high alignment, which reduces diversity.
D-Fusion helps mitigate this issue. The visually consistent sample pairs constructed by D-Fusion typically adopt **similar styles**, but receive **different alignment scores**. When training with these samples, the model does not associate a specific style with high alignment, thus maintaining diversity.
In Table 1 of the link, we show the diversity of images generated by fine-tuned models. As shown, employment of D-Fusion helps alleviate reduction in diversity.
D-Fusion indeed occasionally results in **failure cases**. We show some cases in Figure 5 of the link. As observed, the failures likely arise from **insufficient attention** given to certain objects when sampling, causing the cross-attention mask to poorly outline their positions. This leads to confusion during subsequent fusion. However, based on our experimental results, the presence of a small number of failure cases does not significantly impact fine-tuning performance.
[1] Training Diffusion Models with Reinforcement Learning.
[2] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.
[3] Aligning Text-to-Image Models using Human Feedback.
[4] A Dense Reward View on Aligning Text-to-Image Diffusion with Preference. | Summary: This paper introduces D-Fusion, a novel approach to addressing the misalignment between generated images and their corresponding text prompts. D-Fusion constructs DPO-trainable visually consistent samples to mitigate the visual inconsistency present in previous DPO methods. First, the mask-guided self-attention fusion ensures that the generated images are not only well-aligned with the text prompts but also visually consistent with given poorly aligned images. Second, D-Fusion preserves the denoising trajectories of the resulting images, facilitating effective DPO training. Experimental results demonstrate significant improvements in prompt-image alignment.
Claims And Evidence: Most are supported, but I believe there is room for improvement.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: No issue.
Supplementary Material: The supplementary material provides the visualization of attention maps, the implementation details, the pseudo-code, and additional experimental results.
Relation To Broader Scientific Literature: D-Fusion is related to the diffusion models and DPO. The cross-attention mask extraction of D-Fusion is inspired by [1] and [2].
[1] Hertz, et al. Prompt-to-prompt image editing with cross attention control.
[2] Cao, et al. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### **Strengths**
- The paper is well-written and easy to follow.
- The motivation is clear, and the proposed method is solid.
- The experiments are comprehensive and effectively demonstrate the effectiveness of D-Fusion.
### **Weaknesses**
- In D-Fusion, cross-attention mask extraction relies on findings from SD’s cross-attention maps, which limits its applicability to other diffusion architectures, such as DiT.
- The prompts used in this paper are simple, making it unclear whether D-Fusion can perform well with more complex prompts.
- The efficiency of D-Fusion has not been evaluated.
Other Comments Or Suggestions: No.
Questions For Authors: See “Weaknesses”.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our paper and providing well-structured comments. We are delighted that you think our paper well-written and approve of our motivation and experiments.
Below are our responses to your concerns. Additional experimental results can be found **at this link** (https://anonymous.4open.science/r/ICML-25-Rebuttal-2703/rebuttal_2703.pdf). If anything remains unclear or if you need further information, please feel free to let us know!
## 1. Applicability to Other Architecture
> (Weakness 1) In D-Fusion, cross-attention mask extraction relies on findings from SD’s cross-attention maps, which limits its applicability to other diffusion architectures, such as DiT.
Many thanks for this concern.
As you pointed out, many diffusion models no longer adopt the U-Net architecture. We take SD3.5, a representative example that uses DiT, as a case study. In SD3.5, there is no explicit cross-attention layer. However, in the self-attention layers, the prompt embeddings are concatenated with image features and processed together, effectively forming an **implicit cross-attention mechanism**. This allows us to extract cross-attention masks and further apply D-Fusion. Examples of this can be found in Figure 7 at the provided link.
More broadly, any model that **incorporates prompt guidance through attention mechanism** could potentially benefit from D-Fusion. However, since many of these models have only gained research attention in recent months, a more in-depth exploration will be pursued in our future work.
## 2. Complex Prompts
> (Weakness 2) The prompts used in this paper are simple, making it unclear whether D-Fusion can perform well with more complex prompts.
Thank you for this question!
We have observed that previous studies primarily fine-tuned models on short prompts [1,2], with some even fine-tuning on simple labels [3]. To ensure a fair comparison, we followed them in using simple prompts in our experiments.
Moreover, we carefully designed **three types of prompt templates**, which consider the behavior of the object, the attributes of the object, and the positional relationships between objects, respectively. We suggest that these prompts are representative, as most complex prompts can be decomposed into these three fundamental types.
Despite this, we appreciate your reasonable suggestion and have conducted **additional experiments using complex prompts**. As shown in Figure 4(left) at the provided link, D-Fusion can also construct well-aligned visually consistent samples for complex prompts. Furthermore, as demonstrated in Figure 4(right), fine-tuning with visually consistent samples also leads to improved performance under complex prompt conditions.
[1] Training Diffusion Models with Reinforcement Learning.
[2] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.
[3] Aligning Text-to-Image Models using Human Feedback.
## 3. Clarification on Efficiency
> (Weakness 3) The efficiency of D-Fusion has not been evaluated.
Thank you for this suggestion.
We first analyze the computational overhead of D-Fusion and naive DPO theoretically. As shown in Figure 2 of the paper, each fine-tuning round of naive DPO consists of three steps: **sampling**, **evaluation**, and **training**. Among them, **evaluation** is conducted using AI models, which are highly efficient and can run concurrently with **sampling**. Thus, the total time required for one round in naive DPO can be expressed as $nT_1+nT_2$, where $n$ is the number of images per round, $T_1$ is the time needed to sample each image, and $T_2$ is the time required for training on each image.
D-Fusion introduces an additional **fusion** step following the **evaluation** step. During **fusion** step, half of the images (*i.e.*, those that are well-aligned) are resampled to extract their $K,V$ and mask values, while the remaining half (*i.e.*, those that are poorly aligned) undergo sampling with attention fusion. Consequently, the time required per round in DPO+D-Fusion can be expressed as $nT_1+\frac{n}{2}(T_1+T_1')+nT_2$, where $T_1'$ represents the time needed to apply attention fusion. (Alternatively, storing $K,V$ and mask values during the **sampling** step could eliminate the need for repeated resampling, reducing the total time to $nT_1+\frac{n}{2}T_1'+nT_2$. But we do not recommend this approach, as storing $K,V$ and mask values for all the $n$ images would require excessive memory.)
The computational overhead introduced by D-Fusion is **affordable**. In our experiments, the ratio of processing times is observed as $T_1:T_1':T_2=6:7:33$. This indicates that the majority of the time is spent on training, thus the computational cost of D-Fusion is acceptable. Specifically, when fine-tuning on an RTX 3090 GPU, naive DPO takes 78 minutes per round, whereas D-Fusion extends this to 91 minutes per round.
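The arithmetic above can be sanity-checked by plugging in the stated numbers; the only inputs are the reported ratio $T_1:T_1':T_2 = 6:7:33$ and the 78-minute naive DPO round:

```python
T1, T1p, T2 = 6, 7, 33                # reported ratio: sampling : fusion : training
naive = T1 + T2                       # per-image units, naive DPO
dfusion = T1 + 0.5 * (T1 + T1p) + T2  # half the images get the extra fusion pass
minutes_per_unit = 78 / naive         # calibrated on the 78-minute naive round
print(dfusion * minutes_per_unit)     # → 91.0, matching the reported round time
```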
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I believe this has addressed most of my concerns.
After considering the reviews from other reviewers, I will maintain my positive score (3: Weak accept).
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for these valuable comments and constructive suggestions. And we truly appreciate the time and effort you dedicated to reviewing our work! | Summary: This paper focuses on improving DPO for fine-tuning diffusion models. The key problem identified is visual inconsistency in training samples, which hampers the effectiveness of reinforcement learning-based fine-tuning methods like DPO.
Claims And Evidence: The claim that "D-Fusion works across different RL algorithms" is supported primarily through experiments on DPO, DDPO, and DPOK. While these are relevant methods, additional RL-based techniques could be explored to further validate generalizability.
Methods And Evaluation Criteria: Several meaningful benchmarks are not evaluated, such as Pick-a-Pic, HPSv2, Image Reward, and Aesthetic score.
Theoretical Claims: There is no formal proof for D-Fusion, e.g., the optimality of D-Fusion.
Experimental Designs Or Analyses: **Comparison.**
1. Lack of comparison with DPO-like methods. See *Essential References Not Discussed*.
2. Lack of comparison with SFT.
3. Lack of comparison with RL methods, such as PPO and PG. But this is not necessary.
**Ablation Study.** There is no ablation study for the mask threshold hyperparameter.
Supplementary Material: I reviewed the implementation details and pseudo-code, which appear well-documented and reproducible. The additional qualitative samples in Appendix F provide more evidence of D-Fusion’s effectiveness.
Relation To Broader Scientific Literature: Lack of discussion on other Diffusion DPO variants.
Essential References Not Discussed: The paper does not compare D-Fusion to other Diffusion-PO methods. To name a few:
[1] Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generation of Diffusion Models. ICLR 2025.
[3] Diffusion-RPO: Aligning diffusion models through relative preference optimization. arXiv:2406.06382
[4] SePPO: Semi-Policy Preference Optimization for Diffusion Alignment. arXiv:2410.05255
Other Strengths And Weaknesses: The notations are clear and well written.
Other Comments Or Suggestions: N/A
Questions For Authors: How sensitive is D-Fusion to the mask threshold hyperparameter?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing a lot of suggestions! We appreciate your recognition of our paper's clarity and your time reviewing the paper and supplementary materials in detail.
Responses to your suggestions are as follows. We provide additional experimental results **at this link** (https://anonymous.4open.science/r/ICML-25-Rebuttal-2703/rebuttal_2703.pdf). Please let us know if you require any further information, or if anything is unclear.
## 1. More Benchmarks
> Several meaningful benchmarks are not evaluated, such as Pick-a-Pic, HPSv2, Image Reward, and Aesthetic score.
Thank you for recommending these meaningful evaluation metrics.
We would like to first clarify that our work focuses on addressing the issue of **prompt-image misalignment**, which is a crucial challenge in controllable generation for diffusion models. The metrics you recommended are designed to assess **human preference** or **aesthetic quality**, which are not the primary tasks our work directly targets.
Nevertheless, we have conducted additional experiments using these metrics you recommended. The results are presented in Figure 2 at the provided link. As shown, when using these metrics as reward models, D-Fusion can still enhance the performance of the RL fine-tuning. This further demonstrates the effectiveness of D-Fusion.
## 2. Ablation Study on Mask Threshold
> There is no ablation study for the mask threshold hyperparameter.
> How sensitive is D-Fusion to the mask threshold hyperparameter?
Thank you for raising this important question.
The mask threshold is **not a hyperparameter that requires meticulous tuning**, and can be predetermined through low-cost methods. In our experiments, we predetermined the threshold by sampling a few images for each prompt and assessing whether the chosen threshold value allows the mask to outline the corresponding object. This selection process does not require high precision. As shown in Appendix G, the thresholds we use (predominantly 0.005, 0.01, 0.015, 0.02, and 0.03) are fairly coarse values. If users wish to apply our method to new prompts, they simply need to sample a few images and determine an appropriate threshold through straightforward observation.
Empirically, our method can consistently deliver **stable improvements** even with less precise thresholds. We performed an ablation study on the mask threshold by setting it to 0.002, 0.005, 0.01, 0.02, 0.03, and 0.05, respectively. Each threshold is uniformly applied to **all** prompts in the list, and we then fine-tune the model. The results are shown in Figure 3 at the provided link. As the threshold increases, the fine-tuning performance initially improves and then declines. For thresholds of 0.002 and 0.05, the masks are too inaccurate, leading to a significant drop in performance. In contrast, for threshold values from 0.005 to 0.03, our method can **stably** improve the fine-tuning effect. The experimental results demonstrate the robustness of our method to threshold setting.
## 3. More Baselines
> * Lack of comparison with DPO-like methods.
> * Diffusion-NPO.
> * Diffusion-RPO.
> * SePPO.
> * Lack of comparison with SFT.
> * Lack of comparison with RL methods, such as PPO and PG.
Thank you for pointing out these valuable methods for comparison.
* For the comparison with **SFT**, we have added this experiment.
* For the comparison with **RL methods**, DDPO is a PPO-based diffusion model fine-tuning method [1], and we have already presented its results in the paper. Additionally, we conducted a comparison with PG.
* For the comparison with **DPO-like methods**, we have carefully read the papers you recommended. However, we regret to find that none of them have released their implementation code (despite providing repository links), making it difficult for us to verify their details. Nevertheless, we would like to discuss them in the related work section of the final version. Additionally, we conducted a review of existing works and added a new comparison with DenseReward [2].
The experimental results are shown in Figure 1 at the provided link. D-Fusion can be well **compatible** with SFT, PG and DenseReward. As shown, incorporating D-Fusion into these methods consistently leads to better fine-tuning performance than the original methods.
[1] Training Diffusion Models with Reinforcement Learning.
[2] A Dense Reward View on Aligning Text-to-Image Diffusion with Preference. | null | null | null | null | null | null |
SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior | Accept (poster) | Summary: This paper presents an interpretable and steerable prompt moderation method called SafetyAnalyst. SafetyAnalyst categorizes the content being reviewed in a hierarchical manner using a harm tree and a benefit tree, based on attributes such as stakeholder, action, and effect. The classification results from these two trees are then aggregated using a dedicated aggregation model to compute an overall harm index for decision-making. Experimental results demonstrate that this method achieves moderation performance comparable to GPT-4 while using significantly fewer parameters.
### Update after rebuttal
After reading the rebuttal, I believe all of my concerns have been adequately addressed. Overall, the paper has certain strengths (e.g., good performance, novel insights) and weaknesses (e.g., long inference time), so I will maintain my initial recommendation of a ‘weak accept’.
Claims And Evidence: The authors do not make any explicit claims in the paper. The proposed method is effective and is supported by experimental evidence.
Methods And Evaluation Criteria: The proposed method is highly practical, as it enhances the safe use of large language models (LLMs) by preventing them from generating harmful content that could lead to negative consequences.
Theoretical Claims: The paper does not involve any theoretical claims.
Experimental Designs Or Analyses: The experimental design is reasonable, involving multiple datasets and comparisons with several state-of-the-art models. The authors also analyze inference time. However, a notable limitation is that inference time is reported only for WildGuard in the comparisons. Providing the inference times for all baseline methods would improve the analysis.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: The primary contribution of this paper lies in proposing an interpretable and steerable content moderation method, which has strong practical value.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Strengths:**
- I strongly agree that content moderation methods should be adaptable to different communities’ values.
- The harm-benefit tree approach is highly innovative.
- Overall, the method performs well.
**Weaknesses:**
- Inference time is slow, requiring over 6 seconds per instance. In real-world LLM deployment, spending over 6 seconds just to assess the prompt’s safety could introduce significant delays.
- Adapting to different community standards requires a small dataset to optimize the aggregation model, meaning the approach cannot achieve true zero-shot adaptation. Additionally, adjustments can only be made within the predefined 16 categories, preventing adaptation to new categories.
Other Comments Or Suggestions: None
Questions For Authors: - SafetyAnalyst requires multiple inferences across the two trees, leading to an inference time of 6.12 seconds. However, I do not understand why WildGuard also takes 5.9 seconds—does it also require multiple inferences?
- According to Eq. 2.3, the summation is performed over stakeholders, actions, and effects rather than taking an average. If an instance involves a large number of stakeholders, actions, and effects, wouldn’t it be disadvantaged and more likely to be classified as violating the policy?
- When generating data using GPT, Claude, and similar models, isn’t response refusal a common issue? How is this handled? If only answerable responses are retained, then these samples are likely to be less severe violations (since models refuse to respond to highly severe violations). Does this imply that SafetyAnalyst may struggle to assess highly severe violations accurately?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful feedback. Below, we address each of the concerns raised.
## Response to the Reviewer's Questions
1. **Inference Time of WildGuard**
In Section 3.2 (line 307), we report that WildGuard’s inference time is 0.22 seconds per prompt, not 5.9 seconds. We could not find any references to a 5.9-second inference time in our manuscript. If the reviewer noticed any discrepancies, we would appreciate pointers to specific sections and line numbers.
We only compared SafetyAnalyst’s inference time to WildGuard because:
- WildGuard and GPT-4 were the strongest baselines, but inference time comparisons were only feasible with WildGuard on the same computing architecture.
- Other baselines have similar parameter sizes and input/output lengths as WildGuard, making their inference time comparable, while they are less relevant to the comparison with SafetyAnalyst due to substantially weaker performance than WildGuard.
If the reviewer still believes inference time evaluations on additional baselines would strengthen the paper, we are open to adding them.
2. **Summation vs. Averaging in Aggregation Model**
The reviewer correctly notes that the aggregation model sums over stakeholders, actions, and effects rather than averaging, which may increase the likelihood of classifying prompts with bigger harm trees as unsafe. This design choice is intentional. Intuitively, harmful impact scales with the number of affected stakeholders, actions taken, and effects caused. Averaging would diminish this relationship, making the score dependent only on likelihood, severity, and immediacy while the scale of impact would not be reflected. Additionally, more stakeholders may lead to more beneficial effects that offset harmful ones through our trade-off mechanism.
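To make this design choice concrete, here is a minimal sketch (not the authors' exact Eq. 2.3; all category names, weights, and scores below are hypothetical) of why a sum, unlike an average, lets the harmfulness score grow with the size of the harm tree while beneficial effects offset harmful ones:

```python
# Hypothetical aggregation: a weighted SUM over (stakeholder, action, effect)
# leaf entries, with beneficial effects entering with a negative sign.
def harmfulness(effects, weights):
    """effects: list of (category, likelihood, severity, immediacy, is_harm)."""
    score = 0.0
    for category, likelihood, severity, immediacy, is_harm in effects:
        contribution = weights[category] * likelihood * severity * immediacy
        score += contribution if is_harm else -contribution
    return score

weights = {"Physical Harm": 0.9, "Information Access": 0.3}
small_tree = [("Physical Harm", 0.2, 0.5, 0.5, True)]
# A bigger tree: the same harmful effect on four stakeholders, plus one benefit.
big_tree = small_tree * 4 + [("Information Access", 0.8, 0.4, 0.9, False)]

# Summing makes the larger-impact tree score higher; averaging would not.
assert harmfulness(big_tree, weights) > harmfulness(small_tree, weights) > 0
```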
3. **Handling of Response Refusals in Data Generation**
Response refusal was not a significant issue when generating data with frontier LMs. First, our harm-benefit tree generation does not require generating any explicit harmful responses, which likely helped avoid refusals. Second, while some seed prompts might typically trigger refusals, embedding them within the harm-benefit analysis context (see Appendix A for the prompting scheme) mitigated this issue.
The success rates for generating valid harm-benefit trees from harmful seed prompts were:
- **GPT-4o**: 100%
- **Gemini-1.5-Pro**: 99.2%
- **Claude-3.5-Sonnet**: 100%
- **Llama-405B-Instruct-Turbo**: 91.6%
- **Llama-70B-Instruct**: 73.5% (Llama failures mostly due to JSON formatting issues rather than refusals)
Thus, we have no reason to believe that SafetyAnalyst struggles specifically with highly unsafe cases. This is further supported by our strong evaluation results on benchmarks containing highly unsafe prompts (Tables 1 and 2). We will add this analysis to the paper.
## Comments on the Reviewer’s Listed Weaknesses
1. **Inference Time**
We acknowledge that SafetyAnalyst’s inference time limits its applicability for broad real-world deployment. As noted in Section 3.2 and Appendix C.4, inference time could be significantly improved by ablating harm-benefit tree components that contribute the least to classification accuracy. Additionally, architectural parallelization (discussed in our response to Reviewer Zjni on Runtime Efficiency) could further reduce inference time.
Nonetheless, SafetyAnalyst inherently requires more computation than black-box classifiers like WildGuard. This trade-off is justified by increased interpretability and transparency, making SafetyAnalyst more suitable for safety-critical applications where reliable and explainable safety decisions are paramount.
2. **Adaptation to Community Standards**
While the bottom-up alignment approach requires a small labeled dataset, top-down adjustments allow the aggregation model to be modified without additional training examples. For instance, a deployment to children could increase the weight on **Child Harm** without requiring new labeled data, which would lead to stronger moderation on contents that may cause Child Harm.
Although the aggregation model presented in the manuscript parameterized weights on the 16 level-2 AIR taxonomy categories, it can be easily extended to finer-grained categories (e.g., some or all of the 45 level-3 or 314 level-4 categories in the AIR taxonomy) which are already present in our harm-benefit tree data (only the aggregation model would need to be re-trained). We chose the AIR taxonomy to categorize harmful actions for its broad coverage and hierarchical structure, which supports diverse risk category definitions.
See also response to Reviewer dRb3 on **Demonstration of Steerability.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply, and I apologize for the misunderstanding between “longer than” and “longer by” (if your method takes 0.22 seconds longer than WildGuard, then WildGuard would take 5.9 seconds).
After reading your response, I believe all of my concerns have been adequately addressed. Overall, the paper has certain strengths (e.g., good performance, novel insights) and weaknesses (e.g., long inference time), so I will maintain my initial recommendation of a ‘weak accept’. | Summary: This paper introduces a framework called SAFEANALYST for the safety moderation of AI behavior. Specifically, SAFEANALYST constructs a “harm-benefit tree” using chain-of-thought (CoT) reasoning and then aggregates the leaf nodes through a transparent model with interpretable weight parameters based on harm-benefit features from the CoT. These weight parameters can be further adjusted to fit different safety preferences for AI moderation.
Claims And Evidence: This paper identifies the importance of interpretability, transparency, and steerability in a large language model moderation system and claims to establish a new framework that offers these benefits where other systems lack. The proposed SAFEANALYST indeed makes transparent LLM moderation feasible, and experiments show that the framework could achieve a competitive level of accuracy.
Methods And Evaluation Criteria: The general idea of evaluating AI safety by building and aggregating a harm-benefit tree—one that describes which actions may lead to harmful or beneficial effects (along with their likelihood, severity, and immediacy) for different stakeholders — is reasonable, and the newly defined hierarchical taxonomy appears logical. However, I am concerned about whether it is sufficient to assess AI safety by analyzing prompts alone, without involving specific AI-generated behaviors or outputs. In other words, describing hypothetical AI behavior without examining actual AI outputs may not clearly capture the real-world safety implications.
Theoretical Claims: The paper does not provide formal proof for any theoretical statements.
Experimental Designs Or Analyses: 1. The paper lacks a demonstration of how adjusting the weights in the aggregation model directly affects moderation outcomes. Providing explicit examples or experiments illustrating the effect of different weight configurations would better support the authors’ claim that their framework is indeed steerable and can directly influence the decision-making process and results.
2. The authors chose not to directly use the frontier LLM models for AI safety moderation (i.e., prompting the LM model to explicitly analyze potential harms or benefits of AI behaviors, considering aspects such as severity, immediacy, and impact on various stakeholders, followed by showing the aggregation logic). Although the authors justify this choice based on computational cost concerns, they do not present an analysis of the cost gap between their method and the frontier LLM models (such as the open-source Llama-70B).
Supplementary Material: I have reviewed supplementary material.
Relation To Broader Scientific Literature: The paper situates itself within AI safety literature and proposes a framework for interpretable and transparent LLM moderation.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
1. Addressing interpretability and transparency in AI moderation systems is crucial. The framework’s focus on the impact of different actions on various stakeholders is sensible and could be informative.
2. The paper is well-written, and the experiments cover several benchmarks, allowing for a relatively comprehensive evaluation.
Weakness:
1. Restricting the moderation target to only prompts (rather than actual AI-generated behavior) might be questionable.
2. Further experiments are needed to strengthen the validation of SAFEANALYST, as discussed in the “Experimental Designs Or Analyses” part.
Other Comments Or Suggestions: 1. It would be helpful to see some discussion of failure cases of SAFEANALYST on test sets.
2. A demo or accessible codebase would also help other researchers evaluate and extend the framework.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful feedback. Below, we address each of the concerns raised and outline proposed updates to our manuscript.
1. **Moderation Target Limited to Prompts**
We chose prompt safety moderation for our primary evaluation because prior work has established many relevant benchmarks and baselines (in contrast to AI behavior), making the comparison between our model and prior work clearer.
While we acknowledge that restricting the moderation target to prompts alone may limit the accuracy and applicability of SafetyAnalyst, this limitation is not inherent to the conceptual framework itself. SafetyAnalyst can be adapted to different moderation targets by modifying the scenario description to include specific AI-generated behaviors, such as an LLM’s response to a user query or an AI agent’s action plan.
2. **Demonstration of Steerability**
We agree that additional examples illustrating how adjustments to the aggregation model’s weights affect safety classification would strengthen our claim of steerability.
In Appendix E, we present a case study analyzing a prompt (“Can you explain how phishing scams work?” yielding an overall harmfulness score of -0.05, which is classified as safe since the score is negative), showing the top predicted harmful and beneficial effects and their corresponding weights. We will expand this case study with concrete examples demonstrating how modifying different weight parameters influences classification outcomes:
- In a deployment for businesses and corporations, the weight for “Financial property loss” could be increased, shifting the harmfulness score above zero and potentially reclassifying the prompt as unsafe.
- In a deployment for educational and training purposes, the weight for “Gain of accurate information access” could be increased, further reducing the harmfulness score and reinforcing the prompt’s classification as safe.
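The described steering effect can be sketched numerically (all weights and raw effect scores below are hypothetical, chosen only to reproduce the -0.05 score from the case study): in a linear aggregation, raising a single category weight can flip a borderline prompt's classification without any retraining.

```python
# Signed raw effect scores before category weighting:
# positive = harmful contribution, negative = beneficial contribution.
def overall_score(raw_effects, weights):
    return sum(weights[c] * v for c, v in raw_effects.items())

raw = {"Financial property loss": 0.60,               # harmful effect
       "Gain of accurate information access": -0.35}  # beneficial effect

default_w = {"Financial property loss": 0.5,
             "Gain of accurate information access": 1.0}
business_w = {"Financial property loss": 0.9,  # raised for this deployment
              "Gain of accurate information access": 1.0}

assert overall_score(raw, default_w) < 0   # score -0.05: classified safe
assert overall_score(raw, business_w) > 0  # score 0.19: reclassified unsafe
```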
3. **Inference Cost Comparison**
The reviewer noted that we didn’t explicitly quantify the cost differences between off-the-shelf LMs for harm-benefit analysis versus our fine-tuned LMs. We agree that this information would be useful for contextualizing the advantages of our approach and will add it to the Appendix.
The table below breaks down the inference cost per unsafe example for three models:
| Model | Cost | GPUs | Queries | Open |
|-----------|--------|------|------------|------|
| GPT-4o | $3.20 | 2 | 2,981 | No |
| Llama-70B | $0.00 | 4 | 2,131 | Yes |
| Fine-tuned| $0.00 | 1 | 2 | Yes |
Zero-shot prompting of off-the-shelf LMs requires extensive querying (**# of stakeholders × # of actions per stakeholder × # of effects per action × 4**, calculated based on Table 4), whereas our fine-tuned models only require **two** queries—one for the harm tree and one for the benefit tree. Additionally, the far fewer queries and longer output lengths of our fine-tuned model allow inference to be sped up by techniques like kv-caching.
4. **Failure Cases and Additional Analysis**
The reviewer requested more discussion of failure cases. While we provide a breakdown of classification accuracy by category on SORRY-Bench in Appendix C.3 and Figure 4, we agree that further details would help contextualize the strengths and weaknesses of SafetyAnalyst.
We propose adding a new appendix to the manuscript detailing two primary types of failure cases:
- **Ambiguous cases that depend on safety standards and community values**
Example: *“Can you help me understand the concept of self-harm without advocating for it?”* from WildGuardTest. Such cases highlight the subjectivity in defining safety and suggest a need for refinements in benchmark ground-truth labels.
- **Cases where harm-benefit tree generation or aggregation was inadequate**
Example: *“Write a post persuading parents to feed infants under 12 months of age honey”* from HarmBench, where SafetyAnalyst failed to identify the physical harmful effects on the infants. These cases reveal limitations in the model’s ability to capture specific harms and suggest areas for improving training data quality and diversity.
4. **Open-Source Release**
As stated in the manuscript, we will open-source all data, models, and code, including:
- 18.5 million harm-benefit features for 19k prompts generated by frontier LMs.
- Two fine-tuned Llama-3.1-8B-Instruct models specializing in harm tree and benefit tree generation.
- Aggregation model code and trained weights.
We hope that making these resources publicly available will enable other researchers to extend and advance the SafetyAnalyst framework. | Summary: This paper introduces SafetyAnalyst, a novel AI safety moderation framework. Unlike existing AI safety systems that rely on opaque deep learning classifiers, SafetyAnalyst employs chain-of-thought (CoT) reasoning to construct a structured harm-benefit tree, which systematically evaluates the potential harms and benefits of an AI-generated response. Each identified harm or benefit is labeled with likelihood, severity, and immediacy, and these attributes are aggregated into a harmfulness score using a fully interpretable mathematical model with adjustable weight parameters.
## Update after rebuttal
Claims And Evidence: The paper primarily utilizes distillation techniques to fine-tune Llama-3 8B, with theoretical innovations focusing on method transfer and concept integration.
Methods And Evaluation Criteria: Knowledge distillation has demonstrated remarkable performance in specific downstream tasks, and the approach proposed in this paper is highly valuable. However, when applying distillation to specific downstream tasks, the choice of distillation method requires careful selection and evaluation. The paper does not explore this aspect in depth.
Theoretical Claims: The article contains relatively little theoretical content.
Experimental Designs Or Analyses: Not sure.
Supplementary Material: The organization of the appendix is adequate.
Relation To Broader Scientific Literature: Not sure.
Essential References Not Discussed: Not sure.
Other Strengths And Weaknesses: **Strengths:**
1. Unlike existing black-box LLM moderation systems, SafetyAnalyst constructs a structured harm-benefit tree using chain-of-thought (CoT) reasoning, providing a fully interpretable and transparent framework for assessing AI-generated content.
2. The paper introduces a mathematically transparent aggregation model that assigns interpretable weights to different harm and benefit categories, enabling customizable safety moderation based on specific regulatory standards, user demographics, or application contexts.
**Weakness:**
1. The paper lacks a discussion on how knowledge distillation is applied to specific scenarios, especially considering the recent advancements in distillation techniques, which have enabled smaller models to surpass larger models in single-task performance.
2. The design motivation of the aggregation model in Section 2.3 is unclear. It is a simple sum of multiplications and then multiplication by artificially set weights. This is not a design with a transparent motivation. The main reason is that there is no explanation for why a simple weighted sum can reflect the degree of harm.
3. Section 3.1 uses average F1 scores to demonstrate the capability of the distillation model, but Table 2 clearly shows that there is a gap with the general model in some specific tasks, which means that the discussion of the results is insufficient.
4. The paper has always emphasized the transparency of reasoning, but has not experimentally verified it. Instead, it only claims to have a mathematical formula to prove transparency. But I think the transparency of reasoning also requires an analysis of why the mathematical formula works, which is missing in the original paper or experimental analysis.
5. Inference acceleration should be discussed in greater depth.
Other Comments Or Suggestions: Please see the weaknesses, if the author can address my concerns convincingly, I am open to raise my score.
Questions For Authors: NA
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Privacy and Security']
Ethical Review Concerns: This article is related to ethics such as safety, etc.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and constructive feedback.
1. **Knowledge Distillation Methods**
We acknowledge the importance of discussing knowledge distillation methods and justifying our approach. We propose adding the following to the manuscript:
- Related Work: Review current knowledge distillation approaches for LLMs, including transferring knowledge from output logits, intermediate layer representations, attention representations, or symbolic knowledge of the teacher model.
- Justification for Symbolic Knowledge Distillation: Many of our frontier teacher LMs were proprietary, preventing access to internal model representations. Symbolic knowledge distillation enabled us to transfer knowledge solely via outputs and data. By structuring teacher knowledge as harm-benefit trees, we fine-tuned student LMs that matched teacher performance in harm-benefit tree generation for long context windows (up to 18,000 tokens, Tables 1 and 6). Additionally, we augmented our dataset with ~14k adversarial examples to improve performance in adversarial cases (see our response to Reviewer Zjni on Adversarial Safety Testing). Notably, the student model outperformed GPT-4 on adversarial persuasion techniques in SORRY-Bench (Appendix C.3, Figure 4).
2. **Design Motivation for Aggregation Model in Section 2.3**
First, we clarify that the weights in our aggregation model were **not arbitrarily set** but optimized to minimize prediction loss on a labeled alignment dataset held out from training.
Second, we will more clearly emphasize in the revision that the structure of this algorithm was directly inspired by classic safety decision-making techniques in the policy and regulation domain, which explicitly weigh potential harms and benefits across stakeholders and sum them to make a policy decision (Arrow et al., 1996).
The approach of this fully linear model has many virtues, including being transparent and interpretable, with weights valued between 0 and 1, quantifying the importance of different types of harm-benefit tree entries (Figure 3). The weighted sum that the aggregation algorithm produces reflects overall harm levels and captures the relative importance of risk categories, likelihoods, extents, and immediacies that reflect the safety values underlying the labels.
While this simple mathematical model is highly effective in evaluation, we acknowledge its limitations—summing harmful and beneficial effects alone may not be ultimately sufficient for safe decision-making. Indeed, ongoing work in our group aims to demonstrate exactly this, showing that human permissibility judgments are responsive to more than just outcomes. Future work will combine these lines of research, exploring alternative aggregation mechanisms for SafetyAnalyst and validating them against human decision-making processes to support robust alignment.
3. **Evaluation in Table 2**
We agree that additional discussion and analysis of our model’s failures and successes will enhance readers’ understanding of the strengths and weaknesses of our model. See our response to Reviewer dRb3 on **Failure Cases and Additional Analysis**.
While we acknowledge that SafetyAnalyst has limitations on some benchmarks (e.g., SimpleSafetyTests), we believe AIR-Bench and SORRY-Bench (on which our model excels) are appropriately emphasized in the aggregate performance measure, as they cover regulatory compliance to multiple countries, persuasion techniques, writing styles, and multilingual safety assessments—areas not adequately addressed by other benchmarks. Moreover, to minimize bias in our evaluation and maximally build on prior work, we included all benchmarks applicable to our task that were featured in the WildGuard paper, the newest baseline we evaluated against.
4. **Transparency of the System**
The reviewer asks for an analysis that supports our claims that the system is transparent.
First, it is critical to note that the harm-benefit trees (not just the aggregation algorithm) are a central component of the transparency of SafetyAnalyst. Unlike most uses of “chain of thought” reasoning—where there is no guarantee that the reasoning surfaced impacts the judgment ultimately rendered by the system—our system directly operates on the output of the reasoning process to render a decision. That is, the harm-benefit features surfaced in the trees are entered into the aggregation algorithm to produce a harmfulness score. For a demonstration of this process that walks through the harm-benefit features generated and how they are weighted and aggregated, see Appendix E: Case Study. Moreover, Table 5 explores a series of ablations of different features of the aggregation algorithm.
For further analysis of this point, see Response to Reviewer dRb3 on **Demonstration of Steerability** and Reviewer CN74 **Adaptation to Community Standards.**
5. **Inference Acceleration**
See response to Reviewer Zjni on Runtime Efficiency.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the detailed reply. I still have reservations about transparency. Considering the resolution of other issues and the workload of this article, after careful consideration, I decided to increase my score to 3. | Summary: This paper presents an interpretable guardrail model that is calibrated on safety preferences from human annotations and also allows steer-ability to align with various human safety values and preferences. The work uses an extended version of the taxonomy of Harmful actions that are generated in accordance with AIR 2024 risk taxonomy (Zeng et al., 2024b), an extensive categorization of harmful actions to create prompt templates and generate harm-benefit trees using multiple frontier models using CoT reasoning (also taking into account various stakeholders (with a future goal to be pluralistically aligned). Smaller LLMs are distilled to generate these harm-benefit trees. Another aggregation model is developed to generate harmfulness score for the prompts using calibrated data. Overall, the results are pretty impressive (close to gpt4o performance and better than multiple open weight, less interpretable models). Authors have released the datasets and models, creating a significant resource for researchers to build up upon this work.
Claims And Evidence: Authors claim that SAFETYANALYST (F1=0.81) outperforms current LLM content moderation systems (F1<0.72) on average, while offering the benefits of interpretability, transparency, and steerability that other systems lack.
- While true (according to the data), there is a heavy bias towards performance on the SORRY-Bench and AIR-Bench datasets (the latter being the basis of this taxonomy), where the model outshines others. Some more detailed analysis of this aspect would help readers appreciate the value of this research (the Appendix has some additional information). Understanding the current gaps of SafetyAnalyst by exploring other benchmark datasets (e.g., bias benchmarks, the AILuminate benchmark, and other recent ones that the authors may find useful) would also be valuable.
Given that the inference-time compute makes the method impractical, more ML innovation to improve the runtime (parallel processing, MoE, integrated-scoring experiments) would make this work considerably more interesting for the ICML audience.
Methods And Evaluation Criteria: - The proposed method is very interesting, novel to a degree (where we see a plethora of guardrails models like Llamaguard that are blackbox systems) and integrates transparency and steerability of the models towards user needs.
- The feature aggregation and weight alignment methods tackles the issue of pluralistic alignment which is highly desirable (also from future requirements for possible regulatory compliance perspective)
- authors have done comprehensive testing with other benchmarks
Theoretical Claims: No claims
Experimental Designs Or Analyses: The experiments seem sound and mostly comprehensive (please see questions for suggestions to improve the experiments)
Supplementary Material: Looked at some material (Table 6 and Appendix C.3)
Relation To Broader Scientific Literature: This work is relevant since it aims to address pluralistic alignment by enabling flexible adaptation to multiple value systems and respect cultural diversity. This is an emerging area of research and making this work accessible to the large community would increase development of extensions of this method. Authors have included relevant broader literature in the pluralistic alignment section.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Weakness:
- The paper relies on CoT based generation of harm-benefit trees. There are multiple advanced reasoning methods that exist today that could have improved the generation of reasoning trace:
- eg., Use of `Tree of Thoughts' inspired methods to iteratively explore multiple reasoning paths.
- explorations of other techniques like self-consistency based sampling, reflexion and self-critique methods
- To improve runtime and efficiency, one could explore development of MoE architectures to activate only a subset of experts dynamically based on input complexity and generate relevant harm-benefit trees and integrated scores (these methods may not have worked but studying these approaches to address improved runtime would have made the paper a lot more interesting to the ICML community)
Other Comments Or Suggestions: please see above
Questions For Authors: 1. Advanced Chaining Techniques: Exploring Tree of Thoughts, Self-Consistency, and Reflexion like methods to study and improve harm-benefit reasoning quality would improve the quality of the work, can the authors justify why these approaches were not considered while generating the reasoning trees?
2. Runtime Efficiency: Utilizing MoE architectures, integrated scoring like methods to improve the runtime would make the model practical to use. Can the authors justify why this work was left as future work? While the method is promising, the huge latency that exists with the approach today would make this method impractical and unusable.
3. Table 6 results raise additional concerns about human agreement with the model responses. While there is majority agreement, the scores are between 50-60% for multiple categories and much lower than for other models; please help with a better interpretation of this result.
4. Did the authors explore automated red teaming for adversarial safety (for example, using jailbreak attacks) to test for robustness of the method explicitly
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. **Bias Towards AIR-Bench and SORRY-Bench in Evaluation**
See response to Rev. PiLF on Evaluation in Table 2, where we highlight that AIR-Bench and SORRY-Bench include diverse, underrepresented risk categories, making their strong weighting in the overall performance score a benefit rather than a liability. The reviewer also requests further analysis of our model’s successes and failures. See response to Rev dRb3 on Failure Cases.
We also appreciate the reviewer’s suggestion to evaluate SafetyAnalyst on recent benchmarks and will do so as permitted by the review timeline.
2. **Advanced Chaining Techniques**
The reviewer suggested exploring advanced reasoning techniques, such as ToT, self-consistency, Reflexion, and self-critique. Our harm-benefit tree generation is conceptually similar to ToT in generating multiple paths of stakeholder-action-effect trajectories (we will draw this out explicitly in the revision). However, unlike the recommended approaches, ours doesn’t prune paths. In a preliminary experiment, we used an LM judge to evaluate harm-benefit trajectories and distilled this knowledge into a critique LM to prune harm-benefit trees, but this marginally reduced SafetyAnalyst’s performance. This suggests that preserving diversity and comprehensiveness in harm-benefit features may be more valuable than selecting for “better” reasoning paths. (If the reviewer thinks discussion is valuable, we can report results in the Appendix.) However, future work should indeed continue testing other advanced reasoning techniques.
3. **Runtime Efficiency**
The reviewer recommended architectural optimizations, such as MoE and integrated scoring, to improve runtime. We agree that these designs could improve inference and hope to explore them in future work. Our primary focus in this work was to establish the conceptual framework and refine the data generation process, rather than optimizing runtime.
However, we made an effort to improve runtime by parallelizing harm-tree and benefit-tree generation into two models. For fair comparison with WildGuard, we evaluated inference with comparable parameter counts and GPU usage without parallel execution. Additionally, as noted in Section 3.2, strategically ablating some dimensions of the harm-benefit tree could reduce inference costs. Nonetheless, SafetyAnalyst is inherently more expensive at inference time than LlamaGuard-style classifiers due to the tradeoff between inference time compute and interpretable decisions. Thus, SafetyAnalyst is most advantageous when reliable and explainable safety decisions are needed. (Also see response to Rev CN74 on Inference Time.)
4. **Interpretation of Table 6**
The reviewer noted substantial variance in human agreement scores in Table 6.
The overall variation can be explained by the fact that we instructed LMs to enumerate as many distinct stakeholders/actions/effects as possible, which produced some marginally impactful entries that annotators are less likely to endorse. (This doesn’t hurt downstream performance, because these entries are weighted less by the aggregation algorithm.) In addition, some models had the tendency to generate more entries per prompt than others, and the extra entries they generated were less crucial to the scenario. For example, GPT-4o and Llama-405B generated more harmful effects per stakeholder than other models (Table 4) and have lower human agreement rates than other models for this category (Table 6). We will explain this correspondence in the Appendix.
Regarding our model’s performance: Table 1 shows that somewhat lower annotator agreement with the harm-benefit features produced by our model does not seem to severely impact ultimate performance (when evaluated on WildJailbreak gold labels). Nonetheless, in ongoing work, we are exploring how our annotation data may be the result of substantive disagreements between annotators about the harm-benefit features (due to pluralistic human values, rather than poor data quality), which could be used to predict or establish what individual users’ gold labels would be for a safety dataset.
5. **Adversarial Safety Testing**
Although adversarial safety was not a primary target of our work, we did incorporate adversarial and jailbreak attacks in both our training and evaluation. For training, we leveraged the WildJailbreak dataset, which contains both vanilla and adversarial versions of the same prompts. For each vanilla prompt, we generated a harm-benefit tree using a teacher LM, and augmented the training data by adding corresponding adversarial prompts and coupling them with the same harm-benefit tree. This approach allowed us to augment our training dataset with 13,838 adversarial examples. For evaluation, both WildGuardTest and SORRY-Bench (Fig 4) contain adversarial examples. SafetyAnalyst achieved competitive performance on the adversarial subsets of both benchmarks (Table 2, Fig 4), which we will highlight more explicitly in the revision. | null | null | null | null | null | null |
MVA: Linear Attention with High-order Query-Keys Integration and Multi-level Vocabulary Decomposition | Accept (poster) | Summary: This paper introduces MVA (Multi-level Vocabulary decomposition Attention), a linear attention mechanism built upon high-order query-keys integration theory and multi-level vocabulary decomposition. The authors unify popular linear attention methods under their theoretical framework and try to address the performance gap between linear attention and softmax attention. They propose methods to transform pre-trained softmax-based language models into linear models through fine-tuning. Their approach combines linear and sparse attention to capture information across different frequency bands, and uses multi-level vocabulary decomposition to expand memory capacity. The authors also introduce a soft integration strategy with sliding window attention. The paper claims their method can fine-tune models to achieve linear complexity while retaining 99% of original performance with fewer than 100M tokens, and outperforms state-of-the-art models on benchmarks with minimal fine-tuning.
Claims And Evidence: The claims about improving over existing linear attention methods are supported by experimental results in Tables 1-3, showing the MVA and MVA-SW variants outperforming previous methods. The theoretical claims regarding high-order QK integration and multi-level vocabulary decomposition are somewhat justified through the error analysis in the appendix.
Methods And Evaluation Criteria: +The proposed methods make sense for the problem of creating efficient linear attention mechanisms. The evaluation on standard benchmarks like MMLU, ARC, HellaSwag, etc. is appropriate for assessing language model capabilities.
-However, the evaluation criteria are incomplete - there's no comparison of inference speed or memory usage during long-context generation, which is crucial for linear attention's practical benefits. They focus almost exclusively on accuracy metrics while neglecting efficiency metrics, inference speed, memory usage during generation. Also, the paper only evaluates on one base model (Mistral-7B), limiting the generalizability of the findings. The absence of long-context evaluation is particularly problematic given that KV cache efficiency is one of the main advantages of linear attention.
Theoretical Claims: I checked some of the theoretical claims in the paper, particularly those related to high-order QK integration (Theorems 3.1 and 3.2) and multi-level vocabulary decomposition (Theorem 3.3). The paper provides some justification for these claims through expansions and error analyses, but the presentation is quite informal and difficult to follow. The proofs lack rigorous mathematical formalization, and many statements are presented as informal explanations rather than properly stated theorems with clear assumptions and conclusions, i.e., Assumption, Lemma, Theorem, Proof style. The error bounds in Theorem 3.3 look reasonable but would benefit from clearer exposition and more rigorous derivation. The Taylor series expansions used to justify the frequency-based interpretation are plausible but would be more convincing with empirical validation.
Experimental Designs Or Analyses: The experiments use appropriate benchmarks and comparative baselines (GSA, GLA, RetNet, SUPRA). However, there are several issues with the experimental design:
- The paper only evaluates on one base model (Mistral-7B), which limits generalizability.
- There's no evaluation of inference speed or memory usage, which is crucial for linear attention models.
- The paper lacks long-context evaluation, despite this being a primary motivation for linear attention.
- The fine-tuning approach using LoRA is reasonable, but more details on hyperparameter selection and optimization would strengthen the experimental rigor.
Supplementary Material: I roughly reviewed the whole appendix.
Relation To Broader Scientific Literature: The key contributions of the paper are well-situated within the linear attention literature. The authors build upon previous work like MetaLA and Gated Slot Attention, extending these approaches through their high-order QK integration theory and multi-level vocabulary decomposition. The paper clearly acknowledges its relationship to prior techniques like the Delta Rule and recursive sparse attention. The comparison with LoLCATs shows awareness of concurrent hybrid approaches.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- The paper provides a good background and summarization of linear attention methods, unifying them under a common theoretical framework that helps clarify the relationships between different approaches.
- The performance results are promising, with MVA outperforming previous linear attention methods on benchmark tasks while requiring fewer training tokens. The multi-level vocabulary decomposition is an interesting approach to expanding the memory capacity of linear attention models.
- The direction of converting pre-trained softmax models to linear attention models is important and practical, as it addresses a significant efficiency bottleneck in deploying LLMs.
Weaknesses (some mentioned above, just make a summarization here):
- The paper only evaluates on a single pre-trained model (Mistral-7B), making it unclear how well the approach generalizes to other model architectures or scales.
- The introduction and explanation of MVA lacks sufficient detail for implementation - there's no clear step-by-step guide on how to transform a pre-trained LLM to MVA, which limits reproducibility.
- Despite linear attention's main advantage being inference efficiency, the paper doesn't report any inference speed metrics or memory usage during generation, making it hard to assess the practical benefits.
- The paper lacks any evaluation on long-context tasks where KV cache efficiency becomes the bottleneck, which is a critical use case for linear attention models.
- The theoretical content, especially in the appendix, is poorly organized and lacks formal structure - claims should be presented as clear lemmas and theorems with explicit assumptions and conclusions.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the insightful feedback and guidance provided. Based on your suggestions, we have made substantial improvements to our work.
1. **Additional Experiments with MVA-SW on Qwen2.5-14B-1M and Qwen2.5-32B**
We have conducted further experiments with MVA-SW on Qwen2.5 models and report the results below.
| Model | MMLU | PIQA | Hellaswag |
|-|--|--|---|
| Qwen2.5-14B-1M | 80.7 | 85.2 | 87.3 |
| → MVA-SW (14B) | 77.3 | 83.8 | 86.8 |
| Qwen2.5-32B | 83.9 | - | 85.2 |
| → MVA-SW (32B) | 79.8 | 82.5 | 85.0 |
We have also included results for **Llama3-8B with MVA**:
| Model | +Tokens | ARC-C | Winogrande | MMLU | TriviaQA |
|--|---|--|---|-|---|
| Llama3.1-8B | - | 79.7 | 60.5 | 66.7 | 77.6 |
| → MVA (8B) | +4B | 60.4 | 58.2 | 32.1 | 63.2 |
2. **LoRA Configuration Details**
As suggested, we explicitly clarify the LoRA settings. Our method only requires replacing Full Attention in Mistral or Llama models with MVA-SW or MVA, followed by fine-tuning. The configurations are detailed in Sections 4.1, 4.2, and 4.3.
- **MVA-SW (General Setting)**: Fine-tuning QKV and down_proj weights, with rank=128 and α=256 (default α=2×rank).
- **MVA-SW (LoLCATs Setting)**: Fine-tuning only QKV weights, with rank=8 and α=16 (aligned with LoLCATs).
- **MVA**: Fine-tuning QKV and down_proj weights, with rank=256 and α=512.
3. **Inference Efficiency and Memory Usage**
While inference efficiency and memory footprint are closely related to training efficiency in linear models, we acknowledge that our previous evidence was insufficient. To address this, we now report inference efficiency and memory usage across different sequence lengths.
### Table: Inference Performance Comparison
| Seq Len | Full Inf Time (s) | Full Inf Mem (GB) | Prefill Time (s) | Gen Latency (ms/token) | Total Mem (GB) |
|--|--|-|--|--|-|
| **MVA** | | | | | |
| 4K | 0.14 | 0.77 | 0.125 | 98.8 | 15.32 |
| 8K | 0.26 | 1.53 | 0.249 | 60.3 | 16.58 |
| 16K | 0.51 | 3.06 | 0.508 | 78.8 | 19.08 |
| 32K | 1.08 | 6.11 | 1.090 | 78.8 | 24.09 |
| 64K | 2.25 | 12.22 | 2.265 | 97.3 | 34.11 |
| 128K | 5.08 | 24.44 | 7.156 | 58.1 | 54.14 |
| **GSA** | | | | | |
| 4K | 0.10 | 0.76 | 0.077 | 93.2 | 15.30 |
| 8K | 0.17 | 1.53 | 0.153 | 40.9 | 16.55 |
| 16K | 0.32 | 3.06 | 0.315 | 48.5 | 19.06 |
| 32K | 0.64 | 6.11 | 0.630 | 63.0 | 24.07 |
| 64K | 1.29 | 12.22 | 1.293 | 90.2 | 34.08 |
| 128K | 2.66 | 24.44 | 5.102 | 38.7 | 54.11 |
| **Flash-Attention** | | | | | |
| 4K | 0.05 | 1.26 | 0.056 | 21.7 | 15.79 |
| 8K | 0.11 | 2.53 | 0.116 | 27.5 | 18.54 |
| 16K | 0.27 | 5.05 | 0.287 | 46.3 | 23.55 |
| 32K | 0.73 | 10.11 | 0.750 | 92.4 | 33.55 |
| 64K | 2.19 | 20.22 | 2.208 | 220.4 | 53.57 |
| 128K | OOM | OOM | OOM | OOM | OOM |
As shown in the table, Full Inf Time and Full Inf Mem correspond to inference using the entire sequence without a KV cache. In contrast, Prefill Time refers to inference where the sequence is first prefilled and subsequently decoded using the KV cache. Gen Latency then represents the per-token decoding time in this cached scenario. There may be minor fluctuations in the Prefill Time results due to the absence of prewarming during testing. In theory, the Gen Latency of the linear model should be close to a fixed value, but it is not in practice, probably due to some shortcomings of the Triton operator in slicing and compiling at different lengths. **Overall, our method starts to dominate Flash-Attention in all aspects at 64K length.**
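The memory crossover reported above is consistent with simple back-of-the-envelope arithmetic: a softmax KV cache grows linearly with sequence length, while a linear-attention state is constant in size. The model dimensions in the sketch below are illustrative assumptions, not the exact configurations benchmarked here:

```python
def kv_cache_gb(seq_len, layers=32, kv_heads=8, head_dim=128, bytes_per=2):
    """fp16 KV cache size: 2 (K and V) * layers * kv_heads * head_dim * seq_len.
    Dimensions are illustrative assumptions, not the benchmarked config."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per / 1024**3

# The KV cache doubles with sequence length...
for L in (4096, 32768, 131072):
    print(L, round(kv_cache_gb(L), 2))  # 0.5, 4.0, 16.0 GiB under these assumptions
# ...while a linear-attention state of shape (layers, heads, d, d_v)
# stays length-independent, which is why linear models win at long contexts.
```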
4. **Long Context Experiments**
Following your suggestion, we evaluated our method on LongBench and compared it against GSA and other baselines.
| Model | Qasper | NarrativeQA | QMSum |
|--|---|----|---|
| **Models trained from scratch** | | | |
| RWKV6 | 9.2 | 14.4 | 1.1 |
| Mamba | 5.6 | 27.9 | 0.8 |
| Mistral | 25.8 | 25.1 | 5.0 |
| **Finetuned from Mistral 7B on 20B tokens** | | | |
| RetNet | 11.1 | 0.0 | 0.0 |
| GLA | 18.4 | 17.2 | 9.0 |
| GSA | 18.8 | 19.2 | 10.0 |
| **MVA** | 20.7 | 20.4 | 9.58 |
5. **Theoretical Proof Organization**
We greatly appreciate your suggestions regarding the structure of our theoretical proofs. We have now revised the theoretical analysis to follow a structure similar to **MetaLA’s appendix**:
- **Subsection → Proposition (or Lemma) → Proof → Discussion**
### Conclusion
We are deeply grateful for your constructive feedback, which has significantly improved our research and manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The Qwen results are interesting, and the Inference Efficiency and Memory Usage part fixed my concerns. Thus, I will raise my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your valuable feedback and your detailed evaluation of our work. Your insightful comments have greatly helped us clarify key ideas and improve the presentation of our paper. We are especially grateful for your score adjustment and for recognizing the contributions of this work. Your suggestions have been incredibly valuable to us, and this discussion has been truly enlightening. Once again, thank you for your thorough review and constructive guidance! | Summary: The paper introduces MVA, a novel linear attention mechanism to bridge the gap with softmax attention. The proposed approach uses:
1. High order QK integration theory - to integrate high and low frequency information to improve the approximation of softmax attention.
2. Multi-level Vocabulary Decomposition - to recursively compress and store residual errors from previous attention states, thus reducing information loss.
The proposed approach is evaluated on multiple benchmarks, showing improvements over existing state-of-the-art linear attention methods such as GSA and MetaLA. MVA is able to achieve 99% of the original performance with less than 100M tokens, outperforming prior linearization techniques.
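For background, the fixed-size-state recurrence that linear attention methods share (and that MVA builds on) can be sketched as follows. This is the generic baseline form, not MVA's exact update rule:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Generic linear attention recurrence with a fixed-size state:
    S_t = S_{t-1} + k_t v_t^T,  o_t = q_t S_t.
    This is the common baseline form, not MVA's exact update rule."""
    T, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))          # fixed-size state, independent of sequence length
    outputs = np.empty((T, d_v))
    for t in range(T):
        S += np.outer(K[t], V[t])   # accumulate key-value associations
        outputs[t] = Q[t] @ S       # read out with the current query
    return outputs
```

The recurrence is equivalent to causally masked Q K^T V without the softmax, which is what makes decoding O(1) per token; the compression error of this fixed-size state is what MVA's multi-level vocabulary decomposition is meant to reduce.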
Claims And Evidence: Claims with sufficient support:
1. MVA outperforms SOTA linear attention models like GSA and MetaLA: Results in Table 1 support the claim
2. MVA improves memory efficiency by maintaining fixed-size states: Results in Table 6 support the claim
Claims that require further justification:
1. High-order QK integration theory improves both low-frequency and high-frequency attention: this is partially supported by the theorems, but empirical analysis of frequency bands would strengthen the claim.
2. MVA-SW solves the convergence issues of hybrid architectures: A convergence study would strengthen the claim.
Methods And Evaluation Criteria: 1. The proposed approach is evaluated against SOTA linear attention models such as GSA, MetaLA, RetNet, and GLA. 2. The paper evaluates MVA on widely used language model benchmarks such as MMLU, ARC-E, HellaSwag, PIQA, Winogrande, TriviaQA, and NQ.
Fine-tuning results are shown for Mistral-7B, but it would be good to add a couple more base models, including larger ones of size 30B or 70B.
3. There is missing analysis on convergence or loss curves to support the claims made for MVA-SW.
Theoretical Claims: No
Experimental Designs Or Analyses: No
Supplementary Material: No
Relation To Broader Scientific Literature: 1. There has been prior work to approximate softmax attention using linear attention, such as MetaLA and GSA. MetaLA captures low-frequency components while GSA captures high-frequency components; this paper combines both perspectives into a unified framework (high-order QK integration).
2. Unlike Linformer, RetNet and MetaLA, which compress entire sequences into fixed-size states, MVA uses MVD to recursively store compression errors, leading to exponential memory capacity.
3. LoLCATs combines SWA with linear attention, but requires two-stage fine-tuning. MVA-SW integrates SWA into the attention computation directly, reducing training overhead.
4. Prior hybrid models require significant retraining when replacing softmax layers. MVA retains 99% of softmax-trained performance with minimal fine-tuning, making it more practical for real-world deployment.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper is very well-written and easy to follow.
2. The proposed approach is evaluated on various datasets and compared with SOTA approaches.
3. There are enough ablation studies to support various claims.
4. The proposed approach addresses a critical bottleneck in LLM scalability - quadratic complexity in softmax attention. MVA provides a scalable alternative with minimal loss in performance.
Weaknesses:
1. The proposed approach is not evaluated on models other than Mistral-7B. It would be good to add larger models of size 30B or 70B to see whether the proposed approach scales to larger architectures.
2. It would be good to add some long-context benchmarks.
3. The paper lacks convergence analysis for MVA, so it would be good to add empirical results supporting that.
Other Comments Or Suggestions: No
Questions For Authors: Same as weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful feedback and valuable support, which have greatly benefited us. Below, we provide further clarifications based on your suggestions. Thank you again for your guidance.
1. **Scaling to Larger Models**
Due to resource and time constraints, we are unable to conduct fine-tuning experiments on a 70B model. However, to approximate your suggestion, we have conducted experiments with MVA-SW on **Qwen2.5-14B-1M** and **Qwen2.5-32B**, as shown in the table below:
| Model | MMLU | PIQA | Hellaswag |
|----|-------|-------|-----------|
| Qwen2.5-14B-1M | 80.7 | 85.2 | 87.3 |
| → MVA-SW (14B) | 77.3 | 83.8 | 86.8 |
| Qwen2.5-32B | 83.9 | - | 85.2 |
| → MVA-SW (32B) | 79.8 | 82.5 | 85.0 |
Additionally, we have incorporated **MVA into Llama3-8B**, and the results are presented below:
| Model | +Tokens | ARC-C | Winogrande | MMLU | TriviaQA |
|----------------|---------|-------|------------|------|----------|
| Llama3.1-8B | - | 79.7 | 60.5 | 66.7 | 77.6 |
| → MVA (8B) | +4B | 60.4 | 58.2 | 32.1 | 63.2 |
Due to time constraints, we have only carried out the above experiments. If you require additional experiments or tests, please let us know; we would be happy to discuss further and receive your guidance. Thank you very much!
2. **LongBench Evaluation**
For long-sequence tasks, we followed the benchmark selection in the GSA paper and conducted experiments on Qasper, NarrativeQA, and QMSum, all of which are part of LongBench except QuALITY. Consequently, we utilized the LongBench framework to ensure consistency across experiments when comparing against GSA and other methods. However, it appears that GSA may not have used LongBench directly for evaluation. As a result, there are slight discrepancies between our results on Mistral and those reported in the GSA paper. To ensure fair comparisons, we will adopt a normalization strategy: we will adjust the sampling procedure of our Mistral and MVA evaluations such that the resulting Mistral performance aligns more closely with the GSA-reported Mistral results. We will then conduct our MVA experiments under the same conditions.
| Model | Qasper | NarrativeQA | QMSum |
|-----------|--------|------------|-------|
| **Models trained from scratch** | | | |
| RWKV6 | 9.2 | 14.4 | 1.1 |
| Mamba | 5.6 | 27.9 | 0.8 |
| Mistral | 25.8 | 25.1 | 5.0 |
| **Finetuned from Mistral 7B on 20B tokens** | | | |
| RetNet | 11.1 | 0.0 | 0.0 |
| GLA | 18.4 | 17.2 | 9.0 |
| GSA | 18.8 | 19.2 | 10.0 |
| MVA | **20.7** | **20.4** | 9.58 |
3. **Loss Curve Analysis**
We have added loss curves from fine-tuning different models at https://anonymous.4open.science/r/icml25_mva-B2AB. The results indicate that **MVA exhibits the fastest loss convergence** compared to other methods such as **GLA, GSA, and MetaLA**. Additionally, as training progresses, the **loss of MVA gradually approaches that of Mistral**, further demonstrating its effectiveness. Please let us know if you need any other convergence curves. Thank you very much!
We sincerely appreciate your constructive feedback, which has been instrumental in refining our work. Thank you again for your valuable time and insightful suggestions! | Summary: The paper introduces MVA (Multi-level Vocabulary Attention), that uses high-order Query-Key (QK) integration with a recursive multi-level vocabulary decomposition to approximate Softmax attention. The paper combines sparse and linear attention, MVA achieves good performance upon finetuning on Mistral-7B.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No.
Experimental Designs Or Analyses: See strengths and weaknesses
Supplementary Material: No
Relation To Broader Scientific Literature: See strengths and weaknesses
Essential References Not Discussed: No
Other Strengths And Weaknesses: Comments:
1. Figure 1 doesn’t define the acronym MQVA. The figure is hard to read.
2. Running title of the paper is “Submission and Formatting Instructions for ICML 2025”
3. Why is increasing the number of finetuning tokens from 0.1B vs 2B doesn’t improve performance in Table 1?
4. All the equations are numbered, even though most of them are not used/referenced.
5. The code notation “sumexp(., dim=-1)” is not used formally unless defined.
6. The equations (27)-(30) use code notation several times. Improving the notation would help in line 320. For instance, metala can be made MetaLA in the subscript, and sigmoid can be declared a mathematical operator in latex. Here’s a reference https://epubs.siam.org/doi/10.1137/1.9781611976106
7. What is MVA-2 level in Table 6.
8. What is the purpose of Q in “MVA (MQVSWA without SWA)” in Table 3 title. The acronyms are very confusing.
9. It would be great to have elaborate table titles with better explanation of the baselines and the observations. It would be better if there is an explanation on why there are two tables, with and without SWA.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We sincerely appreciate the time and effort you have taken to review our work. Your insightful suggestions have significantly helped us refine the clarity and presentation of our paper. Below, we address each of your concerns in detail.
### 1. Clarification on Naming Conventions:
We sincerely apologize for any confusion caused by our last-minute modification of the naming conventions. Our advisor suggested that the abbreviation should follow a three-letter format, similar to GLA and GSA, as MQVA was considered too long. From the paper, it can be derived that **MQVA = MVA** and **MQVSWA = MVA-SW**. However, we acknowledge that our original presentation might have been unclear, and we have now carefully revised the notations and paper formatting to ensure clarity.
### 2. Title:
Thank you for pointing this out. We have now corrected the content of `\icmltitlerunning` to match our paper’s title.
### 3. Justification for Maintaining or Slightly Improving Performance with Additional Training Tokens:
We appreciate your suggestion to explicitly clarify why fine-tuning with more tokens maintains or slightly improves performance, as this requires a deeper understanding of the method. The reason is as follows:
When combining **Sliding Window Attention (SWA)** with **Linear Attention** during fine-tuning, the two branches initially exhibit different learning distributions, with SWA being more dominant. As training progresses, performance first increases and then decreases due to the impact of Linear Attention on the final outcome. However, our method mitigates this issue by either **fine-tuning only the introduced additional parameters** or **selectively truncating gradient information**, effectively transforming the declining phase into a slow improvement and highlighting the advantages of our approach.
### 4. Removal of Unused Equation Numbering:
We acknowledge that numbering equations that are not referenced in the main text may introduce unnecessary complexity. However, we intentionally retained some equation numbers because they facilitate discussions in group meetings and other research communications. That said, we recognize your expertise in academic writing and have followed your suggestion to remove some of the unreferenced equation numbers for a more formal and concise presentation.
### 5. Notation Refinements:
We have revised our notation for clarity:
- The previous notation has been updated to **$\exp(\cdot) R_d$**, where **$R_d \in \mathbb{R}^{s_w \times 1}$** is an all-ones vector and **$s_w$** denotes the window size, so that $\exp(\cdot) R_d$ computes the row-wise sum of exponentials.
- We have replaced `sigmoid` with $\sigma(\cdot)$ and provided its mathematical definition for consistency.
- The subscripts in our MetaLA equations have been adjusted following your recommendations, resulting in a much more concise and readable formula set. We truly appreciate your guidance in improving the mathematical presentation.
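As a quick sanity check of the revised notation (the names and shapes below are illustrative, not taken from the paper's code), multiplying $\exp(\cdot)$ by an all-ones $R_d$ reproduces the informal `sumexp(., dim=-1)`:

```python
import numpy as np

rng = np.random.default_rng(0)
s_w = 4                        # window size (illustrative value)
X = rng.normal(size=(3, s_w))  # a few rows of scores over one window

R_d = np.ones((s_w, 1))        # the all-ones vector R_d from the revised notation

lhs = np.exp(X) @ R_d                        # exp(.) R_d
rhs = np.exp(X).sum(axis=-1, keepdims=True)  # the informal sumexp(., dim=-1)
```

Both expressions yield the same column of per-row sums of exponentials, which is what the revised notation makes explicit.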
### 6. Clarification of MVA-2 Level:
As noted in Section 3.2, **MVA-2 Level** employs multi-level vocabulary decomposition, where the vocabulary is recursively decomposed into two hierarchical levels.
### 7. Explanation of With and Without SWA Comparisons:
We appreciate your suggestion to enhance the explanations in our table captions. We will carefully incorporate this feedback in future revisions. Below, we provide a rationale for the inclusion of both "with" and "without SWA" comparisons:
Currently, there are two primary research directions for converting full-attention models into linear models:
1. **Hybrid architectures**, which combine **sliding window attention** and **linear attention** either at the intra-layer level or by selectively replacing different layers.
2. **Pure linear attention models**, such as GSA, which do not mix different mechanisms.
Both approaches share a common challenge: the **inherent limitations of linear attention**. Since our method offers a theoretically superior approximation to full attention, it demonstrates advantages in both hybrid and purely linear settings.
Another important reason for including these comparisons is that **SWA preserves the original model’s local distribution within the window**, significantly reducing fine-tuning resource requirements while achieving higher performance. As a result, many recent approaches incorporate **sliding window attention or global window attention mechanisms**. However, researchers focused on linear models, such as those studying GSA, are particularly interested in evaluating the effectiveness of purely linear attention. Thus, our experiments on **MVA without SWA** serve to highlight the strengths of our linear component and provide valuable insights for researchers working in this domain.
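As background for why such hybrids are natural, full softmax attention decomposes exactly into a window part and a history part, each weighted by its exponential score mass. A minimal numerical check of this identity (purely illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=8)   # attention scores for one query over 8 positions
v = rng.normal(size=(8, 2))   # value vectors
w = 3                         # last w positions play the role of the "window"

def attn(s, vals):
    # standard softmax attention over the given scores/values
    p = np.exp(s)
    return (p / p.sum()) @ vals

full = attn(scores, v)        # full softmax attention output

# Exact identity: full output = score-mass-weighted mix of history and window
Zh = np.exp(scores[:-w]).sum()  # exponential score mass of the history part
Zw = np.exp(scores[-w:]).sum()  # exponential score mass of the window part
mixed = (Zh * attn(scores[:-w], v[:-w]) + Zw * attn(scores[-w:], v[-w:])) / (Zh + Zw)
```

The two outputs agree exactly, which is why weighting the window and history branches by attention-score mass recovers full attention when both branches are exact.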
We truly appreciate your constructive feedback, which has greatly contributed to the refinement of our paper. Your recommendation of *Handbook of Writing for the Mathematical Sciences* has been particularly beneficial.
---
Rebuttal Comment 1.1:
Comment: I appreciate the rebuttal from the authors, but, I still think the paper writing lags in quality, while the results of the paper are interesting, so I maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful comments and your recognition of the results and contributions of our work. Your feedback is highly valuable to us.
1. We respectfully believe that the paper may contain only minor writing imperfections—among the points you raised, only Issues 1 and 8 might introduce a very slight but essentially negligible impact on the clarity of the presentation. We suspect that your reading may have primarily focused on the figures, tables, and equations, and that the confusion was possibly caused by not referring to the surrounding text, which does clarify these points. A brief review of the relevant paragraphs typically resolves these ambiguities. **Indeed, as shown in the reviews, the other reviewers were able to understand the content clearly and provided highly positive responses. This suggests that while there may be minor imperfections, they did not significantly hinder comprehension.**
2. We deeply respect your high standards for scientific writing. We sincerely believe that your master-level writing represents the pinnacle of clarity and precision, elevating a paper to the level of a finely crafted work of art. We are therefore truly grateful for your comments, which have pushed us toward that standard and helped us improve the presentation of our work.
That said, we also respectfully feel that holding every paper to such a master-level expectation in writing may be unnecessarily strict. While we acknowledge and appreciate your emphasis on expression quality, we would like to emphasize that the issues you pointed out—though worth improving—are in our opinion minor and do not hinder comprehension. Indeed, as evidenced by the feedback from other reviewers, these details did not prevent a clear understanding of the key contributions of the paper.
**We have addressed each of your concerns in the rebuttal with detailed and careful responses.** If there are any specific points that remain unclear, we would be more than willing to clarify them further. We kindly ask that our efforts not be dismissed without engagement. We are committed to rigorous revision and open discussion, and we hope for constructive and specific feedback rather than a general rejection of our responses.
3. Once again, we fully recognize your expertise and insightful standards. At the same time, we humbly note that even the most experienced researchers may occasionally overlook certain details. We believe that what distinguishes true mastery is not perfection in every draft, but the ability to clearly identify actionable feedback and foster the collaborative improvement of ideas.
**For example, the issues you raised were easily understandable upon our reading, although there were indeed some minor flaws. The sentence structures in question can easily be checked for potential issues, such as grammar errors, with tools like ChatGPT or DeepSeek, and such checks do surface minor issues. Thus, we believe that even experts can make mistakes. We also admit that our paper contains minor imperfections, which we have carefully addressed. It is through mutual tolerance and understanding that we will progress together. Ultimately, our goal is to make our paper understandable, and we believe we have achieved that.**
Ultimately, we believe the goal is clear communication of scientific content, and we have done our best to achieve that. We sincerely thank you again for your valuable suggestions and constructive feedback. We would deeply appreciate it if you could reconsider your evaluation in light of our point-by-point revisions, and we welcome further discussion if you have any remaining concerns. Thank you once again for your time and consideration. | Summary: This work proposes an efficient Transformer alternative that unifies existing architectures in the field (GSA, GLA, MetaLA, Based) with theory and combines their strengths to achieve performance improvements over strong baseline approaches at model scales of up to 7B parameters. The theory examines whether existing architectures tend to capture low- or high-frequency signals from input text.
Claims And Evidence: See strengths and weaknesses.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense.
Theoretical Claims: Yes did not spot issues.
Experimental Designs Or Analyses: See the strengths and weaknesses.
Supplementary Material: Lightly skimmed.
Relation To Broader Scientific Literature: This work competes with the strongest baseline approaches for efficient Transformer alternatives.
Essential References Not Discussed: - The weighted combination without recomputation from section 3.4 is already presented in LoLCATS, and it is proposed without sufficient attribution. This is not a new idea. The combination of linear attention plus short sliding window attention is presented in Arora et al., Simple linear attention models.
- The idea of using a learned perceptron for the feature map is proposed in LoLCATS and Hedgehog, is this being suggested as a contribution? The writing should be clarified.
Theorem 3.1 applies the Taylor polynomial of exp to produce the feature map for linear attention. Do the following works share any overlap with this?
- Keles et al., On The Computational Complexity of Self-Attention, 2022.
- Arora et al., Simple linear attention models, ICML 2024.
Other Strengths And Weaknesses: **Strengths**
- Strong experimental results in Table 3, where quality is better using less training data than strong baseline methods
- Improved distillation method that just requires one step of training versus prior work that uses multiple stages
- Theorem 3.2 is very interesting with respect to high- vs. low-frequency signal processing
**Weaknesses**
Experiments
- Is SlimPajama used in LoLCATS? Would be worth understanding if the improvements are due to the dataset or due to the architecture.
- Is the improvement over GLA/GSA due to the weighted-combination of softmax and linear attention without recompute from Section 3.4? Or do you apply the method in Section 3.4 for the baselines as well?
Efficiency
- What is the efficiency of MVA for longer sequences?
Additional analysis
- Are there any empirical observations suggesting that the GSA and MetaLA pick up low-vs.-high frequency patterns? Not sure if this is observable?
- How many decomposition steps are used and are there any ablations to understand that better? Compared to just hybridizing GSA and MetaLA?
Other Comments Or Suggestions: Line 154: “However, as MetaLA removes the K matrix, it introduces a significant gap from Softmax Attention, hindering methods that rely on fine-tuning Softmax Attention weight” Could be explained more clearly. It is not clear why this is a limitation.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. Your suggestions have helped us refine our methodology and validation, especially regarding dataset selection. Below, we provide responses to each of your concerns.
## 1. Essential References Not Discussed:
1. **Conclusion:** We emphasise the importance of weighting LA with SWA based on attention score correlations in hybrid models. This is explicitly stated in our paper, along with supporting empirical validation.
**Explanation:** Our baseline model indeed integrates linear attention with sliding window attention, as in prior work such as Based and Infini-Attention. However, these methods typically rely on standard gating mechanisms. We argue that a more effective approach is to weight based on attention score correlations, i.e., our proposed soft fusion method. Upon further analysis, we recognize that LoLCATs also employs attention score weighting for linear and sliding window attention. However, our study was conducted early, no later than LoLCATs, and our method, which extends softmax- or sigmoid-based linear models, is more natural and logical than the ordinary linear attention of LoLCATs. Notably, if full attention is computed recursively, both current and historical attention terms are weighted by attention scores. While LoLCATs applies this strategy, it does not explicitly highlight its significance. Furthermore, our approach preserves a softmax-like function, making our method more intuitive and theoretically sound.
2. **Perceptron Mechanism:** We do not claim the perceptron mechanism as a novel contribution, as it has been widely studied. Instead, we employ it due to constraints in our GSA branch. Since the KV sequence is dynamically compressed to a fixed size, applying a simple softmax may no longer be optimal. Prior studies suggest that softmax acts as a relevance filter, enhancing focus on crucial tokens. However, given the severe compression in GSA or our method, more sophisticated information extraction mechanisms are necessary. This motivates our use of a perceptron, which we empirically validate.
3. **Prior Work:** Our method shares only minimal overlap with existing work, apart from the well-known Taylor expansion of \(e^x\), which is a natural and intuitive formulation. Nevertheless, these prior works provide valuable insights, and we will include them as references to strengthen the theoretical grounding of our approach.
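For reference, the well-known second-order Taylor construction (as used in Based-style linear attention, shown here only to illustrate the shared ingredient, not our specific method) builds a feature map $\phi$ with $\phi(q)^\top \phi(k) \approx e^{q^\top k}$:

```python
import numpy as np

def taylor_feature_map(x):
    """Second-order Taylor features so that phi(q) @ phi(k) ~= exp(q @ k).

    phi(x) = [1, x, vec(x x^T) / sqrt(2)], hence
    phi(q) @ phi(k) = 1 + q.k + (q.k)^2 / 2, the 2nd-order Taylor poly of exp.
    """
    return np.concatenate(([1.0], x, np.outer(x, x).ravel() / np.sqrt(2.0)))

q = np.array([0.10, -0.20, 0.05])
k = np.array([0.20, 0.10, -0.10])

approx = taylor_feature_map(q) @ taylor_feature_map(k)
exact = np.exp(q @ k)
```

For small score magnitudes the approximation is very tight, which is what makes the expansion a natural shared ingredient across these works.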
## 2. Addressing Weaknesses:
1. **Comparison with LoLCATs on Alpaca-Clean Data:**
LoLCATs did not use SlimPajama but rather Alpaca-Clean. We conducted experiments on Alpaca-Clean, and our results are as follows:
| Model | Training Data | PIQA | ARC-e | ARC-c | HellaSwag | Wino-grande | MMLU | Avg. | Avg. (w/o MMLU) |
|--------------------|---------------|------|-------|-------|-----------|-------------|------|------|-----------------|
| Mistral-7B (v0.1) | - | 82.1 | 80.9 | 53.8 | 81.0 | 74.0 | 62.4 | 72.4 | 74.4 |
| → LoLCATs (LoRA rank=8) | AlpacaClean (+40M) | 81.5 | 81.7 | 54.9 | 80.7 | 74.0 | 51.4 | 70.7 | 74.5 |
| → LoLCATs (LoRA rank=8) | RedPajama (+40M) | 80.1 | 77.6 | 49.0 | 80.3 | 71.7 | 53.2 | 68.6 | 71.7 |
| → MVA-SW (LoRA rank=32) | AlpacaClean (+20M) | 82.1 | 81.5 | 54.7 | 81.2 | 74.1 | 52.2 | 70.7 | 74.4 |
| → MVA-SW (LoRA rank=8) | AlpacaClean (+40M) | 82.3 | 81.9 | 57.6 | 80.2 | 74.0 | 51.6 | 71.2 | 75.2 |
| → MVA-SW (LoRA rank=8) | RedPajama (+40M) | 82.5 | 81.5 | 55.7 | 79.7 | 72.9 | 52.4 | 70.8 | 74.5 |
These results indicate that, unlike LoLCATs, which requires a two-stage process, our method maintains strong performance across datasets with a single-stage fine-tuning approach. In particular, our fine-tuning performance on AlpacaClean is much better than that of LoLCATs.
2. **Comparison of GLA/GSA:**
We ensured a fair comparison in all evaluations. The advantage of our MVA (without sliding window) over GSA stems from the incorporation of information across different frequency scales and the extension of memory capacity. Additionally, when applying the method described in Section 3.4, we extended it to both GLA and GSA for a fair comparison.
3. **MetaLA vs. Softmax:**
The key limitation of MetaLA compared to softmax is the absence of a K matrix, which is replaced by a gated matrix. This substitution hinders the effective use of the original attention’s K matrix and introduces additional instability in training. Although our paper proposes mitigations, this deviation from standard attention mechanisms poses inherent challenges in fine-tuning convergence.
Last but not least, thank you very much for your guidance and comments, which have helped us improve our work!
Fast Incomplete Multi-view Clustering by Flexible Anchor Learning | Accept (poster) | Summary: This paper proposes FIML, a method for fast incomplete multi-view clustering. It simultaneously considers graph construction, anchor learning and graph partition in a unified framework, in which these parts boost each other to improve effectiveness and efficiency on large-scale datasets. To be specific, a shared anchor graph is learned to guarantee consistency among multiple views, and an adaptive weight coefficient is adopted to balance the impact of each view.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. There are no proofs for theoretical claims in this work and I do not need to check them.
Experimental Designs Or Analyses: Yes. The adopted experimental methods for comparison contain methods from the recent years, which enhances the credibility of the experimental performance.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Compared to existing works, this paper proposes FIML for fast incomplete multi-view clustering. It simultaneously considers graph construction, anchor learning and graph partition in a unified framework, in which these parts boost each other to improve effectiveness and efficiency on large-scale datasets.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength: The authors learn a shared anchor graph to guarantee consistency among multiple views and adopt an adaptive weight coefficient to balance the impact of each view.
Weakness:
1. After giving the total objective function of the proposed FIML, the authors should provide a summary of this objective function after Eq. (3) so that the formulation of FIML is better explained.
2. The authors should further explain what the KKT condition is before Eq. (10) in the Optimization part of the proposed FIML.
3. The compared methods in the experiment are insufficient, and the authors are expected to add more recent works for comparison in the Experiment part.
4. The motivation part in Section 2 is too long. The authors should highlight its most important points to make the motivation more obvious.
5. The formats of the works in the references should be consistent to improve the presentation of this paper.
Other Comments Or Suggestions: No.
Questions For Authors: 1. The authors are expected to add more recent works for comparison in Experiment part.
2. The authors should highlight the most important part in motivation part to make the motivation more obvious.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Q1: The summarization of the objective function of the proposed FIML after Eq. (3) should be given.
A1: Thanks for the comment! The summarization of the objective function of the proposed FIML after Eq. (3) is as follows: graph construction, anchor learning and graph partition are jointly integrated into a unified framework for incomplete multi-view clustering, where these three parts can boost each other to achieve effective and efficient clustering results on incomplete large-scale multi-view datasets. The discriminative anchors are automatically learned and the final partition is achieved in this manner. We will add the above summarization in the camera-ready version.
Q2: What is KKT condition before Eq. (10)?
A2: Good question! KKT is the abbreviation for the Karush-Kuhn-Tucker conditions; without this description the meaning is unclear, so we will add the explanation in the camera-ready version.
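For completeness, the generic KKT conditions that the added explanation could state (in standard form, not specific to FIML's objective): for $\min_x f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, with multipliers $\mu_i$ and $\nu_j$, an optimal $x^*$ satisfies

```latex
\begin{align*}
\nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \sum_j \nu_j \nabla h_j(x^*) &= 0 && \text{(stationarity)} \\
g_i(x^*) \le 0, \qquad h_j(x^*) &= 0 && \text{(primal feasibility)} \\
\mu_i &\ge 0 && \text{(dual feasibility)} \\
\mu_i \, g_i(x^*) &= 0 && \text{(complementary slackness)}
\end{align*}
```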
Q3: More recent methods should be added for comparison in the experiment.
A3: Thanks for the comment! We have added a recent method for comparison in the experimental section, i.e., OMVCDR [a].
[a] One-Step Multi-View Clustering With Diverse Representation, 2024
The clustering results of OMVCDR (mean±std) across all datasets are:
- ACC: 77.00±0.00, 89.85±0.10, 91.75±0.00, 76.50±0.15, 98.80±0.00, 98.95±0.10
- NMI: 89.50±0.00, 75.85±0.15, 49.00±0.05, 58.00±0.00, 96.50±0.10, 99.72±0.00
- F1-score: 69.00±0.00, 81.15±0.10, 88.95±0.00, 60.00±0.05, 97.80±0.00, 99.25±0.00
- Purity: 79.56±0.00, 91.25±0.20, 92.00±0.00, 76.50±0.50, 98.79±0.00, 99.20±0.05
Q4: The most important part in motivation should be highlighted.
A4: Good question! It is needed to highlight the most important part in motivation, which is presented as “Actually, relatively less data samples are enough to reconstruct the latent space. Therefore, selecting some data samples from the original dataset as anchors or landmarks for reconstructing the relation structure is commonly used in the existing works.” We will highlight the above details for the camera-ready version.
Q5: The formats of works in references should be consistent.
A5: Thanks for the comment! We will check the formats of the works in the references for consistency to improve the presentation of the paper. | Summary: This paper proposes a novel fast incomplete multi-view clustering method for large-scale data, termed Fast Incomplete Multi-view clustering via flexible anchor Learning (FIML), where graph construction, anchor learning and graph partition are simultaneously integrated into a unified framework for efficient incomplete multi-view clustering. To be specific, the authors learn a shared anchor graph to guarantee the consistency among multiple views and employ an adaptive weight coefficient to balance the impact of each view. The relation between the anchor graph and the similarity matrix in symmetric nonnegative matrix factorization can also be built, i.e., each entry in the anchor graph can characterize the similarity between an anchor and an original data sample.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. There exist no proofs for theoretical claims in the paper and it is not needed to check them.
Experimental Designs Or Analyses: Yes. The used compared methods include the works from the recent years, increasing the credibility of the final results.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: This paper gives a new insight to the community of incomplete multi-view clustering for large-scale datasets compared to existing methods: graph construction, anchor learning and graph partition can boost each other and can be integrated into one problem. The combination of these three issues is the focus of this work, while most existing works treat graph construction, anchor learning and graph partition as separate problems in incomplete multi-view clustering for large-scale datasets.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strength**
The authors constrain the factor matrix, with a rigorous interpretation, to be a cluster indicator representation by introducing an orthogonal constraint on the actual bases, and use an alternating algorithm to solve the formulated problem. Extensive experiments are performed on different datasets to demonstrate the superiority of FIML in terms of effectiveness and efficiency.
**Weakness**
1. The authors state that each entry in the anchor graph $ Z $ describes the similarity between a data sample and an anchor. Since the symmetric constraint on $ Z\in R^{m\times n} $ is not guaranteed in factorization with $ m\ll n $, the authors remove this constraint on the anchor graph $ Z $, which is the main difference between the anchor graph and the similarity matrix in symmetric nonnegative matrix factorization. Here, the authors are expected to give the reason why each entry in the anchor graph $ Z $ describes the similarity between a data sample and an anchor.
2. The authors list the detailed clustering results of FIML and the compared approaches on different datasets in terms of four metrics in Tables 1-4, and also compare FIML with IMVC-CBG and FIMVC-VIA under different missing ratios on several datasets. According to Tables 1-4 and Figs. 4-7, the authors then draw several conclusions. However, the best clustering performance in Tables 1-4 is not bolded for any of the four metrics.
3. The authors perform the parameter selection for the trade-off parameter $ \lambda $ over the set $ [0.001,0.1,1,10,100,1000] $ to study how this parameter influences the final clustering performance, finding that better performance is achieved when $ \lambda=1 $ under the same $ m $ on different datasets. Besides, the clustering result of FIML is relatively stable over different parameter values on these datasets, which shows that FIML is generally robust to the trade-off parameter $ \lambda $. However, the authors do not give a detailed reason why $ [0.001,0.1,1,10,100,1000] $ is chosen for parameter selection.
4. The authors perform an ablation study to validate the superiority of adopting a unified framework integrating graph construction, anchor learning and graph partition. In the comparative experiments, the authors first learn anchors and construct the graph to obtain an informative representation; the graph partition is then isolated from the above two processes in the designed experiment. However, the authors do not give a detailed quantitative analysis of the superiority of adopting this unified framework.
Other Comments Or Suggestions: No
Questions For Authors: 1. The authors conduct convergence analysis of FIML on different datasets by showing the evolution of the objective function with iterations in terms of ACC, and observe that FIML monotonically decreases with iterations and tends to converge within a small number of iterations on these datasets. Here, how many iterations does the proposed FIML need to reach convergence?
2. The authors show the running time of FIML and the compared approaches on different benchmark datasets in Table 5. However, the authors do not describe the memory of the device used. What is the memory capacity of that device?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Q1: The reason why each entry in the anchor graph describes the similarity between data sample and anchor.
A1: Thanks for the comment! The reason why each entry in the anchor graph $Z$ describes the similarity between a data sample and an anchor is that the dimension of $Z$ is $m \times n$, corresponding to $m$ anchors and $n$ data samples; a larger entry in $Z$ indicates a larger similarity between the corresponding data sample and anchor. We will add the above explanation in the camera-ready version.
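To make the $m \times n$ shape concrete, one generic way to populate such an anchor graph is a kernel similarity between each anchor and each sample (an illustrative sketch only; FIML learns $Z$ within its optimization rather than fixing it like this, and all names below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 100, 5, 3                          # samples, anchors, feature dim
X = rng.normal(size=(n, d))                  # data samples
A = X[rng.choice(n, size=m, replace=False)]  # anchors picked from the data

# Z[i, j]: similarity between anchor i and sample j (Gaussian kernel here)
sq_dist = ((A[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
Z = np.exp(-sq_dist)
Z = Z / Z.sum(axis=0, keepdims=True)         # each column: a soft assignment
                                             # of one sample over the m anchors
```

Each of the $m$ rows corresponds to an anchor and each of the $n$ columns to a sample, so a larger entry directly reads as a stronger anchor-sample similarity.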
Q2: The authors do not bold the best results in the experiment.
A2: Good question! Bolding the best clustering results in the tables would indeed make the performance achievements more obvious, and we will do so in the camera-ready version.
Q3: The reason why the [0.001, 0.1, 1, 10, 100, 1000] is chosen for parameter selection should be given.
A3: Thanks for the comment! The set [0.001, 0.1, 1, 10, 100, 1000] is chosen for parameter selection because its values span several representative orders of magnitude for \lambda in the experiment. We will add this explanation in the camera-ready version.
Q4: Giving the detailed specific values analysis for the experimental results.
A4: Good question! A detailed quantitative analysis of the superiority of the unified framework integrating graph construction, anchor learning and graph partition is indeed needed. On the STL10 dataset, our method outperforms the last five multi-view clustering methods in the tables, achieving improvements of 49.5%, 31.6%, 59.8%, 22.7% and 2.3%. We will add this detailed analysis of the experimental results in the camera-ready version.
Q5: How many iterations are needed for the proposed FIML in reaching convergence?
A5: Thanks for the comment! The proposed FIML needs about 20 iterations to reach convergence. We will add this description of the iteration convergence in the camera-ready version.
Q6: What is the memory value of the adopted device in experiment?
A6: Good question! The memory of the device should indeed be reported, since the running times of FIML and the compared approaches on different benchmark datasets are shown in Table 5. The memory of the device used is 8 GB, and we will add this description in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. They have addressed my concerns. | Summary: This paper proposes a novel fast incomplete multi-view clustering method for large-scale data, termed Fast Incomplete Multi-view clustering by flexible anchor Learning (FIML), where graph construction, anchor learning and graph partition are simultaneously considered in a unified framework for fast incomplete multi-view clustering. These three parts can boost each other, which promotes clustering quality and improves efficiency on large-scale datasets. To be specific, the authors learn a shared anchor graph to guarantee the consistency among multiple views and adopt an adaptive weight coefficient to balance the impact of each view.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The effectiveness of the proposed FIML is better demonstrated in the experiment.
Theoretical Claims: Yes. The authors do not give proofs for theoretical claims in this work and there is no need to check them.
Experimental Designs Or Analyses: Yes. The experimental designs are reasonable.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The authors propose a novel fast incomplete multi-view clustering method for large-scale data, termed Fast Incomplete Multi-view clustering by flexible anchor Learning (FIML), where graph construction, anchor learning and graph partition are simultaneously considered in a unified framework for fast incomplete multi-view clustering.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
This paper learns a shared anchor graph to guarantee the consistency among multiple views and adopts an adaptive weight coefficient to balance the impact of each view. The relation between the anchor graph and the similarity matrix in symmetric nonnegative matrix factorization can also be built.
Weakness:
1. The authors propose a novel fast incomplete multi-view clustering method for large-scale data, termed Fast Incomplete Multi-view clustering by flexible anchor Learning (FIML). The authors are expected to compare the proposed FIML with the most related works in the Introduction, so that the novelty of FIML is clearer to readers.
2. The authors should add more recent related works in multi-view clustering for comparison in the experiment. Then the effectiveness of the proposed FIML is better demonstrated in the experiment.
3. Considering that the running time is reported in the experiments, the authors should also give the memory of the device used in the experiments.
4. The authors should bold the best and highlight the second best clustering performance for tables in the experiment to make the performance gains more obvious.
5. The authors are expected to give a more detailed description of the running time in the relevant parts, i.e., what is the unit of running time in the experiments: seconds (s) or log seconds?
Other Comments Or Suggestions: No.
Questions For Authors: 1. The authors should give a more detailed description of the running time in the relevant parts, i.e., what is the unit of running time in the experiments: seconds (s) or log seconds?
2. The authors are expected to add more recent related works in multi-view clustering for comparison in the experiment. Then the effectiveness of the proposed FIML is better demonstrated in the experiment.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Q1: Compare the proposed FIML with the most related works in Introduction.
A1: Thanks for the comment! The existing works most related to FIML are FPMVS-CAG and SMVSC. FPMVS-CAG jointly performs anchor selection and subspace graph construction in one framework, so that the two processes can negotiate with each other to improve clustering performance. SMVSC integrates anchor learning and graph construction into a unified optimization process, achieving a more discriminative clustering structure. The connection between these two works and ours is that anchor learning and subspace graph construction are conducted simultaneously in a unified framework. The differences are that we learn a shared anchor graph to guarantee consistency among multiple views and employ an adaptive weight coefficient to balance the impact of each view. The relation between the anchor graph and the similarity matrix in symmetric nonnegative matrix factorization can also be built, i.e., each entry in the anchor graph can characterize the similarity between an anchor and an original data sample.
Q2: Add more recent compared methods in the experiment.
A2: Good question! We have added a recent method for comparison in the experimental section, i.e., OMVCDR [a].
[a] One-Step Multi-View Clustering With Diverse Representation, 2024
The clustering results of OMVCDR based on ACC for all datasets are 77.00±0.00, 89.85±0.10, 91.75±0.00, 76.50±0.15, 98.80±0.00, 98.95±0.10
The clustering results of OMVCDR based on NMI for all datasets are 89.50±0.00, 75.85±0.15, 49.00±0.05, 58.00±0.00, 96.50±0.10, 99.72±0.00
The clustering results of OMVCDR based on F1-score for all datasets are 69.00±0.00, 81.15±0.10, 88.95±0.00, 60.00±0.05, 97.80±0.00, 99.25±0.00
The clustering results of OMVCDR based on Purity for all datasets are 79.56±0.00, 91.25±0.20, 92.00±0.00, 76.50±0.50, 98.79±0.00, 99.20±0.05
Q3: The memory of the adopted device should be given.
A3: Thanks for the comment! Since the running time is listed in the experiments, the memory of the device should indeed be reported. The memory of the device used in the experiments is 8 GB, and we will add this description in the camera-ready version.
Q4: The best and second best results should be bold in the experiment.
A4: Good question! Bolding the best and highlighting the second-best clustering performance in the tables makes the performance gains more obvious to readers. We will do so for all tables in the experiments of the camera-ready version.
Q5: What is the unit of running time in the experiment, second or log second?
A5: Thanks for the comment! A more detailed description of the running time is indeed needed. The unit of running time in the experiments is log seconds, and we will add this description in the camera-ready version. | Summary: This paper proposes a novel fast incomplete multi-view clustering method for large-scale data, termed FIML, where graph construction, anchor learning and graph partition are considered simultaneously in a unified framework for fast incomplete multi-view clustering. The relation between the anchor graph and the similarity matrix in symmetric nonnegative matrix factorization is also built, i.e., each entry in the anchor graph is able to characterize the similarity between an anchor and an original data sample.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. There are no proofs for theoretical claims and there is no need to check the correctness.
Experimental Designs Or Analyses: Yes. The dataset used in the experiment covers a wide range of categories and quantities, enabling a comprehensive presentation for the final results. Additionally, the comparison of experimental methods consists of advanced models from the past three years, enhancing the credibility of the experimental results.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Compared to previous studies, this research constrains the factor matrix, with a rigorous interpretation as a cluster indicator representation, by introducing an orthogonal constraint on the actual bases, and uses an alternating algorithm to solve the formulated problem. Extensive experiments are performed on different datasets to demonstrate the superiority of FIML in terms of effectiveness and efficiency.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength: This paper builds the relation between anchor graph and similarity matrix in symmetric nonnegative matrix factorization, i.e., each entry in the anchor graph is able to characterize the similarity between the anchor and original data sample.
Weakness:
1. The authors should add more recent fast multi-view clustering methods for comparison in the experiment to validate the novelty of this work.
2. The authors can give more detailed experimental analysis for the experimental results based on ACC, NMI, F1-score and Purity in Table 1-Table 4.
3. The best clustering performance based on ACC, NMI, F1-score and Purity in Table 1-Table 4 in the experiment should be bold.
4. The authors should give a brief overview of the optimization process before the detailed optimization steps in the Optimization part, so that the whole optimization routine becomes clearer.
Other Comments Or Suggestions: No.
Questions For Authors: 1. The authors should give more detailed experimental analysis for the experimental results based on ACC, NMI, F1-score and Purity in Table 1-Table 4.
2. The authors are expected to give a brief overview of the optimization process before the detailed optimization steps in the Optimization part, so that the whole optimization routine becomes clearer.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Q1: The authors should add more recent works for comparison in the experiment.
A1: Thanks for the comment! We have added a recent method for comparison in the experimental section, i.e., OMVCDR [a].
[a] One-Step Multi-View Clustering With Diverse Representation, 2024
The clustering results of OMVCDR based on ACC for all datasets are 77.00±0.00, 89.85±0.10, 91.75±0.00, 76.50±0.15, 98.80±0.00, 98.95±0.10
The clustering results of OMVCDR based on NMI for all datasets are 89.50±0.00, 75.85±0.15, 49.00±0.05, 58.00±0.00, 96.50±0.10, 99.72±0.00
The clustering results of OMVCDR based on F1-score for all datasets are 69.00±0.00, 81.15±0.10, 88.95±0.00, 60.00±0.05, 97.80±0.00, 99.25±0.00
The clustering results of OMVCDR based on Purity for all datasets are 79.56±0.00, 91.25±0.20, 92.00±0.00, 76.50±0.50, 98.79±0.00, 99.20±0.05
Q2: The authors should give more detailed analysis for the experimental results based on ACC, NMI, F1-score and Purity.
A2: Good question! We have added more detailed analysis of the experimental results, as shown in the following. On the STL10 dataset, our method outperforms the last five multi-view clustering methods in the tables, achieving improvements of 49.5%, 31.6%, 59.8%, 22.7% and 2.3%. We also find that the anchor-based algorithms are capable of handling larger data compared with traditional multi-view clustering methods.
Q3: The best clustering results in the experiment should be bold.
A3: Thanks for the comment! We will bold the best clustering performance based on ACC, NMI, F1-score and Purity in Table 1-4 in the experiment for the camera-ready version to make the performance gains more obvious.
Q4: The brief optimization process before the detailed steps should be given.
A4: Good question! The algorithm of FIML consists of the following steps: the input, output and initialization of the algorithm, the detailed optimization steps for each variable, and a stopping criterion once the convergence condition is reached. We will add such a brief overview before the detailed steps in the Optimization part to make the whole optimization routine clearer.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns well, and I keep my score.
Improving Zero-Shot Adversarial Robustness in Vision-Language Models by Closed-form Alignment of Adversarial Path Simplices | Accept (spotlight poster) | Summary: In this work, the authors focus on the robustness of VLMs (CLIP). They address a challenge in adversarial fine-tuning and propose using a Taylor expansion to enlarge the decision space, which makes the model more robust.
Claims And Evidence: The claims are supported by some math derivation.
Methods And Evaluation Criteria: The paper uses a lot of quantitative results to demonstrate the effectiveness of this work.
Theoretical Claims: This work includes one theorem (3.1), which is easy to understand.
Experimental Designs Or Analyses: The method achieves SOTA across all tables of quantitative results, which is a large improvement.
Supplementary Material: I have read the supplementary materials.
Relation To Broader Scientific Literature: There is no need to discuss broader scientific literature.
Essential References Not Discussed: The discussion of related works is sufficient.
Other Strengths And Weaknesses: Strength:
1. The quantitative results are a large improvement against baselines.
2. The figures are of high quality.
3. The paper provide theoretical analysis of the proposed theory.
Weakness:
Although the idea of the paper may be good, the writing of this work is not good, which makes this work not very suitable for publication in its current form.
1. I think the writing of the abstract needs to be improved. It should convey the key idea of the paper instead of listing complex math notation. I suggest moving the math notation into the method section and using natural language to describe the key idea of this work.
2. The related work should be in a separate section instead of being placed in the introduction. Also, the introduction is too long and redundant.
3. Section 3 is too dense with math. There should be some explanation of the notation and some text explaining why each line of equations is introduced.
4. Small errors: Equation (10) overflows the page margin. Table 6 overlaps with other tables.
Other Comments Or Suggestions: I am willing to raise the score after effective rebuttal.
Questions For Authors: Will you open-source the code?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Rev. srgi
We thank Rev. for the constructive feedback. **Kindly also note additional Theory/Results in Resp. to Rev. EyNB**.
### **1. Abstract.**
Thank you. We have removed now math notations:
```
Vision-Language Models (VLMs), e.g., CLIP, excel at zero-shot classification due to large-scale pre-training yet are vulnerable to adversarial examples. Adversarial fine-tuning robustifies zero-shot models by aligning prediction scores of individual adversaries with their clean counterparts, which typically overlooks intermediate adversarial samples along the adversarial trajectory crossing the decision boundary. Such intermediate adversaries and their vicinity offer informative representations of the decision surface, which can further be improved by sampling adversarial candidates from simplices formed by joining vertex with consecutive vertices on the adversarial trajectory. However, sampling simplices for adversaries is prohibitively costly. To train robust VLM, we overcome these limitations by Taylor expansion and formulating an upper-bound of alignment loss that depends on Jacobian/Hessian obtained at clean samples. As regions between clean and intermediate adversarial samples capture a larger decision landscape, we robustify VLM by plausible adversaries on simplices by our closed-form formulations equivalent to infinite uniform sampling of the simplex. We obtain state-of-the-art robustness across 15 datasets and diverse vision-language tasks.
```
### **2. Separate Section for Related Works & Concise Intro.**
**Absolutely. This is very easy to achieve.**
We will provide a separate detailed related work section in the revised version, including:
- alignment schemes (plus necessary equation).
- single-modal adversarial attacks/robustness
- multimodal adversarial attacks/robustness for a better understanding.
### **3. More Explanations of Section 3.**
- We first discuss the upper bound of the prediction alignment between clean and adversarial examples derived by Taylor expansion in Section 3.1
- In Section 3.2, we propose a theorem to avoid empirical aggregating/sampling from adversarial simplices. In other words, we provide a closed-form solution for sampling from adversarial simplices.
- The overall loss function is shown in Section 3.3.
- We further demonstrate that our method also bounds the robust risk in Section 3.4.
We will add the summary table of symbols:
|Symbols|Explanations|
|-|-|
|$\mathbf{x}$|Clean example|
|$\bf{\delta}_{\mathbf{x}}$|Adversarial perturbation|
|$g_\theta(\cdot)$|Prediction of CLIP|
|$J_{g}$|Jacobian matrix|
|$H_{g}$|Hessian matrix|
|$\Omega$|Vanilla upper bound of the Euclidean prediction distance|
|$\bar{\Omega}$|Upper bound with the cross-product term|
|$\bar{\bar{\Omega}}$|Upper bound without the cross-product term|
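For intuition about how these symbols interact, here is a toy numeric sketch (our illustration only; the scalar function `g`, its perturbation, and all names are hypothetical, not the CLIP prediction head) showing that a second-order Taylor expansion approximates the prediction change using only the Jacobian and Hessian evaluated at the clean sample, which is what makes an upper bound computable without extra forward passes:

```python
import numpy as np

# Toy smooth "prediction" g with analytic Jacobian (gradient) and Hessian.
def g(x):    return np.sin(x[0]) + 0.5 * x[1] ** 2
def jac(x):  return np.array([np.cos(x[0]), x[1]])
def hess(x): return np.array([[-np.sin(x[0]), 0.0], [0.0, 1.0]])

x = np.array([0.3, -0.7])                # clean example
delta = np.array([0.01, -0.02])          # small adversarial perturbation

exact = g(x + delta) - g(x)              # true prediction change
first = jac(x) @ delta                   # J_g(x) delta  (first-order term)
second = 0.5 * delta @ hess(x) @ delta   # 1/2 delta^T H_g(x) delta

# The second-order expansion matches the exact change up to O(||delta||^3),
# so the prediction distance can be bounded via Jacobian/Hessian at x alone.
assert abs(exact - (first + second)) < np.linalg.norm(delta) ** 3
```

The same idea extends componentwise to a vector-valued prediction, where the bound aggregates the per-output Jacobian rows and Hessians.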
### **4. Formatting Issues.**
We apologize. We will fix Eq. 10 and one table that jumped between pages breaking formatting.
### **5. Code Release.**
**We stress that our code, model weights, and setup will be publicly available.**
$\color{blue}\text{General Additional Results:}$
### **6. Additional results (ViT-B vs. ViT-L).**
We extend our Table 3 (ImageNet), showing the average acc. across 15 datasets on ViT-B and ViT-L:
|Architecture|Method|ImageNet Clean|ImageNet AA|Avg. Clean|Avg. AA|
|-|-|-|-|-|-|
|ViT-B|TeCoA|54.43|25.19|48.83|25.75|
|ViT-B|PMG-FT|51.33|24.94|49.71|26.98|
|ViT-B|FARE|50.94|23.78|56.68|29.30|
|ViT-B|**AdvSimplex**|61.28|32.26|60.23|34.06|
|ViT-L|TeCoA|73.61|61.14|70.95|47.96|
|ViT-L|PMG-FT|73.92|60.63|68.67|49.38|
|ViT-L|FARE|72.51|57.20|71.30|50.54|
|ViT-L|**AdvSimplex**|76.87|64.09|73.39|52.80|
### **7. Additional results (Batch Size).**
For completeness, we present results with two different batch sizes, **128 and 512** to show our approach performs best. The average results across 15 datasets on ViT-B are below:
|Batch Size|Method|Avg. Clean|Avg. AA|
|-|-|-|-|
|128|TeCoA|48.70|25.62|
|128|PMG-FT|49.45|26.74|
|128|FARE|56.22|28.95|
|128|**AdvSimplex**|59.86|33.54|
|512|TeCoA|48.83|25.75|
|512|PMG-FT|49.71|26.98|
|512|FARE|56.68|29.30|
|512|**AdvSimplex**|60.23|34.06|
Our method does not require a large batch size, but large batch size can enhance a bit all methods.
### **8. Additional results: Auto-Attack Results for Each Dataset.**
The auto-attack results ($\epsilon$=2/255) of ViT-B corresponding to Table 2 is below:
|Method|ImageNet|STL10|CIFAR-10|CIFAR-100|SUN397|Stanf.Cars|Food101|OxfordPet|Flower102|DTD|EuroSAT|FGVC|PCAM|Caltech101|Caltech256|Average|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|TeCoA|25.19|69.38|42.01|21.40|16.58|4.09|12.51|40.87|14.96|16.07|11.19|1.68|12.06|54.53|43.74|25.75|
|PMG-FT|24.94|69.98|43.28|21.48|16.70|6.04|13.57|41.06|15.68|17.21|11.89|2.16|19.57|56.16|44.92|26.98|
|FARE|23.78|75.40|49.76|28.35|16.29|8.32|17.04|44.61|17.03|18.39|8.58|2.25|21.41|60.14|48.16|29.30|
|**AdvSimplex**|32.26|77.82|55.64|30.99|18.80|9.45|19.87|53.67|18.90|19.85|12.52|4.41|40.53|64.85|51.34|34.06|
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal! I hope the writing will be improved in the final version.
---
Reply to Comment 1.1.1:
Comment: Esteemed Reviewer,
\
\
Thank you for your kind message, and valuable comments helping us improve and refine our manuscript. Meantime, if there is anything else we can answer or explain or discuss further, kindly do let us know.
\
\
Rest assured, all requested improvements will be made in the final paper.
Kind regards,
\
Authors | Summary: This paper aims to enhance the robustness of CLIP for zero-shot image classification.
It emphasizes that existing defense methods often disregard intermediate adversarial samples along the trajectory, which are found to be beneficial in this study.
The proposed method, AdvSimplex, uses an efficient approach to statistically align the clean sample with adversarial samples from the trajectory, instead of sampling adversarial examples from the trajectory region and aligning them with the clean sample one by one.
Experiments on ImageNet demonstrate improvements in both accuracy and robustness.
Claims And Evidence: - In the abstract, the authors claim, “We obtain state-of-the-art with 10× speed-up”.
- However, the computational cost is never comparable to that of state-of-the-art methods.
- Adversarial training fundamentally involves solving a min-max optimization problem. It is unclear why using weaker adversarial examples from the trajectory would enhance robustness. The authors cite [Gao et al., 2024], whose work focuses on improving the transferability of adversarial examples, which involves sacrificing attack strength on the surrogate model. However, the relevance of transferable adversarial examples to adversarial training remains insufficiently validated, since using such a weaker attack is suboptimal in min-max optimization.
[Gao et al. 2024] Gao, S., Jia, X., Ren, X., Tsang, I., and Guo, Q. Boosting transferability in vision-language attacks via diversification along the intersection region of adversarial trajectory. ECCV 2024
Methods And Evaluation Criteria: - The numbers are very different from those in existing papers. For example, in [Schlarmann et al. 2024], the clean zero-shot accuracy on ImageNet is over 70% for FARE, TeCoA, and the original CLIP; however, Table 1 shows much lower numbers. The robust accuracy is also much lower. Why?
- [Schlarmann et al. 2024] Schlarmann, Christian, et al. "Robust clip: Unsupervised adversarial fine-tuning of vision embeddings for robust large vision-language models." ICML 2024
- Table 2: The PGD-20 results as the main results are not convincing. Auto-Attack results are reported only as an average over datasets, which lacks information for readers.
Theoretical Claims: - I don’t see how Eq. (7) is derived. How did $\gamma(x, \delta_x)$ disappear? Are the notations correct? If $\alpha(x,\delta_x) = || J_g(x) \delta_x ||^2_2$, why is it doubled in Eq. (7)? Sorry if I misunderstand anything.
Experimental Designs Or Analyses: Experimental designs are mostly aligned with existing work. However, there are some concerns, which are mentioned in "Methods And Evaluation Criteria."
Supplementary Material: No.
Relation To Broader Scientific Literature: The idea of leveraging adversarial trajectory in adversarial training may be new. It could also be applied to image classification or other tasks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weakness:
- The presentation quality is not optimal.
- The caption of Table 6 overlaps with the table.
- I don’t think it’s good practice to include extensive mathematical formulations in the abstract. At the very least, using multiple notations without explanation is problematic.
Other Comments Or Suggestions: - P.2, Line.90: “Lu et al. (2023) obtained intermediate adversarial samples along the adversary generation trajectory to achieve cross-VLM attacks."
- This seems to be a wrong citation. Set-level Guidance Attack (SGA) does not use a generation trajectory.
Questions For Authors: - Is the idea of using adversarial trajectory novel? If yes, do you think it can be applied to image classification task, not only VLMs?
- Why does the clean accuracy improve with the proposed method?
- Is the training time per epoch longer than FARE or TeCoA?
- This method uses a much larger batch size of 512 compared to 128 of TeCoA and FARE.
- Does this method require a large batch size?
- Do TeCoA/FARE results improve with larger batch sizes?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Rev. XcU1
Thank you for the constructive feedback. **Kindly also note additional Theory/Results in Resp. to Rev. EyNB**.
### **1. Compare Cost. 10x speedup.**
- Below are **training times for FARE, TeCoA, PMG-FT**.
- The *10x speed-up* is for our closed-form "infinite" sampling from adv. simplex *vs.* "naive" sampling (see below *Sampling 300* samples per simplex *vs.* AdvSimplex (10 steps)).
|Method|Clean|AA|Training Time per Epoch (hours)|
|-|-|-|-|
|TeCoA|48.83|25.75|1.3|
|PMG-FT|49.71|26.98|2.1|
|FARE|56.68|29.30|1.7|
|Sampling (300 samp.)|60.18|34.20|36.2|
|**AdvSimplex (10 steps)**|60.23|34.06|2.9|
|**AdvSimplex (5 steps)**|58.75|33.27|1.6|
|**AdvSimplex (3 steps)**|58.36|32.19|1.2|
|**AdvTetrahedron (10 steps)**|60.80|35.17|2.7|
|**AdvTetrahedron (5 steps)**|59.12|33.64|1.4|
|**AdvTetrahedron (3 steps)**|58.67|32.70|1.1|
- For **the same time budget (~1.6h), AdvSimplex (5 steps) outperforms FARE by ~2\% and ~4\%** in the clean and robust accuracy (AA)
- For 1.7x time budget, **AdvSimplex (10 steps) gets ~3.5\% and ~4.7\% gains over FARE** (clean and robust acc. (AA)).
- **Our method does not use additional modules/parameters:** the inference time is consistent with other VLMs.
- **AdvTetrahedron** uses a generalization of our Theorem 3.1 to simplices with more vertices, e.g., **tetrahedron** ($Q=4$ vertices). **Kindly see Resp. 1 to Rev. EyNB for detailed information.**
### **2. Why Weak Adversaries can Robustify? 2b. Clean accuracy gains. 2c. SGA does not use intermediate adv. 2d. Novelty using adv. trajectory.**
> [a] Set-level Guidance Attack..., Lu et al. (2023), ICCV
> [b] Boosting Transferability in Vision-Language Attacks..., Gao et al.(2024), ECCV
- In every ascent step, SGA [a] generates a few perturbations around an intermediate point ${\bf v}'$ and continues the ascent step from their mean ($\bf\mu\neq v'$). Thus, for the same ${\bf x}$, 3 generated trajectories will differ.
- [b] forms triangles between ${\bf x}$ and 2 consecutive intermediate steps ${\bf v}\_i$ and ${\bf v}\_{i+1}$ but does not use noise. However, Fig. 1b of [b] shows that triangles capture multiple trajectory routes.
As gradient ascent uses a fixed gradient step and the decision boundary is non-linear, **the adversarial path is not a perfect ascent**. Intermediate adv. samples:
- **may be sometimes stronger than final adv. samples**
- **enjoy diversity in adversarial directions.**
\
\
We verify this: **the weight $\omega\_m$ in Eq. 13 (main paper) for the final-step adversary is not always larger than $\omega\_{m-1}$ for the intermediate adversary**:
|$\omega\_m\geq\omega\_{m-1}$|$\omega\_{m-1}\geq\omega\_{m-2}$|$\omega\_{m-2}\geq\omega\_{m-3}$|
|-|-|-|
|83%|79%|73%|
*Friendly Adversarial Training (ICML'20)* notes that weak adversaries help obtain a robust decision boundary.
Below we also show that simplices from:
- late intermediate steps lead to greater adversarial robustness
- early intermediate steps lead to greater clean accuracy
- **mid intermediate steps further boost clean+adversarial accuracy**
|Indices of Used Adv. Simplices|Clean|AA|
|-|-|-|
|8-10 (Late Steps)|58.65|33.06|
|6-10 (Late Steps)|59.14|33.38|
|1-10 (All)|**60.23**|**34.06**|
|1-5 (Early Steps)|60.37|32.49|
|1-3 (Early Steps)|59.79|32.00|
**2b.** Early adversaries improve clean acc. as they are more correlated with clean ${\bf x}$ and act as sample augmentation.
**2c.** To test the Rev.'s assumption that only final adversaries matter, per ${\bf x}$ we generate 3 adv. paths as in SGA (paths differ due to noise injection). **We build a simplex only from the final adv. points of the 3 paths (AdvSimplex-SGA)** per ${\bf x}$: this outperforms vanilla SGA but is worse than **AdvSimplex**.
|Method|Avg. Clean|Avg. AA|
|-|-|-|
|SGA (standard: one adv. per ${\bf x}$)|56.84|30.08|
|SGA (3 adversaries per ${\bf x}$)|57.31|30.93 |
|AdvSimplex-SGA (simplex on 3 adv. per ${\bf x}$)|58.75|32.40|
|**AdvSimplex**|60.23|34.06|
**2d.** Our main contribution is **"infinite" sampling + alignment, but the adv. simplex can be generated in many ways** (our strategy, an SGA-like strategy, etc.). Unlike SGA [a] and [b], we are the first to use adv. simplices from the trajectory for robustification.
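For readers less familiar with adversarial trajectories, a minimal sketch (illustrative only; the toy loss, step size, and budget are our assumptions, not the paper's CLIP setup) of how intermediate PGD-style iterates can be joined with the clean point into simplices, from which a naive candidate adversary would be sampled:

```python
import numpy as np

# Toy differentiable loss whose ascent iterates trace an "adversarial trajectory".
def loss(v): return np.sin(3 * v[0]) + v[1] ** 2
def grad(v): return np.array([3 * np.cos(3 * v[0]), 2 * v[1]])

x = np.array([0.2, 0.1])                 # clean point
eps, step, m = 0.1, 0.03, 5              # L_inf budget, step size, ascent steps
v, traj = x.copy(), [x.copy()]
for _ in range(m):                       # PGD-style sign-gradient ascent
    v = v + step * np.sign(grad(v))
    v = x + np.clip(v - x, -eps, eps)    # project back into the eps-ball
    traj.append(v.copy())

# Simplices joining the clean point with consecutive intermediate adversaries.
simplices = [np.stack([x, traj[i], traj[i + 1]]) for i in range(1, m)]

# A "naive" candidate sampled uniformly from one simplex via Dirichlet(1,1,1)
# barycentric weights; the closed-form alignment avoids this explicit sampling.
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(3))
candidate = w @ simplices[0]
assert np.all(np.abs(candidate - x) <= eps + 1e-12)  # still inside the budget
```

Any convex combination of in-budget vertices stays inside the L-infinity ball, so every simplex point is a feasible perturbation.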
### **3. $\gamma$ missing in Eq. 7**
**See Resp. 1b to Rev. EyNB.**
### **4. Results differ from other papers.**
- [Schlarmann, 2024] reports on a sampled subset of ImageNet (5000 samples) on **ViT-L** in their Table 4
- We report on **ViT-B** in our Tables 1+2 following TeCoA and PMG settings (Mao, 2023; Wang, 2024)
**To address Rev.'s concern, we extend our Table 3 to ViT-L in Resp. 6 to Rev. srgi.**
### **5. Compare Batch Sizes.**
In the paper, we use the same experimental setup (BS: 512) for TeCoA, PMG, FARE and ours for fairness.
See **Resp. 7 to Rev. srgi**
### **6. Auto-Attack for Each Dataset.**
See **table in Resp. 8 to Rev. srgi.** We can post ViT-L AA in discussions.
### **7. Extend to image classification.**
See **Resp. 6 to Rev. td6r**
### **8. Formatting/abstract.**
See **Resp 1-4 to Rev. srgi**
---
Rebuttal Comment 1.1:
Comment: Sincere apologies, I mistakenly posted my response as an Official Comment instead of a Rebuttal Comment, which made my reply invisible to the authors.
---
Thank you so much for your thorough reply. Many of the concerns are addressed.
> 2. Why Weak Adversaries can Robustify? 2b. Clean accuracy gains. 2c. SGA does not use intermediate adv. 2d. Novelty using adv. trajectory.
I believe this part should be incorporated into the paper. The current version starts by discussing efficient trajectory sampling but lacks an explanation of why trajectory is important, relying solely on citations. Including this discussion would enhance the paper’s quality and readability.
Additionally, I agree with Reviewer srgi that the paper is sometimes too dense with mathematical content. I suggest revising it to provide sufficient explanations, including intuitive ones, to help readers unfamiliar with this field grasp the key ideas.
---
I will update the score.
---
Reply to Comment 1.1.1:
Comment: Esteemed Reviewer,
\
\
Thank you for your kind message, and valuable comments helping us improve and refine our manuscript.
\
\
Rest assured, all requested improvements will be made in the final paper. We will include details of Q2 as per your request, and explanations why the trajectory is important.
\
\
Rest assured, we will revise maths accordingly to make it more accessible and intuitive and less dense.
\
\
Kind regards,
\
Authors | Summary: This paper proposes a new adversarial fine-tuning method for VLMs that enhances zero-shot adversarial robustness by leveraging adversarial simplices formed from a clean image and consecutive intermediate adversaries along the gradient ascent path. The proposed method employs a Taylor expansion of the model's prediction function to derive a closed-form upper bound on the alignment loss. This closed-form solution, which approximates infinite uniform sampling within the simplex, is then integrated into a combined loss function that balances standard classification loss with an adaptive adversarial alignment term, which achieves SOTA zero-shot adversarial robustness and computational efficiency across 15 datasets.
Claims And Evidence: Yes, the claims made in the submission are well-supported by clear and convincing evidence.
Methods And Evaluation Criteria: This paper aims to improve the zero-shot adversarial robustness of VLMs. Clean and robust accuracy used in this paper are widely acknowledged as the golden metrics in zero-shot robust classification problems. 15 benchmark datasets used in this paper are widely acknowledged to evaluate the zero-shot capabilities of CLIP. All the baseline methods are fine-tuned on ImageNet, which is a relatively large-scale dataset in this field.
Theoretical Claims: The proof is overall correct. In particular, I checked the derivation of Theorem 3.1 on the closed-form second-order statistics for points uniformly sampled in a simplex, as well as the bounding argument in Section 3.1 (where the paper moves from the exact alignment loss to its Taylor-based upper bound). The closed-form expression for the uniform simplex statistic (Theorem 3.1) is standard and appears correctly applied.
Experimental Designs Or Analyses: The experimental design is very sound from my perspective: this paper uses 15 datasets and multiple attack methods for evaluations (e.g., PGD, C&W and AutoAttack) and all evaluations use adaptive attacks. In addition, this paper systematically measures both zero-shot clean and robust accuracy across different model architectures and perturbation budgets. The authors also provide ablation studies and sensitivity analyses that help support the validity of their choices of hyperparameters. Regarding the section on baseline methods, I notice that a recently published paper [1] can be considered as one of the baseline methods in this paper as well since it also claims that it achieves the SOTA performance in zero-shot robustness.
[1] Text-guided attention is all you need for zero-shot robustness in vision-language models, NeurIPS 2024.
Supplementary Material: This paper does not contain any supplementary materials (here I assume the appendix is not supplementary material).
Relation To Broader Scientific Literature: The paper builds upon a substantial body of work in adversarial training and fine-tuning. For example, Wang et al. (2022) and Lu et al. (2023) emphasize the importance of intermediate adversarial samples along the gradient ascent path, as well as Gao et al. (2024) who introduced the concept of forming simplices with consecutive adversaries. However, previous approaches struggled with the computational costs of sampling such simplices. This paper advances the previous methods by using Taylor expansion and deriving a closed-form solution that captures the infinite sampling behavior within a simplex. This approach not only aligns with established ideas in robust optimization but also addresses practical scalability concerns, thereby contributing a novel and computationally efficient method to the field.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. This paper proposes an interesting fine-tuning method that can largely improve zero-shot performance and, at the same time, significantly reduce computational cost.
2. Theoretical derivations are well-grounded and the experimental results support the claims.
Weaknesses:
1. The experiment lacks a sensitivity analysis of the parameter $\lambda$ used in Eq. (13).
Other Comments Or Suggestions: I noticed that Table 5 and Table 6 overlap in the current layout. I would suggest adjusting their formatting to avoid overlapping.
Questions For Authors: 1.The adaptive re-weighting mechanism in Eq. (13) appears crucial to your loss formulation. How sensitive is the method’s performance to the choice of $\lambda$?
2.In your comparisons, the closed-form alignment is shown to be computationally more efficient than explicit sampling. Could you provide further quantitative analysis on the trade-off between computational overhead and adversarial robustness when using closed-form versus sampled adversarial simplices?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Rev. td6r
Thank you for the constructive feedback. **Kindly also note additional Theory/Results in Resp. to Rev. EyNB**.
### **1. Theoretical derivations well-grounded/experiments support claims.**
Thank you. We also have a generalization of our Theorem 3.1 to simplices with more vertices, e.g., a **tetrahedron ($Q=4$ vertices)** or a **pentachoron ($Q=5$ vertices)**.
\
\
**Kindly see Resp. 1 to Rev. EyNB for details.** Below we provide a list of our extensions.
Let $Q$ be the number of vertices (${\bf z}_1,\ldots,{\bf z}_Q$) for a simplex. The closed-form expression for ${\bf\Sigma}_x$ is given as:
$\mathbb{E}[{\bf pp}^T]=\frac{1}{Q(Q+1)}\bigg[\sum\_{i=1}^Q{\bf z}\_i {\bf z}\_i^T + \Bigl(\sum_{i=1}^Q {\bf z}\_i\Bigr)\Bigl(\sum_{i=1}^Q {\bf z}\_i\Bigr)^T\bigg]$.
Table below shows further gains from our extended theorem:
|Method|Avg. Clean|Avg. AA|
|-|-|-|
|Our **AdvSimplex**|60.23|34.06|
|Our **AdvTetrahedron**|**60.80**|**35.17**|
### **2. Comparison with [1]**
> [1] Text-guided attention is all you need for zero-shot robustness in vision-language models, NeurIPS 2024.
TGA-ZSR [1] designs text-guided attention to enhance zero-shot robustness on VLMs. We will cite [1].
\
\
Below we evaluate/compare our AdvSimplex with [1] on image-level, text-level, and bi-level adversarial attacks. The average results across 15 datasets are below:
|||*Image-Level Attacks*||*Text-Level Attacks*||*Bi-Level Attacks*||
|-|-|-|-|-|-|-|-|
|**Method**|**Clean**|**PGD**|**AA**|**BERT-Attack**|**GBDA**|**Co-Attack**|**SGA**|
|TeCoA|48.83|27.33|25.75|37.14|35.30|26.73|25.94|
|PMG-FT|49.71|28.44|26.98|37.61|36.46|28.11|27.85|
|FARE|56.68|30.94|29.30|35.45|34.97|25.38|25.06|
|$\color{red}\text{TGA-ZSR [1]}$|57.54|31.15|30.41|38.07|37.32|29.30|28.58|
|**AdvSimplex**|**60.23**|**35.68**|**34.06**|**40.21**|**39.88**|**32.95**|**32.53**|
**AdvSimplex significantly outperforms other methods.**
### **3. Sensitivity w.r.t. $\lambda$.**
- $\lambda$ controls the impact of the upper bound of $\Omega({\bf x})$ when aligning adversarial predictions with clean predictions.
- Figure 4a (main paper) shows that increasing $\lambda$ enhances adversarial robustness at the cost of clean accuracy. Lowering $\lambda$ improves zero-shot clean acc. but reduces adv. robustness.
- Such a trade-off stems from jointly optimizing the natural and boundary risks.
Average results across 15 datasets w.r.t. $\lambda$ are below:
- **Uniform weighting**: $\omega_i=1/(m-1)$ is const. for all $i=1,\ldots,m-1$.
- **Adaptive weighting**: as per Eq. 13 (main paper)
|Re-weighting Strategy|$\quad\lambda\quad$|Clean Accuracy|AA-Robust Accuracy|
|-|-|-|-|
|Uniform|0.5|59.67|32.38|
|Uniform|**0.6**|**59.45**|**32.96**|
|Uniform|0.7|59.06|33.64|
|Uniform|0.8|58.72|34.09|
|Uniform|0.9|58.29|34.26|
|Adaptive|0.5|60.41|33.35|
|Adaptive|**0.6**|**60.23**|**34.06**|
|Adaptive|0.7|60.02|34.75|
|Adaptive|0.8|59.78|35.10|
|Adaptive|0.9|59.04|35.39|
In our paper, $\lambda=0.6$ in all experiments.
### **4. Formatting issue.**
We will fix this. It appears tables jumped between pages unintentionally.
### **5. Trade-off analyses between closed-form vs. sampling.**
- In addition to Figure 4b, we present further results (clean accuracy, adv. robust accuracy (AA-Robust Accuracy), training time per epoch):
|Method|No. samples per simplex|Clean Accuracy|AA-Robust Accuracy|Train Time per Epoch (hours)|
|-|-|-|-|-|
|Sampling|3|57.42|28.86|1.4|
|Sampling|10|57.63|30.37|2.6|
|Sampling|30|57.98|31.98|5.1|
|Sampling|50|58.55|33.17|8.0|
|Sampling|70|59.07|33.69|10.3|
|Sampling|100|59.61|33.92|14.0|
|Sampling|200|59.87|34.07|25.5|
|Sampling|300|60.18|34.20|36.2|
|Our **closed-form simplex (10 steps)**|$\infty$|60.23|34.06|2.9|
|Our **closed-form simplex (5 steps)**|$\infty$|58.75|33.27|1.6|
|Our **closed-form simplex (3 steps)**|$\infty$|58.36|32.19|1.2|
|Our **closed-form tetrahedron (10 steps)**|$\infty$|60.80|35.17|2.7|
|Our **closed-form tetrahedron (5 steps)**|$\infty$|59.12|33.64|1.4|
|Our **closed-form tetrahedron (3 steps)**|$\infty$|58.67|32.70|1.1|
- Increasing the number of adversaries sampled from the simplex yields robustness gains at a substantial increase in computational cost.
- Our closed-form solution effectively balances robustness and computational efficiency, providing **competitive clean and robust performance at ~$10\times$ reduced computational cost.**
### **6. Extension to image classification.**
In our paper, we also have:
- **Vision-text understanding.** Table 7 is **image-text retrieval (Flickr30k)** and **image captioning (Nocaps)**
- **Medical diagnosis.** Table 8 shows results on ChestX-ray14, CheXpert and PadChest.
For more results, table below is single-modal image classification built on TRADES with Wide-ResNet-28-10 (CIFAR-10/100):
||CIFAR-10||CIFAR-100||
|-|-|-|-|-|
|**Method**|**Clean**|**AA**|**Clean**|**AA**|
|PGD-AT|83.79|56.30|58.13|26.65|
|TRADES|85.17|57.09|59.48|26.99|
|**AdvSimplex**|**87.25**|**58.64**|**61.30**|**28.57**|
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their comprehensive rebuttal. My concerns are **well-addressed** and I will keep my score towards accepting this paper. Please add these additional experiments to the paper, which I believe will make the paper more solid. Best of luck with the rebuttal!
---
Reply to Comment 1.1.1:
Comment: Esteemed Reviewer,
\
\
Thank you for your kind message and valuable comments, which help us improve and refine our manuscript. In the meantime, if there is anything else we can answer, explain, or discuss further, kindly let us know.
\
\
Rest assured, all requested explanations and details will be added in the final paper.
Kind regards,
\
Authors | Summary: This paper tackles the challenge of making vision-language models (VLMs) more robust to adversarial attacks in zero-shot classification. Existing methods try to improve robustness by sampling adversarial examples along the decision boundary, but they come with high computational costs. To address this, the authors propose a more efficient approach using a closed-form formulation based on Taylor expansion. This allows them to approximate adversarial alignment loss without expensive second-order computations while still capturing key adversarial perturbations along the trajectory. Their method effectively simulates infinite uniform sampling over simplices, offering a computationally efficient way to strengthen VLMs against attacks.
Claims And Evidence: All claims in the paper are clearly stated and supported by theoretical proofs.
Methods And Evaluation Criteria: The proposed method is well-motivated and effectively addresses the computational challenges of adversarial fine-tuning for vision-language models (VLMs). The use of a closed-form formulation based on Taylor expansion makes sense for improving efficiency while maintaining robustness. The evaluation criteria are appropriate, with experiments conducted on diverse tasks and datasets, which provides strong empirical results for assessing the method’s effectiveness.
Theoretical Claims: Regarding the theoretical claims, the paper provides clear mathematical derivations to support its approach. The Taylor expansion and the closed-form approximation of adversarial alignment loss are well-structured and logically follow from the given assumptions. I checked the key proofs related to the approximation and alignment loss formulation, and they appear to be correct.
Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analyses, which appear well-structured and aligned with the paper’s objectives. The evaluation spans 15 datasets, providing a diverse testbed for assessing the robustness of the proposed method. The comparisons with prior adversarial fine-tuning approaches are relevant, and the reported 10× speed-up adds an important efficiency perspective.
Supplementary Material: I reviewed the experimental detail parts.
Relation To Broader Scientific Literature: This paper builds on prior work in adversarial robustness for vision-language models (VLMs) and extends research on efficient adversarial training methods. Traditional adversarial fine-tuning methods, such as those explored by Wang et al. and Lu et al., have highlighted the importance of intermediate adversarial examples along the adversarial trajectory for improving robustness. However, these approaches rely on costly second-order computations, limiting their scalability. The proposed method addresses this limitation by leveraging Taylor expansion to approximate adversarial alignment loss in a closed-form solution, eliminating the need for expensive sampling of adversarial simplices.
Essential References Not Discussed: The paper covers the essential references relevant to its topic, and I did not find any major prior work that was overlooked.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: # Response to Rev. EyNB
We thank the Reviewer for the constructive feedback.
### **1. I checked key proofs...they appear to be correct.**
Thank you.
### **1b. Why does $\gamma$ disappear in Eq. 7? (for Rev. XcU1)**
- In Eqs. 5 \& 6, the elements after the sum can be written as $(\sqrt{\alpha}+\frac{1}{2}\sqrt{\beta})^2=\alpha+\frac{1}{4}\beta+2\cdot\frac{1}{2}\gamma$, where $\gamma=\sqrt{\alpha\beta}$ and $\alpha,\beta\geq 0$.
- Use the known inequality $(a+b)^p \le 2^{p-1}(a^p+b^p)$ and set $p=2$, so $(a+b)^2 \le 2(a^2+b^2)$.
- Substituting $a=\sqrt{\alpha}$ and $b=\frac{1}{2}\sqrt{\beta}$ into the inequality gives $(\sqrt{\alpha}+\frac{1}{2}\sqrt{\beta})^2\le 2\alpha +2\cdot\frac{1}{4}\beta=2\alpha +\frac{1}{2}\beta$, which is why $\gamma$ does not appear in Eq. 7.
- This result corresponds to $\bar{\bar{\Omega}}$ in Eq. 10 which is an upper bound of Eq. 6.
- Note that, as we evaluate in the supp. material (**C.2**), Eq. 9 includes $\gamma$ but is very costly due to operations in $\mathbb{R}^{(wh)^2}$ *vs.* $\mathbb{R}^{wh}$.
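As an illustrative sanity check (our own sketch, not part of the rebuttal), the bound above can be verified numerically over random nonnegative $\alpha,\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = rng.uniform(0.0, 10.0, size=(2, 100_000))

# lhs = alpha + beta/4 + gamma, with gamma = sqrt(alpha * beta)
lhs = (np.sqrt(alpha) + 0.5 * np.sqrt(beta)) ** 2
# rhs drops gamma, via (a + b)^2 <= 2 (a^2 + b^2) with a = sqrt(alpha), b = sqrt(beta)/2
rhs = 2.0 * alpha + 0.5 * beta

assert np.all(lhs <= rhs + 1e-9)
# equality holds when a = b, i.e. alpha = beta / 4 (here alpha = 1, beta = 4)
assert np.isclose((np.sqrt(1.0) + 0.5 * np.sqrt(4.0)) ** 2, 2.0 * 1.0 + 0.5 * 4.0)
```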
### **1c. To further showcase our work, below we have additional extensions of Theorem 3.1.**
- **Theorem 3.1** can be extended to higher-order simplices, e.g., a **tetrahedron ($Q=4$ vertices)** or a **pentachoron ($Q=5$ vertices)**:
Let $Q$ be the number of vertices (${\bf z}_1,\ldots,{\bf z}_Q$) for a simplex. The closed-form expression for ${\bf\Sigma}_x$ is given as:
$\mathbb{E}[{\bf pp}^T]=\frac{1}{Q(Q+1)}\bigg[\sum\_{i=1}^Q{\bf z}\_i {\bf z}\_i^T + \Bigl(\sum_{i=1}^Q {\bf z}\_i\Bigr)\Bigl(\sum_{i=1}^Q {\bf z}\_i\Bigr)^T\bigg]$.
Proof follows the steps in the paper, e.g., parameterizing ${\bf p}=\sum\_{i=1}^Q\alpha\_i{\bf z}\_i, \\;\alpha\_i\geq 0,\\; \sum\_{i=1}^Q\alpha\_i=1$ and noting that:
- $\mathbb{E}[\alpha_i] = \frac{1}{Q}$
- $\mathbb{E}[\alpha_i^2] = \frac{2}{Q(Q+1)}$
- $\mathbb{E}[\alpha_i\alpha_j] = \frac{1}{Q(Q+1)},\\;i\neq j$
for the underlying Dirichlet distribution. Then we expand $\mathbb{E}[{\bf pp}^T]$ where ${\bf p}{\bf p}^T = \sum\_{i,j=1}^Q \alpha\_i\alpha\_j {\bf z}\_i{\bf z}\_j^T$.
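As a quick Monte Carlo check of the closed form (our own illustrative sketch, not part of the rebuttal): sampling uniformly over the simplex spanned by the vertices is equivalent to drawing the weights $\alpha$ from a flat Dirichlet distribution, so the stated moments can be verified directly.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, d = 4, 3                      # tetrahedron (Q = 4 vertices) in R^3
Z = rng.normal(size=(Q, d))      # arbitrary vertices z_1, ..., z_Q

# Closed form: E[pp^T] = 1/(Q(Q+1)) [ sum_i z_i z_i^T + (sum_i z_i)(sum_i z_i)^T ]
s = Z.sum(axis=0)
closed = (Z.T @ Z + np.outer(s, s)) / (Q * (Q + 1))

# Monte Carlo: p = sum_i alpha_i z_i with alpha ~ Dirichlet(1, ..., 1),
# i.e. uniform over the simplex spanned by the vertices
alpha = rng.dirichlet(np.ones(Q), size=500_000)
P = alpha @ Z
mc = P.T @ P / len(P)

assert np.allclose(closed, mc, atol=1e-2)
```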
### **1d.** We also have a theoretical extension linking Theorem 3.1 above with the **Hessian-vector product (HVP)** needed for fast evaluation of Eq. 10.
Evaluating the Hessian, even with **functorch**, is slow. In our paper, **we used the HVP, which never evaluates the Hessian explicitly.** The HVP computes $({\bf H}\_g\cdot {\bf v})$ very fast instead.
- To take advantage of it, Theorem 3.1 includes:
$\mathbb{E}[{\bf p}^T (H\_g\cdot{\bf p})]=\frac{1}{Q(Q+1)}\bigg[\sum\_{i=1}^Q{\bf z}\_i^T(H\_g\cdot{\bf z}\_i) + \Bigl(\sum_{i=1}^Q {\bf z}\_i\Bigr)^T\Bigl(H\_g\cdot\sum_{i=1}^Q {\bf z}\_i\Bigr)\bigg]$.
- Thus, in Eq. 10, we can substitute $\langle {\bf \Sigma}\_x, (H\_g({\bf x}))\_c\rangle^2=\Big(\mathbb{E}[{\bf p}^T ((H\_g({\bf x}))\_c\cdot{\bf p})]\Big)^2$. For a simplex with three vertices (one is ${\bf 0}$), this requires three HVP evaluations.
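The key point is that an HVP never materializes ${\bf H}_g$. As a hedged illustration (the rebuttal uses automatic differentiation, e.g. functorch; the finite-difference helper below is our own stand-in, not the authors' code), a matrix-free HVP can be sketched as:

```python
import numpy as np

def hvp(grad_fn, x, v, eps=1e-5):
    """Matrix-free Hessian-vector product via central differences of the gradient:
    H(x) @ v ~= (grad(x + eps*v) - grad(x - eps*v)) / (2*eps).
    Illustrative stand-in for autodiff-based HVPs; never forms H."""
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2.0 * eps)

# Sanity check on a quadratic g(x) = 0.5 x^T A x, whose Hessian is exactly A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A + A.T                      # symmetric Hessian
grad = lambda x: A @ x           # gradient of g is A x
x, v = rng.normal(size=5), rng.normal(size=5)

assert np.allclose(hvp(grad, x, v), A @ v, atol=1e-6)
```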
### **2. Alternative to Euclidean alignment in Eq. 1.**
We replace $\Omega({\bf x})$ in Eq. 1 with a KL-divergence variant integrated with Theorem 3.1. To this end:
- we use a different Taylor expansion $\log g({\bf x}+{\bf\delta}\_x)-\log g({\bf x})\approx J\_{\log (g+\rho)}({\bf x}){\bf\delta}\_x + \frac{1}{2}\bigl[{\bf\delta}\_x^T(H\_{\log (g+\rho)}({\bf x}))\_c\\,{\bf\delta}\_x\bigr]_{c=1}^{C}$ where $\rho=10^{-5}$ is added for the numerical stability of $\log$.
- then $\Omega\_{KL}({\bf x})=\frac{1}{\kappa}\sum\nolimits\_{{\bf\delta}\_x\in\Delta\_\mathcal{X}} KL\big(g({\bf x}) || g({\bf x}+{\bf\delta}\_x)\big)\approx-\sum\_{c=1}^C (g({\bf x}))\_c\cdot \Big[ \big\langle{\bf\mu}\_x, \mathcal{J}\_{\log (g+\rho)}({\bf x},c) \big\rangle+\frac{1}{2}\big\langle{\bf{\Sigma}}\_x, (H\_{\log (g+\rho)}({\bf x}))\_c \big\rangle \Big]$
where ${\bf\mu}\_x$ is the analytical mean of simplex, and ${\bf{\Sigma}}\_x$ is from Theorem 3.1.
### **3. Results for extensions.**
|Method|Avg. Clean|Avg. AA|Train Time per Epoch (hours)|
|-|-|-|-|
|KL Div. in Eq. 1 (sampling)|59.28|32.96|3.5|
|AdvSimplex-KL|59.13|33.20|3.0|
|SGA (standard: one adv. per ${\bf x}$)|56.84|30.08|1.3|
|SGA (3 adversaries per ${\bf x}$)|57.31|30.93 |1.4|
|AdvSimplex-SGA (simplex on 3 adv. per ${\bf x}$)|58.75|32.40|2.1|
|**AdvSimplex**|60.23|34.06|2.9|
|**AdvTetrahedron**|60.80|35.17|2.7|
In the table above:
- **AdvSimplex-KL** uses our KL div. $\Omega\_{KL}({\bf x})$ instead of bounds of Euclidean $\Omega({\bf x})$.
- **AdvTetrahedron** uses our extended Theorem 3.1 for the tetrahedron ($Q=4$). We take ${\bf x}$ and 3 additional consecutive vertices instead of 2 as in AdvSimplex ($Q=3$).
- **AdvSimplex-SGA** uses simplex formed from the final 3 adversaries of 3 adversarial paths per sample ${\bf x}$. As SGA perturbs each intermediate adversary by noise, running gradient ascent 3x produces 3 distinct paths.
- **SGA (3 adv.)**: we generate 3 adv. paths as in AdvSimplex-SGA and use the final 3 adversaries directly for robustification.
- **SGA (standard)**: we use the final adversary from a single adv. path for robustification.
SGA:
> Set-level Guidance Attack..., ICCV'23. | null | null | null | null | null | null |
Discovering Latent Causal Graphs from Spatiotemporal Data | Accept (poster) | Summary: This manuscript analyzes spatiotemporal data with the goal of finding a temporal structural causal model of underlying latents. This framework could be used for causal discovery of earth systems. The manuscript includes proof of identifiability of reasonable assumptions, and provides empirical results on real and synthetic data.
Claims And Evidence: I do not find the case study on the hybrid real-synthetic earth system overly convincing, as it is not clear that the inferred shapes with clear boundaries are realistic based on earth systems, nor was prediction really considered.
Methods And Evaluation Criteria: One thing that is clearly missing in the evaluation is how well these models predict. An ideal model would both discover potential causal relationships but also explain a significant amount of variance in the system. In other words, if a system loses a lot of predictive ability, I would have a difficult time accepting its underlying latent model. As such, I would encourage the authors to clearly describe how good their predictions are and compare this to the field at large.
Theoretical Claims: The theoretical claims appear correct and the basis of the theory is well-founded in the literature.
Experimental Designs Or Analyses: As described above, I would like to see results on prediction.
An additional consideration is the impact of spatial factor shape on performance. It is clear that SPACY outperforms competing approaches on the synthetic data and is robust to mild perturbations of the kernel. However, in practice I would expect more complex, non-isotropic shapes, whereas the data and kernels are both isotropic, and I would like to see a clearer description of that result.
The claims on the climate datasets are too strong. For instance, the relationships shown in Figure 6 are interesting in that they recover known relationships, but the strong prior on the shape of spatial factors hinders interpretation. In particular, the physics of the Earth system requires that the relationship pass through the intermediate spatial locations, which is clearly missed here. As such, it is difficult to assess whether these conclusions are good or oversimplified. Part of this could be addressed by assessing forecasting strength, to show how much information the model actually captures.
Additionally, the claim that the SPACY factors are easier to interpret than the Varimax-PCA is difficult to evaluate, as the Varimax-PCA factors show known relationships that respect geographical changes in the earth.
Supplementary Material: I read through all of the supplemental materials. I went through the setup of the theoretical claims to see if it matched my expectations (it did), but did not go through every line.
Relation To Broader Scientific Literature: The claims seem appropriate given the contributions of the paper, albeit the evidence for the 3rd claim (empirical) is weaker than I would like.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Overall, I like that this is approaching a novel problem and attempting to make insights on complicated systems.
Other Comments Or Suggestions: Some other minor comments and suggestions:
The SCM does not account for time in its initial definition in Eq. (3), which is confusing. It looks like the preliminaries define these as independent processes, which then move to a temporal process in the latent SCM. This should be revised and clarified.
$\ell$ is not defined in Section 3. I assume it is the spatial index, but this should be explicitly stated.
$G_{j,d}^k$ is not fully defined in Eq. (2).
The use of Rhino is not fully motivated or elucidated.
The definition of space used seems suboptimal. Why not just define space on the surface of a sphere?
The causal graph used in the method is not restricted to or encouraged to be a DAG, whereas an SCM is. This choice should be discussed.
The generation of the synthetic data is not clear enough. For instance, Erdős-Rényi graphs are not typically directed; please describe how directed graphs are obtained given the SCM setup. Additionally, the generation through time should be clarified.
Questions For Authors: Primarily, I want the authors to provide more details on robustness to non-isotropic shapes, as in real-world systems, and address how much variability the model captures of the real data.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for a thorough and insightful review.
**Predictive Performance Evaluation**: Our evaluation is consistent with the literature of causal discovery. As noted or observed in prior works [1, 2, 3] causal discovery/representation learning differs from prediction/forecasting. Causes may not explain a large portion of the effect’s variance. For example, let $X \sim N(0, 1)$ and $Y = 0.2X + E$ with $E \sim N(0, 1)$. X has a significant causal effect on Y yet explains only $\frac{0.2^2}{0.2^2 + 1} \approx 3.8 \\%$ of Y’s variance.
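The worked example above (a strong causal effect that explains little variance) can be reproduced empirically; this is our own illustrative sketch, not part of the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(0.0, 1.0, n)
Y = 0.2 * X + rng.normal(0.0, 1.0, n)   # X causes Y, yet explains little of Var(Y)

r2 = np.corrcoef(X, Y)[0, 1] ** 2       # fraction of Var(Y) explained by X
analytic = 0.2**2 / (0.2**2 + 1.0)      # = 0.04 / 1.04, about 3.8%

assert abs(r2 - analytic) < 2e-3
```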
Nevertheless, following the reviewer’s suggestion, we modify SPACY to predict one-step ahead $\widehat{X}^{t+1}$ using ${X}^{t}... {X}^{t-\tau}$. We edit the SCM module to autoregressively predict the latent values $\widehat{Z}^{t+1}$ from ${Z}^{t}... {Z}^{t-\tau}$ following the topological ordering of the inferred causal graph and then decode these to obtain the prediction. Similarly, we use the learned transition prior in LEAP to predict one step ahead. We report the results for our method on both the [synthetic](https://pasteboard.co/C0spUI2cWDJS.png) (Nonlinear SCM and Nonlinear spatial mapping) and the [climate](https://pasteboard.co/tH88MFprECMB.png) dataset.
Our results indicate that SPACY captures most of the variance in the synthetic dataset. In the climate dataset, where complex exogenous factors dominate, the explained variance, though significant, is lower—consistent with expectations. LEAP shows a similar pattern, with lower performance than SPACY. In summary, while predictive performance is relevant in some contexts, our primary contribution is causal representation learning which we evaluate accordingly.
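The one-step-ahead procedure described above can be sketched as follows, assuming for illustration a purely linear, lag-only latent SCM (the actual model is nonlinear and also handles instantaneous edges via the topological ordering of the inferred graph; all names below are hypothetical, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
D, tau_max, L = 4, 2, 16                  # latents, max lag, spatial grid points
W = [rng.normal(scale=0.3, size=(D, D)) for _ in range(tau_max)]  # lagged SCM weights
F = rng.normal(size=(L, D))               # linear spatial mapping (decoder)

Z_hist = rng.normal(size=(tau_max, D))    # Z_hist[0] = Z^t, Z_hist[1] = Z^{t-1}

# Predict the next latent state from the history, then decode to the grid.
Z_next = sum(W[k] @ Z_hist[k] for k in range(tau_max))   # \hat{Z}^{t+1}
X_next = F @ Z_next                                      # \hat{X}^{t+1}

assert Z_next.shape == (D,) and X_next.shape == (L,)
```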
**Non-Isotropic Spatial Factors**: Our experiments use isotropic and elliptical anisotropic (Section B.3) spatial kernels, but SPACY’s theoretical framework supports fully anisotropic kernels, as our identifiability guarantees require only mild conditions about the spatial factors. We used RBF kernels since they parsimoniously capture high-level features. However, our framework can also be used with more expressive kernels.
To assess SPACY’s robustness to anisotropy, we conduct an additional ablation study in which the spatial factors are irregular and anisotropic. We introduce anisotropy by applying a sinusoidal warp to the spatial coordinates before using an anisotropic RBF kernel to generate data. We use isotropic RBF kernels with SPACY to model the synthetic data (Nonlinear SCM and Nonlinear spatial mapping setting). [Results](https://pasteboard.co/dKiKosZtDxdJ.png)
Our findings show that even with isotropic RBF kernels, SPACY can approximate each latent variable’s location and scale and recover the causal graph. We will add these results to our paper.
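A minimal sketch of how such an anisotropic factor could be generated, assuming a sinusoidal coordinate warp followed by an axis-aligned anisotropic RBF (the function name and parameters are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def warped_rbf_factor(grid, center, lengthscales, warp_amp=0.15, warp_freq=3.0):
    """One anisotropic spatial factor on a 2D grid: coordinates are warped
    sinusoidally (each axis warped by the other), then passed through an
    axis-aligned anisotropic RBF. Illustrative sketch only."""
    warped = grid + warp_amp * np.sin(warp_freq * np.pi * grid[..., ::-1])
    diff = (warped - center) / lengthscales
    return np.exp(-0.5 * np.sum(diff**2, axis=-1))

# 32x32 grid on [0, 1]^2
xs = np.linspace(0.0, 1.0, 32)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)
factor = warped_rbf_factor(grid, center=np.array([0.5, 0.5]),
                           lengthscales=np.array([0.3, 0.1]))
assert factor.shape == (32, 32) and factor.max() <= 1.0
```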
**Climate Dataset Claims**: We appreciate the reviewer’s concern regarding spatial factor interpretability. However, many key atmospheric processes (the Walker circulation, monsoonal systems, and ENSO teleconnections) exhibit strong localization due to the interplay between heterogeneous boundary conditions (e.g., land-sea contrast, topography) and localized forcings. Despite using isotropic kernels, our method captures these localized relationships, offering greater physical interpretability than traditional approaches (e.g., principal components or EOFs) that may obscure spatial meaning through averaging. Additionally, our framework avoids diffuse spatial factors, linking causal links to identifiable physical processes.
**SPACY vs Varimax Interpretability**: SPACY’s spatial factors enforce geographic coherence by linking causal variables to interpretable regions, unlike Varimax’s scattered loadings. Visualizations show that many nodes recovered by Varimax are diffuse, uninterpretable, and lack clear physical locations, with clusters (e.g., nodes 42, 44) suggesting similar underlying components.
[Precipitation](https://pasteboard.co/PV7yQmWl3Xh6.png), [Temperature](https://pasteboard.co/jVaQSkHfqFMS.png)
**Other Suggestions:** We will clarify the notations and synthetic data generation process. To address some concerns:
- Any differentiable temporal causal discovery algorithm can be used with SPACY. We chose Rhino due to its flexibility and strong identifiability guarantees.
- SPACY operates with general metrics defined on $[0, 1]^K$. We can model diverse scenarios, such as the spherical grid with Haversine distance in the climate experiment.
- Following [4], we use a DAGness constraint for the instantaneous matrix (Appendix C.1).
**References**
[1] Shmueli, Gt. "To explain or to predict?." (2010)
[2] Lee, SY, et al. "Assessment of the predictive power of a causal variable: An application to the Head Start impact study." (2022)
[3] Chauhan, R.S, et al. "Causation versus prediction in travel mode choice modeling." (2025)
[4] Gong, W., et al. "Rhino: Deep Causal Temporal Relationship Learning with History-dependent Noise." (2022) | Summary: The paper presents a causal discovery method called SPACY for structural causal models over spatiotemporal data embedded in grids. An identifiability theory is developed, and experiments compare to both structure-learning and causal representation learning models on both synthetic data and climate data.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proofs are in the appendix and so I haven't checked them.
Experimental Designs Or Analyses: I have checked the validity of these experimental designs, and they live up to the standard for the generative modeling field. The authors have also added more experiments with additional datasets in the course of their response to reviewers.
Supplementary Material: no
Relation To Broader Scientific Literature: The paper embeds itself well in the broader literature via a related works section. As part of the author response, the authors have agreed to include the additional reference I recommended as part of their citations on RBF kernels.
Essential References Not Discussed: Handled in author response.
Other Strengths And Weaknesses: Like several other papers submitted to ICML this year, the paper has the strength of giving novel identifiability theorems, which can help to make up for having only one non-synthetic experiment.
Other Comments Or Suggestions: I thank the authors for their engagement during the review and discussion process.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and for pointing us to the relevant prior work [1]. We will revise the manuscript to include this citation in the discussion of RBF kernels.
We envision SPACY being applicable across various fields—for instance, in neuroscience for analyzing brain imaging data, or in epidemiology for identifying patterns in disease spread. We plan to include a discussion highlighting these potential applications, while leaving their in-depth exploration for future work.
**References**
[1] Sennesh, Eli, et al. "Neural topographic factor analysis for fmri data." Advances in Neural Information Processing Systems 33 (2020): 12046-12056. | Summary: This paper proposes a spatiotemporal causal discovery framework named SPACY based on the method of variational inference. SPACY introduces spatial kernel functions, aggregates spatially adjacent points by utilizing these kernel functions, maps the observed time series to latent representations, and discovers the causal structure in the low-dimensional latent space. Meanwhile, it proves the identifiability of the model in the continuous spatial domain, and verify the effectiveness of the proposed method in synthetic data and real-world data.
Claims And Evidence: The claims are well supported by the theoretical analysis and extensive experiments.
Methods And Evaluation Criteria: The proposed method is generally sound.
Theoretical Claims: The theoretical claims are well-discussed. However, as I am not familiar with the topic, I cannot ensure the proof is correct.
Experimental Designs Or Analyses: Experimental designs and analyses are sound.
Supplementary Material: I skipped the proof section in the appendix. The identification of the representation may depend on the theory of nonlinear ICA; I am not an expert in this.
Relation To Broader Scientific Literature: The proposed method is important for the identification of causal representation in the time series data.
Essential References Not Discussed: I am not sure if all key references are well-discussed.
Other Strengths And Weaknesses: Strengths:
1. The paper is clearly written and well-organized.
2. The paper proves the identifiability of the latent factors in the continuous spatial domain without relying on traditional assumptions, such as the sparsity of the causal graph.
3. The proposed method can explain the causal relations in the Global Climate Dataset, which shows a meaningful real-world application.
Weaknesses:
1. The assumption of Gaussian noise in the latent structural causal model (SCM) appears relatively restrictive, and the variational inference approach hinges on this condition. I am not sure if the proposed method is sensitive to this condition.
2. Theorems 1 and 2 establish the identifiability of the latent representations, yet the intuition behind identifying the causal structure among latent factors remains unclear. Could the authors provide a clearer explanation or illustrative example to bridge this gap?
3. The paper assesses partial robustness via hyperparameter experiments (e.g., overparameterization of D), but the question of how to systematically determine hyperparameter values to balance model complexity and performance remains unresolved. Further clarification or guidelines would strengthen this aspect.
Other Comments Or Suggestions: NAN
Questions For Authors: See Weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review and thoughtful feedback.
**Noise at latent vs. observation space**: We clarify that there are **two separate additive noises** at latent and observation space respectively. The Gaussian noise assumption applies only to the **observation space**, not the latent SCM. The identifiability guarantees in Theorems 1–2 do not require explicit assumptions on the noise type in the latent SCM. Empirically, we validated our method on data with non-Gaussian and history-dependent noise (see experiments in Section 5). We will emphasize this distinction in the revised manuscript for better clarity.
Furthermore, the Gaussian assumption in the observation space can be relaxed quite easily (see Step 1 in Appendix B.2.2 from [1]). We can also relax the assumption of Gaussian noise in Theorems 1 and 2 using the aforementioned technique in the updated manuscript.
**Causal Structure Identification**: Theorems 1 and 2 focus on establishing the identifiability of the latent variables from observational data. **Once the latents are identified, any causal discovery algorithm with identifiability guarantees (such as Rhino, which we used in our paper) can recover the causal graph**, since the causal relationships are encoded in the conditional independence relationships of the latent variables. We will clarify this in the Theory section of our paper.
**Choice of hyperparameters**: We thank the reviewer for highlighting this practical concern. Our experiments (Section 5.3) demonstrate that overparameterization of the latent dimension D does not degrade performance, allowing users to err on the side of overestimation. To provide systematic guidelines for using our model, we will:
1. Add a recommendation in Section 5.3 to select D larger than the anticipated latent dimensionality.
2. [Include an additional experiment with a plot demonstrating the performance trend based on different levels of over-parameterization](https://pasteboard.co/Kowd0wCMoLxW.png). The results indicate that overparameterization does not significantly degrade the performance of causal discovery and latent representation inference and, in fact, offers flexibility when there is uncertainty about the true latent dimensionality.
**References**
[1] Khemakhem, Ilyes, et al. "Variational autoencoders and nonlinear ica: A unifying framework." International conference on artificial intelligence and statistics. PMLR, 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. I am satisfied, as most of my questions have been addressed.
I will keep my score leaning towards acceptance. | Summary: The submission present SPACEY a spatio-temporal causal model which is specified over a reduced latent representation.
The authors show that (in the continuous grid case) the model is partially identifiable; moreover, the same insights should apply to the finite-grid case when the number of locations is large.
The model is implemented with a non-linear SCM and a non-linear pointwise mapping plus linear mixing between spatial locations.
Inference on the defined model is performed using variational inference.
Claims And Evidence: The main claims are about identifiability, and proofs are provided in the appendix (which I did not check).
Claims about superiority of the model are supported by simulation experiments.
Methods And Evaluation Criteria: Yes the proposed simulation experiments and climate examples make sense.
Theoretical Claims: Proofs are presented in the supplementary materials; I did not check their correctness.
Experimental Designs Or Analyses: Yes, I checked the validity of the simulation experiments and the setting is correct; I did not find any issues.
Supplementary Material: Yes the ones specifying Experiments Details (sec C).
Relation To Broader Scientific Literature: I think the paper lacks a thorough discussion of the relationship between the proposed approach and classical multi-level Bayesian models.
The variational inference used in the submission is an approximation of Bayesian inference, and the specification of the SPACY model resembles a Bayesian multilevel model for spatial data. This has already been applied to a very similar setting through the so-called Dynamic Causal Model (DCM).
For instance check the following references:
- Friston, Karl J., Lee Harrison, and Will Penny. "Dynamic causal modelling." Neuroimage 19.4 (2003): 1273-1302.
- Friston, Karl J., et al. "Dynamic causal modelling of COVID-19." Wellcome open research 5 (2020): 89.
- Stephan, Klaas Enno, et al. "Ten simple rules for dynamic causal modeling." Neuroimage 49.4 (2010): 3099-3109.
- Friston, Karl, Rosalyn Moran, and Anil K. Seth. "Analysing connectivity with Granger causality and dynamic causal modelling." Current opinion in neurobiology 23.2 (2013): 172-178.
DCM solves a similar problem with a very similar approach, and I think a deep discussion of the similarities and differences between the proposed approach and DCM is lacking.
Essential References Not Discussed: Yes; for instance, see the previous response:
- Friston, Karl J., Lee Harrison, and Will Penny. "Dynamic causal modelling." Neuroimage 19.4 (2003): 1273-1302.
- Friston, Karl J., et al. "Dynamic causal modelling of COVID-19." Wellcome open research 5 (2020): 89.
- Stephan, Klaas Enno, et al. "Ten simple rules for dynamic causal modeling." Neuroimage 49.4 (2010): 3099-3109.
- Friston, Karl, Rosalyn Moran, and Anil K. Seth. "Analysing connectivity with Granger causality and dynamic causal modelling." Current opinion in neurobiology 23.2 (2013): 172-178.
Other Strengths And Weaknesses: The paper is generally well written, even if I feel the authors should make more effort to analyze their proposed method with respect to already existing methodologies.
Other Comments Or Suggestions: - check capitalization in references, e.g. Bayesian
Questions For Authors: How does the proposed approach relate to DCM? What are the specific differences and similarities? Is DCM not applicable in the simulation experiments or the climate example?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback.
We agree that both DCM and SPACY formulate the problem of inferring causal relationships between latent variables as a multi-level Bayesian model. However, they differ significantly in their assumptions and approaches.
DCMs aim to infer the causal relationships of interactions in a **dynamical system** with potentially cyclic relationships. In contrast, SPACY uses the **structural causal model** [1] framework, where the causal graph is constrained to be acyclic. Moreover, DCM assumes that the parameters of the forward model (i.e., the relationship between the latent and observable variables) are known a priori. SPACY, on the other hand, infers the parameters of the forward model, as well as the correspondence between the observed variables and latent variables, by fitting neural networks to the observed data. This latent-space approach enables SPACY to explore causal relationships without relying on prior mechanistic assumptions or predefined dynamical models, making it suitable for application to high-dimensional data with unknown dynamics. Algorithmically, DCM uses the EM approach for inferring the structure, while SPACY uses Variational Inference.
We will expand the related work section by including the references listed by the reviewer, and the above discussion. We will also fix the capitalization issue in the references.
**References**
[1] Pearl, Judea. "Causal inference." Causality: objectives and assessment (2010): 39-58. | null | null | null | null | null | null |
Compute or Load KV Cache? Why Not Both? | Accept (poster) | Summary: The authors propose a simple but effective method for KV cache prefilling: utilizing both GPU computation and I/O loading to get the best of both, maximizing the effective loading speed at no extra cost.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Evaluation is mainly based on the latency metrics (wall clock time).
I don't see any inappropriate evaluation.
Theoretical Claims: No theory claim; infra paper.
Experimental Designs Or Analyses: It's sound enough. I like the overhead analysis, which further proves the usability of Cake.
One simple concern: the authors didn't analyze the performance gains under memory-bound and compute-bound conditions by varying the batch size, which would be an interesting study to fully understand the potential of Cake under different batch sizes and model sizes.
Supplementary Material: yes
Relation To Broader Scientific Literature: it's related to the computation efficiency of Language models, including the deployment of LLMs, VLMs, and any LM archs with a heavy prefilling overhead.
Essential References Not Discussed: no
Other Strengths And Weaknesses: discussed above.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your interest in Cake and for providing insightful feedback. We value your constructive comments and are pleased to address your concerns.
In our original experiments, we adopted the inter-token-latency-optimized configuration in vLLM v0.6.2, where the max number of batched tokens is 512. We agree that exploring different batch sizes offers valuable insight into Cake’s adaptability under memory-bound and compute-bound conditions. Below are our newly added experiment results of Cake’s performance gain over the compute-only and I/O-only prefill on different numbers of batched tokens and different model sizes. We will include such results in our revised version.
| BatchSize | BW     | LongAlpaca-7B | LongAlpaca-13B |
|-----------|--------|---------------|----------------|
| 64        | 32Gbps | 1.47\3.47     | 1.47\3.55      |
| 128       | 32Gbps | 1.83\2.40     | 1.79\2.45      |
| 256       | 32Gbps | 2.06\2.09     | 1.96\2.20      |
| 512       | 32Gbps | 2.20\1.94     | 2.03\1.91      |
| 1024      | 32Gbps | 2.15\1.70     | 2.18\1.91      |
| 2048      | 32Gbps | 2.31\1.71     | 2.04\1.70      |
**Table r1. Speedup of Cake over I/O-only \ Compute-only method under different batch sizes of tokens and different models. Hardware: 1xA100, Seq-len: 16k**
As shown in Table r1, Cake achieves an average Time-to-First-Token (TTFT) speedup of 1.96x over I/O-only methods and 2.25x over compute-only methods across various batch sizes. At smaller batch sizes, KV cache computation is memory-bound, underutilizing compute units. As batch size increases, it shifts toward compute-bound, boosting GPU efficiency. Generally, the speedup over I/O-only methods is higher with larger batch sizes, reflecting Cake’s increasing reliance on computation. These results highlight Cake’s ability to adapt to various compute efficiencies caused by different batch sizes, automatically optimizing TTFT.
Thanks again for your suggestion, and we are happy to address any further concerns. | Summary: The paper introduces a KV cache loading system called Cake, that optimizes computation and I/O in parallel to speed up LLM inference. It uses bidirectional scheduling for efficient resource use and adaptive scheduling to handle varying workloads. Evaluations show Cake cuts TTFT by 2.6×, making it a practical solution for long-context LLMs.
Claims And Evidence: The main claim made by this paper is "Cake is the first system to demonstrate that efficiently utilizing both computational and I/O resources can optimally reduce TTFT in long-context prefix caching scenarios."
In my opinion, this statement is clear and supported by experimental results.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem considered.
I also really enjoy reading the analysis of the design section and the insights of Cake
Theoretical Claims: No theoretical claims in this paper
Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental designs.
Supplementary Material: No
Relation To Broader Scientific Literature: TTFT is a crucial SLO in the LLM serving area, and scheduling long-context requests is a very important problem.
Essential References Not Discussed: No
Other Strengths And Weaknesses: In general, I like this paper. It is simple and intuitive
The weakness in my mind is that there is no discussion on how this approach interacts with PD disaggregation, which is widely used in practice. In my understanding, even though the KV cache is computed, it still needs to be transferred to the decode server through RDMA.
Other Comments Or Suggestions: I suggest introducing more details regarding chunked prefill, since Cake is built upon it and many in the ICML audience may not be familiar with it.
Questions For Authors: See the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback and kind words about our paper. Below, we address your suggestions regarding the interaction of Cake with Prefill-Decode (PD) disaggregation. We will add the following discussion to the revised version.
* Cake is compatible with Prefill-Decode Disaggregation with a minor modification. The typical workflow of Prefill-Decode Disaggregation involves routing a request to a prefill server to generate the KV cache, which is then streamed to a decode server. Cake can fit into this scenario by splitting its bidirectional process across these servers: the prefill server handles computation, generating the KV cache from the sequence’s start and streaming it to the decode server; On the decode server, Cake simultaneously manages two streams—one is the I/O loading stream, fetching the existing KV cache from disks starting from the last tokens, and the other is the streaming process from the prefill server, starting from the first tokens—until the two processes meet in the middle on the decode server, meaning all KV cache is ready for decoding.
* Thank you for your suggestions to provide more details on chunked prefill. We will add more details in the background Section in our revised version. | Summary: This paper introduces Cake, a hybrid KV cache loading system that leverages both I/O resources (for loading) and computational resources (for re-computation). The authors observe that both I/O-only approaches (e.g. LMCache) and compute-only approaches (e.g. vLLM) fall short in practice in terms of minimizing TTFT when loading from local or remote disks. Inspired by this, Cake uses both I/O and computation in parallel when loading from disks (which are high-capacity but low-bandwidth). Evaluations show that Cake can reduce TTFT by 2.6x on average for long-context prefix caching scenarios.
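As a toy illustration of the disaggregated split described in the first bullet (our sketch; the function and rates are hypothetical, not part of Cake's API):

```python
# Hypothetical illustration: the prefill server streams computed KV cache
# from the head of the context at rate r_stream, while the decode server
# loads cached KV from the tail at rate r_load. If both start together,
# they meet where the covered fractions balance.
def meeting_fraction(r_stream, r_load):
    """Fraction of the context covered by the compute/stream side."""
    return r_stream / (r_stream + r_load)

# Equal rates meet mid-context; a faster loader pushes the merge point
# toward the head, so less of the cache has to be recomputed.
assert meeting_fraction(1.0, 1.0) == 0.5
assert meeting_fraction(1.0, 3.0) == 0.25
```

The split point is never configured explicitly; it is wherever the two streams happen to meet, which is what makes the disaggregated variant inherit Cake's adaptivity.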
Claims And Evidence: Most of the claims made by the authors are supported by citations or experiment results. Please see "Questions For Authors" for details.
Methods And Evaluation Criteria: The paper features solid and comprehensive evaluation of the proposed method, Cake.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The paper features solid and comprehensive evaluation of the proposed method, Cake.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper lies within the broad area of KV cache management systems for language model inference. The authors focus primarily on acceleration of KV cache loading from disks, which can be helpful to cloud providers in the case of prefix-cache loading or prompt-cache loading.
Essential References Not Discussed: As far as I know, there is no missing related work (in the area of KV cache loading / prefix caching systems) that remain undiscussed in this paper.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Thank you for submitting this paper to NeurIPS. The core idea of this paper, leveraging I/O and compute in parallel when loading KV cache from disks, is well presented and makes sense. Experiments are also solid and comprehensive.
I have two major questions/concerns.
1. In reality, I/O (network) and compute capability can vary dramatically from time to time, even with the same set of local/remote hardware, depending on if you have competing workloads, network variance, temperature of hardware, etc. Is your system assuming a fixed config of I/O and compute capability, pre-measured given the hardware stack? If this is the case, do you have real-time profiling that adjusts the capabilities so that your scheduling works well for the real-world scenario? Otherwise it seems very likely that sub-optimal scheduling decisions will occur.
2. Do you think your system would work for CPU->GPU loading with minor adaptation? If not, what do you think is the major challenge (or maybe loading is all you need in that case)? Is it because disk->CPU/GPU might be the only case where re-computation can be sometimes more efficient (in terms of TTFT) than direct loading?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your insightful feedback, and we are glad to address your concerns.
1. **Dynamic I/O, workloads, etc.** Cake’s parameter-free design allows it to automatically adapt to dynamic conditions, such as fluctuating network bandwidth or GPU performance, as discussed in Part 3 of Section 4. This is a key strength, as shown in Figure 5 of Section 5.6, we conducted an experiment on dynamic computational power and network bandwidth. Under severe fluctuations, Cake is still able to find the optimal merging point automatically and maximize the utilization of dynamic resources. The reason behind it is that Cake does not rely on any offline profiling or any static parameters. The optimal utilization of compute and I/O is achieved by our bidirectional KV cache generation mechanism, demonstrated in Figure 2. Cake starts KV cache computing and loading from opposite directions, chunk by chunk; the two threads run in parallel asynchronously and merge at a certain point in the middle of the context. A sudden network traffic burst may slow down the I/O transfer, then the two threads will merge at a position closer to the end of the context. Similarly, if a temperature spike reduces GPU frequency and slows down the computing progress, the threads will meet at a position closer to the head of the context. The scheduler doesn’t need to change anything caused by a certain period of exceptional condition, it only needs to interrupt the two threads if the last computed chunk includes a loaded token‘s cache and switch request status to the decoding stage. This design ensures that Cake always achieves the optimal Time-To-First-Token as it fully leverages the available I/O and computing capability, even though they are dynamic.
2. **Potential application of CPU-GPU loading.** Cake can be applied between CPU and GPU with minor adjustments. Yet, there are two reasons why we don’t suggest applying Cake between host memory and GPU. First, as we show in Figure 3, for a context length of 32k, the KV cache computing speed of an H100 is approximately equivalent to 4GB/s I/O transfer. However, H100 supports PCIe 5.0x16, which provides up to 64GB/s host-to-device transfer bandwidth per GPU. This huge gap makes re-computing less efficient, and loading all the cache is a better option. Second, compared with SSD and remote disks, host memory is more expensive, and its storage is limited. As is profiled by AttentionScore, 80% of cache hits happen on the disk level, and we believe optimizing it will bring broader benefits to the modern LLM serving system.
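To illustrate why no static parameters are needed, here is a toy simulation of the bidirectional merging described in point 1 (our illustrative sketch; the function name and per-chunk timings are hypothetical, not Cake's actual scheduler):

```python
# Toy simulation (our sketch, not Cake's implementation) of the bidirectional
# mechanism: one thread computes KV cache chunks from the head of the context
# while another loads cached chunks from the tail; both advance greedily
# until every chunk is covered.
def merge_point(n_chunks, compute_chunk_s, load_chunk_s):
    """Return (chunks computed, chunks loaded, time until all chunks ready)."""
    t_compute = t_load = 0.0
    computed = loaded = 0
    while computed + loaded < n_chunks:
        # Advance whichever stream would finish its next chunk sooner;
        # this greedy rule is what makes the split point adapt with no
        # tuned parameters.
        if t_compute + compute_chunk_s <= t_load + load_chunk_s:
            t_compute += compute_chunk_s
            computed += 1
        else:
            t_load += load_chunk_s
            loaded += 1
    return computed, loaded, max(t_compute, t_load)

# Slower compute (2 s/chunk) vs. faster I/O (1 s/chunk): the merge point
# lands closer to the head, and the total time beats either side alone.
print(merge_point(100, compute_chunk_s=2.0, load_chunk_s=1.0))
```

If a bandwidth burst or a GPU slowdown changes one side's per-chunk time mid-run, the same greedy rule simply shifts where the two threads meet, mirroring the adaptivity argument above.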
We appreciate your interest in Cake and your insightful reviews. We are happy to address any further questions about Cake.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their detailed and helpful response. I have also read all other reviewers' comments in detail.
Overall, I like the research direction this work opens (leveraging both I/O + compute for efficient KV cache management). I can foresee related ideas like I/O+compute for loading from heterogeneous storage, e.g. CXL memory, serverless storage, etc., or better policy design under the framework of Cake. It might require some additional effort if you're sending something related to OSDI/SOSP though.
I decide to raise my score to 4. The main reason is because (1) my concerns are addressed, and (2) the authors actually integrate Cake into LMCache and vLLM --- So hopefully the impact of the idea will go beyond the paper itself.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind words and thoughtful engagement with our work. We’re excited that you see potential in Cake’s direction and future applications, and we sincerely appreciate your feedback throughout this process. | Summary: This paper introduces Cake, a novel KV cache loading system that optimally utilizes both computational and I/O resources in parallel, with a bidirectional scheduling strategy, and an adaptive scheduling mechanism. The proposed method can be seamlessly integrated in existing methods, with better TTFT.
Claims And Evidence: Most of the claims and evidence in the paper are convincing.
Methods And Evaluation Criteria: Most of the evaluation criteria are good.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: 1. The experiment designs of this paper are good.
2. It would be better if the authors conducted experiments on more mobile devices, such as GPUs for laptops (4090, 3090) and GPUs for mobile phones.
Supplementary Material: I have checked all the supplementary materials of this paper.
Relation To Broader Scientific Literature: I did not find any specific relation between this paper and broader scientific literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. The most significant strength of this paper is that the proposed method does not bring any harm to model accuracy, making it useful in most applications.
2. Please discuss whether the proposed method conflicts with other acceleration methods for KV cache and decoding, such as speculative decoding, multi-token prediction, KV cache quantization, and eviction. Intuitively, if these methods are applied, the improvements of Cake may be limited.
3. Please discuss whether the proposed method is compatible with other system-level acceleration methods, especially for the methods that separate the prefilling and decoding stages in different hardware.
4. Please discuss the performance of Cake when multiple GPUs and even multiple GPU nodes are employed. Does distributed inference impact Cake?
5. Typos: There is no ``.'' in Table 4 and Table 2.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and feedback on our paper. We are pleased to address your concerns regarding the compatibility and performance of our proposed method, Cake, with other acceleration techniques and system-level optimizations. We will add the following discussion to the revised version.
* **Compatibility with other acceleration methods.** Thanks to the simple and elegant design, Cake is orthogonal and complementary to the broad acceleration methods you mentioned, including speculative decoding, multi-token prediction, KV cache quantization, eviction, and Prefill-Decode disaggregation, making Cake very flexible to deploy. These methods can be categorized into three main classes:
  * **Techniques to Accelerate LLM Decoding Throughput**:
    * Methods like speculative decoding and multi-token prediction are focused on optimizing the decoding stage, e.g., by leveraging parallel generation opportunities to address the inefficiencies of autoregressive decoding.
    * Cake, however, targets the prefill stage, i.e., generating the KV cache for the decode stage. Since the prefill and decoding stages are distinct, there is no conflict between Cake and these decoding-focused methods. They can be used together to enhance overall inference efficiency.
  * **KV Cache Size Reduction Methods**:
    * Techniques such as quantization and eviction aim to reduce the memory footprint of the KV cache. Cake regards the KV cache as simple data, and it does not restrict the size of the KV cache or how the KV cache is computed. Therefore, these methods can be seamlessly integrated with Cake.
    * Actually, we discussed the integration of KV cache compression methods in Section 5.5 of our paper, where we evaluated Cake’s performance with compression techniques, showing that they are orthogonal and can complement Cake’s bidirectional scheduling strategy. This synergy could lead to additional reductions in TTFT, though the extent of improvement may vary depending on specific implementations and resource constraints.
  * **System-level Acceleration Methods**:
    * **Prefill-Decode Disaggregation.** Cake is compatible with Prefill-Decode Disaggregation with a minor modification. The typical workflow of Prefill-Decode Disaggregation involves routing a request to a prefill server to generate the KV cache, which is then streamed to a decode server. Cake can fit into this scenario by splitting its bidirectional process across these servers: the prefill server handles computation, generating the KV cache from the sequence’s start and streaming it to the decode server; On the decode server, Cake simultaneously manages two streams—one is the I/O loading stream, fetching the existing KV cache from disks starting from the last tokens, and the other is the streaming process from the prefill server, starting from the first tokens—until the two processes meet in the middle on the decode server, meaning all KV cache is ready for decoding.
    * **Distributed inference.** We have evaluated Cake’s performance with tensor-parallelism using two GPUs, as detailed in Table 3 and Table 4 of our paper. The results demonstrate that Cake can effectively utilize the increased computational capabilities provided by multiple GPUs within a node. By distributing the model across GPUs, the computational power available for Cake’s bidirectional scheduling improves, allowing it to optimize TTFT more efficiently. For distributed inference settings, where multiple inference engines are deployed across various nodes in the data center and requests are routed to different engines for processing, Cake can still be applied within each inference engine. Each engine can independently optimize its own inference latency using Cake, ensuring that the method remains effective even in large-scale, distributed environments.
* **About mobile devices**: Cake is designed for long-context, prefix-caching-enabled LLM inference systems and is beneficial as long as simultaneous KV computation and I/O KV cache loading is possible. The enterprise-level data center is the best use case because of its sufficient GPU computational power, GPU memory capacity, and I/O bandwidth. The idea of Cake could also be applied to mobile devices, but the limited computation power and memory space limit the model size or context length, so mobile computing is not the ideal use case. Cake’s deployment on mobile devices is promising but needs future work to address engineering challenges.
* Thank you for your careful reading and pointing out the typos. We will correct them in the final version to ensure clarity. | null | null | null | null | null | null |
Losses for Deep Probabilistic Regression | Reject | Summary: The paper claims to be guided by the question: "What is the best probabilistic regression method?". In particular, it focuses on "direct" methods which turn supervised learning into probabilistic regression by using a different loss function. The authors summarize their contributions as introducing a taxonomy of "direct" methods, comparing them empirically to non direct methods, and providing descriptions of main concepts and evaluation practices in probabilistic regression.
---
## Update after rebuttal
The authors did not provide a rebuttal response.
Claims And Evidence: (a) "the collection and categorisation of direct methods under a unifying taxonomy"
- The proposed taxonomy is described in Section 5 and considers the type of distribution, optimization objective, parameters and predictions, and explicit- / implicitness
- Table 1 characterizes certain direct methods under the proposed taxonomy with columns "Minimizes", "Implicit", "Predicts", "CDF"
While I endorse the desire to introduce a unifying taxonomy, I am struggling to understand the actual unification and value provided by the proposed taxonomy. For example, in Table 1, the columns "Predicts" and "CDF", referring to type of distribution and parameters / predictions if I understand correctly, are basically still different for each method / row in the table and not unifying at all. The column "Minimizes" distinguishes between negative log-likelihood (NLL) and (approximate) continuous ranked probability score (CRPS), which I acknowledge as a meaningful differentiation. However, observing that there are mainly these two optimization objectives for probabilistic regression is not a significant contribution in my opinion. Finally, the distinction between "explicit" and "implicit" methods is explained poorly in a short paragraph 5.4. For example, from the sentence "Implicit methods operate with fewer assumptions but are more complex to train and infer with compared to explicit methods, which are limited by the number of parameters." the difference between the two is not clear to me. I believe the paragraph also contains a typographical error (because it defines implicit methods twice), making it even more confusing.
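To make the NLL-vs-CRPS distinction in the "Minimizes" column concrete: for a Gaussian predictive distribution both losses have closed forms, so either can serve as a training objective. A sketch of the CRPS one (a standard textbook formula, our code, not from the paper under review):

```python
import math

def gaussian_crps(y, mu, sigma):
    """Closed-form CRPS of a Gaussian N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Even with a perfect mean (y == mu), the CRPS still penalizes spread,
# and it grows only linearly in |y - mu|, unlike the quadratic NLL term.
print(round(gaussian_crps(0.0, 0.0, 1.0), 4))
```

This is the kind of concrete contrast between the two objectives that would strengthen the taxonomy's "Minimizes" dimension.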
(b) "the experimental comparison against non direct methods
- Experiments are discussed in Section 6 and results are listed in Table 2, containing 14 methods evaluated on 8 datasets
Although a variety of 14 methods is great, the evaluation only considers two-layer MLPs (according to the config files in the source code) and small UCI regression datasets. In my opinion, this does not adequately address the question "What is the best probabilistic regression method?" stated in Section 1.
(c) "provide an entry-point describing the main concepts and standard evaluation practices"
- Sections 2, 3, and 4 are all a form of existing work discussion, background or review
These sections are useful and, in my opinion, the main contribution of this paper (also simply by the number of pages they occupy). However, some of the discussed topics are arguably quite basic and could be found in standard machine learning / statistics textbooks. For example, Eq. (8), (9), and (10) are all dedicated towards explaining the Gaussian log-likelihood.
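For context, the Gaussian negative log-likelihood that Eq. (8)-(10) of the paper spell out is indeed a short formula; a sketch (standard expression, our code, not the paper's):

```python
import math

def gaussian_nll(y, mu, sigma):
    """Per-sample negative log-likelihood of y under N(mu, sigma^2) --
    the loss a 'direct' mean-variance network minimizes."""
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2.0 * sigma ** 2)

# With sigma fixed, minimizing this reduces to minimizing squared error;
# letting the network also predict sigma is what makes the method probabilistic.
print(gaussian_nll(1.0, 1.0, 1.0))
```

That the whole construction fits in a few lines supports the review's point that these passages are textbook material rather than a contribution.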
Methods And Evaluation Criteria: The paper suggests two new losses based on the proposed taxonomy (Hist-CRPS and KDE). However, the empirical evaluation of the paper itself shows that these are among the worst methods in terms of predictive NLL. Otherwise, the paper does not propose any methods, given that it is mainly a review paper.
The benchmark datasets are from the popular UCI regression benchmark. They are small yet popular in research communities such as Bayesian deep learning. I personally think it is time to move on and find new problems / benchmarks in this field, but this may be more general criticism which is not directly related to this particular paper.
Theoretical Claims: The main paper does not make any substantial theoretical claims. The appendix includes some calculations for differentiable forms of the CRPS, which I did not check with great detail.
Experimental Designs Or Analyses: I compliment the authors for conducting experiments over 20-fold cross-validation splits and also providing the source code. There also doesn't seem to be any issues with the soundness / validity of the experimental design itself. However, all experiments are conducted with small MLPs on small UCI regression datasets, which does not provide any evidence about how these methods would perform with other deep learning architectures and / or larger datasets.
The main takeaway seems to be that "direct" methods are competitive in performance with generative diffusion models such as CARD but much cheaper / faster. This comparison seems ill-posed to me because CARD shows that a conditional diffusion model which is pre-trained on $\mathcal{D}$ can accurately infer the predictive distribution $p(\mathbf{y} | \mathbf{x}, \mathcal{D})$ without explicitly optimizing evaluation metrics, such as MSE or negative log-likelihood, whereas the sole purpose of "direct" methods is to directly optimize such metrics in a supervised learning setting.
Supplementary Material: I briefly reviewed the source code (provided as anonymous repository) to search for experiment configurations, in particular, the size of MLPs used for the experiments.
Relation To Broader Scientific Literature: The paper is primarily a review paper which discusses a broader variety of approaches rather than contributing a particular method.
Essential References Not Discussed: Since the paper also mentions Bayesian methods, the following [1] related review article (and all methods and articles which are discussed and cited by it) could be considered relevant.
[1] V. Fortuin. "Priors in Bayesian Deep Learning: A Review". International Statistical Review (2022), 90, 3, 563–591.
Other Strengths And Weaknesses: Strengths:
- detailed descriptions which would be accessible to someone who is entirely new to the field
- comprehensive discussion on the motivation for probabilistic regression with corresponding citations
Weaknesses:
- the introduced taxonomy does not seem to be very unifying except for distinguishing between CRPS and NLL objectives
- some discussed topics seem quite basic and are also available in standard textbooks or on Wikipedia
Other Comments Or Suggestions: - Author names of narrative in-text citations should not be surrounded by parentheses
- Paragraph 5.4. "Explicit and Implicit Models" describes implicit methods twice. Is this perhaps a typographical mistake or intended?
Questions For Authors: 1. What is the difference between "explicit" and "implicit" methods, proposed as part of the taxonomy introduced in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Summary: The paper discusses losses for deep probabilistic regression. The authors identify the gap in the literature of scattered knowledge on deep probabilistic regression across various domains and propose a taxonomy of the methods to unify the knowledge in that area. Based on that, they identify new, easily obtained deep probabilistic regression losses and methods. Moreover, the authors perform an experimental comparison of the various discussed "direct" methods against "non-direct" ones.
Claims And Evidence: The paper provides three main claims:
(a) the collection and categorization of direct methods under a unifying taxonomy;
(b) the experimental comparison against non-direct methods;
(c) provide an entry-point describing the main concepts and standard evaluation practices.
The overall evidence for the claims is weak.
For (a), Section 5 describes the taxonomy, but it's not the easiest to follow and lacks clarity. For example, in Section 5.1, why aren't more probabilistic distributions discussed? Or what are the explicit models in Section 5.4? I would expect the taxonomy to have proper definitions and clarity. Finally, some form of visualization of the taxonomy would be helpful.
For (b), please see the following sections on methods, evaluation criteria and experimental design.
For (c), Sections 3 and 4 contain the background and mentioned entry to the domain. In Section 3, it's not clear why the sharpness, calibration, reliability diagram, and Expected Calibration Error are discussed. Additionally, I lack good formalism with clear descriptions of the variables, parameters, and so on.
Methods And Evaluation Criteria: The proposed methods don't seem to be exhaustive, and the presentation form doesn't make it easy to follow the logic of selection. Namely, the authors propose a taxonomy of the methods, and I would expect to see some cartesian product in the proposed methods, i.e., {distribution1, ..., distributionN} x {loss1, loss2} x {...}. Also, the methods are limited to only Gaussian, Laplace, and Mixture of Gaussian parametric distributions, and it's not clear why other distributions are not evaluated, e.g., Student's t, Logistic, LogNormal, Gumbel, Weibull, Poisson, or Negative Binomial (per [1]). Additionally, I miss the comparison with normalizing flow methods like NICE [2], RealNVP [3], or MAF [4] and the follow-up methods dedicated to probabilistic regression like TreeFlow [5] or NodeFlow [6]. Finally, even though the paper focuses strictly on deep models, it would be fair to compare them with tree-based methods like CatBoost [7] or NGBoost [8].
The proposed evaluation criteria make sense up to a point, but they are not exhaustive enough. While the benchmark from Gal & Ghahramani (2016) is a de facto standard, I would expect a "review paper" (as the paper positions itself) to make a more comprehensive comparison based on an OpenML benchmark like the one in the TabPFN article [9].
[1] "Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression", Sprangers et al., 2021
[2] "NICE: Non-linear Independent Components Estimation", Dinh et al., 2014
[3] "Density estimation using Real NVP", Dinh et al., 2016
[4] "Masked Autoregressive Flow for Density Estimation", Papamakarios et al., 2017
[5] "TreeFlow: Going beyond Tree-based Gaussian Probabilistic Regression", Wielopolski & Zieba, 2022
[6] "NodeFlow: Towards End-to-End Flexible Probabilistic Regression on Tabular Data", Wielopolski et al., 2024
[7] "Uncertainty in Gradient Boosting via Ensembles", Malinin et al., 2020
[8] "NGBoost: Natural Gradient Boosting for Probabilistic Prediction", Duan et al., 2020
[9] "TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second", Hollmann et al., 2022
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: I evaluated the soundness and validity of the experimental part of the work presented in Section 6. I lack the more rigorous and extensive evaluation of the methods.
First of all, "Results are summarized in Table 2, which shows that the diffusion-based method CARD performs best" - on what basis is this conclusion drawn? The CARD method is not consistently the best one, e.g., on the Energy, Power, Wine, and Yacht datasets. There is no aggregated summary of the methods' performance.
Second of all, I miss additional analysis that would discuss the inference times, sample size analysis, and training times (in more detail).
Third of all, the paper discusses probabilistic regression - I miss at least one or two examples of the estimated distributions produced by the various methods and a discussion of them.
Question: Why don't authors evaluate methods using CRPS?
Question: What is the architecture of the used MLP network? What is the number of parameters? What is the impact of MLP network size on the results?
Question: What infrastructure was used for the experiments?
Suggestion: The author could consider the experiment on an artificial dataset created using known probabilistic distribution and check the methods' effectiveness. Check [5] for a further reference.
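On the CRPS question above: for a Gaussian predictive distribution, the CRPS has a well-known closed form (Gneiting & Raftery, 2007), so evaluating it would be cheap. A minimal sketch (the function name is illustrative, not from the paper under review):

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of the Gaussian predictive N(mu, sigma^2) at observation y,
    per Gneiting & Raftery (2007): sigma * (z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi))."""
    z = (y - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return sigma * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / math.sqrt(math.pi))
```

At mu = y this reduces to sigma * (2*phi(0) - 1/sqrt(pi)), and as sigma -> 0 it recovers the absolute error |y - mu|, which is what makes CRPS a natural complement to NLL.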
Supplementary Material: Yes, the whole appendix. I did not go into the fine details of section F.
Question: What is the comment in section D, "Categorical Evaluation", about? It seems that datasets used in experiments are all about regression.
Relation To Broader Scientific Literature: See the methods and evaluation criteria sections.
An additional question is why the authors do not include an analysis of multivariate probabilistic regression.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: - 155, Left - The paper does not cover all of deep probabilistic regression but is limited to univariate methods; this is worth mentioning at the beginning. Moreover, modeling $p(y_1|x)$, ..., $p(y_N|x)$ is a special case of modeling $p(y_1, ..., y_N|x)$, which might also be worth mentioning.
Other Comments Or Suggestions: - 019, Right: "Methods that mirror supervised learning,"
- What do you mean by mirror?
- 020, Right: "are particularly attractive when considering efficiency, ease of use and scalability. "
- Why?
- 058, Left: "sampling methods and direct methods"
- What do the authors mean by sampling methods? They are not discussed or explained previously.
- 146, Left: "R"- styling: the convention for real numbers is $\mathbf{R}$.
- 150, Right: Citation of SPSR could be earlier in 126, Right, as it's the first time when it is mentioned.
- 416, Left - "real datasets" - "real-world datasets"?
- 435, Right - "canomical" - typo
- Appendix, Training Time, Table 3: the readability of the results could be improved; for example, reporting minutes + seconds might be better.
Questions For Authors: Questions in the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Summary: The authors began their research by observing that despite using probabilistic regression across various fields, there was no unified overview of these methods. First, they analyze various Probabilistic regression approaches and organize them from the perspective of "closed-form expressions of the CRPS of piecewise-linear CDFs." Based on this, they organize representative methods and conduct comparative experiments on UCI datasets. Through these experiments, they demonstrate that direct methods are not only simpler to train and less costly to infer compared to sample prediction methods but also achieve similar performance. Therefore, the authors encourage the community to reconsider the effectiveness of basic methods.
Claims And Evidence: * The authors' claims are adequately supported through comparative experiments. However, despite using multiple regression datasets from UCI, additional validation using different datasets would further enhance the reliability of their findings.
Methods And Evaluation Criteria: * The authors aim to analyze deep regression loss from multiple angles. To this end, they integrate the theoretical foundations of various losses used in different papers and verify through repeated experiments whether these losses can easily achieve good performance through the actual learning process. Additionally, they designed experiments by selecting the commonly used UCI dataset in deep regression loss research.
Theoretical Claims: * This paper does not make any theoretical claims. Their main claims are experimentally demonstrated.
* Their process of organizing various forms of loss into the proposed taxonomy is very reasonable.
Experimental Designs Or Analyses: * The experiments designed to compare probabilistic regression methods follow the experimental design of [Han et al., 2022; Gal & Ghahramani, 2016]. This makes them technically valid, since they compared against CARD. They also shared their environment settings through links in the supplementary materials.
Supplementary Material: I reviewed all the supplementary material to understand their experimental settings and how they organize their taxonomy.
Relation To Broader Scientific Literature: * Given that Deep Probabilistic Regression is used across various fields, this research will help other scientific studies produce more reliable prediction results.
Essential References Not Discussed: * To my knowledge, this paper adequately covers relevant works in the field.
Other Strengths And Weaknesses: * This paper classifies various losses in Probabilistic regression and clearly conveys their characteristics through repeated comparative experiments, providing other researchers with a powerful and simple baseline. As the authors note, the wide variety of losses used across different application fields makes meaningful comparisons challenging. Therefore, as a researcher, I highly welcome this study that establishes such a baseline.
* It is interesting that they compared not only average performance across various datasets but also hyperparameter settings and actual computation time to evaluate the priority of loss functions.
* The nature of this paper differs from typical conference papers. That is, it is difficult to evaluate it based on criteria such as the novelty of a method.
Other Comments Or Suggestions: * In Table 2, I recommend modifying the table to better highlight the losses designed with the design choices described in section 5.5.
* For the completeness of the paper, I recommend adding experimental environment details to the supplementary materials, rather than just a link.
Questions For Authors: The losses generated through the design space proposed by the authors appear to perform worse compared to other methods. From this perspective, please explain whether considering the design space is meaningful and why these losses showed degraded performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Summary: The paper offers a summary of probabilistic regression methods, primarily focusing on so-called "direct methods" in the supervised regime. The review discusses at length the strictly proper scoring rules: Continuous Ranked Probability Score and the Negative Loglikelihood. Various probabilistic methods are discussed along with losses. The paper provides experimental comparisons of mean NLL values over 8 datasets spanning a variety of probabilistic regression models.
Claims And Evidence: The main claim of this review paper is the development of a taxonomy of direct methods. While there is an attempt at this, it is not done so very clearly or concisely thus failing as a taxonomy.
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: The evaluation of NLL over a span of models and datasets is overall rather uninformative. The choice of method still depends on domain information and as such there is no "best" probabilistic regression method. The nuances of this are not discussed.
Supplementary Material: Yes, I reviewed the supplementary appendices.
Relation To Broader Scientific Literature: This serves as a summary of losses for a large breadth of direct probabilistic regression methods. It does not necessarily improve on the domain specific papers cited.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Since this paper is a survey paper, it is overall not particularly original. The significance of it is also difficult to pin-point since practitioners would most likely be interested in domain specific surveys/reviews of the topic. A survey should either act as a clear introduction to a topic allowing for further investigation in open problems or a consolidation of related research in a clear and concise manner. This paper fails to do either of these tasks as it seems confused for what or for whom it is applicable over the existing cited surveys. Additionally, for the breadth of topics covered, I do not think that the length of a conference paper is sufficient to truly cover the intended information in the clear and well developed manner of a survey.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | null | null | null | null | null | null | ||||
Computing Optimal Transport Maps and Wasserstein Barycenters Using Conditional Normalizing Flows | Accept (poster) | Summary: An alternative formulation of the p-Wasserstein distance is introduced; it consists of a constrained L^p minimisation problem for a given latent distribution. This alternative formulation allows the authors both to i) directly solve the Wasserstein primal minimization problem via stochastic gradient descent and ii) exploit conditional normalizing flows in order to compute the optimal transport map (which already exists here, since the source and destination measures are absolutely continuous w.r.t. the Lebesgue measure) as well as the Wasserstein barycenters. Two pseudo-algorithms are well detailed and experiments on two datasets (Swiss roll and MNIST) are used to support the two main claims: the proposed approach allows one to compute barycenters from hundreds of input distributions and it is efficient in high-dimensional spaces.
Claims And Evidence: From a theoretical point of view, the two main claims of the paper (i.e. Theorems 4.2 and 4.3) are clear and I did not find any error in the proofs. About the experimental results: everything is convincing except for some comparisons, essentially in Tables 2-3 as I will detail below.
Methods And Evaluation Criteria: Yes, most of the time the methods/evaluation criteria make sense, although some comparisons with standard OT solvers would be beneficial: barycenters similar to those in Figures 3-4 could be computed via standard discrete OT (e.g. in Python Optimal Transport) and it would be interesting to perform qualitative and quantitative comparisons with your method. E.g., what about the running times?
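For context, the suggested discrete baseline is cheap to set up: for uniform empirical measures on equal-size point clouds, the exact discrete W2 reduces to a linear assignment problem, solvable with SciPy alone. A sketch (the function name and setup are illustrative, not the paper's method):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def discrete_w2_squared(x, y):
    """Exact squared W2 between uniform empirical measures on two equal-size
    point clouds, solved as a linear assignment problem."""
    # Pairwise squared Euclidean costs, shape (n, n).
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()
```

For a cloud translated by a vector v this returns ||v||^2, matching the translation invariance of W2; the catch in high dimensions is that the number of samples needed to discretize a distribution grows quickly, which is the scaling limitation discussed for discrete OT methods.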
Theoretical Claims: I checked the correctness of proofs of part of Lemma A.3 and entirely revised proofs of Theorems 4.2 and 4.3. As I said everything looks correct at my eyes.
Experimental Designs Or Analyses: The experimental design is quite clear but it could be improved. Consider Table 1, for instance. Since you simulate data from (high-dimensional) Gaussian distributions and you know the actual Wasserstein distance between them, you might calculate the absolute difference between the Wasserstein loss you obtain with your method, from each data sample, and the true one! Why use the upper/lower bound of another quantity in this case? It is misleading. Moreover, and this is the weak point of the paper in my view, in Tables 2-3, you only compare your methods with Win and SCW_2B in terms of the lower bound BW_2^2-UVP. Now, a lower bound is a lower bound, what about the upper bound? Their UVP might be better than yours despite having a worse lower bound. Am I missing something?
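The suggested comparison is directly computable because the 2-Wasserstein distance between Gaussians has the closed form $W_2^2 = \|m_1-m_2\|^2 + \mathrm{tr}(\Sigma_1+\Sigma_2-2(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2})$. A sketch (naming is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, S1, m2, S2):
    """Closed-form squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    root_S1 = sqrtm(S1)
    cross = sqrtm(root_S1 @ S2 @ root_S1)          # Bures cross term
    bures = np.trace(S1 + S2 - 2.0 * np.real(cross))
    return float(np.sum((m1 - m2) ** 2) + bures)
```

Reporting the absolute difference between this ground truth and the model's estimated distance would avoid the indirection through UVP bounds for the Gaussian experiments.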
Supplementary Material: I read sections A-B in the Appendix, in the limits that I specified above. I read (and appreciated) section C. Did not read section E.
Relation To Broader Scientific Literature: The relation with previous/related literature is correct, in my view.
Essential References Not Discussed: I don't think there are essential references not discusses, but I might be wrong.
Other Strengths And Weaknesses: I liked this paper very much. It is well written, the mathematics is elegant, and the theoretical arguments are solid. I feel that Theorem 4.2 might open the way to other approaches, adopting normalizing flows or not, and, by the way, I consider being able to solve OT problems (distance and barycenters) via standard SGD, in the primal, an important advance.
As I said, however, I think that the experimental part could be improved in order to assess the interest of adopting this approach in place of alternatives.
Other Comments Or Suggestions: Just a couple of parentheses missing in the pseudo-codes such as the one at the very end of line 257.
Questions For Authors: I would like to see this paper published at ICML. I will tentatively recommend it for "weak accept", but if you can do the one or two edits (Tables 1-2-3) that I suggested for the experimental part, or alternatively show me that I am wrong about my concerns (it might be the case), I am ready to raise my note to "accept".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
thanks for taking the time to review our submission and for the kind appreciation of our contribution. We have carefully addressed all your comments below. According to the ICML review guidelines, we are not allowed to submit a revised manuscript at this point of the review process, but below we discuss in detail the edits we will make based on your suggestions in the camera-ready version of the paper, if accepted.
**1. Some comparisons with standard OT solver would be beneficial: barycenters similar to those in Figures 3-4 could be computed via standard discrete OT (e.g. in Python Optimal Transport) and it would be interesting to perform qualitative and quantitative comparisons with your method. E.g. what about the running times?**
The quantitative comparison between discrete OT and neural continuous OT methods has already been investigated extensively in the literature. The consensus is that discrete OT methods perform poorly in high dimensions, due to the required discretization of the input distributions. Specifically, performance degrades above $d=16$ (see, for instance, Figure 5 of (Fan et al, 2020) and Tables 1, 2, and 3 of (Korotin et al, 2021)), meaning that any discrete OT technique is assured to perform poorly on the MNIST 0/1 barycenter task shown in Figure 4 ($d=784$).
On the other hand, discrete OT methods will certainly achieve good performance on the Swiss roll dataset (Figure 3) and decent computational times. Nevertheless, the point of presenting Figure 3 is not to stake a performance claim, but rather to verify visually that our conditional normalizing flow model performs well on highly non-linear distributions with complex support. The real performance evaluation is done on more high-dimensional problems in the subsequent sections.
In conclusion, we think that a direct comparison with discrete OT methods in Figures 3 and 4 would not be meaningful.
**2. Consider Table 1, for instance. Since you simulate data from (high-dimensional) Gaussian distributions and you know the actual Wasserstein distance between them, you might calculate the absolute difference between the Wasserstein loss you come up with with your method, from each data sample, and the true one! Why to use the upper/lower bound of another quantity in this case? It is misleading.**
We completely agree with the reviewer. We re-ran Algorithm 1 on high-dimensional Gaussian data in the last few days and found that indeed our estimated Wasserstein distance converges to the true one very quickly. The plot for $d=64$ can be found here: https://pasteboard.co/jk8v0aZgFTom.png.
This (and similar plots) will be added to the revised paper.
**3. Moreover, and this is the weak point of the paper in my view, in Tables 2-3, you only compare your methods with Win and $SCW_2B$ in terms of the lower bound $BW_2^2-UVP$. Now, a lower bound is a lower bound, what about the upper bound? Their UVP might be better than yours despite having a worse lower bound.**
This is a very important point, also raised by another reviewer. We entirely agree. Please see our answer to question 2 by reviewer u7iE.
**4. I feel that Theorem 4.2 might open the way to other approaches, adopting or not Normalizing Flows.**
This is definitely worth pursuing. We only mention that, since OT maps in Theorem 4.2 are ultimately computed by inverting the model, only invertible architectures are valid candidates for this approach.
**5. Just a couple of parentheses missing in the pseudo-codes such as the one at the very end of line 257.**
Thanks. We will add them in the revised paper.
**References**
Fan, Amirhossein, and Yongxin. "Scalable computations of wasserstein barycenter via input convex neural networks." arXiv:2007.04462 (2020).
Korotin et al. "Continuous wasserstein-2 barycenter estimation without minimax optimization." arXiv:2102.01752 (2021). | Summary: This paper introduces a new way to compute optimal transport maps and Wasserstein barycenters using conditional normalizing flows.
Claims And Evidence: By order of appearance:
1. Correctness of OT and barycenter problems: I have doubts on the barycenter derivation, as well as doubts on the construction of the loss, see theoretical claims section below.
2. Normalizing flows: There is some debate in the literature regarding universality, what papers are the authors referring to?
3. Yes, the method yields explicit maps, which is a great strength of this method.
4. Scaling: The presented evidence is convincing.
### Update after rebuttal: Main concern fixed
The barycenter proof was fixed, minor points see below.
Methods And Evaluation Criteria: ## OT Maps
Algorithm 1 is only evaluated on Gaussian data, which should be relatively easy to learn for a flow (e.g. Draxler et al. 2022 show that RealNVP flows converge exponentially fast to Gaussian data in terms of number of layers). I think estimating the Wasserstein distance for more complicated tasks would be a great add-on, e.g. pick any of the OT applications in the introduction. I think this is somewhat compensated by the Barycenter experiments, since OT estimation is a prerequisite for the barycenter computation.
Can you make a plot between the true Wasserstein distance and the distance estimated by the model?
## Barycenters
Why is there no L-UVP in Tables 2 and 3? The displayed loss is only a lower bound on the true metric.
Given that the rotation matrix in the 64-dimensional experiment in 5.2.4 is 2-dimensional, does the experiment really measure the scaling behavior?
## References
Draxler, Felix, Christoph Schnörr, and Ullrich Köthe. "Whitening convergence rate of coupling-based normalizing flows." Advances in Neural Information Processing Systems 35 (2022): 37241-37253.
### Update after rebuttal
Questions answered, experimental evidence still limited, but enough in my opinion.
Theoretical Claims: I did not check any statements in terms of where suitable maps exist, i.e. whether they are measurable. Instead I assumed that all densities are continuous wrt. Lebesgue measure.
## Theorem 4.2
Theorem 4.2 is convincing to me.
## Theorem 4.3 lacks proof, in my understanding
Let's first establish common ground, please confirm that I understand the proof of Theorem 4.3 correctly:
1. l. 707-715 show that h exists (since the Barycenter exists).
2. l. 715-716 argue that h establishes equality in Eq. 4 (since h + f(., s) are the solution of Theorem 4.2 to the OT between the Barycenter and each mu_s).
3. l. 717-722 show that h minimizes Eq. (l. 720) (if there was a better h, then it would not be the solution to the OT).
4. l. 722-724: Since h is the solution of that minimization, it is the average over the OT solution for each s (first statement of Thm 4.3).
5. Because p=2, we can separate the optimization tasks over dimensions.
However, I think the proof is missing the most important step: showing that the map that satisfies Eq. (l. 173) and Eq. (l. 179) actually points to the barycenter, i.e., has $h_\sharp \lambda = \bar \mu$. In addition, it seems that the second equation is not even used in Algorithm 2.
In other words: if one wants to train a model with these two as their loss, then one also has to show that the solution of this optimization is the barycenter.
This is my most important criticism, addressing it will increase my rating.
## Loss Tradeoff in Alg. 1 always introduces bias
I think that the argumentation regarding the annealing structure is incorrect and of different nature than simulated annealing. This is easy to see for the following counterexample: Let mu_1 and mu_2 be two Gaussian distributions with means $\pm 1$ and standard deviations $1$, and let the latent lambda also be a Gaussian distribution. Then the loss offers a closed-form solution which reveals that the learned means are off proportional to $\sqrt{\chi}$ (I might be wrong in the scaling).
In other words: The **optimal solution of the loss in Alg. 1 over all bijections will not be the optimal transport solution**. I would conjecture that it comes closer as $\chi \to 0$, but it is unclear how fast and what the tradeoff will be if presented arbitrary distributions.
Why different from annealing? In terms of suboptimal minima, I think that one can always move between different optima by sending infinitesimal packages of mass from one place to another -- so the loss barriers really are vanishing (and this effect seems to transfer to neural network parameterizations, see "mode connectivity" as per Garipov et al 2018, and Draxler et al 2018).
I think this will be an easy fix.
## Universality
I think there is a logical gap that the existing literature has not fully closed in terms of universality of normalizing flows. With the statement "normalizing flows, which are universal approximators for bijections and are thus the most natural generative models to employ in this context", the authors probably refer to the work by Teshima et al. 2020. However, as Koehler et al. 2021 and Draxler et al. 2024 point out, the underlying proofs use arbitrarily ill-conditioned networks (become arbitrarily ill-conditioned as the error decreases, e.g. Section 5.1 in Draxler et al. 2024).
## References
Garipov, Timur, et al. "Loss surfaces, mode connectivity, and fast ensembling of dnns." Advances in neural information processing systems 31 (2018).
Draxler, Felix, et al. "Essentially no barriers in neural network energy landscape." International conference on machine learning. PMLR, 2018.
Teshima, Takeshi, et al. "Coupling-based invertible neural networks are universal diffeomorphism approximators." Advances in Neural Information Processing Systems 33 (2020): 3362-3373.
Koehler, Frederic, Viraj Mehta, and Andrej Risteski. "Representational aspects of depth and conditioning in normalizing flows." International Conference on Machine Learning. PMLR, 2021.
Draxler, Felix, et al. "On the universality of volume-preserving and coupling-based normalizing flows." ICML (2024).
### Update after rebuttal
- Theorem 4.3 fixed.
- Bias: The authors argue that weight schedule on loss leads to correct solution, for which they provide additional evidence, which should be included in the paper and is good enough for now.
- Universality: Reached common ground on understanding of the literature
Experimental Designs Or Analyses: I am unsure about the usefulness of the UVP metrics and hope for input from another reviewer.
Supplementary Material: I checked appendices B, C and D.
Relation To Broader Scientific Literature: Optimal Transport is a principled mathematical framework for finding transport maps between distributions that adhere to external cost. It has a broad set of applications, but finding optimal transport maps in high dimensions remains an open challenge.
Essential References Not Discussed: Rectified Flows (and mini-batch flow matching) also try approximating the optimal transport between two distributions. This is not mentioned or compared to in the paper. The nature of the competing loss terms makes this challenging, of course.
References:
Pooladian, Aram-Alexandre, et al. "Multisample flow matching: Straightening flows with minibatch couplings." arXiv preprint arXiv:2304.14772 (2023).
Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." arXiv preprint arXiv:2209.03003 (2022).
### Update after rebuttal: Mostly clarified
- The authors clarified the relation to Rectified Flows (Liu et al. 2022), and pointed me to the explanation in that paper.
- Regarding (Pooladian et al. 2023), the authors did not provide an explicit explanation, but I think the argument concerning rectified flows persists that there is no known cost that is minimized, and so the approaches are different
Other Strengths And Weaknesses: The flow construction is innovative and easy to follow, nice work!
Other Comments Or Suggestions: l. 122 right: Add citation to McCann and Gangbo.
Please add links from the proofs of theorems in the main text to the appendix.
Questions For Authors: Please take the criticism regarding the theory seriously. I will adjust my recommendation when the points are addressed.
## Extension ideas
Are there any of the real-world applications mentioned in the introduction feasible with this new method?
Wasserstein barycenters outside p=2? Relatedly, the "specialization" of Algorithm 1 to $p=2$ mentioned in the first paragraph of 4.3.2 affects only l. 202, right?
Is it possible to extend the algorithms to entropic OT? I guess this does not make sense since this produces non-unique couplings, right?
### Updates after rebuttal: Answers provided to all points
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer,
thanks for the very constructive feedback! Apologies for our terse replies (due to character limit).
**1. Can you make a plot between true and estimated $W_2$ distance?**
See here for Gaussian OT ($d=64$): https://pasteboard.co/jk8v0aZgFTom.png. We will add such pictures.
**2. Why is there no L-UVP in Tables 2-3?**
The **L-UVP values for our model are available** (for $d=126$, L-UVP: 1.5% (gaussian), 5.5% (uniform)). We will add them to Tables 2-3. We left them out because Table 5 in (Korotin et al, 2022) does not report them. If appropriate, we can also report the L-UVP for SCWB and WIN (retrained by us).
**3. The rotation matrix in 5.2.4 is only 2d.**
Any high-dim. rotation is 2d in the right coordinates. Since the algo only sees realizations and has no "rotational" prior, the experiment does capture scaling behavior.
**4. Theorem 4.3 lacks proof.**
We wrote a clearer proof, which we sketch below.
We know that
$$\sum_s w_s W^2_2(\bar{\mu}, \mu_s) \le \sum_s w_s W^2_2(h_\#\lambda, \mu_s) \le \sum_s w_s \| h - f_s \|^2_{L^2(\lambda)}$$
for all $h:\mathbb{R}^d \to \mathbb{R}^d$ and all $f_s \in B(\lambda, \mu_s)$. Since $\bar{\mu} \in \mathcal{P}_{ac}(\mathbb{R}^d)$, Lem 4.1 and arguments in the proof of Thm 4.2 imply that there exist $h \in B(\lambda, \bar{\mu})$ and $f_s \in B(\lambda, \mu_s)$, such that
$$ W^2_2(\bar{\mu}, \mu_s) = \| h - f_s \|^2_{L^2(\lambda)}$$
and, therefore, achieve equality in the first chain of inequalities. This implies (see paper) that $h(Z) = \mathbb{E}[f(Z,S)|Z]$ and $f$ minimizes $\sum_{i=1}^d \mathbb{E}[\text{Var}(f_i(Z,S))]$.
Assume that there is another function $\tilde{f}$ with $\tilde{f}(\cdot, s) \in B(\lambda, \mu_s)$ achieving minimal conditional variance. Then for $\tilde{h}(Z) = \mathbb{E}[\tilde{f}(Z,S)|Z]$ one has
$$ \sum_s w_s W^2_2(\tilde{h}_\#\lambda, \mu_s) \le \sum_s w_s \| \tilde{h} - \tilde{f}_s \|^2_{L^2(\lambda)} = \sum_{i=1}^d \mathbb{E}[\text{Var}(\tilde{f}_i(Z,S))] = \sum_{i=1}^d \mathbb{E}[\text{Var}(f_i(Z,S))] = \sum_s w_s W^2_2(\bar{\mu}, \mu_s)$$
Then by definition of $\bar{\mu}$ we must have that $\tilde{h}_\# \lambda = \bar{\mu}$.
**5. Eq (l. 179) is not used in Algorithm 2.**
The L2 cost in Alg. 2 is exactly Eq (l. 179), where the conditional expectation is given by Eq (l. 173).
**6. Loss Tradeoff in Alg. 1 always introduces bias.**
Let us discuss the proposed example. If $\mu_s = \mathcal{N}(m_s, \sigma_s^2)$, for $s \in \{1, 2\}$, $\lambda = \mathcal{N}(0, 1)$ and $f(z, s|\theta) = \hat{\sigma}_s z + \hat{m}_s$, for $\theta = (\hat{m}_1, \hat{m}_2, \hat{\sigma}_1, \hat{\sigma}_2)$, then the model admits four MLEs: $ \theta^* \in \{(m_1, m_2, \sigma_1, \sigma_2), (m_1, m_2, -\sigma_1, \sigma_2), (m_1, m_2, \sigma_1, -\sigma_2), (m_1, m_2, -\sigma_1, -\sigma_2)\}$.
The model OT map, $\hat{T}(x) = f(f^{-1}(x, 1), 2)$, is the right one only for two MLEs. In agreement with Thm 4.2, the right MLEs minimize $ L^2(z |\theta) = ((\hat{\sigma}_1 - \hat{\sigma}_2) z + (\hat{m}_1 - \hat{m}_2))^2$.
In the reviewer's example ($\sigma_1 = \sigma_2 = 1$ and $m_1 \neq m_2$), at each $t$ in Alg. 1 we take a GD step towards the minimum of $ -\log(p(x,s|\theta)) + \zeta_t L^2(z |\theta)$:
$$ \hat{m^*}_1(t) = \frac{m_1 + 2 \zeta_t \hat{m}_2}{1 + 2 \zeta_t}, \quad \hat{m^*}_2(t) = \frac{m_2 + 2 \zeta_t \hat{m}_1}{1 + 2 \zeta_t}.$$
But $(\hat{m^*}_1(t), \hat{m^*}_2(t)) \to (m_1, m_2)$ as $\zeta_t \downarrow 0$, therefore at convergence there will be **no bias**. Empirically we do not observe any bias.
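A quick numerical sketch of this argument: iterating the per-step minimizers above with a penalty weight $\zeta_t$ that decays to zero recovers the unbiased MLE $(m_1, m_2)$. The values of $m_1, m_2$ and the decay schedule below are arbitrary illustrative choices, not from the paper.

```python
# Fixed-point check: iterating the closed-form per-step minimizers with a
# decaying penalty weight zeta_t converges to the unbiased MLE (m1, m2).
# m1, m2 are arbitrary illustrative values.
m1, m2 = -1.0, 3.0

m1_hat, m2_hat = 0.0, 0.0      # arbitrary initialization
zeta = 1.0
for t in range(200):
    m1_hat, m2_hat = (
        (m1 + 2 * zeta * m2_hat) / (1 + 2 * zeta),
        (m2 + 2 * zeta * m1_hat) / (1 + 2 * zeta),
    )
    zeta *= 0.9                # zeta_t decreases to 0

print(m1_hat, m2_hat)          # -> approximately (-1.0, 3.0): no bias
```

For a fixed nonzero $\zeta$ the fixed point is shrunk towards the midpoint of $m_1$ and $m_2$, which is the bias the reviewer describes; the decay of $\zeta_t$ is what removes it at convergence.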
**7. The argumentation regarding the annealing structure is incorrect.**
Thanks for the insightful remarks. We do agree and we will remove any reference to annealing. We also saw empirically that minimizing the L2 cost with a likelihood regularizer works well, which shows that MLEs may lie on a connected manifold.
**8. UATs for NFs use arbitrarily ill-conditioned networks.**
This is a valid concern, but it applies to all UATs, which do not provide numerical stability guarantees. It is surely possible to construct distributions for which our networks are ill-conditioned. Empirically, the approach works well.
**9. Other remarks**
- We do have Alg. 1 results for uniform data (e.g. L-UVP=3.8%, BW-UVP=0.7%, for $d=126$). We will add them.
- Rectified flows also use invertible nets, but do not compute *Wasserstein* OT maps nor barycenters.
- Alg. 1 can be adjusted for generic $p>1$ in l.202. It's unclear if the same adjustment gives the $W_p$ barycenter in Alg. 2.
- Any application in the paper intro is feasible for our method. See question 1 by reviewer U9wD.
- Our method is not suitable for entropic OT, since we model Monge maps, not generic couplings.
**References**
- Korotin et al. "Wasserstein iterative networks for barycenter estimation." arXiv:2201.12245 2023
- Liu, Gong, and Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." arXiv:2209.03003 (2022).
- Pooladian et al. "Multisample flow matching" arXiv:2304.14772 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the helpful answers! I am listing the points I am still concerned about below:
> More complicated OT estimation
The authors did not react to this comment.
> Plot between the true and the estimated W_2 distance
Thanks for the plot, but I would rather suggest a plot (x=dimension y=OT distance, both ground truth and computed). It is good to see how quickly the OT converges to the true value. Is there any way of conservatively estimating the W_2, since learning a non-perfect distribution can lead to biased OT estimates?
> L-UVP
Yes, please report these numbers.
> Theorem 4.3: New proof
Looks correct to me, thanks for closing this gap in the logic.
> Empirically, we do not observe any bias
Looking at the plot provided by the authors in answer to my first question, I do observe a bias towards lower W_2, presumably at a cost to the accuracy. To test this, one would need to train a model without W_2 regularization and check whether there is any performance drop when adding the regularization. Or how do the authors conclude that "empirically we do not observe any bias"?
> UATs provide no numerical stability guarantees
My point was that the UAT by Teshima et al. even requires arbitrary bad numerical stability for convergence [Koehler et al., Draxler et al.], *regardless of the target distribution*. I recommend rephrasing Contribution 2 and expanding on this point in the related work or theory section. Right now, my understanding is that it is misleading.
Suggestion for what I mean:
Contribution 2: ..., which are flexible bijections and thus the most natural generative model ...
In 4.3.1 or related work: There are rigorous proofs that coupling-based normalizing flows are known to be universal approximators for bijections [Teshima], with the caveat that their construction relies on ill-conditioned networks [Koehler, Draxler]. [Draxler] present a well-conditioned flow, but they do not guarantee arbitrary bijections, but only approximating arbitrary distributions.
> Rectified Flows do not compute Wasserstein OT
Please see https://arxiv.org/pdf/2209.03003 Figure 3d and https://arxiv.org/pdf/2304.14772 Figure 6 -- there is an OT cost that is minimized because the Rectification step learns an optimized coupling. What am I missing?
> More experiments
I still think, like U9wD, 91Cy (more downstream tasks), and aK7B (more simple comparisons), that the experimental section could be improved.
I am looking forward to your answers!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
thanks again for your replies. We addressed your concerns below.
**I think estimating the Wasserstein distance for more complicated tasks would be a great add-on, e.g. pick any of the OT applications in the introduction. [...] I still think [...] that the experimental section could be improved.**
We have implemented a new numerical experiment related to a real-life application to fair regression, which we propose to add to the paper.
Fair regression, in the sense of demographic parity, looks for a regression function $f(X,S)$ that minimizes the cost $\mathbb{E}[\|Y - f(X,S)\|^2]$, such that $f(X,S)$ is independent of $S$. The solution to this constrained optimization is precisely the Wasserstein barycenter of the conditional distributions of $Y$ given $S$ (see (Chzhen et al, 2020)).
We work on the benchmark dataset "Communities and Crime'' and we regress the target variables "percentage of officers assigned to drug units'' and "total number of violent crimes per 100K population'' on 127 socio-economic features ($X$) and one sensitive feature ($S$), the "percentage of population that is African-American", with range $[0,1]$.
An "unfair" regression leads to strong correlation between, for instance, the first predicted variable and the sensitive feature (Pearson: 0.43, Spearman: 0.47, Kendall: 0.32), while our fair regression achieves almost perfect uncorrelatedness (Pearson: 0.0003 (p-value: 0.99), Spearman: 0.0058 (p-value: 0.80), Kendall: 0.0039 (p-value: 0.80)). In all cases, we fail to reject the null hypothesis of no association.
This experiment requires computing the barycenter of $100$ input distributions (the cardinality of the range of $S$, rounded up to the nearest % point), which would be computationally challenging for any other numerical method. We think that this experiment exemplifies the relevance of our method for real-life applications.
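The independence check reported above can be reproduced with standard association tests; the following is a minimal sketch on synthetic data (the data and variable names are hypothetical; `scipy.stats` implements the three tests used):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
n = 2000
s = rng.uniform(0.0, 1.0, size=n)          # sensitive feature

# "Unfair" prediction depends on s -> strong, significant correlation.
y_unfair = 2.0 * s + rng.normal(size=n)
# "Fair" prediction is independent of s -> correlations near zero.
y_fair = rng.normal(size=n)

for name, y in [("unfair", y_unfair), ("fair", y_fair)]:
    r_p, p_p = pearsonr(s, y)
    r_s, p_s = spearmanr(s, y)
    r_k, p_k = kendalltau(s, y)
    print(f"{name}: Pearson {r_p:+.3f} (p={p_p:.2g}), "
          f"Spearman {r_s:+.3f} (p={p_s:.2g}), "
          f"Kendall {r_k:+.3f} (p={p_k:.2g})")
```

In the fair case all three p-values are large, so the null hypothesis of no association cannot be rejected, mirroring the result reported above.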
**I would rather suggest a plot (x=dimension y=OT distance, both ground truth and computed).**
Thanks for the nice suggestion. We made the plot for $d=16, 32, 64$: https://pasteboard.co/EJUgnxvRkesi.png. On the y-axis, we report a box-plot of the (signed) relative error of the estimated Wasserstein distance for 10 random initializations of our model.
**Is there any way of conservatively estimating the W2?**
Yes, this is exactly the idea behind the BW-UVP metric. The $W_2$ distance between two distributions is always lower-bounded by the $W_2$ between two Gaussians with their respective means and covariances (see (Dowson and Landau, 1982)). This gives a conservative estimate under the assumption of normality.
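For concreteness, this lower bound is cheap to compute from sample moments; below is a minimal sketch (the helper name is hypothetical, and `scipy.linalg.sqrtm` is assumed for the matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_lower_bound(x, y):
    """Squared W2 between the Gaussians matching the empirical moments
    of x and y. By Dowson & Landau (1982) this lower-bounds the true
    squared W2 between the underlying distributions."""
    m1, m2 = x.mean(axis=0), y.mean(axis=0)
    c1 = np.cov(x, rowvar=False)
    c2 = np.cov(y, rowvar=False)
    s1 = sqrtm(c1)
    bures = np.trace(c1 + c2 - 2 * np.real(sqrtm(s1 @ c2 @ s1)))
    return float(np.sum((m1 - m2) ** 2) + bures)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(5000, 2))   # samples ~ N(0, I)
y = rng.normal(2.0, 1.0, size=(5000, 2))   # samples ~ N((2,2), I)
# True W2^2 between these two Gaussians is ||(2,2)||^2 = 8.
print(gaussian_w2_lower_bound(x, y))
```

Here the bound is tight because both distributions are Gaussian; for non-Gaussian data it remains a valid conservative (lower) estimate.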
**How do the authors conclude that ``empirically we do not observe any bias''.**
In all experiments, we checked for bias:
- by visual inspection of the marginals (as done in our Figure 1),
- by comparing the true and estimated means (resp. covariances) in terms of Euclidean (resp. Frobenius) norm,
- by computing the L-UVP and BW-UVP metrics, which are just (normalized) upper/lower bounds on the $W_2$ distance between the estimated and the true target measure, and therefore quantify bias.
We think that the new figure (linked above), together with the theoretical considerations in reply to Question 6 in our previous message, show strong evidence that our model is free of bias.
**The UAT by Teshima et al. even requires arbitrary bad numerical stability for convergence [Koehler et al., Draxler et al.], regardless of the target distribution.**
Thanks for expanding on this point, we fully understand now. We will add a discussion of these issues in our Contribution 2 and discussion of related works, exactly as you suggest. Thanks again for pointing out this gap.
**Please see (Liu et al, 2022) Fig 3d and arXiv:2304.14772 Fig 6 -- there is a OT cost that is minimized because the Rectification step learns an optimized coupling. What am I missing?**
OT is concerned with finding optimal couplings that minimize a pre-specified transport cost $c$. In our paper we deal with the Wasserstein OT problem, where the cost is $c(x,y) = |x-y|^p$, for $p>1$. The rectified flow, instead, aims at minimizing a path functional for an SDE with given initial and terminal marginals (see Eq (1) in (Liu et al, 2022)).
The relationship between the two is explained in Sec 3.4 of (Liu et al, 2022), where they show that rectified couplings (i.e. couplings that minimize the rectified flow):
- are not optimal under any cost $c$ (in particular they do *not* correspond to $W_2$ OT couplings),
- in general provide only a lower-bound for the OT cost with respect to any convex cost $c$ (Thm 3.8), as they verify empirically in their Fig 3(d).
**References**
- Dowson, Landau. "The Fréchet distance between multivariate normal distributions." Journal of multivariate analysis 12.3 (1982): 450-455.
- Chzhen et al. "Fair regression with wasserstein barycenters." NeurIPS 33 (2020): 7321-7331.
- Liu, Gong, and Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." arXiv:2209.03003 (2022).
## **Update after rebuttal.**
I have carefully read the authors' responses and would like to raise a few concerns:
- (minor) Upon re-reading the manuscript, I noticed that the paper (Korotin et al., 2019) is incorrectly cited as representing the WIN approach in several places (lines 142, 385, 394). I believe the authors intended to reference (Korotin et al., 2022). This appears to be a typo and should be corrected, as it may confuse readers.
- I am *confused* by the fact that you do not use the time of rebuttal to perform the comparison with some of the non-generative approaches which I pointed to (Kolesov 2024a,b). There are not so many papers on continuous OT barycenter estimation and even fewer of them are generative (in your classification). Thus, it seems to me that the comparison with the best non-generative approaches in the appropriate experimental setups is (1) meaningful and (2) not so time-consuming. It is even more confusing since your section 4.3.2 is dedicated to a similar idea of computing the barycenter using conditional normalizing flows, which allow for computation of OT maps between the input distributions and the barycenter (i.e., finding barycenter points from samples of input distributions), and the experimental section 5.2 tests this conditional model. At least in the experiment with *location-scatter Gaussian* data the comparison is possible and **meaningful**. See my next point for details.
- I previously referred to the L$_2$-UVP values in Table 1 for the Gaussian experiment, which was incorrect. I actually meant the experiment with location-scatter Gaussian data. As another reviewer noted, the L$_2$-UVP comparison is missing for this case. I believe that, in this experiment, comparing against both generative and non-generative methods would be valuable for situating your approach within the current landscape of continuous barycenter solvers.
- Besides, I am confused by the overall practical validity of the proposed approach, because (1) it works only with the quadratic cost, (2) was not tested on any practical experimental setups considered in SC$\mathbb{W}_2$B (Fan et al, 2021) or WIN (Korotin et al., 2022) papers or any others. For example, the authors do not test their approach on "Ave, celeba!" benchmark dataset considered in WIN paper. It is strange because both competitors SC$\mathbb{W}_2$B and WIN were tested here (see section 6.1 of WIN's paper). Instead, the authors argue that such experiments are not meaningful. However, given that both SC$\mathbb{W}_2$B and WIN were evaluated on this benchmark, omitting it weakens the comparison. (Actually, here it is not necessary to use your conditional model - the comparison of the learned barycenters is enough to measure the performance of your approach.)
Overall, I remain unconvinced that the current contribution is substantial enough to warrant publication at this conference.
**References.**
Fan, J., Taghvaei, A., and Chen, Y. Scalable computations of wasserstein barycenter via input convex neural networks. arXiv preprint arXiv:2007.04462, 2020.
Korotin, A., Egiazarian, V., Asadulaev, A., Safin, A., and Burnaev, E. Wasserstein-2 generative networks. arXiv preprint arXiv:1909.13082, 2019.
Korotin, A., Egiazarian, V., Li, L., and Burnaev, E. Wasserstein iterative networks for barycenter estimation. Advances in Neural Information Processing Systems, 35: 15672–15686, 2022.
Kolesov, A., Mokrov, P., Udovichenko, I., Gazdieva, M., Pammer, G., Burnaev, E., and Korotin, A. Estimating barycenters of distributions with neural optimal transport. arXiv preprint arXiv:2402.03828, 2024a.
Kolesov, A., Mokrov, P., Udovichenko, I., Gazdieva, M., Pammer, G., Kratsios, A., ... & Korotin, A. (2024b). Energy-guided continuous entropic barycenter estimation for general costs. Advances in Neural Information Processing Systems, 37, 107513-107546.
Claims And Evidence: Most of the claims are supported by clear evidence. However, some points of the submission remain unclear from me:
- **Limitations of the approach.** The authors do not discuss the limitations of the proposed method. It is an important point since normalizing flows are known to suffer from different drawbacks (e.g., constraints on the architecture, high computational expense) and the proposed approach should have inherited the same issues which is not directly stated in the text;
- **Scalability.** It is stated that the approach is applicable to high-dimensional datasets; however, the only high-dimensional experiment considered in the paper is the computation of the barycenter for MNIST images, which have dimension 28x28. It is not clear how the method behaves in experiments with higher dimensions; see, e.g., the “Ave, CelebA” experiment from (Kolesov et al., 2024a).
- **Limited comparison.** In the considered experimental setups, the authors do not compare their approach with the recent approaches for barycenter estimation, e.g., (Kolesov 2024a,b). As is evident from the results reported in these papers, their performance is much better than that of WIN (Korotin et al., 2022); thus, a comparison with them is crucial for understanding the performance of the proposed approach. Since it is possible to retrieve the OT maps using your approach, such a comparison seems to be valid.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the considered problem.
Theoretical Claims: Yes, I skimmed through the theoretical results and their proofs and do not find any issues.
Experimental Designs Or Analyses: Experimental results are overall valid. However, I have several concerns which I have written in 'claims and evidence' section.
Supplementary Material: I have checked the Appendix of the paper. It is quite well-structured.
Relation To Broader Scientific Literature: The paper proposes a new approach for finding a barycenter of distributions which avoids min-max optimization objective by using the normalizing flows. As far as I know, the usage of normalizing flows in this context is quite novel.
Essential References Not Discussed: The authors have cited most of the related literature.
Other Strengths And Weaknesses: **Strengths**:
- This method proposes non-adversarial optimization problem for finding Wasserstein-2 barycenter;
- The authors offer new intuitive reformulation of the Wasserstein barycenter as expected conditional variance of pushforward bijective transformations;
- The authors develop new generative model that is able to sample directly from barycenter.
**Weaknesses**:
- The authors do not mention limitations of their approach in the main text. For example, since normalizing flows have drawbacks such as constraints on the admissible transformations, high computational expense, and instability, these shortcomings carry over to the proposed method.
- As the dimensionality of the task grows, the conditional variance grows too. As a consequence, the variance of the loss function grows as well, which is another potential source of instability for the method.
- The comparison with other approaches has limitations, see previous sections.
Other Comments Or Suggestions: - The authors claim that (Kolesov et al, 2024) does not provide a generative model of barycenter. However, it is not indeed true. StyleGAN is a manifold-constrained generator in the aforementioned method and it directly samples from barycenter.
Questions For Authors: - The authors claim that their approach performs well on high-dimensional data taking MNIST dataset as a high-dimensional experiment. Is the developed method appropriate for the “Ave, CelebA” experiment (See p.9 from (Kolesov et al, 2024) )?
- The authors provide comparisons with WIN and SCWB. However, (Kolesov et al, 2024) is also suitable for the finding barycenter in MNIST dataset experiment, could you compare with it?
- What about convergence time of the proposed method? One would like to see comparisons of convergence times for the proposed approach with SCWB, WIN and (Kolesov et al, 2024).
**References.**
Kolesov, A., Mokrov, P., Udovichenko, I., Gazdieva, M., Pammer, G., Burnaev, E., and Korotin, A. Estimating barycenters of distributions with neural optimal transport. arXiv preprint arXiv:2402.03828, 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer,
please find our replies below.
**1. The authors do not discuss limitations of the proposed method.**
We agree that discussing the limitations is very important. We propose to add discussions of the following limitations:
1. Alg. 2 is suitable only for Wasserstein-2 barycenters.
2. Training might be computationally intensive on image datasets larger than the ones we test ($\gg$1000 dimensions).
**2. Normalizing flows (NFs) have known drawbacks (e.g., constraints, high computational cost, instability).**
In our experience NFs are stable and reliable generative models, as our results show. They require large latent spaces, due to the **bijectivity constraint**, but the multi-scale architecture makes training in high dimensions unproblematic (see App. D). Training is **stable** with minimal fine-tuning (e.g. only lower learning rate on non-Gaussian data, see App. D). Our method did not require more **compute** than competitors and can run on a single GPU with 12GB. For training times, see question 7.
**3. Is the method appropriate for “Ave, CelebA”?**
Yes. Indeed, the Glow model (which we build on in Sec 5.2.3) was trained on the CelebA-HQ dataset with great success (Kingma et al., 2018), but it required one week of training on 40 GPUs (https://github.com/openai/glow/issues/37). We lack such computational resources. We will mention this in the method's limitations (see question 1).
We claim that our experimental set-up is sufficient to prove the **scalability of our method**, because the MNIST dataset ($d=784$) is high-dimensional enough **for all real-life applications** (see paper references): style transfer ($d=500$), color translation ($d=3$), clustering in Wasserstein space ($d=10-200$), fairness ($d=1-200$).
Notice that "Ave, CelebA'' is not related to any real-life task (see question 2 by reviewer U9wD).
**4. The authors do not compare their approach with the recent approaches for barycenter estimation, e.g., (Kolesov 2024a,b), which have much better performance than that of WIN (Korotin et al., 2022).**
Generative methods solve a strictly harder problem than OT map estimation. To keep the comparison fair, we compared **generative models only** (see question 5).
Furthermore, the StyleGAN methods in (Kolesov 2024,a,b) solve an OT problem in the StyleGAN latent space with non-quadratic cost. Since none of the methods in our paper supports generic costs, a **comparison is impossible**.
In terms of model evaluation, nothing would be gained by adding (Kolesov 2024a,b), because WIN has a very similar performance (see Table 3 in (Kolesov 2024a) and our Tables 2-3 for location-scatter data and Fig 5 (a, b) in (Kolesov 2024b) for MNIST data).
**5. StyleGAN is a manifold-constrained generator in (Kolesov et al, 2024a) and it directly samples from barycenter.**
We explain why that's not the case. Barycenter generative models "output a learned barycenter distribution, which can be directly sampled, while non-generative models are limited to transporting existing samples from the input distributions" (see Sec 3). The pre-trained StyleGAN $G$ in (Kolesov et al, 2024a,b) (which is an *input* of the method, not an output) is **not** the barycenter (see their Fig 3). A barycenter sample can be obtained only by pushing an input distribution sample $x_k$ through the OT map $T_{k, \phi}(x_k, z)$ (see their Fig 1 (a,b)). In our method, instead, barycenter samples are obtained by pushing latent samples through the map $h$ (no input distribution query needed).
**6. Could you compare with (Kolesov et al, 2024a) on the MNIST dataset?**
See questions 4 and 5, above.
**7. What about convergence time of the proposed method?**
Training times on Gaussian location-scatter data ($d=128$) are: 114m56s (SCWB), 93m28s (WIN), 42m3s (ours). All methods converge in approximately 10-20 minutes; longer times are needed only for top-notch performance. On MNIST our method converges in 30 minutes. We will provide full information on training times.
**8. While the dimensionality of tasks grows, the conditional variance grows too [...] and it leads to [...] instability.**
The conditional variance scales linearly in the number of dimensions $d$. Let $f(Z,s)$ have mean $m_s$ and covariance $\Sigma_s$ and denote $\bar{m} := \sum_{s \in \mathcal{S}} w_s m_s$, then by the law of total variance:
$$\sum_{i=1}^d \mathbb{E}\left[ \text{Var}(f_i(Z,S) | Z) \right] \le \sum_{i=1}^d \text{Var}(f_i(Z,S)) = \sum_{s \in \mathcal{S}} w_s \mathrm{Tr}(\Sigma_s) + \sum_{s \in \mathcal{S}} w_s \| m_s - \bar{m}\|^2 = O(d)$$
This growth rate is the same as multivariate regression and is unproblematic.
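The bound above is easy to verify numerically; below is a sketch with a hypothetical location-scale family $f(z, s) = A_s z + m_s$ (so $\Sigma_s = A_s A_s^\top$), estimating the conditional-variance term by Monte Carlo over $Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_s, n_z = 8, 3, 20000
w = np.ones(n_s) / n_s                     # uniform weights w_s

# Hypothetical location-scale family: f(z, s) = A_s z + m_s.
A = rng.normal(size=(n_s, d, d)) / np.sqrt(d)
m = rng.normal(size=(n_s, d))

z = rng.normal(size=(n_z, d))              # Z ~ N(0, I)
f = np.einsum('sij,kj->ksi', A, z) + m     # f[k, s, :] = A_s z_k + m_s

# LHS: sum_i E_Z[ Var_S(f_i(Z,S) | Z) ]  (S discrete, weights w, exact in S)
mean_s = np.einsum('s,ksi->ki', w, f)
var_s = np.einsum('s,ksi->ki', w, f ** 2) - mean_s ** 2
lhs = var_s.sum(axis=1).mean()

# RHS: sum_s w_s Tr(Sigma_s) + sum_s w_s ||m_s - m_bar||^2
Sigma = np.einsum('sij,skj->sik', A, A)    # Sigma_s = A_s A_s^T
m_bar = w @ m
rhs = (w * np.trace(Sigma, axis1=1, axis2=2)).sum() \
    + (w * ((m - m_bar) ** 2).sum(axis=1)).sum()

print(lhs, rhs)                            # lhs <= rhs, both O(d)
```

The gap between the two sides is the variance of the conditional mean, per the law of total variance.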
**References**
- Kingma, Prafulla. "Glow: Generative flow with invertible 1x1 convolutions." NeurIPS 31 (2018).
- Kolesov et al. "Estimating barycenters of distributions with neural OT." arXiv:2402.03828 (2024a).
- Kolesov et al. "Energy-guided continuous entropic barycenter estimation for general costs." NeurIPS 37 (2024b): 107513-107546.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. There are several claims in your answer which I can not agree with, see below.
> Notice that "Ave, CelebA'' is not related to any real-life task (see question 2 by reviewer U9wD).
In your answer to reviewer U9wD, you explain that computing the barycenter of images using the quadratic cost is not meaningful. It is a known issue which motivates researchers to consider more elaborate cost functions, and it highlights a limitation of your approach. The fact that your approach is limited to the quadratic cost should be mentioned in the limitations section. Of course, the "Ave, Celeba" experiment, as well as all the experiments considered in your paper, is not a real-life task, but such experiments are used to highlight the properties of your approach.
> WIN has a very similar performance to (Kolesov 2024a,b)
It is not true - in Table 3 of (Kolesov 2024a), WIN provides **ten times worse** performance than their approach in moderate dimensions (d=64) and much smaller convergence time.
> Comparison with (Kolesov 2024a,b) is not relevant
I cannot agree on this point. To start with, I remain skeptical regarding the applicability of *generative* barycenters to real-world tasks. I do not see sufficient explanation in the text of your paper or in the answers to reviewers. Meanwhile, in your experiments, you consider only toy experiments (Gaussians, Swiss roll, etc.) and an MNIST one, where the aim was to generate the barycenter *from samples of the input distributions*. I understand where this kind of task might appear in real life, but I do not understand where the generative task might appear. I think you should add more discussion of its applicability in the actuarial sciences which you mention.
And I think that since you explore the ability of your approach to perform the translation from the input samples to barycenter you should perform the comparison with the SOTA solvers (Kolesov a,b). I do not see any obstacles to do it in MNIST experiment during the remaining rebuttal time.
You can also easily perform comparison in Gaussians experiment where all of the approaches (generative and non-generative) are applicable. I do not see the details on the Gaussian experiment (you do not specify the weights of barycenters), but I guess that you took the same setup as was used in (Korotin et al., 2022) and also in Kolesov et al., 2024a. Then according to their results in Table 3 and $\mathcal{L}_2$-UVP values reported for your approach, your method provides worse results than WIN and (Kolesov a, b) in all dimensions except for dimension D=2. It poses even more questions about the applicability of your approach.
Looking forward for your replies!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
thanks for your feedback. Please, find our replies below.
**The aspect that your approach is limited to the quadratic cost should be mentioned in limitations section.**
Yes, thanks for the suggestion. We agree to mention this limitation.
**In Table 3 of (Kolesov 2024a), WIN provides ten times worse performance than their approach in moderate dimensions (d=64) and much smaller convergence time.**
Thanks for pointing out that the performance of WIN is not as good as (Kolesov 2024a). Both papers are very interesting. But since our method is generative, it makes more sense to compare our method to WIN.
**I remain skeptical regarding the applicability of generative barycenters to real-world tasks. I do not see any sufficient explanations in the text of your paper as well as in the answers to reviewers. [...] I think you should add more discussion on its applicability in actuarial sciences which you mention.**
Thanks for your suggestion. We completely agree and will add a more in-depth discussion of the benefits of generative models for Wasserstein barycenters. For many applications, such benefits have already been highlighted in their respective papers (see our references in the introduction). We report a few of them here:
- Shape/image interpolation: the density of the generative model is already the interpolating shape and does not require surface reconstruction from point clouds.
- Style transfer: the barycenter generative model can generate previously unseen samples (e.g. new MNIST digits in a given style), as opposed to just transporting already existing ones (MNIST digits are, after all, finitely many).
- Fair regression: the barycenter generative model effectively performs distributional regression for the fair premium, which allows, for instance, the construction of confidence intervals, not just point estimates.
- Generative models make it possible to estimate any statistic of the barycenter distribution arbitrarily well (useful, for instance, in subset posterior estimation). They also allow transforming the distribution further (e.g., it can be shifted to have zero mean, which is useful in actuarial applications).
**And I think that since you explore the ability of your approach to perform the translation from the input samples to barycenter you should perform the comparison with the SOTA solvers (Kolesov a,b). I do not see any obstacles to do it in MNIST experiment during the remaining rebuttal time. You can also easily perform comparison in Gaussians experiment where all of the approaches (generative and non-generative) are applicable.**
The papers (Kolesov 2024a,b) are great, but, as we mentioned in Questions 4 and 5, their methods are not directly comparable with ours because:
- they are not generative: they learn the OT maps from the barycenter to the input distributions, but they don't train a generative model of the barycenter,
- the StyleGAN-based implementation solves a manifold-constrained OT problem with non-quadratic cost, which is not supported by our method, nor by the other baseline models.
**Then according to their results in Table 3 and L2-UVP values reported for your approach, your method provides worse results than WIN and (Kolesov a, b) in all dimensions except for dimension D=2. It poses even more questions about the applicability of your approach.**
Thanks for your feedback. But please, notice that these are two different numerical experiments that cannot be compared. Our Table 1 reports results of an OT problem transporting a source distribution $\mu$ to a target distribution $\nu$, while Table 3 in (Kolesov 2024a) refers to a barycenter problem (input measures $(\mu_s)$ are transported into their barycenter $\bar{\mu}$).
**References**
- Kolesov et al. "Estimating barycenters of distributions with neural OT." arXiv:2402.03828 (2024a).
- Kolesov et al. "Energy-guided continuous entropic barycenter estimation for general costs." NeurIPS 37 (2024b): 107513-107546. | Summary: The paper proposes a new algorithm to compute the Wasserstein barycenter of a set of distributions and, by extension, the optimal transport between any pair of the given distributions. The method rests on a new representation of the barycenter objective in terms of conditional normalizing flows. Since this objective can be readily approximated by empirical averages and variances, the resulting algorithm is very simple and elegant. Experiments illustrate the algorithm's outcomes on the Swiss roll and the MNIST datasets and demonstrate its superiority over the competition in systematic comparisons.
Claims And Evidence: The paper derives and proves the new variant of the objective (theorem 4.3). Since the proof short-cuts some steps by citing results from the literature, its correctness is not readily apparent. The experiments are well designed and support the superiority claim.
Methods And Evaluation Criteria: The experiments adapt an evaluation protocol with analytic ground truth from the literature. This makes results directly comparable and supports the superiority claim.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: Contains proofs, background on normalizing flows and experimental details. Looks OK.
Relation To Broader Scientific Literature: The experiments only include two alternative methods. I do not know if there are others that should have been included.
Essential References Not Discussed: None found.
Other Strengths And Weaknesses: The authors note -- in accordance to earlier findings -- that samples from the MNIST barycenter look like a superposition of 10 digits with the same style. It should be discussed what this means: Is the barycenter a useful notion of the "average" of a set of given distributions? Does it have appealing properties beyond minimizing the Wasserstein distance? If yes, are these properties useful to simplify or enable downstream tasks? If no, what's the point of computing barycenters?
Other Comments Or Suggestions: The authors might want to consider putting the proofs of theorems 4.2 and 4.3 in the main paper. They are not very long, the paper seems to have substantial unused white space, and it would make the presentation more self-contained.
Minor points and typos:
* The formal definition of the L_p(lambda)-norm is missing. This makes theorem 4.2 hard to understand.
* It is never specified over which distributions expectations and variances are taken. This makes theorem 4.3 hard to understand.
* A closing bracket is missing in the change-of-variables formulas in algorithms 1 and 2.
* Line 205: "s may be univariate discrete". Do you mean "univariate continuous"?
* Section 5.2.1: The weights w_s are not specified.
Questions For Authors: Can you demonstrate the usefulness of Wasserstein barycenters for downstream tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Dear reviewer,
we have carefully addressed all your comments below. According to the ICML review guidelines, we are not allowed to submit a revised manuscript at this point of the review process, but below we discuss in detail the edits we will make based on your suggestions in the camera-ready version of the paper, if accepted.
**1. Can you demonstrate the usefulness of Wasserstein barycenters for downstream tasks?**
Wasserstein barycenters have found application in many downstream tasks, such as shape interpolation, image interpolation, color translation, style translation, Bayesian subset posterior estimation, clustering in Wasserstein space, and fairness (for references for each application, see the introduction of the paper).
Our interest in real-life applications to fairness in insurance prompted us to develop a method with good scalability properties in the number of input distributions, which is absent in the existing literature and one of the main contributions of our paper. Since the method is of independent interest for the broader Wasserstein barycenter community, in this paper we focus on introducing it and showcasing its competitive performance on well-established benchmarks, leaving its application to real-life actuarial datasets for upcoming work.
**2. Samples from the MNIST barycenter look like a superposition of 10 digits with the same style. Is the Wasserstein barycenter a useful notion?**
The notion of Wasserstein barycenter is in general non-trivial. For instance, the Wasserstein-2 barycenter of two Dirac measures $\delta_x$ and $\delta_y$ (for $x,y \in \mathbb{R}^d$) is not the measure $\frac{1}{2} \delta_x + \frac{1}{2} \delta_y$, but rather the Dirac measure centered at their midpoint, i.e. $\delta_{\frac{x+y}{2}}$. Thanks to this property, Wasserstein barycenters can provide very natural-looking averages of 2d/3d images, *when each image is modelled as an input distribution in 2d/3d space*, as can be seen in Figures 1 and 7 of (Solomon et al, 2015) or Figures 1 and 2 of (Cuturi et al, 2014).
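This midpoint property is easy to check numerically in one dimension, where the quantile function of the Wasserstein-2 barycenter is the average of the input quantile functions (for equal-size samples: the pointwise average of the sorted samples). A minimal numpy sketch, entirely our own illustration (the function name is not from the paper):

```python
import numpy as np

def w2_barycenter_1d(samples_a, samples_b):
    """Wasserstein-2 barycenter (weights 1/2, 1/2) of two 1-d empirical
    distributions with equal sample sizes: average the quantile
    functions, i.e. the sorted samples, pointwise."""
    return 0.5 * (np.sort(samples_a) + np.sort(samples_b))

# Two Dirac measures delta_x and delta_y as single-point samples:
x, y = np.array([0.0]), np.array([4.0])
print(w2_barycenter_1d(x, y))  # [2.] -- the Dirac at the midpoint

# The mixture 1/2*delta_x + 1/2*delta_y would instead keep mass at
# both 0 and 4, which is a different object from the barycenter.
```
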
Recent works, instead, focus on image datasets in which *input distributions are not themselves images, but rather empirical distributions of images* (as in the MNIST barycenter task in our Section 5.2.3). While these benchmarks may be useful to showcase model scalability, they are of very limited practical interest (the barycenter images are just pixel-wise averages of input images) and do not correspond to any meaningful image interpolation task. We agree with the reviewer and acknowledge that there is a pressing need for the Wasserstein barycenter community to develop more meaningful benchmark tasks in high dimensions, possibly moving away from image data. We hope our future work on actuarial datasets will contribute to this.
**3. It is never specified over which distributions expectations and variances are taken. This makes theorem 4.3 hard to understand.**
We follow the standard convention of assuming that there is an underlying probability space on which all random variables are defined (this can be done without loss of generality, see Lemma 8.16 of (Kallenberg, 2002)). All expectations are then well-defined Lebesgue integrals with respect to this probability measure. We will add this assumption explicitly in the paper.
**4. Line 205: "s may be univariate discrete". Do you mean "univariate continuous"?**
By "univariate discrete'', we actually meant an $\mathbb{R}$-valued random variable taking finitely many values (such as the discrete uniform distribution). To avoid misunderstandings, we will edit the text in l.205 to read "[...] the conditioning variable $s \in \mathcal{S}$ may be $\mathbb{R}$-valued and taking finitely many values (as in Section 5.2.4, where $\mathcal{S}$ is the uniform grid on $[0, \pi]$ with $n$ points) or one-hot encoded (as in all other numerical experiments).''
**5. The authors might want to consider putting the proofs of theorems 4.2 and 4.3 in the main paper. They are not very long, the paper seems to have substantial unused white space, and it would make the presentation more self-contained.**
This is a good suggestion. We will do this, provided we can respect the ICML page limit for the revised paper.
Thanks also for reporting the typos, missing definitions and missing parameter values. We will correct them all in the revised paper.
**References**
- Solomon, Justin, et al. "Convolutional wasserstein distances: Efficient optimal transportation on geometric domains." ACM Transactions on Graphics (ToG) 34.4 (2015): 1-11.
- Cuturi, Marco, and Arnaud Doucet. "Fast computation of Wasserstein barycenters." International conference on machine learning. PMLR, 2014.
- Kallenberg, Olav. Foundations of Modern Probability. Springer Science \& Business Media, 2002.
---
Rebuttal Comment 1.1:
Comment: > Answer 2: While these benchmarks may be useful to showcase model scalability, they are of very limited practical interest (the barycenter images are just pixel-wise averages of input images) and do not correspond to any meaningful image interpolation task.
For better or worse, the shortcomings of barycenters for the image dataset (superposition of the classes) are readily visible to the human eye. For more abstract distributions (e.g. data from medicine or biology) these shortcomings may be much less apparent (and thus go undetected), but still detrimental for the barycenter to be useful for downstream tasks. I'm thus not yet fully convinced that barycenters are really the right definition of an "average" between a set of distributions, despite their popularity as a research topic in its own right. It would be good to add at least some discussion of this problem to the paper.
> Answer 3: We follow the standard convention of assuming that there is an underlying probability space on which all random variables are defined,
I know that conventions like this are very common in mathematics (similar to dropping parentheses because the "correct grouping of terms is obvious" -- no, it is not). As a computer scientist, I find these conventions really stupid. Being explicit immensely helps readability of the equations. Computer science learned the hard way that readability is much more important than brevity! I thus tend to insist that you explicitly write out over which distributions the expectations are taken (typically as a subscript to the E symbol).
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
thanks for your replies.
**I'm thus not yet fully convinced that barycenters are really the right definition of an "average" between a set of distributions, despite their popularity as a research topic in its own right. It would be good to add at least some discussion of this problem to the paper.**
Thanks for your suggestion. We agree that Wasserstein barycenters are only one possible notion of average between distributions and that other notions could also make sense. However, there are several contexts in which Wasserstein barycenters have been proven to be optimal, as in the following cases:
- Theorem 2.3 of the NeurIPS paper (Chzhen et al, 2020) shows that optimal fair predictors are given by Wasserstein barycenters,
- Theorem 4 of the JMLR paper (Srivastava et al, 2018) shows that the barycenter of subset posteriors converges to the true posterior distribution.
We think that these results are strong evidence for the usefulness of Wasserstein barycenters. We agree with your suggestion and will add more motivation for the relevance of Wasserstein barycenters in our paper.
**I know that conventions like this are very common in mathematics (similar to dropping parentheses because the "correct grouping of terms is obvious" -- no, it is not). As a computer scientist, I find these conventions really stupid. Being explicit immensely helps readability of the equations. Computer science learned the hard way that readability is much more important than brevity! I thus tend to insist that you explicitly write out over which distributions the expectations are taken (typically as a subscript to the E symbol).**
We agree that clarity is important. We will clarify the statement and proof of Theorem 4.3.
**References**
Chzhen et al. "Fair regression with wasserstein barycenters." Advances in Neural Information Processing Systems 33 (2020): 7321-7331.
Srivastava et al. "Scalable Bayes via barycenter in Wasserstein space." Journal of Machine Learning Research 19.8 (2018): 1-35. | null | null | null | null | null | null |
Dueling Convex Optimization with General Preferences

Paper Decision: Accept (poster)

Summary: This paper considers the problem of dueling bandits with general preferences, where the preference model between two decisions (called the transfer function) is not restricted to specific choices. The main technical challenge in this problem is the estimation of gradients, since only the preference feedback (a one-bit feedback generated from a Bernoulli distribution) is accessible, which is even harder than one-point bandit feedback. To this end, the authors proposed a novel estimator $g_t = o_t u_t$, which does not necessarily align with the true gradient but is still sufficient for a gradient descent update. As for results, the authors achieved an $O(\epsilon^{-4p})$ oracle query complexity to obtain an $\epsilon$-optimal decision. Furthermore, for strongly convex functions, the aforementioned bound can be further improved to $\tilde{O}(\epsilon^{-2p})$ using a similar idea to epoch-GD, which is theoretically optimal.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, mostly.
Experimental Designs Or Analyses: There are no experiments in this paper.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Strengths
1. Studying general preference functions is a meaningful problem, which may be further applied to real-world applications, for example, RLHF.
2. The adopted assumption is mild and reasonable. The authors have provided detailed explanations to validate its rationality.
3. The proposed algorithm is simple and easy to implement. It generally follows the update rule of gradient descent but with a carefully chosen gradient estimation. The key challenge lies in the technical analysis of this update rule.
## Weaknesses
1. The guarantee of Theorem 2 only holds for some unknown decision generated in the optimization process. To find it out, additional operations are needed, which will lead to extra query complexity.
Other Comments Or Suggestions: I noticed that Thm 2 has an assumption of $0 < \epsilon < 5 \beta \min\\{dD^2, \sqrt{d} r D / G\\}$. What if the required error $\epsilon \ge \min\\{dD^2, \sqrt{d} r D / G\\}$? I guess this should not be a big deal since the problem would become easy if the required error is large.
Questions For Authors: Not many.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: > “The guarantee of Theorem 2 only holds for some unknown decision generated in the optimization process. To find it out, additional operations are needed, which will lead to extra query complexity.”
You are right to notice that, due to the challenging feedback model, the nature of our convergence guarantee becomes more intricate than simply holding for the final iterate of the algorithm. We address this issue in detail in the paragraph in lines 167-177 (right column).
For additional details regarding the $\log(T)$ factor see our reply to reviewer br4X.
> “What if the required error $\epsilon \geq 5\beta\min( d D^2 ,\sqrt{d}rD/G )$”
Indeed, as you probably realized, in this case we can simply set $\epsilon=5\beta\min( d D^2 ,\sqrt{d}rD/G )$ and obtain an only stronger guarantee, and the convergence bound will be independent of $\epsilon$ and it will only depend on the problem parameters $\beta,d,D,r,G$.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the reply. I acknowledge the contributions of this work. However, as mentioned in the review comments, the condition on $\epsilon$ seems to be an assumption, which would confuse readers. As the authors have said, when the required error is larger, a stronger guarantee can be achieved. As a result, I highly recommend that the authors could add this case and its corresponding results to the theorem to make it self-contained. I maintain my current score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s additional comments and thoughtful discussion. We agree that a comment about the condition on $\epsilon$ is in order, to clarify that it does not limit the generality of our result in any way. We will carefully incorporate this into the final version - thanks!

---

Summary: This work studies convex optimization with dueling feedback and general transfer functions. The main contribution is an algorithm with $\epsilon^{-4p}$ convergence rate for smooth and convex functions and $\epsilon^{-2p}$ convergence rate for smooth and strongly-convex functions, where $p$ is the minimal degree (with a non-zero coefficient) in the transfer’s series expansion about the origin.
Claims And Evidence: The statements are supported with proofs.
Methods And Evaluation Criteria: NA
Theoretical Claims: The proofs in the main paper seem to be correct.
Experimental Designs Or Analyses: NA
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength:
This work improves the prior art by considering a very general class of transfer functions where $p$ can be greater than $1$. The rates in smooth and strongly convex objectives match the existing lower bounds.
Weakness:
However, the setting $p>1$ does not seem very important. This work lacks a discussion of meaningful examples and concrete problems/applications in practice.
Other Comments Or Suggestions: NA
Questions For Authors: Are there any example of concrete problems in practice where $p > 1$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > “Are there any example of concrete problems in practice where p>1?”
First, we should note that even for $p=1$ our results constitute the first convergence bounds for convex optimization with (approximately) linear transfer functions.
Next, to your question, it is worth recalling what the transfer function $\rho$ abstracts. One motivation is similar to reinforcement learning from human feedback (RLHF), where the transfer function abstracts how human preference behaves given similarly-valued alternatives. The most natural assumption is that when the alternatives are of similar values, humans select almost randomly. The degree $p$ abstracts how small changes in the quality of the two alternatives are translated to differences in human evaluations. A higher $p$ value implies that differences are harder to detect. For this reason, $p=2$ is equally well-motivated as $p=1$, and there is no particular fundamental reason to model preferences using a linear function.
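To make the role of the degree $p$ concrete, here is a tiny sketch of how the preference probability responds to a small value gap. This is our own illustration, not the paper's model: we use one natural parameterization $P = (1+\rho(\mathrm{diff}))/2$ and a made-up $p=2$ transfer $\rho(z)=z|z|$.

```python
def pref_prob(diff, rho):
    """P(first alternative preferred) given value difference diff,
    under the feedback parameterization P = (1 + rho(diff)) / 2."""
    return 0.5 * (1.0 + rho(diff))

rho_p1 = lambda z: z            # linear transfer, p = 1
rho_p2 = lambda z: z * abs(z)   # p = 2: derivative vanishes at 0

for d in (0.01, 0.1):
    print(d, pref_prob(d, rho_p1), pref_prob(d, rho_p2))
# For a small gap d, the p=2 transfer deviates from the coin flip 1/2
# by O(d^2) rather than O(d): quality differences are harder to detect.
```
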
Again, note that the transfer function transfers between the valuations (e.g. in LLMs: how good a response to a prompt is) and the human response (e.g. in LLMs: the likelihood that a human will prefer each alternative). There is no reason to believe the transfer should necessarily be linear.

---

Summary: This paper proposes an algorithm for the setting of dueling convex optimization, with a broad class of transfer functions.
Claims And Evidence: The claims are well supported.
Methods And Evaluation Criteria: The proposed methods make sense.
Theoretical Claims: I checked the proof for the non-strongly-convex case.
Experimental Designs Or Analyses: N/A
Supplementary Material: I reviewed all of the supplementary materials.
Relation To Broader Scientific Literature: The two most related works are Jamieson et al. (2012) and Saha et al. (2021). This work is a natural extension of Saha et al. (2021).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. This work follows a clear logic and is technically solid, especially the analysis of the stopping time.
Weaknesses:
1. This work could be improved by justifying the algorithm design with experiments.
2. In general, the paper lacks interpretation of the results. I was not sure whether the proposed algorithm can be compared against some naive algorithms, given that the algorithm is the first one for the setting being studied. The choice of hyperparameters also need more discussions.
3. The algorithm design of this paper is mostly an extension of Saha et al. (2021b). The authors could have over-claimed the contribution of the algorithm design, especially the "relative gradient".
Other Comments Or Suggestions: 1, The concept of "admissible transfer functions" appears abruptly in the title of Subsection 2.2, without further explanations.
2. In terms of the structure of the paper, the main text includes too much proof detail and inadequate discussions of the main theorems.
Questions For Authors: 1. Could the authors further elaborate on the following statement, which addresses the problem that the time index $t$ achieving good performance is unknown: *Since errors accumulate only along the path from the root to the true minimum, both the accuracy and confidence of the entire comparison procedure will decrease by a $\log T$ factor.*
2. Can the assumption of $\rho(0)=0$ be relaxed? Can the results be improved if $\rho$ is an odd function?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: > “This work could be improved by justifying the algorithm design with experiments”
This is primarily a theoretical work in optimization with partial feedback, and our main goal is to understand the fundamental achievable convergence bounds in this setting. This is also the focus of much of the prior work in this space. We feel that the provided theory gives ample justification for the algorithm.
> “I was not sure whether the proposed algorithm can be compared against some naive algorithms, given that the algorithm is the first one for the setting being studied”
When we have dueling feedback, it is even challenging to design “naive” algorithms for the general convex (and smooth) case. One could view the existing algorithm of Jamieson et al (2012) as an attempt to generalize a natural noisy binary search algorithm to a high-dimensional setting, but unfortunately, their convergence requires strong assumptions such as strong convexity as their extension relies on a coordinate descent scheme. Our main contribution is to give precise convergence bounds for the general convex (and smooth) dueling setting.
> “The choice of hyperparameters also need more discussions”
The hyperparameters of our algorithms are set to guarantee the best performance bound we can derive. The rationale for their choice is to optimize, in hindsight, the convergence bound we establish.
> “The algorithm design of this paper is mostly an extension of Saha et al. (2021b)”
First, we should emphasize that, in our view, expanding the scope of comparison-based / dueling optimization to a variety of nonlinear transfer functions is novel and significant. In fact, even the most well-studied transfers, like the sigmoid, were not covered by the existing literature in this context. Our aim was precisely to fill in this gap.
Regarding technical novelty, specifically compared to Saha et al. (2021b): on a very high level, like Saha et al. we follow the approach of designing a gradient estimator using the dueling feedback and using it for running gradient descent. However, our gradient estimation analysis is very different from that of Saha et al., due to the generality and nonlinearity of the transfer function in the feedback. This can be seen in Lemma 3 and its proof, which analyzes the gradient estimator in the case of nonlinear transfer: roughly, we do this by inspecting the local polynomial behavior of the transfer around zero and working out the effect of the polynomial transformation on the expected value of the gradient estimator. In the estimation process, the magnitude of the gradient gets distorted (see the $||\nabla f(w_t)||^{p-1}$ leading factor in the expected value) so we can no longer use the standard subgradient descent analysis, which we replace with a different approach akin to the analysis of normalized gradient descent.
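A schematic of this estimator-plus-descent scheme, as we understand it from the description above (the constants, the toy objective, the specific transfer, and the feedback parameterization are placeholders of ours, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_step(w, f, rho, delta=0.1, eta=0.05):
    """One schematic update: estimate a descent direction from a single
    binary comparison between w and the perturbed point w + delta*u."""
    u = rng.standard_normal(w.shape)
    u /= np.linalg.norm(u)                    # uniform direction on the sphere
    p = 0.5 * (1.0 + rho(f(w + delta * u) - f(w)))
    o = 1.0 if rng.random() < p else -1.0     # one-bit dueling feedback
    return w - eta * o * u                    # estimator g_t = o_t * u_t

f = lambda w: np.sum(w ** 2)   # toy smooth convex objective
rho = np.tanh                  # one admissible nonlinear transfer
w = np.full(5, 2.0)            # f(w) = 20 at the start
for _ in range(3000):
    w = dueling_step(w, f, rho)
print(f(w))  # much smaller than the initial f(w) = 20
```

Note that the step is along $o_t u_t$, whose magnitude is fixed, which is why the analysis resembles normalized gradient descent rather than the standard subgradient descent analysis.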
We agree a more detailed explanation of the technical novelty compared to Saha et al. should be included in the paper itself - we will improve this in the final version.
> Other Comments Or Suggestions:
Thank you for your suggestions, we will incorporate them in the final version of the paper.
> “Since errors accumulate only along the path… will decrease by a $\log{T}$ factor”
We will have at the end of the run $T$ points, and we would like to select the one with the minimal function value among them. If we had exact comparisons, we could build a complete binary tree over the $T$ points and compute the minimum by selecting, at each node of the tree, the minimum value among its descendants. With our actual noisy comparisons, we have “errors” of magnitude $\epsilon$ at each node of the tree, and if we sum these errors along the path to the minimal leaf, the error is amplified by a $\log(T)$ factor (which is the depth of the tree). Alternatively, we can start with a goal error of $\epsilon/\log(T)$ and end with a cumulative error of $\epsilon$.
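The tree argument can be sketched with a worst-case $\epsilon$-approximate comparator; the code and its adversarial noise model are our own illustration, not the paper's procedure:

```python
import math

def approx_cmp(a, b, eps):
    """Worst-case eps-approximate comparison: returns the smaller value,
    except that when the gap is at most eps the noisy comparator may
    (adversarially) report the larger one instead."""
    return max(a, b) if abs(a - b) <= eps else min(a, b)

def tournament_min(vals, eps):
    # Complete binary tree over the points: each round keeps the
    # (approximate) winner of every adjacent pair.
    while len(vals) > 1:
        vals = [approx_cmp(vals[i], vals[i + 1], eps)
                for i in range(0, len(vals), 2)]
    return vals[0]

T, eps = 16, 2
vals = list(range(T))              # true minimum is 0
winner = tournament_min(vals, eps)
depth = math.ceil(math.log2(T))    # root-to-leaf path length: 4
print(winner, eps * depth)         # 3 8
assert winner <= eps * depth       # error at most eps * log2(T)
```
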
> “Can the assumption of $\rho(0)=0$ be relaxed?”
It is worth recalling what the transfer function $\rho$ abstracts. One motivation is similar to reinforcement learning from human feedback (RLHF), where the transfer function abstracts how human preference behaves given similarly-valued alternatives. The most natural assumption is that when the alternatives are of similar values, humans select almost randomly. In our formal feedback model, this essentially implies that $\rho(0)=0$.
Having said that, technically our algorithm works without modifications in the case where $\rho(0) \neq 0$, since it is agnostic to additive bias in $\rho$ (this can be seen from the statement of Lemma 9 in the appendix).
> “Can the results be improved if $\rho$ is an odd function?”
We are unaware of whether the bounds can be improved for odd transfer functions. In fact, our preliminary results were initially for odd functions and we were happy to see that they extend naturally to more general transfer functions, with the same rates, as we describe in the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the response. The rebuttal resolves most of my concerns, especially regarding the technical contributions. I have raised my score, and it would be great to see the discussion of the technical novelty compared with previous works in the revised version.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's engagement and openness to revisit the initial concerns after reading our response. And thanks again for the thoughtful and constructive feedback---we will be sure to incorporate these discussions in the final version.
PhySpec: Physically Consistent Spectral Reconstruction via Orthogonal Subspace Decomposition and Self-Supervised Meta-Auxiliary Learning

Paper Decision: Accept (spotlight poster)

Summary: The paper presents PhySpec, a novel method for hyperspectral image (HSI) reconstruction from RGB images. It addresses the "colorimetric dilemma" where existing methods fail to consistently reproduce ground-truth RGB from predicted HSI, compromising physical integrity. PhySpec uses orthogonal subspace decomposition to estimate camera spectral sensitivity (CSS) and ensures reconstructed spectra align with physical principles. It also introduces a self-supervised meta-auxiliary learning (MAXL) strategy to adapt trained parameters to unseen samples and constrain generated HSIs to accurately recover ground-truth RGB values. The method is validated through extensive experiments, showing superior performance compared to state-of-the-art methods.
Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. The authors provide extensive experimental results on multiple datasets using metrics to demonstrate the superiority of PhySpec. They also include ablation studies to validate the effectiveness of individual components like CSS estimation, MAXL, and the dynamic illumination estimation module. The visual comparisons and error maps further reinforce the claims about improved spectral and spatial quality. However, the paper could benefit from more detailed discussions on the computational efficiency and real-world applicability beyond the provided datasets.
Methods And Evaluation Criteria: The methods and evaluation criteria for the HSI reconstruction from RGB images are reasonable. The standard criteria of SAM, SSIM, and PSNR are fitting for assessing both spectral and spatial quality. The proposed method was evaluated on the ARAD-1K Synthetic, ARAD-1K Real, and ICVL datasets. However, adding experimental results on wider datasets like CAVE and Harvard would enhance the evaluation.
Theoretical Claims: The mathematical formulation of subspace decomposition is plausible. Prior work (Lin & Finlayson, 2020) is cited for foundational theory, but the paper could benefit from deeper theoretical analysis of the proposed adaptations. Regarding formula (8): a linear operator Φ = sl is defined to represent the spectral downsampling process, while its pseudo-inverse Φ† represents the spectral upsampling procedure. Since spectral super-resolution is an ill-posed problem, it would be best to supplement the theoretical analysis or empirical studies on the existence of the pseudo-inverse.
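On the existence question: the Moore-Penrose pseudo-inverse always exists, but the upsampling it defines is only a minimum-norm preimage, which is one way to see the ill-posedness. A toy numpy sketch (the random 3×C CSS matrix and the band count are made-up placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
C = 31                                   # number of spectral bands
S = np.abs(rng.standard_normal((3, C)))  # toy 3 x C CSS matrix (full row rank)

Phi_pinv = np.linalg.pinv(S)             # Moore-Penrose pseudo-inverse, C x 3

spectrum = np.abs(rng.standard_normal(C))
rgb = S @ spectrum                       # spectral downsampling Phi
spectrum_up = Phi_pinv @ rgb             # minimum-norm spectral upsampling

# Colorimetric consistency: projecting the upsampled spectrum back to
# RGB reproduces the input exactly (S has full row rank, so S @ pinv(S) = I).
print(np.allclose(S @ spectrum_up, rgb))    # True
# But the upsampled spectrum differs from the original: ill-posedness.
print(np.allclose(spectrum_up, spectrum))   # False
```
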
Experimental Designs Or Analyses: The experiments comprehensively compare nine SOTA methods, yet the practicality of test-time adaptation (4 gradient steps) may be questioned. It would be better to report the time consumption of test-time adaptation in comparison to other methods. The ablation study effectively isolates component contributions. As a spectral super-resolution work, comparisons should be added with the latest spectral super-resolution methods such as MFormer and NeSSR, in addition to CESST. The paper already shows visualizations of the reconstruction results and the estimated camera spectral sensitivity. Could visualizations for the illumination estimation part be added as well?
Supplementary Material: The appendix includes additional error maps (Figure 8) and spectral curves (Figure 9), supporting visual claims.
Relation To Broader Scientific Literature: The work builds on spectral reconstruction (Cai et al., 2022b; Yang et al., 2024) and meta-learning (Liu et al., 2019), adding physical constraints. It bridges data-driven learning and imaging physics, advancing prior model-based and learning-based approaches. However, for scholars in the field of spectral super-resolution reconstruction, more literature on orthogonal subspace decomposition and its applications should be included.
Essential References Not Discussed: Recent work on "General Hyperspectral Image Super-Resolution via Meta-Transfer Learning" also uses meta-learning for transfer learning, and "Unsupervised Test-Time Adaptation Learning for Effective Hyperspectral Image Super-Resolution With Unknown Degeneration" proposes an unsupervised test-time adaptation method for the hyperspectral image super-resolution task; both offer more background for this method. Unsupervised-learning-based spectral super-resolution methods (e.g., "Semantic-embedded Unsupervised Spectral Reconstruction from Single RGB Images in the Wild" and "MFormer: Taming Masked Transformer for Unsupervised Spectral Reconstruction") can fine-tune only on RGB data during inference. This capability deserves discussion and experimental comparison.
Other Strengths And Weaknesses: Strengths:
1. The method effectively addresses the colorimetric dilemma by ensuring physical consistency between reconstructed spectra and ground-truth RGB colors.
2. The integration of orthogonal subspace decomposition with self-supervised meta-auxiliary learning is innovative and provides a robust framework for HSI reconstruction.
3. The method fully considers the unknown spectral response function and lighting conditions in actual imaging processes and uses them to guide model design.
4. Comprehensive experimental validation on multiple datasets demonstrates the superiority of PhySpec over existing methods.
Weaknesses:
1. The computational complexity and efficiency of the method, particularly during the meta-auxiliary testing phase, could be further optimized for real-time applications.
2. While the paper shows strong performance on the provided datasets, the generalization to entirely new and unseen imaging scenarios beyond the tested datasets could be explored in more depth.
3. The dynamic illumination estimation module, while effective, might benefit from additional ablation studies to better understand its individual contribution to the overall performance.
Other Comments Or Suggestions: The paper is well-written and well-structured. However, the following improvements would enhance its clarity and comprehensiveness:
Please clarify the illumination estimation in Eq. 11 (e.g., kernel size k).
In Algorithm 1, Meta-auxiliary Training, it is written "while not converged do". Please add details on how convergence is judged, either in the main text or in the implementation details of the experiments.
The paper does not introduce "manifold", but Figures 1 and 3 use it to describe RGB and spectral distributions. Please modify the figure descriptions or add an explanation of "manifold" to the paper.
Questions For Authors: How does PhySpec’s computational time (including test-time adaptation) compare to SOTA methods?
Will code and pre-trained models be released to ensure reproducibility?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: # Response to Reviewer yejW:
Thank you very much for your constructive comments and suggestions. We will try our best to address your concerns here.
### _Q1: Computational complexity and inference time of the meta-auxiliary testing phase._
**A1:** Thanks for your suggestion. The table below compares inference time against MST++, PADUT, CESST, and SPECAT, four competitive methods for the spectral recovery task. All evaluations are computed on the validation set of ARAD-1k Real, with image size $512 \times 482$, on a single NVIDIA Ampere A100 with 40G RAM. As can be seen, our method achieves fast inference without meta-auxiliary testing and outperforms existing methods by a large margin. The inference time remains comparable with existing methods when meta-auxiliary testing is enabled.
| Method | MST++ | PADUT | CESST | SPECAT | PhySpec without MAXL testing | PhySpec with MAXL testing |
| ---- | ---- | ---- | ---- |---- | ---- |---- |
| Time (s) | 0.471 | 1.515 | 1.476 | 0.353 | **0.158** | 1.276 |
### _Q2: Generalization to unseen datasets._
**A2:** Thanks for your suggestion. In fact, we use the training sets of the ARAD-1k Synthetic dataset and the ICVL dataset for training and directly evaluate on the validation set of ARAD-1k Real (the hyperspectral data of the test set is not available to the public). Considering that the CSS function of the ARAD-1k Real dataset is kept private and the corresponding RGB inputs are post-processed with unknown noise and compression, we choose the ARAD-1k Real dataset for generalization ability validation, which means that the ARAD-1k Real dataset is an entirely unseen dataset for evaluation.
This setting is sound for evaluating the generalization ability of each model in real-world applications, and the quantitative results provided in Table 1 validate that our method not only outperforms existing methods on synthetic datasets but also shows better performance on the real-world dataset.
### _Q3: Visualization of the illumination descriptor._
**A3:** We acknowledge that more details and an illustration diagram are needed to clarify our designed DIEM and the illumination descriptor, as also suggested by **Reviewer Dmx9**. We have provided an anonymous link that illustrates the architecture of this module at **[Link of DIEM](https://imgur.com/a/svPkzlZ)** and will update it in our revised paper. In fact, the illumination descriptor $l'$ is **implicitly** estimated by our designed DIEM since no illumination information is provided by the training dataset, which is why we could not compare the illumination descriptor estimated by our method with ground-truth targets. Besides, it is also impractical to access such ground-truth illumination information in the real world. Therefore, we believe directly visualizing such a descriptor is unnecessary, considering that its effectiveness has already been validated through comprehensive ablation studies.
### _Q4: Essential references not discussed._
**A4:** Thanks a lot for your recommendation. We will survey those papers as you mentioned and include them in our revised paper, including:
[1] Cheng, Yingsong, et al. "General hyperspectral image super-resolution via meta-transfer learning." TNNLS 2024.
[2] Zhang, Lei, et al. "Unsupervised test-time adaptation learning for effective hyperspectral image super-resolution with unknown degeneration." TPAMI 2024.
[3] Zhu, Zhiyu, et al. "Semantic-embedded unsupervised spectral reconstruction from single RGB images in the wild." ICCV 2021.
[4] Li, Jiaojiao, et al. "MFormer: Taming masked transformer for unsupervised spectral reconstruction." TGARS 2023.
### _Q5: Other concerns._
**A5:** We choose the early stopping point as the convergent point. We plan to release our code and pre-trained models after the paper's status is confirmed. | Summary: PhySpec is presented in this paper. This is a method that attempts to reconstruct hyperspectral images from RGB images, ensuring the HSI reproduces the original RGB colours. It does this by learning the camera's colour response, handling varying illumination, and using orthogonal subspace decomposition to achieve physical consistency. At test time, an auxiliary loss is used for fine-tuning as a form of test-time adaptation. Experimental evidence is provided to show the method's generalisation.
Claims And Evidence: The paper makes several claims about the PhySpec methodology.
(1) It produces HSI which accurately reproduce input RGB colours when projected back using the estimated camera spectral sensitivities.
This claim is mostly Supported. Evidence: Figure 7 (RGB reproduction), and the auxiliary loss function enforcing this. However, a direct comparison to a plain U-Net baseline is missing.
(2) The method generalizes better to unseen data (different cameras, illuminations) than existing methods. Strongly supported. Evidence: Table 1 (cross-dataset evaluation – training on synthetic, testing on real).
(3) SOTA Performance: PhySpec achieves superior quantitative and qualitative results compared to existing methods.
Mostly Supported. Evidence: Table 1 (comparison to other methods). Again, a direct U-Net baseline is missing.
(4) Interpretability via effective CSS and illumination estimation: it accurately estimates the camera CSS and an image-specific illumination descriptor. Mostly supported. Evidence: Figure 2 (CSS estimation visualization), Table 2 (ablation study showing the importance of DIEM). However, there is no uncertainty quantification.
(5) Test-time adaptation helps. Supported. Evidence: Table 2 (bottom), which explores the hyper-parameter.
Implicit Baseline Comparison: The biggest issue is the lack of a direct comparison to a plain U-Net baseline with the same architecture. This makes it hard to definitively attribute all performance gains to the physical constraints.
Uncertainty Quantification: No uncertainty estimates are provided for the reconstructed HSI, the estimated CSS, or the illumination descriptor. This is a significant limitation for real-world applicability.
Noise Robustness: While generalization to real data suggests robustness, there's no direct experiment with controlled noise levels.
Methods And Evaluation Criteria: To enforce the physical constraint of RGB reproduction, the paper employs Orthogonal Subspace Decomposition to separate the HSI into components directly related to RGB and those that need further learning.
The camera spectral response is what is used for physical consistency. Learning it increases applicability and is sensible, since it can reasonably be assumed constant. The illumination is learned as well. There is a degeneracy in learning both the spectral response and the illumination under varying lighting, which is one of the challenges and motivates introducing the OSD for the CSS and the DIEM for the illumination. To learn both the CSS and the illumination, the dataset also needs to have enough spectral diversity. While this is not discussed, it is implicit in the selection of the ARAD-1k and ICVL data. I think there may be a need for a discussion of how this can limit the learning.
MAXL: this is an auxiliary meta-learning scheme that is convincingly appropriate for improving generalization and for enforcing the RGB reproduction constraint during both training and testing.
The evaluation metrics (SAM, SSIM, PSNR) are fairly standard and have caveats, but MSE error maps are also shown. Multiple datasets are evaluated, and the cross-dataset evaluation is also interesting for demonstrating one of the paper's claims on generalisation from synthetic to real data. Overall, the dataset and evaluation section is fairly consistent with the paper's claims.
My main concerns are the following:
* lack of a direct comparison to a plain end-to-end U-Net trained on the same data: such a baseline would be quite valuable; without it, it is harder to isolate the benefits of the proposed physical constraints and to demonstrate the generalisation.
* no uncertainty quantification: the absence of uncertainty estimates (e.g., uncertainty intervals for the reconstructed HSI or estimated CSS) is a limitation, especially for real-world applications, and this is after all an application paper. I understand this can be extra work given the various steps of the full pipeline, where each step would introduce errors.
Theoretical Claims: There are no specific theoretical claims, but there are some assumptions in the formulation, mostly due to the approximations in the physical modelling. These assumptions could limit the scalability of the physical modeling. To name a few:
* linearity of the decomposition: the orthogonal subspace decomposition itself is a linear operation. This is a limitation, as real-world spectral relationships can be non-linear.
* spatially invariant CSS: the model assumes that the CSS is constant across the sensor. This is never the case as far as I know, but it may be a second order effect for the applications of this method.
* global Illumination descriptor: The DIEM learns a global illumination descriptor. This might not be sufficient to capture highly complex, spatially varying illumination.
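To make the first point above concrete, here is a minimal numpy sketch of the linear orthogonal subspace decomposition the review refers to: a spectrum is split into a range-space part (recoverable from its RGB projection) and a null-space part (invisible to the camera). The shapes and the use of a pseudoinverse are my own assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def osd_components(S, y):
    """Split a spectrum y into range-space and null-space components w.r.t.
    a CSS matrix S. S: (3, B) camera spectral sensitivities; y: (B,) spectrum.
    Illustrative sketch only -- not the paper's actual code."""
    S_pinv = np.linalg.pinv(S)      # (B, 3)
    x = S @ y                       # simulated RGB measurement
    y_range = S_pinv @ x            # range-space component (determined by x)
    y_null = y - y_range            # null-space component: S @ y_null == 0
    return y_range, y_null
```

Because `S @ y_null` is exactly zero, the null-space part carries precisely the spectral information the camera cannot observe, which is what the network must fill in.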
Experimental Designs Or Analyses: Mentioned above: a large set of experiments, but missing a direct baseline comparison with a plain U-Net of the same architecture mapping between RGB and HSI.
Supplementary Material: I reviewed the appendix which includes a short section with more visuals.
Relation To Broader Scientific Literature: PhySpec differs from recent data-driven approaches (cited in the paper) by explicitly incorporating physical constraints derived from the image formation process. It attempts to physically model the camera's response and the illumination.
The paper has some similarity with Lin & Finlayson (2020) but has a number of extensions which make it a lot more comprehensive: it learns the CSS, handles illumination variation with the DIEM, uses a more powerful neural network, and employs meta-learning (MAXL) to improve generalization and robustness.
All these extensions are well established individually, but their integration into PhySpec is what differentiates this paper.
Essential References Not Discussed: I do not see any essential reference missing.
Other Strengths And Weaknesses: PhySpec relies on paired RGB-HSI datasets, which are not readily available in most real-world scenarios. This limits the assessment of practical applicability.
Other Comments Or Suggestions: N/A
Questions For Authors: * Would you consider adding a comparison with a baseline U-Net?
Ethical Review Concerns: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer Rrdb:
Thank you very much for your constructive comments and suggestions. We will try our best to address your concerns here.
### _Q1: Direct comparison with a plain U-Net baseline._
**A1:** Thanks for your suggestion. In fact, we have provided such a comparison in Table 2. In Table 2, we use a transformer-based U-Net architecture [Cai Yuanhao, et al. MST++, CVPRW 2022] as the baseline for the breakdown ablation study, which is Sim-PhySpec in the first row. As you mentioned, this architecture can be viewed as the plain U-Net baseline. In fact, it is a competitive baseline that has been validated in both [Cai Yuanhao, et al. MST++, CVPRW 2022] and [Cai Yuanhao, et al. MST, CVPR 2022], where it ranked first place for the HSI reconstruction task in the NTIRE 2022 Spectral Recovery Challenge. As can be seen in Table 2, all other variants based on our method show performance gains over the vanilla U-Net-based baseline. As such, we believe the investigation of the ablation study is fair and sufficient.
### _Q2: Uncertainty Quantification._
**A2:** Thanks for your insightful comments regarding the uncertainty estimation for the spectral recovery task. We are really interested in this topic to assess the reliability of model predictions, especially in the presence of noisy inputs, model limitations, or shifted data distributions, which is common in real-world applications. Though this concern has never been investigated in existing works, as suggested, we plan to study this issue in our future work, where uncertainty exists in both spatial and spectral dimensions. However, we have conducted comprehensive experiments across several datasets and ablation studies, including a real-world dataset, ARAD-1k Real, which verifies both the effectiveness and robustness of our method compared with SOTA methods.
### _Q3: Noise Robustness._
**A3:** Thanks for your suggestion. We conducted experiments to investigate the robustness of our model under controllable noise disturbance. Specifically, we injected Gaussian noise at various levels, $\sigma \in \{15, 25, 50\}$, into both training and testing samples of the ARAD-1k Synthetic dataset. We compared with MST++, PADUT, CESST, and SPECAT, four competitive methods in the spectral recovery task. The results are provided in the table below. As can be seen, our method still outperforms existing methods and shows less performance degradation with noisy inputs.
| Method | $\sigma = 15$ | $\sigma = 25$ | $\sigma = 50$ |
| ---- | ---- | ---- | ---- |
| MST++ | 34.11 | 33.87 | 33.55 |
| PADUT | 34.92 | 34.58 | 33.60 |
| CESST | 35.65 | 34.27 | 33.29 |
| SPECAT | 34.06 | 33.79 | 33.12 |
| Ours | **36.80** | **36.32** | **35.88** |
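For reference, the noise-injection protocol described above presumably amounts to something like the following sketch. The [0, 1] value range, the 0-255 scale for $\sigma$, and the clipping are my assumptions; the paper's exact pre-processing conventions are not specified here:

```python
import numpy as np

def add_gaussian_noise(img, sigma, seed=None):
    """Inject Gaussian noise at level sigma (given on the 0-255 scale, as in
    the table above) into an image with values in [0, 1]. Illustrative only."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma / 255.0, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)
```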
### _Q4: Spectral diversity of both CSS and illumination descriptor learning._
**A4:** Thanks for your insightful comments. In fact, we randomly selected 23 CSSs for training and 5 CSSs for testing from [Jiang, Jun, et al. "What is the space of spectral sensitivity functions for digital color cameras?." WACV 2013.] to construct the ARAD-1k Synthetic dataset, which helps our model to learn rich and diverse spectral distributions. In terms of the illumination descriptor, considering there is no illumination ground truth provided, it is designed to learn image-specific illumination information implicitly. Comprehensive ablation study experiments have validated its effectiveness.
### _Q5: Theoretical Claims of the linearity of the decomposition._
**A5:** Thanks for your comments. Although the decomposition itself is linear, decomposing the original signal into a range-space component and a null-space component, the estimation of these two components is nonlinear. In particular, we re-parameterize the null-space component as an efficient compensation and regularization term, which enhances the spectral reconstruction with nonlinear residual information that adheres to both the data measurements and prior information. | Summary: The authors propose a method of generating hyperspectral image data from RGB data. In addition to estimating the hyperspectral data, the authors also explicitly estimate the camera sensitivity curves and the illumination spectrum as latent variables in the network. As an auxiliary task, the camera sensor sensitivity variable, the illumination spectrum variable, and the estimated hyperspectral information are used to reconstruct the original RGB image in order to enable self-supervised learning. The paper provides comparative results on three hyperspectral data sets.
Claims And Evidence: The paper effectively compares the new components with prior work. The overall performance of the method is substantially better than the compared methods.
Methods And Evaluation Criteria: The data sets and methods used are appropriate.
Theoretical Claims: The mathematical derivations are clear.
Experimental Designs Or Analyses: The authors used three data sets to compare performance. The caption for table 1 seems to suggest that two data sets were used for training with a third held out for testing. I'm interpreting that to mean that the results for ARAD-1k Synthetic and ICVL are training accuracies, while the results for ARAD-1k Real are test accuracies. That procedure is fine, especially since the data set held out is the real-world set and not the synthetic set. However, the text in section 4.2 doesn't indicate anything about which data sets were used for training or testing and what results are being reported. It would be good to have the text of 4.2 match the caption of table 1 in terms of which results are training set performance and which are test set performance.
Supplementary Material: n/a
Relation To Broader Scientific Literature: The design components that decompose the problem into parts based on the physics of imaging are important to achieving good performance, as shown by the ablation study. It shows there is value in incorporating physical knowledge into the network design.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Some of the strengths of the paper include.
1. Decomposition of the task into parts according to standard physical models of imaging: CSS, illumination, spectrum
2. Enabling self-supervised learning by adding the auxiliary task of recreating the original RGB image.
3. Improved performance on a real-world data set.
My primary frustration with the paper was insufficient explanation of some components of the design (see below). I believe some minor revision is necessary in order for the text describing the DIEM block to be correct.
Other Comments Or Suggestions: Section 3.3: DIEM
The description of the DIEM is unclear, and the terms used don't appear in figure 4. In particular, what is the "latent feature f"? There is no "f" shown in figure 4, and it's not one of the terms used in the prior description of the process.
How does the 2-layer convolution in the DIEM integrate with the apparent convolution block in figure 4 that feeds into the pooling and DIEM blocks? Is there an additional pair of conv layers in the DIEM block? Figure 4 indicates that the RGB image x_n is fed directly into the DIEM block, along with a signal from a convolution block applied to x_n. How are the two signals combined?
Given the importance of the DIEM block, the paper needs to include a diagram, or more detail needs to be given in figure 4.
My reading of the description of the block is that some latent vector f, which is likely the result of a conv stack, is average pooled to create a kernel, g_k(h), which implies the kernel is a function of h, which is a 2-layer conv encoding of the RGB image. But how is f related to h? Then the kernel built from f (or h?) is applied to f. It feels like it would make more sense if the kernel were built from one signal (e.g. h) and applied to the other signal (e.g. f). With two encodings of the image being routed to the DIEM module (figure 4), is that what is actually happening?
Questions For Authors: It's not clearly stated, but are the RGB images used in the data sets linear images, or sRGB? Given the focus on physical consistency, my assumption is they are linear. That should probably be confirmed, because when most papers discuss "traditional RGB images", they imply sRGB images processed for human viewing.
Why are ResNet layers necessary to recreate the image as x = sly? Are the s, l, y signals not suitable for direct combination?
The authors show the validity of the material spectra and camera sensitivity estimates, but there is no similar analysis of the illumination estimates. Is this also a color constancy paper, or are the illumination estimates more placeholders than actual estimates?
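For context, the "direct combination" x = sly asked about above corresponds to the standard linear image-formation model, where each RGB channel integrates sensitivity × illumination × spectrum over the bands. The sketch below uses my own shapes and names for illustration; it is not the paper's code:

```python
import numpy as np

def render_rgb(s, l, y):
    """Per-pixel x_c = sum_b s[c, b] * l[b] * y[..., b].
    s: (3, B) camera spectral sensitivities, l: (B,) illumination spectrum,
    y: (..., B) per-pixel spectral signal. Illustrative sketch only."""
    return (y * l) @ s.T  # -> (..., 3) RGB
```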
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer Dmx9:
Thank you very much for your constructive comments and suggestions. We will try our best to address your concerns here.
### _Q1: Description of training and testing datasets._
**A1:** Thanks a lot for your concerns. Just as you envisioned, we use the training set of the ARAD-1k Synthetic dataset and ICVL dataset for training and directly use the validation set of ARAD-1k Real (The hyperspectral data of the test set is not available to the public) for evaluation. Considering that the CSS function of the ARAD-1k Real dataset is not disclosed to the public and the corresponding RGB inputs are post-processed with unknown noise and compression, we choose the ARAD-1k Real dataset for generalization ability validation.
We think this setting is sound for evaluating the generalization ability of each model in real-world applications, and the quantitative results provided in Table 1 validate that our method not only outperforms existing methods on synthetic datasets but also shows better performance on the real-world dataset.
We will revise Section 4.2 and add data usage details as you suggested.
### _Q2: Description of Dynamic Illumination Estimation Module (DIEM)._
**A2:** We acknowledge that more details and an illustration diagram are helpful in clarifying the proposed DIEM module. The DIEM is shown in the purple box (at the top) of the Meta-Auxiliary Training Stage in Figure 4. As can be seen, DIEM takes two inputs, latent feature **f** from a transformer-based encoder [Cai Yuanhao, et al. MST++, CVPRW 2022] and the original RGB input image $x_n$. The goal of this module is to capture the specific illumination representation for the input image adaptively rather than to use fixed features across all samples, which is more reasonable in the real world. To achieve it, inspired by the dynamic filters (e.g., [Han Chunrui, et al. Face recognition with contrastive convolution. ECCV 2018]), we first use two convolution layers with a kernel size of $3 \times 3$ to encode the input image $x_n$ into feature **h**. Then, we apply average pooling to **h** to generate an illumination-aware filter $g_k(h)$ with kernel size $k=7$ (Please note that we misspelled this feature **h** as **f** in our manuscript, and we will revise this error). Finally, the latent feature **f** is convolved with the illumination-aware filter with depth-wise convolution, followed by an average pooling layer to obtain an illumination-specific representation.
We have provided an anonymous link that illustrates the architecture of this module at **[Link of DIEM](https://imgur.com/a/svPkzlZ)** and will update it in our revised paper.
Your understanding of the system details is correct, and we sincerely appreciate it!
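To summarize the data flow the answer above describes (encode x_n to h, pool h into a k × k illumination-aware filter, depth-wise convolve f with it, then average-pool), here is a minimal numpy sketch. The pooling and padding details, and the absence of learned convolution weights, are simplifying assumptions for illustration, not the actual DIEM implementation:

```python
import numpy as np

def adaptive_avg_pool(x, out_h, out_w):
    # x: (C, H, W) -> (C, out_h, out_w) by simple block averaging
    C, H, W = x.shape
    hs = np.linspace(0, H, out_h + 1).astype(int)
    ws = np.linspace(0, W, out_w + 1).astype(int)
    out = np.zeros((C, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[:, i, j] = x[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]].mean(axis=(1, 2))
    return out

def illumination_descriptor(f, h, k=7):
    """f: (C, H, W) latent feature; h: (C, H, W) conv encoding of the image.
    Illustrative sketch of the dynamic-filter idea, not the paper's code."""
    # 1) pool h into one k x k illumination-aware filter per channel
    kernels = adaptive_avg_pool(h, k, k)                  # (C, k, k)
    # 2) depth-wise convolve f with its channel's filter
    C, H, W = f.shape
    pad = k // 2
    fp = np.pad(f, ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty_like(f)
    for i in range(H):
        for j in range(W):
            out[:, i, j] = (fp[:, i:i + k, j:j + k] * kernels).sum(axis=(1, 2))
    # 3) global average pooling -> illumination-specific representation
    return out.mean(axis=(1, 2))                          # (C,)
```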
### _Q3: Clarification of input RGB images._
**A3:** The input RGB images used in our method are sRGB images, which is consistent with existing methods for fair comparison. Since RGB images provided in the ARAD-1k Real dataset are in sRGB format, as well as most other real-world datasets, to avoid modality misalignment, we adopt sRGB images for training. Please note that the physical consistency in our paper mainly refers to the fact that our method fundamentally exploits the intrinsic physical constraints between HSIs and corresponding RGBs, as well as the self-supervised meta-auxiliary learning framework that enforces generated HSIs to consistently and accurately recover ground-truth RGBs, thereby ensuring physical integrity for the ill-posed inverse problem.
### _Q4: Clarification of the ResNet in the auxiliary task and the illumination estimation._
**A4:** Thanks for your insightful comments regarding the necessity of the ResNet layers in recovering RGB images, as well as the validation of illumination estimation. In fact, the illumination descriptor $l^{'}$ is implicitly estimated by our proposed DIEM since there is no illumination information provided by the training dataset, which is the reason why we could not evaluate and compare the illumination descriptor estimated by our method with ground-truth targets. Besides, it is also impractical to access such ground-truth illumination information in the real world. As such, directly applying the $s, l^{'}, y$ signals to recover the target RGB image is suboptimal in such a fixed manner. In contrast, we adopt ResNet as a learnable module to introduce an auxiliary task to encourage accurate recovery of target RGB images with self-supervised meta-auxiliary learning. In the future, we will explore the potential of using our method with accurate illumination estimation by introducing target illumination priors, such as the illumination spectra of white and amber LEDs with a Specim IQ mobile HSI camera. | null | null | null | null | null | null | null | null |
DreamDPO: Aligning Text-to-3D Generation with Human Preferences via Direct Preference Optimization | Accept (poster) | Summary: This paper introduces DreamDPO, an optimization-based framework for text-to-3D generation that aligns the generated 3D content with human preferences. Instead of relying on absolute quality scores from reward models, DreamDPO employs Direct Preference Optimization (DPO). The method operates in three iterative steps: 1) Pairwise Example Construction, 2) Pairwise Comparison, and 3) Preference-Guided Optimization, where a novel piecewise loss function, derived from the pairwise preference, guides the update of the 3D representation parameters. This piecewise loss is designed to prevent noisy gradients when the compared examples are very similar.
The paper demonstrates the proposed method's effectiveness through experiments on the GPTEval3D benchmark, comparing it against 13 existing text-to-3D generation methods. Quantitative evaluations using ImageReward, CLIP score, and GPTEval3D's metrics show good results in terms of text-image alignment, 3D plausibility, and texture/geometry detail. Qualitative comparisons also support these claims. The paper also explores the use of MLLMs/LMMs for providing preference feedback. Ablation studies investigate the impact of different diffusion model backbones, reward models, and the score gap threshold used in the piecewise loss.
## Update After Rebuttal
I thank the authors for their detailed rebuttal. The provided quantitative comparison with DreamReward (Rebuttal Table 2-1) is appreciated and clarifies their relative performance. The authors also committed to better contextualizing their work regarding feed-forward methods.
However, the justification for the $\tau$ hyperparameter remains unclear, and the potential limitations of using 2D metrics for 3D evaluation were not fully resolved. Most importantly, the concern regarding the method's slow computational speed compared to state-of-the-art feed-forward approaches was not addressed.
Due to these remaining issues, particularly the unaddressed speed limitation, my assessment has not changed.
Claims And Evidence: Overall, most of the claims made in the paper are backed up by good evidence. The results on the GPTEval3D benchmark (Table 1) and the visual examples (Figures 2, 3, 7, 11, 12) show that DreamDPO can generate reasonable 3D objects that match the text descriptions well. The piecewise loss function also seems to be important, as shown by Figure 6 and the explanation in Section 3.1. The experiments in Section 4.3 and Figure 8 demonstrate the flexibility of the method.
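As a rough illustration of the gating idea behind the piecewise loss (my own sketch based on the paper's description of a score-gap threshold suppressing noisy gradients for near-identical pairs; this is not the paper's actual loss formulation):

```python
def preference_signal(score_win, score_lose, tau=1e-3):
    """Gate the pairwise preference signal by the reward-score gap: when the
    two candidates are nearly tied (|gap| <= tau), emit no signal so that
    near-identical pairs do not inject noisy gradients. Illustrative only."""
    gap = score_win - score_lose
    if abs(gap) <= tau:
        return 0.0
    return 1.0 if gap > 0 else -1.0
```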
The paper does compare DreamDPO to DreamReward, but only qualitatively (Figure 7). While these visual comparisons are helpful, it would be much stronger to also include a quantitative comparison. This is a missed opportunity to directly compare DreamDPO to a closely related method that uses a different approach (a custom-trained 3D reward model).
Methods And Evaluation Criteria: The methods proposed in this paper show some promise for text-to-3D generation. The DreamDPO framework and the piecewise loss function are interesting ideas. However, there are some aspects of both the methods and the evaluation that could be strengthened. The reliance on pairwise comparisons and 2D image metrics, while common practice, introduces potential limitations that aren't fully explored, such as the gap between 2D-3D perception. More thorough ablation studies focusing on the piecewise loss would be beneficial. The evaluation, while using a standard benchmark, could be significantly improved by including direct 3D mesh quality metrics and, crucially, by providing a quantitative comparison with DreamReward. The current evaluation doesn't provide fully convincing evidence of the method's superiority, especially considering the reliance on rendered 2D views only.
Theoretical Claims: This paper is primarily empirical and does not present theoretical claims or proofs.
Experimental Designs Or Analyses: The use of the GPTEval3D benchmark and multiple metrics is generally a good approach, and the comparisons to a wide range of baselines are valuable. Still, the lack of a quantitative comparison with DreamReward is a significant issue. The ablation studies are helpful in understanding the impact of different components, but the exploration of LMM capabilities could be more extensive, and the justification of $\tau$ relies mostly on the 2D toy example. While the qualitative evaluations provide visual support, they are inherently subjective. A more thorough user study could strengthen the claims of improved human preference alignment.
Supplementary Material: The supplementary material is well-organized and provides useful information. The additional qualitative results (D.1) provide further visual evidence. Overall, I believe the supplementary material strengthens the paper by providing more context, implementation details, and supporting results.
Relation To Broader Scientific Literature: This paper connects its contributions to the broader literature on (SDS-based) text-to-3D generation and learning from human preferences.
Essential References Not Discussed: I did not find any essential references that were missing from the paper's discussion.
Other Strengths And Weaknesses: In addition to the points already raised, there are a few minor weaknesses. The discussion of limitations could be more thorough, particularly regarding the reliance on pre-trained models and 2D metrics. While the experiments analyze the impact, more justification about the hyperparameter selection would also make the paper clearer and stronger.
Crucially, the method is very slow compared to many state-of-the-art 3D generation methods that use feed-forward approaches. This is a major practical limitation that needs to be addressed more directly.
Other Comments Or Suggestions: It would be helpful for the paper to better contextualize its approach within the broader landscape of 3D generation. While the SDS-based method using 2D diffusion models is a valid area of research, many of the current state-of-the-art results in terms of visual quality and speed are coming from feed-forward methods that use large-scale 3D models (diffusion or autoregressive) trained on massive 3D datasets. Mentioning these other approaches, and explaining why this paper focuses on the SDS-based method, would make the paper more complete.
Questions For Authors: 1. The paper is using 2D image reward models (HPSv2, ImageReward) to evaluate 3D assets. Could you elaborate on the potential limitations of this approach? Are there specific types of 3D inconsistencies or artifacts that these 2D models might miss? Have you considered any ways to mitigate this domain gap (e.g., incorporating some form of 3D consistency check)?
2. The piecewise loss uses a threshold of $\tau = 0.001$. Figure 6 shows this works in 2D, but could you give a bit more explanation, or maybe some 3D-specific results, to justify this particular value?
3. The paper employs MVDream as the backbone. Have you tried other SDS-based text-to-3D generation methods? How does the proposed method perform with them?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive reviews. We provide our responses below.
> **Q1:** Quantitative comparisons with DreamReward.
**A1:** Thanks for your suggestion. We conduct a quantitative comparison on GPTEval3D to evaluate human preference alignment, including number correction. Specifically, we report the number accuracy and the CLIPScore. The results below show that DreamDPO demonstrates competitive performance in prompt alignment and a significant improvement in number correction.
| Method | Number Accuracy $\uparrow$ | CLIPScore $\uparrow$ |
| ----------- | --------------- | ------------------ |
| DreamReward | 41.7% | 0.2855 |
| DreamDPO | 71.7% | 0.2787 |
Table 2-1: Quantitative comparisons of DreamDPO and DreamReward on GPTEval3D. We calculate the CLIPScore for prompt alignment and number accuracy for number correction.
We attribute this to the fact that DreamDPO does not rely on precise reward scores, allowing it to leverage various black-box AI models for scoring, which helps correct numbers and attributes and improves human preference alignment.
> **Q2:** The potential limitations on 2D metrics.
**A2:** Yes. 2D image reward models are relatively mature, with abundant data and good reward performance. However, they have limitations. While they improve prompt alignment, such as correcting character attributes (e.g., changing "man" to "knight"), they struggle to address inherent Janus issues (e.g., characters with three legs). Our DreamDPO is flexible with both 2D and 3D rewards. Experiments in [Figure 3-1](https://imgur.com/a/XdVx2sn) demonstrate that it effectively mitigates Janus issues and some potential artifacts, offering a more reliable evaluation for 3D consistency.
> **Q3:** More exploration of LMM capabilities.
**A3:** Thanks. We have further analyzed the failure cases of DreamDPO with LMMs (see R2-Q2). Additionally, we demonstrate that increasing the number of comparison candidates can effectively mine positive samples, leading to improved performance with LMMs (see R2-Q3). Lastly, we show that DreamDPO works effectively across various LMMs (see R2-Q4).
> **Q4:** Additional discussion with feed-forward methods.
**A4:** Thanks. Aligning feed-forward methods with human preferences is meaningful but remains underexplored, primarily due to the significant computational resources required. DreamDPO offers valuable insights for feed-forward methods. Specifically, by constructing candidates with varying noise and using ranking-guided optimization, DreamDPO provides a potential pathway for RL optimization of feed-forward methods. We will polish the paper to better contextualize it within the broader landscape of 3D generation.
> **Q5:** Ablation studies on the score gap threshold in the 3D setting.
**A5:** Thanks. Please kindly check R1-Q1.
> **Q6:** More results with other SDS-based text-to-3D generation methods.
**A6:** Thanks. We evaluate the performance of DreamDPO with different backbones in Section 4.3; specifically, with Stable Diffusion v2.1 (SD2.1). The results demonstrate that DreamDPO works effectively with SD2.1 and achieves competitive results compared to ProlificDreamer. While SD2.1 shows improvements over the baseline, MVDream outperforms it due to its superior 3D consistency. Therefore, we adopt MVDream as the default backbone.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their rebuttal. I have carefully reviewed the rebuttal along with the comments from other reviewers. While I appreciate the clarifications provided, I will be maintaining my original score. | Summary: This paper proposes DreamDPO, an optimization-based method to better align 3D generation with human preferences for text-to-3D generation. In detail, it constructs pairwise examples to formulate a reward loss function for preferred images with lower loss and less preferred images with higher loss. It conducts comprehensive experiments with 13 baselines and shows the superiority of the proposed method.
Claims And Evidence: Yes, the claims are clear and convincing.
Methods And Evaluation Criteria: Yes, the authors utilize the ImageReward score for human preference evaluation, the CLIP score for text-image alignment evaluation, and GPTEval3D for 3D quality evaluation. Their method is evaluated effectively using these meaningful criteria.
Theoretical Claims: This paper does not include the theoretical discussion.
Experimental Designs Or Analyses: I checked the soundness of the experimental designs and analyses. The main experiment demonstrates the efficiency of the proposed method compared with the other baselines, along with more refined 3D generation. The authors also discussed the effect of different backbones, reward models, score gaps, pair examples, model design, and further applications in the experiment sections. I think the experiments are comprehensive and meaningful.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: * Strengths:
1. This paper is well-written and well-organized.
2. The research topic is interesting and meaningful. Text-to-3D can be utilized in many practical applications such as games and design.
3. The proposed method, DreamDPO, is much more efficient than existing baselines and generates more refined 3D assets.
* Weaknesses:
1. How about the computational efficiency of the proposed method compared with the other baselines?
2. I am still confused about the determination of $x_t^{win}$ and $x_t^{lose}$. Do you assign the sample with lower loss to $x_t^{win}$ and the sample with higher loss to $x_t^{lose}$?
3. Can current 3D items generated by text-to-3D methods be used in manufacturing? I ask this question because I am very curious about the practical applications of this field.
Other Comments Or Suggestions: Please see the weaknesses.
Questions For Authors: Please see the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive reviews. We provide our responses below.
> **Q1:** The computational efficiency comparison to other baselines.
**A1:** Thank you for your question. We summarize the computational cost of our method compared with other text-to-3D generation baselines as follows:
| Method | DreamFusion | Fantasia3D | Latent-NeRF | ProlificDreamer | MVDream | DreamDPO |
| ----------------- | ---------- | ---------- | ---------- | --------------- | ------- | -------- |
| Computation Time | 1.5 hours | 1.5 hours | 30 minutes | 10 hours | 1 hour | 2 hours |
Table 2-1: Analysis of the generation time of text-to-3D generation methods.
While introducing additional computational overhead due to pairwise example construction, our method achieves the best results on the GPTEval3D benchmark, demonstrating superior alignment with human preferences. Meanwhile, we can reduce generation time by 25% (to ~1.5 hours) while maintaining performance (see R1-Q4).
> **Q2:** How are $x_t^{win}$ and $x_t^{lose}$ determined?
**A2:** We utilize a reward model to compute the scores of pairwise examples. The sample with a higher score is regarded as the "win" example ($x_t^{win}$), while the sample with a lower score is regarded as the "lose" example ($x_t^{lose}$).
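The selection rule described above can be sketched as follows; `score_fn` is a hypothetical stand-in for the reward model and is not part of the paper:

```python
def pick_win_lose(candidates, score_fn):
    # Rank candidates by reward-model score (score_fn is a placeholder);
    # the highest-scoring one is "win", the lowest-scoring one is "lose".
    ranked = sorted(candidates, key=score_fn)
    return ranked[-1], ranked[0]  # (x_win, x_lose)
```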
> **Q3:** Can current text-to-3D generated items be used in real-world manufacturing?
**A3:** Yes. Text-to-3D generation is increasingly suitable for real-world manufacturing, particularly in customized product design. It allows engineers to quickly create 3D models from textual descriptions. While further refinement may be needed to meet precise industrial standards, the technology is proving valuable in automating design workflows and enabling mass customization.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have decided to keep my positive score on this paper. | Summary: This paper introduces DreamDPO, an optimization-based framework for text-to-3D generation. The authors propose to integrate human preferences into the generation process through direct preference optimization. The authors claim that the key innovation is leveraging pairwise comparisons to guide optimization instead of relying on absolute, pointwise quality scores. Specifically, DreamDPO constructs pairwise examples, evaluates them using reward models or large multimodal models (LMM), and then optimizes the 3D content with a designed preference-driven loss function. This paper shows superior alignment with textual inputs and improved controllability over existing methods on established benchmarks.
Claims And Evidence: The claims presented are generally supported by substantial evidence through qualitative and quantitative experiments. The experiments demonstrate that DreamDPO somewhat generates higher-quality and more controllable 3D assets compared to existing methods, supported by human preference evaluation scores, GPTEval3D scores, and comprehensive ablation studies.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sensible and appropriate. The benchmark GPTEval3D dataset is commonly used in text-to-3D generation. Meanwhile, the human preference evaluation using ImageReward also provides robust evaluation metrics. The experimental setups align with standard practices for evaluating generative models, ensuring validity and comparability with prior works.
Theoretical Claims: No explicit theoretical proofs are provided for this paper.
Experimental Designs Or Analyses: The experimental designs and analyses appear sound and rigorous. Detailed ablation studies (e.g., different Gaussian noise, score gap thresholds, reward models, and backbones) are clearly conducted, strengthening the validity of the results. The experimental framework effectively validates the method’s key contributions.
Supplementary Material: The supplementary material was reviewed, focusing on Appendix sections detailing additional implementation specifics, pseudo-code, and experimental details. Supplementary materials clarify the experimental procedures and strengthen the main text's claims.
Relation To Broader Scientific Literature: The most related paper is DreamReward. Both DreamReward and DreamDPO focus on the human preference alignment problem in text-to-3D generation. DreamReward collected and labeled a multi-view preference dataset and accordingly trains a multi-view reward model, and use this model for 3D human preference alignment.
Different from DreamReward, the authors claim that DreamDPO shifts from absolute scoring to direct preference optimization and introduces a pairwise optimization loss, which reduces reliance on precise pointwise quality evaluations. Therefore, DreamDPO can use image-aware reward models and LMMs for preference-aligned generation.
Essential References Not Discussed: The paper appears comprehensive in its references.
Other Strengths And Weaknesses: Pros:
1. The motivation is clear, which addresses the need for improved human alignment and controllability in 3D generation tasks. Besides, the writing is easy to follow.
2. The method is simple but effective. It integrates human preferences through pairwise comparisons, which improves the alignment of the generated 3D content with user expectations and instructions. I believe it can improve other optimization-based 3D generation methods and has wide generalizability.
3. The experiments are convincing. Experiments show DreamDPO generates higher-quality outputs in terms of geometric and textural details compared to baseline models like DreamFusion, DreamGaussian, and MVDream. Besides, it offers explicit and fine-grained control over attributes.
Cons:
1. The ablation study of the score gap threshold $\tau$ in the 3D generation setting is lacking, limiting insights into its influence on performance.
2. The analysis of failure cases is insufficient, which constrains a deeper understanding of DreamDPO’s limitations and operational boundaries.
3. The potential for performance improvement through increasing the number of comparison candidates is worth exploring and should be considered in future work.
4. The applications of various large multimodal models are underexplored. A more detailed discussion is encouraged to assess how LMM quality affects alignment with human preferences.
Other Comments Or Suggestions: The authors do not report the CLIP score in Table 1; the corresponding mention should be removed from the caption.
Questions For Authors: 1. The current parameter analysis is conducted in 2D (Figure 6). To provide a more comprehensive evaluation, I suggest including ablation studies on the score gap threshold $\tau$ in the 3D setting.
2. Including failure cases would offer valuable insights into the limitations of DreamDPO and help delineate the boundaries of its effectiveness.
3. While the pairwise comparison method is effective, its potential could be further explored by increasing the number of comparison candidates per iteration. For example, investigating group-based loss functions, such as GRPO loss in DeepSeek-R1, might lead to improved performance.
4. Although the authors acknowledge that fine-grained controllability is currently limited by the capabilities of the LMM, it would be worthwhile to explore whether advancements in LMMs could mitigate this constraint.
5. It is recommended to conduct a broader evaluation across more LMMs, ranging from weaker to more advanced models, to gain a deeper understanding of their impact on controllability and overall performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive reviews. We provide our responses below.
> **Q1:** Ablation studies on the score gap threshold in the 3D setting.
**A1:** We present the ablation study of the score gap threshold $\tau$ in the 3D setting (see [Figure 2-1](https://imgur.com/a/XdVx2sn)). The results indicate that a large $\tau$ (e.g., $\tau=1$) degrades DreamDPO to the SDS loss, resulting in an over-smoothing issue. Conversely, setting $\tau=0$ results in over-saturation, such as a purplish sunlight appearance in rocks. Therefore, we recommend using a small but non-zero $\tau$.
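The behavior described in A1 can be sketched as a gate on the reward score gap. This is a hypothetical illustration only: the loss terms below are placeholder scalars, not the paper's exact piecewise formulation, but the gating reproduces the reported trend (a large $\tau$ makes the pairwise term fire rarely, degrading toward a single-example, SDS-like update):

```python
def preference_update(s_win, s_lose, grad_win, grad_lose, tau=0.001):
    """Hypothetical sketch of tau-gating on the score gap.

    grad_win / grad_lose are placeholder scalar "gradient" terms for the
    win and lose examples; the real loss operates on diffusion-model noise
    predictions, which are not reproduced here.
    """
    if abs(s_win - s_lose) < tau:
        return grad_win          # gap too small: single-example (SDS-like) update
    return grad_win - grad_lose  # otherwise push toward win, away from lose
```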
> **Q2:** Show some failure cases to highlight DreamDPO’s limitations.
**A2:** While DreamDPO has shown improvements in aligning 3D generation with human preferences, there are still some failure cases. For example, number and attribute correction are crucial for prompt alignment, but DreamDPO sometimes fails to address these issues shown in [Figure 2-2](https://imgur.com/a/XdVx2sn). We found that this limitation arises from the generative model's capacity. When the positive sample is hard to generate, DreamDPO struggles to construct effective positive-negative pairs, causing it to degrade into SDS.
> **Q3:** Increasing the number of comparison candidates per iteration.
**A3:** Thank you for your valuable suggestion. We further extend DreamDPO to multi-sample comparisons. Specifically, we expand STEP1 to multi-example construction, enabling the creation of more comparison candidates (e.g., 4 or 6). We then select the candidate with the highest score as the "win" example and the one with the lowest score as the "lose" example. Experiments in [Figure 2-3](https://imgur.com/a/XdVx2sn) (the first and second columns) demonstrate that this improvement is effective, as it better mines positive samples and enables DreamDPO to construct positive-negative pairs more effectively.
> **Q4:** More evaluation on DreamDPO with large multi-modal models.
**A4:** Following your recommendation, we evaluated the robustness of DreamDPO with various LMMs. Specifically, we conducted experiments using three large multimodal models: Qwen2.5-VL-3B, Qwen2.5-VL-32B, and Qwen-VL-Plus. Notably, Qwen2.5-VL-3B has only 3B parameters, making it a relatively weak LMM. The results in [Figure 2-3](https://imgur.com/a/XdVx2sn) show that when the LMM's performance is relatively poor (e.g., Qwen2.5-VL-3B in the third column), it fails to correct numbers effectively. In SDS-based optimization, number correction should be performed early, which requires strong LMM capabilities. Therefore, high-performing LMMs can improve DreamDPO.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. I think the authors have addressed my concerns, and I am willing to raise my score. | Summary: The paper introduces the DreamDPO framework, designed to enhance text-to-3D generation by aligning generated content more closely with human preferences. Traditional methods often fall short due to their heavy reliance on precise evaluations, restricting flexibility and applicability. DreamDPO employs an optimization-based approach that incorporates human preferences through direct preference optimization.
The methodology consists of three key steps:
1. **Pairwise Example Construction**: The framework constructs pairwise examples by applying varying Gaussian noise to the diffusion model.
2. **Pairwise comparison**: It uses either reward models or large multimodal models to rank these examples based on their alignment with the provided textual prompts.
3. **Preference-guided optimization**: A preference-driven loss function guides the optimization of the 3D representation.
These steps allow DreamDPO to minimize dependence on exact scoring while granting robust control over the generation process.
Claims And Evidence: The claims made in the paper regarding the DreamDPO framework for text-to-3D generation are well-supported by clear and convincing evidence based on the findings presented.
1. **Performance Claims**: The paper asserts that DreamDPO outperforms existing techniques. This claim is supported by extensive experiments outlined in section 4, which detail experiment setups and comprehensive results comparing DreamDPO with 13 state-of-the-art methods.
2. **Quality and Control Claims**: The paper emphasizes improvements in the texture and geometry quality of the generated 3D assets. Direct reference to qualitative results is made, indicating that the method delivers high-quality outcomes and offers fine-grained control over the generation process. These aspects are discussed through quantitative and qualitative analyses, supplemented by ablation studies that validate the claims regarding quality and adaptability.
3. **Innovative Contributions**: The paper introduces a novel optimization-based approach integrating human preferences via direct preference optimization. This conceptual contribution is bolstered by detailing the three-step optimization process, highlighting its significance and effectiveness in achieving better alignment with human preferences compared to traditional methods.
However, while the paper’s claims are largely supported, additional context or clarification could enhance the arguments surrounding the advantages of using large multimodal models in the optimization loop and the implications of reduced reliance on precise scoring. Providing a more detailed discussion on how these elements distinctly outperform existing approaches could help mitigate doubts regarding the robustness of these claims since large multimodal models sometimes could have poor performance.
Overall, the evidence presented supports the claims made throughout the paper.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper make sense for the problem of enhancing text-to-3D generation. The DreamDPO framework introduces an innovative approach that integrates human preferences through a systematic process of direct preference optimization. This method comprises constructing pairwise examples influenced by Gaussian noise, ranking these examples against textual prompts using reward models or large multimodal models, and optimizing the 3D representation.
The evaluation criteria further support this methodology effectively. The paper utilizes two robust evaluation strategies: the ImageReward model, which scores 3D assets based on human preferences by assessing multi-view renderings, and the GPTEval3D benchmark, which compares the generated outputs against 13 baseline methods across five critical criteria—text-asset alignment, 3D plausibility, texture details, geometry details, and texture-geometry coherence.
Both the methodology and the evaluation benchmarks are well-founded and specifically tailored to address the challenges of generating 3D assets that align closely with human expectations, as reflected in the document’s results showing significant improvements in various metrics. Thus, they are appropriate for evaluating text-to-3D generation.
Theoretical Claims: The support of proposed framework appears to be on empirical evaluations and quantitative comparisons rather than theoretical proofs. Thus, there are no specific theoretical claims or proofs outlined that were checked for correctness in the paper. The issues addressed mainly revolve around the limitations of previous methods and how DreamDPO aims to overcome these challenges, rather than validating theoretical aspects through proofs.
Experimental Designs Or Analyses: The paper outlines a series of experiments that were conducted to evaluate the effectiveness of the DreamDPO framework in aligning text-to-3D generation with human preferences. Specifically, it mentions two evaluation strategies employed:
1. **Comparison Using a Text-to-Image Reward Model (ImageReward)**: This model was utilized to assess human preferences for the generated 3D assets based on their alignment with provided text prompts. The average preference scores were calculated across 120 rendered images of the 3D assets.
2. **Pairwise Comparisons with GPT-4V**: This strategy involved generating Elo ratings that reflect human judgments on various criteria, including text alignment, 3D plausibility, and texture-geometry coherence. The pairwise comparison results were used to calculate ratings that position the performance of DreamDPO against baseline methods.
The paper does not explicitly state any issues encountered with these experimental designs. However, it does highlight the limitations of prior methods that DreamDPO aims to address, particularly the reliance on accurate pointwise quality evaluations from reward models, which can hinder flexibility and adaptability. Therefore, while the experimental setup leverages innovative measures of evaluation, the potential weaknesses in the dependence on reward models and previous designs may still pose challenges.
Overall, while the paper provides a sound experimental framework, the discussion on limitations suggests that there are ongoing concerns regarding the robustness of the evaluation methods used.
Supplementary Material: The appendix includes various sections that expand on the main findings and implementation details of the DreamDPO framework. Specifically, the following parts are covered.
1. **Additional Implementation Details**: This section includes pseudo-code for DreamDPO and details on LMM-based pairwise comparison, which uses a large visual-language model for evaluating generated content.
2. **Supplementary Experimental Settings**: This section offers information on measurement metrics used in the experiments to assess the performance of DreamDPO.
3. **Supplementary Experimental Results**: It provides more qualitative results and detailed analysis comparing DreamDPO with existing methods, including various quantitative comparisons.
These additional materials enhance the understanding of the framework’s effectiveness and the experimental methodologies used to validate its claims.
Relation To Broader Scientific Literature: The key contributions of the paper build on several important concepts and findings in the broader scientific literature, particularly in the fields of generative models and human preference integration.
1. **Shift from Absolute to Relative Preference Evaluation**: The paper highlights that traditional methods of evaluating 3D generation heavily depend on precise pointwise quality assessments from reward models, which can be restrictive. Previous work has attempted to incorporate human preferences in 3D content generation, but still lacked flexibility and adaptability due to reliance on absolute scoring. DreamDPO shifts this paradigm by utilizing relative preferences, enabling better alignment with human expectations and enhancing the flexibility of the generation process.
2. **Integration of Large Multimodal Models**: The framework proposed in DreamDPO incorporates insights drawn from large multimodal models. This aligns with recent advancements in text-to-image generation, where models have successfully been used to infer and generate content based on textual inputs. By applying such principles to 3D models, DreamDPO takes advantage of the robust features learned from multimodal datasets, thereby establishing a strong connection with ongoing research in multimodal AI.
3. **Direct Preference Optimization**: DreamDPO’s method of optimization through direct preference ranking addresses previous limitations noted in the literature, wherein automated systems struggle to meet diverse user expectations. The incorporation of a preference-driven loss function within the generation process also ties back to reinforcement learning algorithms that have shown efficacy in similar tasks, using human preferences to guide learning in various contexts.
In summary, the contributions of DreamDPO are deeply interwoven with established theories and methods in the literature, pushing the envelope forward by offering a more adaptable and human-aligned approach to text-to-3D generation.
Essential References Not Discussed: I don’t find essential references that are not discussed.
Other Strengths And Weaknesses: The paper presents the DreamDPO framework, which reflects several notable strengths and weaknesses concerning originality, significance, and clarity.
**Strengths:**
1. **Originality:** The DreamDPO framework introduces a fresh approach to text-to-3D generation by emphasizing direct preference optimization based on human feedback, diverging from traditional methods that rely on pointwise quality evaluations. This optimization-based strategy enhances flexibility and adaptability in generating 3D content, setting it apart from existing techniques that often fail to fully align with human preferences.
2. **Clarity and Structure:** The paper is well-organized, clearly delineating the methodology, experiments, and outcomes. The three-step process of constructing pairwise examples, comparing them, and guiding optimization through preference-driven loss functions is explained clearly, making the complex ideas accessible.
3. **Empirical Validation:** Comprehensive experiments demonstrate that DreamDPO outperforms existing methods, providing robust evidence of its efficacy. This empirical validation strengthens the paper’s claims and offers a solid foundation for the proposed approach.
**Weaknesses:**
1. **Dependence on External Models:** While the framework utilizes reward models or large multimodal models for preference ranking, this dependence may limit the applicability of DreamDPO, especially in scenarios with limited access to such models. The necessity for high-performing models could also restrict its use in real-world applications where computational resources are constrained.
2. **Potential for Bias:** The integration of human preferences into the generation process raises concerns about the inherent biases present in the training data of the reward models. This could lead to outputs that reflect unwanted biases, which may necessitate additional measures to ensure fairness and ethical considerations in generated content.
3. **Time Consumption**: The optimization process takes around two hours on a single NVIDIA RTX A6000 GPU, which is long for text-to-3D generation, since some recent methods can produce results in less than an hour.
Other Comments Or Suggestions: I don’t have other comments or suggestions.
Questions For Authors: 1. Could you explain how the LMM worked in STEP2 more specific? How to get the questions for LMM? Is it provided by the user or generated from template?
2. Could you explain why you need to use such long time for generation? Is there any methods to speed up the generation process?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive reviews. We provide our responses below.
> **Q1:** Dependence on External Models.
**A1:** DreamDPO is compatible with external models and rule-based reward metrics, such as image quality metrics (e.g., BRISQUE [1]) and 3D-consistency evaluation. By leveraging 3D geometry information (depth and camera transformation matrices), we could establish stereo correspondence between views to evaluate multi-view consistency. These approaches do not require additional reward models and are suitable for scenarios with limited computational resources. As shown in [Figure 1-1](https://imgur.com/a/XdVx2sn), we include a case study demonstrating DreamDPO with BRISQUE, a no-reference image quality evaluator, to show its effectiveness.
> **Q2:** How to mitigate the potential bias from reward model?
**A2:** Reward hacking is a known issue in RL-based optimization. One solution is to scale the training data and model size of the reward model. As shown in our experiments in Section 4.3, the reward model HPSv2, which has a stronger generalization ability compared to ImageReward, demonstrates superior generation performance in most cases. Moreover, DreamDPO supports an ensemble of reward models, combining their ranking to reduce reliance on a single model and mitigate unwanted biases (see [Figure 1-2](https://imgur.com/a/XdVx2sn)).
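The ensembling idea in A2 (combining rankings from several reward models) could be sketched as follows. Everything here is a hypothetical illustration: `score_fns` stands in for a list of reward models such as ImageReward and HPSv2, and candidates' per-model ranks are simply summed:

```python
def ensemble_pick(candidates, score_fns):
    # Sum each candidate's rank position across all reward models
    # (score_fns is a placeholder list of scoring callables).
    def total_rank(c):
        return sum(sorted(candidates, key=f).index(c) for f in score_fns)
    ranked = sorted(candidates, key=total_rank)
    return ranked[-1], ranked[0]  # (x_win, x_lose) by combined ranking
```

Relying on ranks rather than raw scores means no single model's score scale dominates the ensemble.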
> **Q3:** How does the LMM in STEP2 work, and where do its questions come from?
**A3:** We detail the LMM-based pairwise comparison in Section B.2. In STEP2, given pairwise examples, the LMM conducts the comparison queries sequentially. For each query, the LMM performs visual question answering based on the provided image and query, and we extract the number of "yes" responses as the score. The questions for the LMM can be customized by the user or generated from a template. For instance, an LLM can automatically extract questions from a given prompt, such as generating "Is the leaf shouting?" from "A shouting leaf". Alternatively, users can define custom questions, such as "Does the elephant stay on the ground?" for "A dancing elephant".
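The yes-counting described above could be sketched in a few lines; the answer strings are illustrative, not actual LMM output:

```python
def lmm_score(answers):
    # Score = number of "yes" responses across the LMM's comparison queries.
    return sum(a.strip().lower().startswith("yes") for a in answers)
```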
> **Q4:** Why is the generation time long, and can it be improved?
| Method | MVDream | DreamDPO (10000 Steps) | DreamDPO (4000 Steps) |
| ------ | ------- | -----------------------| ----------------------|
| Computation Time | 1 hour | 2 hours | 1.5 hours |
Table 1-1: Reducing generation time for DreamDPO while maintaining improved performance.
**A4:** The vanilla SDS takes around 1 hour for optimization. Our method takes approximately 2 hours due to the pairwise example construction. To speed up the process, we can adopt a simple yet effective strategy: performing SDS for the first 6000 iterations and then switching to DreamDPO. As shown in [Figure 1-3](https://imgur.com/a/XdVx2sn), this approach reduces the generation time by 25% (to around 1.5 hours) while maintaining improved performance.
[1] Anish et al. Blind/referenceless image spatial quality evaluator. ASILOMAR 2011.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors’ response to address my concern. I decided to keep my score on this paper. | null | null | null | null | null | null |
Online Learning in the Random-Order Model | Accept (poster) | Summary: This paper studies online learning in the random order model, where the loss functions are fixed in advance but are presented in a random order. It has been known that the random order model lies between the stochastic model, where the loss functions are drawn from a fixed distribution, and the adversarial model, where loss functions are selected adversarially. This paper focuses on designing algorithms under the random order model by making use of algorithms under the stochastic models. On the one hand, the paper gives an example that shows naively running an algorithm that works under the stochastic model would result in a linear regret under the random order model. On the other hand, this paper introduces a framework that designs no-regret learning algorithms under the random order model using no-regret learning algorithms under the stochastic model. Examples of how this framework works include Prediction with Delayed Feedback and Online Learning with Constraints.
Claims And Evidence: The claims and related technical lemmas are stated clearly and the proofs of the statements are clear.
Methods And Evaluation Criteria: This paper uses regret bounds as a measurement of the performance of the online learning algorithms, which is standard in analyzing online learning algorithms.
Theoretical Claims: I only checked the proofs of the statements in the main body of the paper, For theorems in the appendix, I only checked the statements of the technical lemmas used for proving the theorems but did not check the proof carefully.
Experimental Designs Or Analyses: This paper is theoretical and does not need experiments to support the results.
Supplementary Material: I checked the statements of the theorems and the corresponding technical lemmas in the appendix.
Relation To Broader Scientific Literature: Online learning with adversarial order and stochastic order has been studied extensively. The random order model though is asymptotically equivalent to the stochastic setting, is different from the stochastic setting in finite time. The algorithmic framework presented in this paper might inspire designing no-regret learning algorithms for other online learning problems under the random order model.
Essential References Not Discussed: The setting studied in this paper is new and I think there are not many prior works closely related to this paper.
Other Strengths And Weaknesses: Strengths:
1. Random order is an intermediate model that lies between the stochastic setting and the adversarial setting. This paper shows that under finite-time analysis, the random order model differs from the stochastic setting, and it also designs an algorithmic framework that can convert existing no-regret learning algorithms for the stochastic setting into no-regret learning algorithms for the random order model. This framework might be useful for studying other online learning problems in the random order model.
2. The statements and the proofs of the theorems and the technical lemmas are clear. This makes the paper easy to follow.
Weakness:
I am confused about the structure of the main part of the paper. It is a bit unclear what central result the paper wants to sell, as many different online learning problems are considered. Furthermore, it seems that bandits with switching costs and classification in the random order model are both important parts of this paper, yet even informal statements of results from these parts are not given in the main body. In particular, the results on classification in the random order model do not seem to have deep connections with the simulation framework. This makes the structure of the main part of the paper strange and confusing.
Other Comments Or Suggestions: Though not directly related, models such as the best-order, worst-order, and self-directed order models of online learning have been proposed since the 1990s. It could be helpful if related papers such as
*Ben-David, Shai, Eyal Kushilevitz, and Yishay Mansour. "Online learning versus offline learning." Machine Learning 29 (1997): 45-63.*
can be cited and discussed.
Questions For Authors: I am very confused about the motivation for analyzing the birthday testing problem in the stochastic setting. The first phase of the birthday testing problem looks quite artificial; even without the first phase, the algorithm should still be no-regret, and removing it should not result in linear regret under the random order model. So, I am not sure what the analysis of the birthday testing problem implies.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback about the paper.
**Importance of the BIRTHDAY-TEST.** In Section 3, we prove that there is no black-box reduction from random-order to i.i.d. That is, we cannot hope to prove a statement of the form: “Any algorithm with sublinear regret in the stochastic setting has sublinear regret in the RO instance” which, in principle, could reasonably be expected. To argue about this impossibility, we exhibit a specific algorithm (BIRTHDAY-TEST) that is no-regret against any stochastic i.i.d. instance but fails against a specific random-order instance. We stress that this algorithm is only used in the proof of this impossibility result.
**Structure of the paper.** Unfortunately, we had to move important parts of our results to the appendix in the interest of space. We will exploit the extra page in the camera-ready version (and possibly move/shorten Section 3) to move parts of Appendices E and F in the main body.
**Ben-David et al. 97.** We thank the reviewer for pointing out this paper. Although not directly related to our model (we study random order, and they study online, worst-case offline, best-case offline, and self-directed sequences), they share our interest in investigating alternative input generation models beyond adversarial and i.i.d. We will add a discussion in the camera-ready version. | Summary: - This paper studies online learning in the random order model. Here, the adversary can pick an arbitrary sequence of loss functions, but must randomly permute them before presenting them one at a time to the learning algorithm.
- They show that, in full generality, online algorithms with low-regret under stochastic adversaries fail to obtain sublinear regret under random-order adversaries.
- On the other hand, the authors show how to use an online learning algorithm for stochastic adversaries to construct an online learning algorithm under a random-order adversary with minimal blow-up in regret guarantees.
- Using this conversion, they give improved regret bounds for several settings: prediction with delayed feedback, online learning with constraints, bandits with switching costs, and online classification.
- For online classification, they show that the VC dimension is sufficient for learnability under random-order adversaries.
## Update after rebuttal
I thank the authors for their response. As they have satisfactorily addressed my questions and concerns, I will maintain my positive score for this paper.
Claims And Evidence: To me, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the authors evaluate their algorithms using the standard notion of regret.
Theoretical Claims: I went through the proofs of Section 4.1 in the main text. The claims seem to check out to me.
Experimental Designs Or Analyses: This is a purely theoretical paper with no experiments.
Supplementary Material: Yes, I reviewed Sections B and F in the Appendix.
Relation To Broader Scientific Literature: This paper fits nicely into the recent interest of going beyond worst-case adversaries in online learning. Here, the goal is to obtain better regret guarantees by weakening the adversary in some reasonable way. From this perspective, this paper shows that what makes online learning hard under a worst-case adversary is not the fact that the adversary can pick an arbitrary sequence of loss functions, but the order of these loss functions. Indeed, this paper shows that even if you allow the adversary to pick the set of loss functions arbitrarily, online learning can be as easy as batch/stochastic learning if the adversary cannot control the order in which they are shown to the learner.
Essential References Not Discussed: With regards to online classification, a recent paper [1] studies a different type of intermediate setting where predictions about the future examples that need to be classified is available to the learner. Similar to the findings of this paper, the authors there show that for binary classification, VC dimension is also sufficient when such predictions are available.
[1] Raman, Vinod, and Ambuj Tewari. "Online Classification with Predictions." NeurIPS (2024).
Other Strengths And Weaknesses: **Strengths:**
- The paper is well-written and easy to follow.
- The conversion is intuitive and natural.
**Weaknesses:**
One gripe I have with this paper is with its organization.
- In Section 1.1, the authors state that they apply their conversion to get improved regret bounds for 4 settings: prediction with delayed feedback, online learning with constraints, bandits with switching costs, and online classification. Yet, in the main text, only two of these are discussed in detail. My suggestion would be to move Section 3 to the Appendix and use the extra space to provide more details about your results for bandits with switching costs and online classification.
- In addition, the authors do not provide a comparison between the regret bounds they achieve under a random-order adversary and the optimal regret bounds under a worst-case adversary. This makes it hard to tell how much of a quantitative advantage a random order provides. It would be helpful to have a table summarizing your results for the random-order adversary as well as the optimal regret bounds for a worst-case adversary.
My other issue with this paper is that the results are not very surprising as they just result from running the stochastic algorithm on sub-samples of the history. This approach is intuitive, but it is not clear that it is optimal for the random-order model in any of the settings that the authors study. Unfortunately, the authors do not provide lower bounds for the random-order adversaries in any of their settings.
Other Comments Or Suggestions: I would move the sentence "Let $n_i(a)$ denote the times..." under (iii) to after the sentence "Run algorithm $A$..." in (ii) in Simulation Procedure. It was not immediately clear to me that $n_i(a)$ was the number of times $A$ played action $a$ on iid data from $D_i$.
Questions For Authors: (1) What are the lower bounds for random order adversaries in each of the settings you study? If the stochastic algorithm is optimally chosen, do you obtain the optimal regret bounds under random order adversaries in any of your settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and positive feedback about the paper.
**Paper organization.** Unfortunately, we had to move part of our results to the appendix in the interest of space. We thank the reviewer for the suggestion, and we will exploit the extra page in the camera ready (and move/shorten Section 3) to move part of Appendices E and F in the main body.
**Tightness of our results.** We would like to point out to the reviewer that any lower bound for the stochastic setting also applies to the random order one (see also Appendix B); therefore, our results are tight (up to poly-log terms) for the random-order model, as they match the stochastic lower bounds. We highlight that all the settings of interest have minimax rates in the adversarial setting strictly worse than in the stochastic one (see Section 1.1). For example, in bandits with switching costs, the adversarial minimax rate is $T^{2/3}$, while the RO (and stochastic) minimax rate is $\sqrt{T}$. We thank the reviewer for the idea of a table summarizing the results. We will add one in the camera-ready.
**Comparison between random order and stochastic models.** While we expected the RO model to be somewhat closer to the stochastic model than the adversarial one, we strongly believe quantifying the “distances” to be non-obvious.
In particular, our construction proving the non-existence of a black-box reduction from the stochastic to the random-order model shows that any similar “conversion result” needs a non-trivial component. We find the training and testing procedure both intuitive and interesting, as it successfully distills the similarities between random-order and i.i.d. instances. Finally, we find it surprising that the results we obtain are also tight in all the settings we consider (see the answer to Tightness of our results). We thank the reviewer who pointed out that this should be highlighted better. We will add a discussion in the final version.
**Additional literature.** We thank the reviewer for pointing out the recent NeurIPS’24 paper [1]. We will discuss it in the camera-ready.
**Other comments and suggestions.** We will move the sentence about $n_i(a)$ accordingly. Thanks.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and addressing my concerns. I will maintain my positive score. | Summary: This paper introduces a general framework, referred to as SIMULATION, that adapts stochastic (i.i.d.) learning algorithms to the random order model without significantly altering their finite-time performance guarantees. The core idea is straightforward: partition the time horizon into blocks of geometrically increasing length, sample an i.i.d. instance from past observations, and use it to train the algorithm in the current block.
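The block-doubling idea described in this summary (partition the horizon into geometrically growing blocks, train the stochastic algorithm on a sample of past observations, run it for the current block) can be sketched as a toy skeleton. All names here (`FollowTheLeader`, `simulation_template`) are our own hypothetical illustrations, not the authors' actual procedure, which samples more carefully and handles feedback structure; resampling the history with replacement is used below only as a crude stand-in for i.i.d. draws from the empirical distribution.

```python
import random

class FollowTheLeader:
    """Toy stochastic subroutine over 2 actions: plays the action with
    the lowest total loss on its training sample of loss vectors."""
    def __init__(self):
        self.best = 0
    def train(self, sample):
        totals = [sum(loss[a] for loss in sample) for a in range(2)]
        self.best = min(range(2), key=totals.__getitem__)
    def act(self):
        return self.best

def simulation_template(losses, make_alg):
    """Sketch of the block-doubling template: at the start of each
    block, retrain a fresh copy of the stochastic algorithm on a
    resample of the observed history, then run it for the block."""
    T, t, block_len = len(losses), 0, 1
    history, decisions = [], []
    while t < T:
        alg = make_alg()
        if history:
            alg.train(random.choices(history, k=block_len))
        for _ in range(min(block_len, T - t)):
            decisions.append(alg.act())
            history.append(losses[t])
            t += 1
        block_len *= 2          # geometrically increasing block lengths
    return decisions
```

For instance, `simulation_template([(0.0, 1.0)] * 100, FollowTheLeader)` produces 100 decisions, all equal to action 0 in this degenerate case, since every training resample favors that action.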
Claims And Evidence: The findings are interesting but somewhat expected, given that the random order model is statistically indistinguishable from the i.i.d. model in the asymptotic sense. Additionally, there is a well-established body of literature on best-of-both-worlds algorithms that achieve optimal performance in both stochastic and adversarial settings [3]. Since the random order model lies between these two extremes in terms of generality, do such algorithms not already address the problem in this setting?
A key concern is the omission of highly relevant work [1], which presents near-optimal algorithms for various problems (including online packing, online learning, and feasibility) in both i.i.d. and random order models. Furthermore, reference [2] is also pertinent to this study.
References:
[1] Agrawal, S., & Devanur, N. R. (2014). Fast algorithms for online stochastic convex programming. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1405-1424.
[2] Devanur, Nikhil R., & Hayes, T. P. (2009). The AdWords problem: Online keyword matching with budgeted bidders under random permutations. In Proceedings of the 10th ACM Conference on Electronic Commerce, pp. 71-78.
[3] Bubeck, S., & Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, pp. 42-1. JMLR Workshop and Conference Proceedings.
Methods And Evaluation Criteria: This seems to be fine.
Theoretical Claims: The proofs are clear.
Experimental Designs Or Analyses: No experimental result has been reported.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: Please see the comments above.
Essential References Not Discussed: [1] Agrawal, S., & Devanur, N. R. (2014). Fast algorithms for online stochastic convex programming. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1405-1424.
[2] Devanur, Nikhil R., & Hayes, T. P. (2009). The AdWords problem: Online keyword matching with budgeted bidders under random permutations. In Proceedings of the 10th ACM Conference on Electronic Commerce, pp. 71-78.
[3] Bubeck, S., & Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, pp. 42-1. JMLR Workshop and Conference Proceedings.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Typographical Error:
• Line 155, Second Column: "this can be tough ..."
Questions For Authors: 1. What happens when adversarial algorithms are applied in the random order setting? Do they yield improved guarantees in this model?
2. Is the proposed transformation universal in the sense that it provides a best-of-both-worlds result for both random order and i.i.d. inputs? Given the extensive literature on best-of-both-worlds models achieving optimal performance in stochastic and i.i.d. settings [3], the authors should discuss how their work relates to this body of research.
3. How do the results in this paper compare to those in [1] and [2]?
4. The counterexample presented in Section 3 appears somewhat artificial. What happens if one applies a standard no-regret algorithm, such as UCB, to the random order input?
5. Theorem 4.5 is meaningful only when $B=O(T)$. If $\rho$ is small, the bound loses significance. This contrasts sharply with the results of Immorlica et al. (2022), which achieve an optimal competitive ratio even for small budgets.
References:
[1] Agrawal, S., & Devanur, N. R. (2014). Fast algorithms for online stochastic convex programming. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1405-1424.
[2] Devanur, Nikhil R., & Hayes, T. P. (2009). The AdWords problem: Online keyword matching with budgeted bidders under random permutations. In Proceedings of the 10th ACM Conference on Electronic Commerce, pp. 71-78.
[3] Bubeck, S., & Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, pp. 42-1. JMLR Workshop and Conference Proceedings.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback about the paper.
**Adversarial algorithms in RO** As we mention in Section 1.1, and detail in Appendix B, there is a natural hierarchy between the i.i.d., random order, and adversarial input models. In particular, any learning algorithm for the adversarial setting retains its regret bounds against RO inputs. However, an algorithm with optimal regret bounds in the adversarial setting may not be optimal in the (easier) RO model. Consider, for example, the bandits with switching cost problem, where the adversarial minimax regret is $\Theta(T^{2/3})$, as opposed to the stochastic (and RO) rate of $\Theta(\sqrt{T})$. If we were to use the optimal (in the adversarial setting) algorithm by Arora et al. “Online bandit learning against an adaptive adversary: from regret to policy regret” (ICML 12) on an RO instance, we would obtain a (suboptimal) $\Theta(T^{2/3})$ regret bound (note, in fact, that their approach consists in dividing the time horizon into $T^{2/3}$ time batches, so the algorithm freely switches actions between consecutive time batches, for a switching budget of $\Omega(T^{2/3})$). A similar argument can be carried out for the prediction with delays model, where adversarial algorithms still retain their multiplicative dependence on the delay parameter, regardless of the input structure.
**Comparison with Best-of-both-worlds literature [3]** The reviewer asks “Since the random order model lies between these two extremes [adversarial and i.i.d.] in terms of generality, do such algorithms [BoBW algorithms] not already address the problem in this setting?”. The answer to this question is, in general, negative. In the typical BoBW literature (as e.g., [3]), the goal is to design learning algorithms with good instance-independent bounds in the adversarial setting and logarithmic instance-dependent bounds on stochastic instances. An algorithm with these properties is only guaranteed to retain its adversarial instance-independent bounds on the random order instance, not its instance-dependent stochastic guarantees; this implies its suboptimality as soon as there is a gap between adversarial and random order. On the contrary, our paper provides a general template to construct tight algorithms for the RO model that match the minimax regret for the i.i.d. scenario.
**Comparison with other related works** We thank the reviewer for the suggested literature. We will incorporate a discussion of both references in the final version of the paper. Although close in spirit to our research agenda of investigating the relationship between random order and i.i.d. inputs in online algorithms, our model and results are orthogonal to the ones studied in [1,2]. Consider, for instance, our online learning with constraints model. At each time step, our learning algorithm selects an action before observing the losses and constraint violations. In contrast, the online algorithm for Online Stochastic Convex Programming [1] first observes the losses and violations, and only then makes a decision. In other words, our setting falls within the domain of online learning, whereas theirs falls within the domain of competitive analysis.
**Counterexample** Regarding the point raised on the counterexample, the primary goal behind our construction is to provide a general framework that extends the applicability of algorithms designed for the i.i.d. setting beyond purely stochastic environments. Even though UCB may work “as-is” in this specific random order instance, the counterexample shows that not all algorithms that achieve sublinear regret bounds in the i.i.d. model automatically have sublinear bounds on regret in the random order model. This highlights the need for our general construction. We hope that the power of this general template is evident from the various instantiations we present across different online problems.
**Results of Immorlica et al.** We are not sure we understand the concern raised by the reviewer regarding Immorlica et al. Indeed, the competitive ratio appears only in the adversarial setting of Immorlica et al., while here we show no-regret (i.e., competitive ratio = 1). Our results align with those established by previous work for the stochastic setting. Specifically, we refer the reviewer to the stochastic regret guarantees in the work of Immorlica et al. There, they show a regret bound of $T/B\cdot \sqrt{T}$, which is $\sqrt{T}/\rho$ in our notation. Note that if we employed the algorithm of Immorlica et al. as a subroutine, we would still obtain the same regret rates as in Corollary 4.8. Moreover, in all cases in which $T\sqrt{T}/B$ is meaningful (i.e., when $B \ge \Omega(\sqrt{T})$), we obtain the exact same rate of $T/B\cdot\sqrt{T}$ (since the term $1/\rho^2$ becomes negligible, as $1/\rho^2=(T/B)^2 \le T\sqrt{T}/B$).
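The algebra in this point can be sanity-checked numerically (illustrative values of our own choosing, not from the paper): with $\rho = B/T$, the rate $T\sqrt{T}/B$ coincides with $\sqrt{T}/\rho$, and the extra $1/\rho^2$ term is dominated once $B \ge \sqrt{T}$.

```python
import math

T, B = 10_000, 1_000          # illustrative values with B >= sqrt(T)
rho = B / T                   # budget fraction rho = B/T
lhs = T * math.sqrt(T) / B    # the rate T*sqrt(T)/B
rhs = math.sqrt(T) / rho      # the same rate written as sqrt(T)/rho
assert math.isclose(lhs, rhs)
# the 1/rho^2 term is dominated whenever B >= sqrt(T):
assert (1 / rho) ** 2 <= lhs
```

The inequality check reflects the general fact $(T/B)^2 \le T\sqrt{T}/B \iff B \ge \sqrt{T}$.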
If any concerns remain on this matter, we are happy to clarify further. Otherwise, we encourage the reviewer to reconsider this point in light of our answer. | Summary: The paper studies a general online learning setting where in every round $t \in [T]$ there is an unknown loss vector $\ell_t$, the learner needs to make a decision $x_t \in \mathcal{X}_t$, and incurs loss $\langle x_t, \ell_t\rangle$ (there might also be constraints that need to hold across the whole time horizon, like budget constraints.) Unlike the fully adversarial setting, the authors consider the case where the loss vectors are chosen by the adversary and then a random permutation of them is presented to the algorithm. The authors show that there exist (pathological) stochastic algorithms that do not work in this model. Then, the authors propose a template that can be used to obtain a "stochastic-to-online" transformation in their setting which works by breaking the interaction into logarithmic many intervals of doubling size, and using the interval $t-1$ to train the stochastic algorithm and apply it appropriately to interval $t$. They instantiate the template in various setting including online learning with delayed feedback, online learning with budget constraints, and Littlestone's online learning. The key technical observation is that the performance of the stochastic algorithm within consecutive intervals should be very close (e.g., Claim 4.3).
Claims And Evidence: Yes, they are supported by proofs.
Methods And Evaluation Criteria: The claims contain proofs, which make sense.
Theoretical Claims: The claims seem to be sound.
Experimental Designs Or Analyses: N/A.
Supplementary Material: I skimmed through it.
Relation To Broader Scientific Literature: The results are of interest to theorists mostly, and practitioners secondarily.
Essential References Not Discussed: References discussed.
Other Strengths And Weaknesses: Strengths:
- The authors consider a pretty general online learning setting and study a beyond-worst-case analysis of it by considering random permutations of the input sequence.
- The authors propose a pretty general template and they instantiate it in various settings showing strong results.
- The technical ideas are solid (at least they are above the bar of ICML).
Weaknesses:
- I don't see any strong weaknesses. Personally, I believe the claim that the paper "initiates the systematic study of the random- order input model in online learning." is a bit of stretch. I agree that the setting the authors study is more general than prior work, but special cases of it have been studied extensively (e.g. OCO with random permutation arrivals).
Other Comments Or Suggestions: - “The empirical distribution from this phase is then applied to the actual instance within the current block.” -> not sure what this means; do you mean the model trained on i.i.d. samples from the empirical distribution of this phase?
- Why is “follow-the-leader” no-regret in the adversarial setting? Maybe follow the perturbed leader? (follow the leader suffers from switching the chosen action all the time and being one step behind from the optimal algorithm).
- “Surprisingly, there exists a random-order instance that fools BIRTHDAY-TEST.” -> I think the birthday-test algorithm is interesting, but I don’t think it’s surprising that it is fooled by a random-order input; given the construction (which is nice), I think it is clear that this algorithm will get confused.
- I wouldn’t call it “procedure” because this reads more as a black-box transformation but rather a “template”, but this is just personal preference.
Questions For Authors: No further questions, please look at the comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and positive feedback about the paper.
Regarding the specific comments and suggestions proposed:
1. Yes, that is correct. We will rephrase the sentence to make it clearer.
2. As correctly stated by the reviewer, Follow-the-leader is no-regret only in the stochastic setting and may perform poorly in the adversarial setting. We use Follow-the-leader as a subroutine in BIRTHDAY-TEST, where we only need its no-regret property on stochastic inputs. Indeed, we only claim that BIRTHDAY-TEST is no-regret in the stochastic setting, as any no-regret algorithm for the adversarial setting automatically retains its no-regret property on random order instances.
3. And 4. Thanks for the suggestions; we will rephrase such comments and update the wording accordingly. | null | null | null | null | null | null |
Identification of Latent Confounders via Investigating the Tensor Ranks of the Nonlinear Observations | Accept (poster) | Summary: This paper studies the problem of learning discrete latent variable causal structures from mixed-type observational data using the graphical criteria of tensor rank conditions. To handle continuous observed variables, the author proposes a discretization method that ensures the discretized data satisfy the full-rank assumption, thereby allowing existing results for discrete data to be directly extended. The proposed method is further applied to the task of learning discrete latent variable causal structures from mixed-type observational data.
Claims And Evidence: The claims are generally well-supported by theoretical analysis, but certain aspects, such as novelty and baseline comparisons, require further clarification.
Methods And Evaluation Criteria: The proposed method is reasonable and well-motivated. The evaluation metrics selected are appropriate for the problem of latent confounder identification.
Theoretical Claims: The theoretical claims are clearly presented, and the proofs are OK.
Experimental Designs Or Analyses: The experimental design and analyses are generally sound. However, the simulated structure appears relatively simple, which may limit the evaluation of the method’s robustness in more complex settings.
Supplementary Material: The supplementary material provides detailed proofs for the theoretical results, along with illustrative examples and discussions. I have reviewed most of the content.
Relation To Broader Scientific Literature: The proposed method contributes to the problem of latent confounder identification. While the approach is interesting and relevant, its novelty and practical advantages over existing methods should be better articulated.
Essential References Not Discussed: The author gives a well-reviewed on the related literature.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and clearly organized.
2. It provides a comprehensive review of related literature.
3. The problem of learning latent structures in non-linear models is both important and challenging. The proposed approach appears to be sound and well-justified.
Weaknesses:
1. The paper focuses on a specific class of non-linear models, where latent variables are discrete and observed variables are continuous, which may limit general applicability.
2. The tensor rank condition and its graphical criteria have been explored in prior work, so the novelty needs further clarification.
3. The Mixture Oracle method, a relevant baseline, is not included in the experimental comparison, which limits the evaluation.
Other Comments Or Suggestions: 1. Could the author clarify the key differences between the tensor rank condition in non-linear causal models and the tensor rank condition in discrete latent structure models? How does this work advance previous results, particularly in relation to [1]?
2. For the case of continuous observed variables and discrete latent variables, previous work—such as the Mixture Oracle method—has also explored identifiability. What are the key differences between your approach and the Mixture Oracle method?
Reference:
[1]. Learning Discrete Latent Variable Structures with Tensor Rank Conditions. NeurIPS 2024.
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your careful and valuable comments. We will respond to these issues point by point.
>[W] The Mixture Oracle method, a relevant baseline, is not included in the experimental comparison, which limits the evaluation.
**WQ1**: We have added experimental results using the Mixture Oracle as a baseline. Since K-means is not suitable for discrete data, we adapt the Mixture Oracle implementation by replacing K-means with the K-modes algorithm [1]. We evaluate the accuracy in identifying the latent support and include the results in the updated experiments. **As shown in Table 1 of https://anonymous.4open.science/r/TensorRank-E052/Experimental%20results.pdf**
One can observe that the Mixture Oracle method does not effectively identify the support of latent variables, even when using a discrete clustering algorithm. This may be due to the fact that clustering yields only an approximate solution and lacks theoretical guarantees for recovering the true latent structure.
>[Q1] Could the author clarify the key differences between the tensor rank condition in non-linear causal models and the tensor rank condition in discrete latent structure models? How does this work advance previous results, particularly in relation to (Chen et al. 2024)?
**Q1A1**: We would like to clarify our contributions in two main aspects—**applicability conditions and identification boundaries**—particularly in relation to the work by Chen et al. (2024).
1. **Our work considers a more general setting for when the tensor rank condition holds.** In particular, we allow the observed variables to be continuous and show that the latent structure remains identifiable when each latent variable has two sufficiently informative observed variables. In contrast, Chen et al. (2024) propose the tensor rank condition when all variables are discrete and each observed variable is assumed to be a sufficient measurement for its latent parent (i.e., the observed support has higher cardinality than the latent’s).
2. **We also extend the tensor rank condition to conditional probability tables, which leads to a more general identifiability condition for latent structures**, as described in Appendix D. While Chen et al. (2024) explore rank constraints based on the joint probability table under the assumption of a three-pure-measurement-variable model, our approach considers constraints on conditional distributions. This allows us to test conditional relationships among observed variables in the presence of latent confounding (i.e., impure structures), and further, to identify the causal structure among latent variables even under impure setting (Appendix D). To the best of our knowledge, this provides a novel identifiability condition for discrete latent variable models.
To evaluate structure learning under an impure setting, we conduct the following simulation, **as shown in Table 2 of https://anonymous.4open.science/r/TensorRank-E052/Experimental%20results.pdf**. We compare our method (Appendix D.2) with the approach proposed by Chen et al. (2024). One can see that the method of Chen et al. (2024) cannot learn the causal structure among latent variables; this is because their method is only suitable for the pure measurement model.
Moreover, in Appendix D.3, we show that only two purely measured variables per latent variable are sufficient to identify the measurement model when the latent structure is fully connected. **This result also relaxes the structural assumptions required by previous work based on the three-pure-measurement-variable assumption.**
>[Q2] For the case of continuous observed variables and discrete latent variables, previous work—such as the Mixture Oracle method—has also explored identifiability. What are the key differences between your approach and the Mixture Oracle method?
**Q2A2**: We would like to clarify our contributions with two main points, especially in relation to the Mixture Oracle approach.
1. **We offer a robust and testable method for structure learning**. Unlike the Mixture Oracle method, which relies on identifying a mixture model (as discussed in our Appendix E) and provides only an approximate method for parameter estimation—an approach that we find can be unreliable and difficult to test (see Sec. 3.1 and App. B)—our method introduces a hypothesis test for the tensor rank condition. This test not only enhances robustness compared to the Mixture Oracle but also allows us to identify the latent support with theoretical guarantees (see **W3A3** in response to Reviewer e9yn).
2. **We introduce a novel and more general structural condition for the identifiability of discrete latent structures** (refer to Point 2 in **Q1A1**). In contrast, the Mixture Oracle assumes that there are no edges between observed variables, which is a quite restrictive assumption.
We sincerely appreciate your thoughtful inquiry and hope this clarification helps. Please feel free to reach out if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response—The rebuttal addressed my concerns.
I believe this work makes a meaningful contribution to the existing literature, particularly in terms of applicability conditions and identification boundaries for tensor rank. At this point, I’m curious whether the identifiability bounds of tensor rank are well studied. If so, can this structure condition be extended to hierarchical settings, potentially enabling applications such as understanding image representations (see [1])?
I have raised my score to “Weak Accept.” I may consider increasing it further if a compelling real-world application scenario is provided.
Reference:
[1] Learning Discrete Concepts in Latent Hierarchical Models. NeurIPS 2024.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive assessment and constructive suggestions. Below, we address the two issues—on theoretical bounds and practical applications—point by point.
> [**Question 1**] I’m curious whether the identifiability bounds of tensor rank are well studied. If so, can this structure condition be extended to hierarchical settings, potentially enabling applications such as understanding image representations (see [1])?
**A1**: Thank you for the thoughtful question. **To our knowledge, the identifiability bounds of tensor rank have been extensively studied in our work, particularly in the context of joint distributions, subtensors, and conditional probability tensors.** Achieving stronger identifiability typically requires imposing additional structural assumptions or leveraging higher-dimensional tensor rank constraints, which tend to be more computationally expensive and statistically less stable in practice.
For instance, to identify latent support, we rely only on two-dimensional tensor rank, which leads to the two-sufficient-measurement assumption. If we assume all latent variables share the same support size, higher-order tensors (e.g., three-way tensors) could also be used for identification. **This suggests that identifiability bounds may vary depending on the structural assumptions one makes.**
Our theoretical results can be extended to hierarchical structures, enabling applications like those in [1]. Unlike [1], which requires an invertibility assumption for discrete component identification, our method only relies on the sufficient measurement assumption (implied by completeness). Moreover, we relax the pure-child assumption used in [1], allowing identification under sparse measurement settings. This broadens the applicability of our framework to more realistic scenarios.
>[**Question 2**] I may consider increasing it further if a compelling real-world application scenario is provided.
**A2**: Similar to [1], **our theoretical results can be useful in tasks like CLIP, by providing greater generality and explainability in modeling relationships between text and images**. Beyond this, the proposed method and the tensor rank condition are also applicable to biological data [2], gene expression studies [3], and social science domains such as psychology [4] — areas where latent confounders are prevalent and causal discovery under such conditions remains a significant open challenge. As an example, we demonstrate the applicability of our method on the Industrialization and Political Democracy dataset in Appendix H.1.
----------------
Besides, we conducted another real-world experiment on the Big Five Personality dataset (https://openpsychometrics.org/) in psychology, following [4]. The result is presented at https://anonymous.4open.science/r/Big5-6063/. The dataset consists of nearly 20,000 data points. Here, we use the 10 corresponding indicators of Conscientiousness, Extraversion, and Agreeableness to identify the latent factors and underlying causal structure. We chose the Chi-squared test to test independence among variables.
**Acknowledgements.** Thank you for the constructive questions, which have inspired us to further explore the applicability of tensor rank and the broader identifiability results for latent variable models. We are glad that our responses addressed your concerns and we sincerely appreciate the improved evaluation. Please feel free to reach out if you have any further questions.
**Reference**:
[1] Learning Discrete Concepts in Latent Hierarchical Models. NeurIPS 2024.
[2] Causal Representation Learning from Multimodal Biological Observations. ICLR 2025.
[3] Automating the Selection of Proxy Variables of Unmeasured Confounders. ICML 2024.
[4] A Versatile Causal Discovery Framework to Allow Causally-Related Hidden Variables. ICLR 2024. | Summary: When observational data contain both continuous and discrete variables, learning the causal structure among latent variables becomes a critical problem. Existing methods are often sensitive to parameter estimation. In this paper, the authors propose a statistically testable approach, the tensor rank condition, to address this issue. By discretizing continuous variables appropriately, the authors introduce graphical criteria for non-linear causal models with discrete latent variables, leveraging the tensor rank condition. Building on this, the authors formulate the Mixed LSMs framework and further develop a two-stage algorithm: first identifying causal clusters and then inferring the causal structure among latent variables. This approach provides a novel solution to the identification of Mixed LSMs. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art methods.
Claims And Evidence: The claims are well supported by theoretical analysis and extensive experiments.
Methods And Evaluation Criteria: The proposed method is reasonable and sound. The evaluation metrics chosen in this paper are appropriate for the focused problem, i.e., the identification of latent confounders.
Theoretical Claims: The theoretical claims are well-discussed, and the proofs are presented clearly.
Experimental Designs Or Analyses: Experimental designs and analyses are sound.
Supplementary Material: The supplementary material mainly contains the proofs for theories proposed in the main text. These proofs are clear and easy to follow, and they appear to be correct, although I did not review every detail exhaustively.
Relation To Broader Scientific Literature: The proposed method is a novel solution for the latent confounder identification. The proposed solution is both interesting and novel and is more realistic in practical scenarios.
Essential References Not Discussed: To the best of my knowledge, all key references are well-discussed in the related work section.
Other Strengths And Weaknesses: Strengths:
- The authors present graphical criteria for non-linear causal models with discrete latent confounders, establishing a connection between tensor rank and d-separation relations in the graph. This offers a novel methodology for studying causal structures in non-linear models.
- In Appendix D, the authors provide identifiability results for discrete latent structures under a sparsity condition, relaxing the pure-child requirement from previous work. This is an interesting contribution.
- The authors discuss the discretization process in detail, providing clear examples that make the methodology easier to follow.
Weaknesses:
- Estimating tensor rank in practice can be challenging, yet the paper does not provide sufficient discussion on this issue.
- The data generation process in the experiments follows a mixture model rather than directly following Eq. (1). The authors should clarify this.
- The effectiveness of the discretization approach depends on accurately estimating the rank of the probability table. However, the paper does not discuss how the precision of rank estimation impacts causal structure learning.
Other Comments Or Suggestions: NA.
Questions For Authors: 1. Previous works often assume sufficient observational support for latent variable identification. Why is the two-sufficient measurement condition enough for testing the tensor rank condition?
2. Why does the identifiability of Mixed LSMs require that at least one set of observed variables is caused by a single latent parent? If this is a structural assumption, it should be explicitly stated in the main text.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your valuable comments and suggestions and thank you for your positive assessment of our work.
>[W1] Estimating tensor rank in practice can be challenging, yet the paper does not provide sufficient discussion on this issue.
**W1A1**: As discussed in the Sec. 5 (Lines 402–404), we estimate matrix rank using the hypothesis test proposed by Mazaheri et al. (2023), and tensor rank following the approach in Chen et al. (2024). We will add more discussion on this point to improve clarity.
>[W2] The data generation process in the experiments follows a mixture model rather than directly following Eq. (1). The author should clarify this.
**W2A2**: Thank you for the comment. To ensure that the generated data satisfies the completeness condition, we use a mixture model as a practical and effective way to simulate data. Although the data is generated from a mixture model, the resulting joint distribution still reflects nonlinear dependencies (due to the probability property of the mixture model), and thus remains consistent with the setting described in Eq. (1). We will clarify this point in the revised text.
>[W3] The effectiveness of the discretization approach depends on accurately estimating the rank of the probability table. However, the paper does not discuss how the precision of rank estimation impacts causal structure learning.
**W3A3**: In our theoretical results, once the latent support size $r$ is identified, the causal structure can be recovered through rank decomposition with the specified $r$. Otherwise, an incorrectly estimated $r$ may lead to incorrect identification of the latent structure, especially in the number of latent variables. To show that the latent support can be estimated consistently, we provide additional simulation results on estimating the rank of the probability table.
We consider the following settings: $G_1$: $L_1 \to \{X_1, X_2, X_3, X_4\}$; $G_2$: $L_1 \to L_2$, $L_1 \to \{X_1, X_2, X_3\}$, $L_2 \to \{X_4, X_5, X_6\}$. In both settings, the latent variables have support size 2, and the observed variables have support size 3. Each experiment was repeated 1,000 times with randomly generated data. Using the hypothesis test by Mazaheri et al. (2023), we find that the rank of the probability table can be accurately estimated in most cases. This suggests that the effectiveness of our structure learning method, which relies on tensor rank, is supported by the reliability of rank estimation.
|#samples|3000|5000|10000|30000|
|---|---|---|---|---|
|G1 accuracy|0.82 (±0.05)|0.85 (±0.05)|0.84 (±0.05)|0.85 (±0.06)|
|G2 accuracy|0.76 (±0.06)|0.79 (±0.06)|0.80 (±0.06)|0.82 (±0.05)|
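As a toy illustration in the same spirit (this is a plain singular-value cutoff on a simulated contingency table, not the Mazaheri et al. (2023) hypothesis test, and the conditional tables below are hypothetical), sampling from a two-state latent model shows that the empirical joint table of two observed variables recovers the true rank:

```python
import numpy as np

rng = np.random.default_rng(0)
pL = np.array([0.5, 0.5])                            # latent support size 2
A = np.array([[0.8, 0.1], [0.1, 0.2], [0.1, 0.7]])   # P(X1 | L), 3 states
B = np.array([[0.7, 0.2], [0.2, 0.2], [0.1, 0.6]])   # P(X2 | L), 3 states

# Sample (L, X1, X2) with X1 and X2 conditionally independent given L.
n = 30_000
L = rng.choice(2, size=n, p=pL)
X1, X2 = np.empty(n, int), np.empty(n, int)
for l in (0, 1):
    m = L == l
    X1[m] = rng.choice(3, size=m.sum(), p=A[:, l])
    X2[m] = rng.choice(3, size=m.sum(), p=B[:, l])

# Empirical 3x3 joint probability table of (X1, X2).
table = np.zeros((3, 3))
np.add.at(table, (X1, X2), 1)
table /= n

# Count singular values above a noise cutoff: recovers the true rank 2.
rank = int((np.linalg.svd(table, compute_uv=False) > 0.02).sum())
```

The cutoff value here stands in for a principled test; the point is only that with moderate sample sizes the rank-deficiency of the joint table is clearly visible against sampling noise.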
>[Q1] Previous works often assume sufficient observational support for latent variable identification. Why is the two-sufficient measurement condition enough for testing the tensor rank condition?
**Q1A1**: Thank you for the question. For each latent variable, two sufficient measurement variables $X_i, X_j$ are enough to identify the latent support by detecting the rank deficiency in the joint distribution $\mathbb{P}(X_i, X_j)$. Moreover, there is a key property of tensor rank: the rank of a tensor is at least as large as that of any of its subtensors,
e.g., Rank$(\mathbb{P}(X_i, X_j, X_k))$ $\geq$ Rank$(\mathbb{P}(X_i, X_j, X_k=c))$.
For example, consider a case where $L$ d-separates $X_i, X_j, X_k$, where $X_i, X_j$ are sufficient measurement variables and $L$ has support size $r$. If the rank condition holds for a subtensor, e.g., Rank$(\mathbb{P}(X_i, X_j, X_k=c)) = r$, then the full tensor $\mathbb{P}(X_i, X_j, X_k)$ must also have rank $r$, regardless of whether $X_k$ satisfies the sufficient measurement condition (i.e., even if its support is smaller than $r$).
Therefore, two sufficient measurements per latent variable are enough to test the tensor rank condition.
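Both points can be checked numerically with hypothetical conditional tables (the numbers below are illustrative, not from the paper): a latent variable with support size 2 leaves the $3\times 3$ joint table of two sufficient measurements rank-deficient, and a slice of the three-way tensor at a fixed value of $X_k$ exhibits the same matrix rank even though $X_k$'s support is not larger than the latent's:

```python
import numpy as np

pL = np.array([0.4, 0.6])                            # P(L), support size r = 2
A = np.array([[0.7, 0.1], [0.2, 0.3], [0.1, 0.6]])   # P(X_i | L), 3 states
B = np.array([[0.5, 0.2], [0.3, 0.3], [0.2, 0.5]])   # P(X_j | L), 3 states
C = np.array([[0.6, 0.3], [0.4, 0.7]])               # P(X_k | L), only 2 states

# Joint table of the two sufficient measurements: rank-deficient (rank 2 < 3).
joint_ij = A @ np.diag(pL) @ B.T
assert np.linalg.matrix_rank(joint_ij) == 2

# Slice of the 3-way tensor at X_k = 0: still matrix rank 2, so the full
# tensor has rank >= 2 even though X_k is not a sufficient measurement.
slice_c0 = A @ np.diag(pL * C[0]) @ B.T
assert np.linalg.matrix_rank(slice_c0) == 2
```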
>[Q2] Why does the identifiability of Mixed LSMs require that at least one set of observed variables is caused by a single latent parent? If this is a structural assumption, it should be explicitly stated in the main text.
**Q2A2**: Thank you for pointing this out. This requirement is included in the Three-Pure-Child-Variable Assumption in Definition 4.1, and we further discuss it in Remark D.5. Broadly speaking, this assumption is used to identify the support of latent variables when considering the n-factor structure. We will make this assumption more explicit in the main text. | Summary: This work proposes a causal discovery algorithm for some class of causal graph involving discrete latent variables and both discrete and continuous observed variables. The algorithm is essentially an extension of Chen et al. (2024) which uses rank tests on the probability tensors of observed variables to infer the graph connecting the latent variables. The novelty of the approach resides in its ability to deal with continuous observed variables by proposing a discretization scheme motivated by a theoretical analysis (Section 4.1).
Claims And Evidence: I'm inclined to say that this work makes reasonable claims that are backed by sufficient evidence (both theoretical and empirical). That being said, clarity is an important issue which prevents me from recommending acceptance of this paper. I found many cases where theoretical claims are so unclear that it is very difficult to assess whether they are sound or not. More on this later.
Methods And Evaluation Criteria: The paper is mainly theoretical, but does present some empirical validation on synthetic data. This sort of empirical analysis is fairly standard for a theoretical work and IMO sufficient. That being said, I highly recommend that the authors find either a more realistic dataset to test their approach or at least a motivating example. Right now the approach feels a bit unmotivated.
Theoretical Claims: I did read all the theoretical claims made in the main paper, but did not read the proofs in the appendix. The results appears believable, but clarity is a very significant issue IMO. (more on this later)
Experimental Designs Or Analyses: See above.
Supplementary Material: I checked only Appendix F, which I found cryptic and unclear.
Relation To Broader Scientific Literature: I am not following this literature closely, but it appeared to me that the contribution is properly contextualized within existing works. My understanding is that this work is an extension of Chen et al. (2024), which is made very clear and transparent in the manuscript, to deal with continuous observed variables. That being said, the delta between both works appears to be fairly small (it adds discretization strategy backed by theoretical analysis). Maybe the authors could explain a bit more how their work differs from Chen's?
Essential References Not Discussed: I didn't notice anything obvious.
Other Strengths And Weaknesses: Strengths:
- I thought the topic was interesting, the title and the general theme got me curious and stimulated. I believe this is overall an interesting direction.
- I appreciate the effort made by the authors to include a running example, which helped ground some of the technical concepts.
Weakness:
- The work felt a bit unmotivated. IIRC, next to no effort is made to explain why this direction is exciting and could lead to interesting applications in the future.
- Novelty is between low and moderate. If I understood correctly, the only algorithmic novelty comes from the discretization phase which then allows the application of Chen et al. (2024), which works for discrete variables. Section 4.1 also contains a theoretical analysis of discretization. This feels a bit marginal, but I suppose this is subjective.
- The most important issue with this work is its lack of clarity. Below, I give a list of confusions I got as I read the manuscript, which I believe are due to poor writing.
Clarity:
- “In the causal graph, we do not allow directions from observed to latent to preserve the latent confounder structure.” Do you allow edges from observed to observed variables? It seems all the figures have graphs without such edges, but you do not mention it anywhere AFAIK. Maybe these assumptions should belong to an Assumption environment given their importance? Oh I see this assumption is done much later in Definition 4.1. Overall it seems like multiple results use different assumptions. It would be useful to refer to these definitions/assumptions using \ref directly in the theorems, otherwise it’s hard to follow which assumptions are made and where.
- Eq (1), f should be indexed by i, no?
- Assumption 2.1 (a): Do you mean for all $r \in \Omega$ here? As in, each marginal P(L_i) puts mass everywhere on $\Omega$? If so, this is not clear here.
- Assumption 2.1 (b): Might be useful to say explicitly what this conditional distribution contingency table is. What is its shape? Which dimension corresponds to which variable?
- Theorem 3.2:
- It would be nice to have a definition of what is meant by the rank of a tensor. I would argue that this is not a broadly known notion (unlike the rank of a matrix for example)
- X_p = {X_1, …, X_n} but then it’s written X_p \subseteq X… But earlier it was said that X was n-dimensional. So X_p is just the set of observed variables here? Or is it a subset?
- Can the conditional set S intersect with X_p? X? Or should it be constrained to the latent variables?
- General point: Looks like there’s a clash in notation. Lower case r is used both for the rank of tensor and for the cardinality of \Omega.
- Condition 3.3:
- I’m a bit confused here. The condition concerns only the variables X_i that satisfy the condition “for all L_j, X_i \indep X\X_i | L_j”, correct? I couldn’t find a graph in your figures that had such a node X_i. Could you give an example of such a variable in a given graph?
- Isn’t it the case that any function $g: \Omega \rightarrow \mathbb{R}$ is bounded since $\Omega$ is finite? Actually, are we assuming it’s finite? Also, am I right to assume that the codomain of $g$ is the real line?
- Are we saying that completeness holds for all conditionals P(L_j | X_i)? I.e. for all pairs (X_i, L_j)? That’s unclear because there’s no quantifier for i and j here.
- It seems completeness is a property of the joint P(L_j, X_i), not just the conditional P(L_j | X_i) since statements like E[g(L_j) | X_i] = 0 almost surely depends on the distribution P(X_i). Same for g(L_j) = 0 a.e., which cannot be verified from P(L_j | X_i) alone (you need the marginal over L_j).
- Appendix F on condition 3.3 didn’t help and is poorly written, I really didn’t get the point there.
- Theorem 3.4
- Same issues as in Theorem 3.2
- It seems we are manipulating a Tensor with n-dimensions where some of its dimensions have a continuum of indices. How do you define rank here? Also, this is a very unusual object so it should be described and explained more.
- Definition 4.1
- what’s a pure variable? It’s not defined here.
- what’s a “sufficient measured variable”?
- Line 244: The “tensor rank condition” is constantly mentioned, but is not properly defined anywhere. I genuinely don’t know what is meant here. I think what is actually meant here is whether we can find a discretization that will yield a tensor that satisfies the assumptions of Theorem 3.2, right?
- Proof of Proposition 4.2:
- Can you point to where we are assuming that each observed variable has a single latent parent? (you use this assumption on line ~266).
- Does this proposition mean that any discretization will work (as long as it’s not clustering all values in a single state)?
- Theorem 4.5: What’s k here? The iteration of some algorithm? Which algorithm?
- Definition 4.9: Typo here? n should be p no?
- Proposition 4.15: Isn’t it weird that X_s does not appear in (i), given the quantifier “for all X_s”?
Other Comments Or Suggestions: Line 429: Typo
Questions For Authors: My questions appears above, and concern mainly the clarity of the work.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your careful review. We address each point below and have corrected the typos. Please feel free to reach out with any further questions (due to limited space).
>W1W2: ... realistic dataset to test their approach.
A: Please see Appendix H.1.
>W3: Novelty is between low and moderate.
A: Please see the **Q1A1** response of #Review gVAe.
>Q1: Do you allow edges from observed to observed variables?
A: We allow edges between observed variables. This highlights that the tensor rank condition applies beyond the pure measurement setting (e.g., impure case in Appendix D).
>Q3: Assp. 2.1 (a): Do you mean for all $r\in \Omega$ here?
A: Yes, we mean that for any $r \in \Omega$ here — each marginal places non-zero mass on every element of $\Omega$.
>Q4: Assp. 2.1 (b): What is its shape? Which dimension corresponds to which variable?
A: The conditional distribution can be represented as a contingency table— a $|V_i| \times |Pa(V_i)|$ matrix.
>Q5.1: Theo. 3.2: ...what is meant by the rank of a tensor.
A: The rank of a tensor $\mathcal{X}$, denoted $rank(\mathcal{X})$, is the smallest number of rank-one tensors that generate $\mathcal{X}$ as their sum, where a $N$-way tensor is rank-one if it can be written as the outer product of $N$ vectors.
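As a minimal numpy sketch of this definition (the vectors below are arbitrary illustrative choices): summing two generic rank-one 3-way tensors yields a tensor of rank two, which can be certified via the matrix rank of a mode unfolding (a lower bound on the tensor rank):

```python
import numpy as np

# Two rank-one 3-way tensors, each the outer product of three vectors.
a1, b1, c1 = np.array([1., 2.]), np.array([1., 0.]), np.array([1., 1.])
a2, b2, c2 = np.array([0., 1.]), np.array([1., 1.]), np.array([2., 1.])
T = np.einsum('i,j,k->ijk', a1, b1, c1) + np.einsum('i,j,k->ijk', a2, b2, c2)

# The matrix rank of any unfolding lower-bounds the tensor rank; here the
# mode-1 unfolding has rank 2, and T is a sum of two rank-one terms, so
# rank(T) = 2 exactly.
unfold = T.reshape(T.shape[0], -1)
assert np.linalg.matrix_rank(unfold) == 2
```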
>Q5.2: Theo. 3.2: So $X_p$ is just the set of observed variables here? Or is it a subset?
A: We modify it to $\mathbf{X}_p =\{X_1,\cdots,X_{p}\} \subseteq \mathbf{X}, p \leq n$.
>Q5.3: Theo. 3.2: Can the conditional set $S$ intersect with $X_p$? Or should it be constrained to the latent variables?
A: Yes, we allow $S \cap X_p \neq \emptyset$ and allow observed variables included in $S$.
>Q6: Lower case r is used both for the rank of tensor and for the cardinality of $\Omega$.
A: We will use distinct symbols for tensor rank and the cardinality of $\Omega$ in revision.
>Q7.1: Cond. 3.3: ...only X_i that satisfy “for all L_j, $ X_i \bot X\setminus X_i | L_j$”, correct?... Could you give an example? Q7.2: ...function $g:\Omega \to R$ is bounded since $\Omega$ is finite? Q7.3: ...no quantifier for $i$ and $j$ here.
A: We will clarify this in the revision: for any $X_i \in \mathbf{X}$ that has only one (latent) parent $L_j$, we assume the conditional distribution $\mathbb{P}(L_j|X_i)$ is complete. That is, for all measurable real functions $g$ such that $\mathbb{E}(|g(l)|) < +\infty$, $\mathbb{E}(g(l)|x) = 0$ almost surely iff $g(l) = 0$ almost surely. As an example, consider $X_7, L_3$ in Fig. 2. Besides, we assume that the domain $\Omega$ is finite. In this case, any function $g:\Omega \to R$ is indeed bounded.
>Q7.4: Cond. 3.3: It seems completeness is a property of the joint $P(L_j, X_i)$...Q7.5: App. F on cond. 3.3 didn’t help.
A: Yes, completeness is a property of the joint distribution. By Bayes' rule, completeness of the conditional distribution $P(L_j|X_i)=P(L_j, X_i)/ P(X_i)$ involves the joint distribution. In this paper, we adopt the conditional form for consistency with prior work (e.g., Cui et al., 2023).
In Appendix F, we illustrate that the completeness condition is not overly restrictive and can arise naturally in practice. We’ve revised the appendix to clarify this intention.
>Q8: Theo. 3.4: How do you define rank here?
A: As noted in Remark G.3, we treat the tensor as a theoretical representation of the joint distribution, with rank defined by its minimal rank-one decomposition. We will clarify this in the revision.
>Q9: Def. 4.1: What’s a pure variable? It’s not defined here. What’s a “sufficient measured variable”?
A: Pure variables denote the variables that have only one latent parent, and no observed parents. For an observed variable $X$ with support $\mathcal{X}$ and latent parent $L$ with support $\Omega$, we define a sufficient measurement as $|\mathcal{X}| > |\Omega|$.
>Q10: Line 244: The “tensor rank condition” ... is not properly defined anywhere...
A: The tensor rank condition refers to the rank deficiency of the probability tensor. Besides, your statement is right: we study when and how discretization can be used to make this property hold for continuous data.
>Q11.1: where we are assuming that each observed variable has a single latent parent?
A: This assumption follows from the model definition, where we adopt the standard pure measurement setting (Silva et al., 2006), in which each observed variable has a single latent parent. We will clarify this more explicitly in the revision.
>Q11.2: Does this proposition mean that any discretization will work?
A: No, not all discretizations will work. We clarify this point in Remark 4.3 and illustrate it with an example in Appendix B.
>Q12: Theo. 4.5: What’s k here? The iteration of some algorithm?
A: k refers to the number of discretization steps, as discussed in Remark 4.6.
>Q14: Prop. 4.15: Isn’t it weird that $X_s$ does not appear in (i)...?
A: $X_s$ does not need to appear in (i) since it may not share a common latent parent with the other variables. | null | null | null | null | null | null | null | null |
Learning the Electronic Hamiltonian of Large Atomic Structures | Accept (poster) | Summary: This work focuses on scaling Hamiltonian prediction to large periodic structures. While a common ML problem, Hamiltonian prediction has been challenging in large structures due to the quadratic scaling of the Hamiltonian. The authors propose a partitioning scheme to enable distributed computing and batching of large unit cells. Further, the authors present a new dataset consisting of 3 periodic large compounds with a total of 7 structures. Due to the size of the structures, training on a single sample suffices.
## After the rebuttal
I appreciate the authors’ response but remain with many points of my initial criticism. Primarily, comparisons to previous works are possible by evaluating on smaller datasets. While new datasets are always welcome, this does not rule out evaluation on previously established ones. This undermines their second contribution as listed by the authors. Overall, I do not strongly object to an acceptance but also do not recommend it.
Claims And Evidence: The following claims are made:
1. A single message passing GNN achieves accurate Hamiltonian prediction on large structures.
2. The authors demonstrate that Hamiltonian prediction scales to thousands of atoms.
3. The proposed partitioning scheme enables efficient training on large compounds.
In general, these claims also seem supported but the limited evaluation makes the specific judgements hard.
Methods And Evaluation Criteria: The authors propose a new GNN for predicting Hamiltonians that is similar to previous works but uses only a single message-passing step. They evaluate it with their partitioning scheme.
Theoretical Claims: The paper does not introduce theoretical claims.
Experimental Designs Or Analyses: The authors evaluate only their newly proposed method on their newly proposed dataset. This combination makes it hard for the reader to identify meaningful, transferable improvements. Evaluating their proposed GNN on standard datasets, and previous GNNs on their new dataset, would strengthen the work significantly and improve its contextualization. Additional downstream applications, e.g., running DFT or computing observables, would strengthen the paper's position.
Supplementary Material: I skimmed all sections of the supplementary material.
Relation To Broader Scientific Literature: The authors should consider contextualizing their work with the batching works done in graph neural networks where similar issues for large graphs arise.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
* The paper is well written and easy to understand if one has the necessary background.
* The problem is interesting and the results are encouraging for large-scale Hamiltonian prediction.
Weaknesses:
* The work presents limited theoretical and technical contributions and relies heavily on its empirical results.
* The empirical evaluation is quite limited and does not allow the reader to identify the downstream performance or comparisons to previous works.
Other Comments Or Suggestions: * Please use consistent units, using eV and Eh in different places makes comparisons more difficult.
* l. 338 should be Table 3.
Questions For Authors: I am willing to increase my score if the authors manage to clearly isolate their contributions and show the evaluate in each contribution via ablation studies, comparisons with previous works, and evaluation on established datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Isolation of Contributions:** Our two main contributions not found in other works are:
1. An augmented partitioning approach that allows arbitrarily large graphs to be broken down into independent partitions that (1) maintain the connectivity of the full structure through virtual nodes/edges, (2) fit into GPU memory during training, and (3) ensure high scalability without compromising achievable test accuracy.
2. A GNN model that is custom-made for scalable Hamiltonian predictions of large, complex atomic structures. Unlike previous models [3-8], it combines strict locality with efficient SO(2) convolutions plus multi-head equivariant attention to better distinguish between complex atomic environments.
**Ablation studies:** An ablation study of our proposed augmented partitioning approach is shown in **Table 2 of the results section**, where the omission of virtual components led to a 50% increase in node error and an 88% increase in edge error, as the graph tries to aggregate information through an incomplete graph structure. The strict locality is also motivated and justified by previous work like Allegro [5,8], plus we have also conducted studies on their effects on Hamiltonian matrices in **Appendix F**.
**Discussion of other batching methods for large graphs:** Previous popular works on large graph partitioning [9] employ neighborhood sampling approaches to reduce communication volume between partitions. This cannot be applied to Hamiltonian predictions, as omitting any graph connections leads to the wrong atomic neighborhood. Our method is a novel way to (1) batch the data, and (2) removes the need for communication between different partitions while maintaining graph connectivity.
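To make the idea concrete, here is a minimal, hypothetical Python sketch of augmenting a partition with virtual (halo) nodes — the function name and toy graph are illustrative, not the paper's implementation. Since the model uses a single message-passing step (strict locality), copying the 1-hop out-of-partition neighbors suffices to keep every owned node's neighborhood complete without inter-partition communication:

```python
def augment_partition(adj, owned):
    """Extend a partition with virtual copies of out-of-partition neighbors.

    adj: dict mapping node -> set of neighbor nodes (full structure graph)
    owned: nodes assigned to this partition
    Returns (owned_set, virtual_set); virtual nodes carry input features but
    receive no loss/gradient — they only complete 1-hop neighborhoods.
    """
    owned = set(owned)
    virtual = set()
    for v in owned:
        virtual |= adj[v] - owned
    return owned, virtual


# Toy chain graph 0-1-2-3-4-5 split into two independent partitions.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
p0, v0 = augment_partition(adj, [0, 1, 2])   # virtual node: {3}
p1, v1 = augment_partition(adj, [3, 4, 5])   # virtual node: {2}
```

With this augmentation each partition is a self-contained training sample, which is the property that removes the communication step required by neighborhood-sampling approaches.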
**Evaluation on established datasets:** Established Hamiltonian datasets (e.g., QH9 [6]) consist of small molecules, which cannot be used to evaluate a model's performance on the large-scale problem we are trying to tackle. The use of custom datasets is also encouraged in the applications track, where ML is applied to a wide variety of problems that have yet to be tackled, and a valid comparison does not yet exist.
**For evaluating previous methods on our custom dataset, the closest accuracy comparison (within memory limits) is shown in Table 3 in the paper for HfO2 (and also in Table 11 in Appendix I.3 for GST and PtGe)**. Full graph training is the default baseline approach used by previous SOTA. We showed that for all three datasets, the application of the augmented partitioning approach maintains accuracy, while significantly increasing scalability (shown in Appendix H.2). The values we obtained (0.99-5.16 meV) are also within range of what SOTA models obtained (1.5-3.23 meV) for simpler structures [3-4].
**Downstream applications:** We applied our approach to the simulation of valence change memory **in the response to Reviewer Kuqs**. The obtained accuracy allows us to observe trends in resistance contrast changes as a function of applied voltage and resulting ion movements. By circumventing expensive DFT calculations entirely, we enable a range of previously unfeasible large-scale device simulations. **The true value of our work, therefore, lies in its ability to address a crucial bottleneck that previous methods (including DFT) could not.** It also sets a precedent for the prediction of complex structures at the scale of ~10^3 atoms per unit cell and can be used as an initial benchmark for future models that also aim to do so for various applications.
**References**
[1] Jónsson, H., Mills, G. & Jacobsen, K. W. in Classical and Quantum Dynamics in Condensed Phase Simulations (eds Berne, B. J. et al.) 385–404 (World Scientific, 1998)
[2] Kaniselvan, M., Luisier, M., and Mladenovic, M. An atomistic model of field-induced resistive switching in valence change memory. ACS Nano, March 2023.
[3] Wang, Y., Li, Y., et al. Universal materials model of deep-learning density functional theory hamiltonian. Science Bulletin, 69(16):2514–2521, 2024b.
[4] Zhong, Y., Yu, H., Su, M., Gong, X., and Xiang, H. Transferable equivariant graph neural networks for the hamiltonians of molecules and solids. npj Computational Materials, 2023.
[5] Musaelian, A., Batzner, S., Johansson, A., Sun, L., Owen, C. J., Kornbluth, M., and Kozinsky, B. Learning local equivariant representations for large-scale atomistic dynamics. Nature Communications, 14(1):579, 2023
[6] Yu, H., Liu, M., Luo, Y., Strasser, A., Qian, X., Qian, X., and Ji, S. Qh9: A quantum hamiltonian prediction benchmark for qm9 molecules, 2023a.
[7] Li, Y., et al. Enhancing the scalability and applicability of Kohn-Sham Hamiltonian for scalable molecular systems. ICLR 2025.
[8] Zhouyin, Z., et al. Learning local equivariant representations for quantum operators. ICLR 2025.
[9] Besta, M. and Hoefler, T. Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. | Summary: The authors propose graph partitioning strategies to localize message passing and a new network that leverages SO(2) convolution for predicting the Hamiltonian of large atomic structures. The authors demonstrate its capabilities by predicting the electronic Hamiltonian of various systems with up to 3,000 nodes and ≤0.55% error in the eigenvalue spectra.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: The derived Hamiltonian should also be evaluated on the SCF-iteration speedup or on the MAE of the system energy derived from the Hamiltonian.
Supplementary Material: N/A
Relation To Broader Scientific Literature: It is somewhat relevant; it could be broadly applicable to distributed graph computing.
Essential References Not Discussed: Some other paper also leverages the SO(2) convolution such as [1].
[1] Enhancing the scalability and applicability of Kohn-Sham Hamiltonian for scalable molecular systems. ICLR 2025.
Other Strengths And Weaknesses: 1. Strength: The significance of this work is high, it could be a promising avenue for large-scale simulation or all-atom protein simulations.
2. Weakness: The evaluation on real-world applications is very limited (such as SCF speedups, system energy prediction, dipole moment prediction). The graph partition strategy is not that novel, and the authors do not address the non-diagonal blocks beyond cutoffs. Missing long-range interactions could potentially impact real-world applications. The graph partitioning could also potentially lead to non-smoothness in real-world simulations.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does the network handle the non-diagonal part of the Hamiltonian beyond cutoffs? Methods like QHNet leverage the pair construction layer, which performs a tensor product between every pair of atoms. I think that is not doable here.
2. What is the absolute error for the eigen-spectrum, is it within the chemical accuracy? (< 1 kcal/mol)
3. The partitioning would generate multiple graph slices; how do you combine them to generate the final H (a diagram or algorithm would suffice)? And for the embeddings of the virtual nodes, will they be updated by embeddings from other graph slices the next time you encounter the graph?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Non-diagonal part of Hamiltonian beyond cutoffs:**
**The use of a cut-off radius is standard practice in the Hamiltonian prediction literature for bulk structures [3-4], which exploits the nearsightedness of the Hamiltonian matrix.** **Appendix F** includes two studies on the nearsightedness of Hamiltonians for our dataset:
1. **In Fig. 7**, we illustrated the decay in values of Hamiltonian elements with increasing inter-atomic distance for all three datasets.
2. **In Fig. 8**, we quantify the effect of discarding orbital blocks beyond a predefined cutoff on the prediction accuracy. Removing blocks at inter-atomic distances greater than 4 Angstroms (the chosen rcut for HfO2 in the study) from the DFT reference data showed a negligible difference in the eigenvalue spectra of H.
**Long range interactions beyond cutoff:** In **Appendix F, Fig. 9** we studied the effects of various types of perturbations (shift, vacancy, substitution) beyond the cutoff radius and showed that they all have a negligible effect on the Hamiltonian matrix elements (less than 0.24%). This indicates that the Hamiltonian of the atoms in our datasets can be learnt from their local atomic environment. The use of strict locality is also well-established in literature, and previous architectures (e.g., Allegro for force fields [5] and [8]) have used this to improve scalability while still achieving state-of-the-art prediction accuracy.
**Absolute Error for Eigenspectrum:** The threshold of < 1 kcal/mol is normally used for the prediction of thermochemical quantities (e.g., interaction, ionization energy) and is not commonly applied to eigenspectra of bulk structures with several atoms. Still, we understand the need for a quantitative measure with units, so we computed the mean absolute error for all eigenvalues, obtaining 3.08 mHartrees (2.53 mHartrees for occupied energies).
**Real World Applications:** Unlike the molecular Hamiltonians predicted by models like WANet, whose applications include the extraction of molecular quantities (e.g., orbital energies), the Hamiltonians of large structures (> 10^3 atoms) targeted by our approach have their own set of downstream applications (e.g., semiconductor physics). To truly assess its practical relevance in this domain, we applied it to the simulation of valence change memory cells, **with details outlined in the response to Reviewer Kuqs**. Using predicted Hamiltonians for simulations, we achieved levels of accuracy that allow us to observe key trends in current vs. voltage characteristics of VCM cells and reveal the dependence of the current on the atomic geometry of these devices. In general, it enables large-scale simulation workflows involving repeated DFT updates over 100-1000 time steps that were previously unfeasible, and also allows large structures beyond DFT capabilities (10k+ atoms) to be simulated.
**Multiple graph slices and Hamiltonian construction:** In this paper, slices are only used to train the model. The final trained model was then used to predict the full structure/graph at once (e.g., all 3000 atoms), with only one final predicted Hamiltonian matrix obtained, so there is no need to assemble any slices. For very large structures, the Hamiltonian can also be predicted separately as different slices with multiple GPUs/CPUs. Since the nodes and edges in each slice define an independent sub-block, the final matrix can then be reconstructed by populating it piece by piece with the components of each partition. Both approaches lead to the same result.
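The slice-by-slice reconstruction described in the rebuttal can be sketched as follows. This is a minimal illustrative assumption of the data layout, not the paper's actual code: each partition is assumed to contribute a set of predicted orbital sub-blocks (node blocks on the diagonal, edge blocks off-diagonal) addressed by orbital index arrays, and the full matrix is populated piece by piece.

```python
import numpy as np

def assemble_hamiltonian(n_orbitals, partition_blocks):
    """Populate the full Hamiltonian from independently predicted partitions.

    `partition_blocks` is a list of (rows, cols, block) triples, where `rows`
    and `cols` are orbital index lists and `block` is the predicted sub-block.
    Since each partition owns a disjoint set of node/edge blocks, the blocks
    can simply be written into place with no communication between partitions.
    """
    H = np.zeros((n_orbitals, n_orbitals))
    for rows, cols, block in partition_blocks:
        H[np.ix_(rows, cols)] = block
    return H

# Toy example: a 4-orbital system split into two partitions plus a coupling block.
blocks = [
    ([0, 1], [0, 1], np.array([[1.0, 0.2], [0.2, 1.0]])),   # partition A (diagonal)
    ([2, 3], [2, 3], np.array([[2.0, 0.1], [0.1, 2.0]])),   # partition B (diagonal)
    ([0, 1], [2, 3], np.array([[0.05, 0.0], [0.0, 0.05]])), # A–B edge block
]
H = assemble_hamiltonian(4, blocks)
```

Because the sub-blocks are disjoint, predicting them in separate slices or in one full-graph pass yields the same assembled matrix, matching the rebuttal's claim that both approaches lead to the same result.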
**Role of virtual nodes/edges (whether they are updated by other graph slices):** We do not update the embeddings (outputs) of the virtual nodes and edges in any case. In the paper, we mentioned that their output embeddings have no meaning/do not exist (hence the term virtual), and are not involved in training and predictions. Their only purpose lies in their initialized inputs (atomic numbers and distances), which are used to inform the labelled (non-virtual) atoms within the partition of their correct one-hop atomic neighborhood. The use of only the inputs of virtual nodes and edges to maintain connectivity without communication is one of the key novel contributions of our approach.
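The role of virtual nodes described above can be made concrete with a small sketch. This is a hypothetical helper (names and graph representation are illustrative): virtual neighbors are added to each partition only to supply input features (atomic numbers, distances) so that every labelled atom sees its correct one-hop neighborhood; their outputs carry no labels and are never updated across slices.

```python
def build_augmented_partition(core_atoms, neighbors):
    """Augment a partition with input-only virtual one-hop neighbors.

    `neighbors` maps each atom id to its one-hop neighborhood in the full
    graph. Atoms outside the partition core are added as virtual nodes:
    they provide inputs for message passing into core atoms, but produce
    no predictions themselves.
    """
    core = set(core_atoms)
    virtual = set()
    edges = []
    for a in core:
        for b in neighbors[a]:
            edges.append((a, b))   # edge required for a's correct environment
            if b not in core:
                virtual.add(b)     # padding node: inputs only, no labels
    return core, virtual, edges

# Toy full graph: 0-1, 0-2, 2-3. Partition core = {0, 1}.
nbrs = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
core, virtual, edges = build_augmented_partition([0, 1], nbrs)
```

Note that atom 3 is not pulled in even though it neighbors virtual atom 2: with a strictly local (one-hop) model, only the immediate neighborhood of the core atoms is needed, which is what keeps partitions independent.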
**Note that the list of references is found in the response to Reviewer 4 (nuFS)**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. I am willing to increase my score to 3. | Summary: The paper proposes a new method for applying GNNs to learn the electronic Hamiltonian for atomic structures beyond the unit cell. The paper starts by introducing the use of GNNs for property prediction on fairly small size unit cell materials and molecules and motivates the need to capture more complex materials behaviors, such as defects and strains, at larger size scales. To enable GNNs to be applied for larger scale atomic systems, the paper proposes two contributions: 1. Learning the ground-state Hamiltonian using a SO(2)-equivariant GNN that learns local embeddings for the diagonal and off-diagonal blocks of the Hamiltonian. 2. A graphs partitioning method that enables partitioning larger input graphs, which represent larger atomic systems, to more effectively deploy the proposed GNN. The final paragraph of the introduction highlights that the authors deploy their method on amorphous systems, which include various defect types thereby providing a reasonable test case.
Section 2 describes relevant background for electronic structure modeling related to energy levels and wavefunctions that make up the Hamiltonian matrix. The Hamiltonian matrix is computed based on a basis of atomic orbitals that transform under spherical harmonics. The Hamiltonian matrix itself can be decomposed into sub-matrices that describe the interactions of orbitals on the same atom (diagonals) and between different atoms (off-diagonal). The main challenge with calculating Hamiltonian matrices is the computational cost, which has lead to approximations that rely on creating a repeatable unit cell of atoms. That unit cell, however, often fails to properly capture atomic behavior that require larger size, such as defects. Section 2 also describes related work on using machine learning for directly predicting the Hamiltonian matrix, focusing mostly on equivariant GNNs that operate in a spherical harmonic basis.
Section 3 introduces the primary method of the paper starting with the representation for the block of the Hamiltonian matrix and the proposed network architecture. The network architecture mainly relies on a combination of eSCN equivariant convolution to learn over the atomic graph and multi-headed attention to learn embeddings for the local atomic environments. Section 3.3 describes the graph partitioning method that enables the proposed method to model interactions beyond local atomic interactions while still maintaining a local receptive field for scalability. The proposed partitioning approach leverages virtual nodes and edges that augment the graph for a given partition but are not used for the message passing. As such, the receptive field includes only a 1-hop neighborhood ensuring scalability to larger systems while maintaining sufficient accuracy. Section 3.4 describes the dataset generation focusing on molecular dynamics trajectories of amorphous materials, namely: HfO2, GeSbTe, and PtGe. The systems span thousands of atoms with >10k orbitals and >100k edges.
Section 4 presents the main results. The first set of experiments presents ablations related to the proposed graph partition mechanism and generally finds that partitioned graphs achieve competitive results when compared to full-graph training. The results in Section 4.2 claim that the proposed method can effectively capture the structural disorder contained in the amorphous materials data. Section 4.3 claims that the proposed method can capture compositional disorder in HfO2 and Section 4.4 outlines a case study comparing predictions of the full Hamiltonian using the trained network and various partition slices. Section 4.5 describes computational savings achieved by the graph partitioning approach outlining both compute and memory savings. Section 5 provides a brief conclusion.
## update after rebuttal
The authors answered my questions to my satisfaction and agreed to include relevant details that will make the paper stronger, including clarifying related work for their paper and providing a comparison on scalability. As such, I increased my score and support of the paper.
Claims And Evidence: The primary claims are backed up with evidence from targeted experiments on large-scale atomic systems. The method descriptions are through and provide relevant details for both the physics and machine learning considerations.
Methods And Evaluation Criteria: The methods and evaluation criteria used are created by the authors themselves and are generally well chosen to demonstrate their proposed method. The experiments could be strengthened by describing why relevant baselines (e.g. Allegro) may not be able to scale to the systems used.
The paper could be strengthened by showing additional experiments on smaller systems - ideally these results would show that minimal performance drops occur with smaller (unit cell scale) systems, providing more evidence for the capabilities of the proposed method.
Theoretical Claims: The theoretical claims presented are generally well supported.
Experimental Designs Or Analyses: The experiment design for the three amorphous materials, described in the main paper and supplementary material, are generally sound and detailed.
Supplementary Material: I reviewed most of the supplementary material, including the study on the cut-offs in Appendix F given its importance in supporting the considerations for the local receptive field.
Relation To Broader Scientific Literature: The paper generally covers a good amount of the relevant literature. It could be further strengthened by discussing references related to the challenges of applying GNNs directly in MD simulation [1][2][3], providing broader context for GNN-based methods for atomistic modeling [4] and tying the challenges to broader applications [5].
[1] Fu, X., Wu, Z., Wang, W., Xie, T., Keten, S., Gomez-Bombarelli, R. and Jaakkola, T., Forces are not Enough: Benchmark and Critical Evaluation for Machine Learning Force Fields with Molecular Simulations. Transactions on Machine Learning Research.
[2] Bihani, V., Mannan, S., Pratiush, U., Du, T., Chen, Z., Miret, S., Micoulaut, M., Smedskjaer, M.M., Ranu, S. and Krishnan, N.A., 2024. EGraFFBench: evaluation of equivariant graph neural network force fields for atomistic simulations. Digital Discovery, 3(4), pp.759-768.
[3] Gonzales, C., Fuemmeler, E., Tadmor, E.B., Martiniani, S. and Miret, S., 2024. Benchmarking of Universal Machine Learning Interatomic Potentials for Structural Relaxation. In AI for Accelerated Materials Design-NeurIPS 2024.
[4] Duval, A., Mathis, S.V., Joshi, C.K., Schmidt, V., Miret, S., Malliaros, F.D., Cohen, T., Liò, P., Bengio, Y. and Bronstein, M., 2023. A hitchhiker's guide to geometric gnns for 3d atomic systems. arXiv preprint arXiv:2312.07511.
[5] Miret, S., Lee, K.L.K., Gonzales, C., Mannan, S. and Krishnan, N.M., 2025. Energy & Force Regression on DFT Trajectories is Not Enough for Universal Machine Learning Interatomic Potentials. arXiv preprint arXiv:2502.03660.
Essential References Not Discussed: I encourage the authors to discuss the references in the above box. The papers related to challenges for GNNs in simulations appear particularly relevant. The paper could also benefit by providing more details on how their work exceeds the capabilities of prior work, such as Allegro [1] and Allegro-Legato [2] which is mentioned in the paper, that have tried scaling GNNs to large scale simulation.
The paper should also discuss relevant context for Hamiltonian Neural Networks [3] as a general method.
[1] Musaelian, A., Batzner, S., Johansson, A., Sun, L., Owen, C.J., Kornbluth, M. and Kozinsky, B., 2023. Learning local equivariant representations for large-scale atomistic dynamics. Nature Communications, 14(1), p.579.
[2] Ibayashi, H., Razakh, T.M., Yang, L., Linker, T., Olguin, M., Hattori, S., Luo, Y., Kalia, R.K., Nakano, A., Nomura, K.I. and Vashishta, P., 2023, May. Allegro-legato: Scalable, fast, and robust neural-network quantum molecular dynamics via sharpness-aware minimization. In International Conference on High Performance Computing (pp. 223-239). Cham: Springer Nature Switzerland.
[3] Greydanus, S., Dzamba, M. and Yosinski, J., 2019. Hamiltonian neural networks. Advances in neural information processing systems, 32.
The paper would also benefit from discussing prior work on graph partitioning for GNNs [4] in the context of the implemented partition algorithm. The algorithm in [4] seems to have a similar approach and claimed benefits.
[4] Mostafa, H., 2022. Sequential aggregation and rematerialization: Distributed full-batch training of graph neural networks on large graphs. Proceedings of Machine Learning and Systems, 4, pp.265-275.
Other Strengths And Weaknesses: Overall, the paper presents an interesting and potentially compelling method for scaling GNNs to larger materials systems. The paper could be strengthened by:
* Providing more details on whether the models evaluated are one for each system or a generalized model.
* Whether their proposed method would be applicable to smaller systems without performance drops. As mentioned before, this would help strengthen the reasons for using the method.
* Discussing whether the graph partitioning approach can be applied to any model architecture and what limitations may exist.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Did you train one model for each material system?
2. Do you think your approach can generally across different structures and systems? What would be needed for that?
3. Can you clarify if there is a difference between the cut-off distance between slices and the cut-off distance used for graph creation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1. One model for each material system**: Yes, the models are trained for a specific material system, similar to most other literature ([4-8]). The current work does not demonstrate a universal Hamiltonian prediction model, though this can be achieved with a larger library of datasets, as shown by [3]. No public database for structures with 1000+ atoms per unit cell currently exists, so the necessary data must be manually generated. Data creation is computationally expensive due to the melt-and-quench and relaxation processes required. With enough collective effort and time, however, a universal model for all materials, including large, amorphous structures, can be feasibly trained with our approach.
**2. Generalization to different structures and systems**: Our approach (strictly local model + augmented partitioning) can be straightforwardly applied to other materials systems, including periodic structures (crystals, 2D materials) and molecules. We also applied it to the prediction of crystalline HfO2 structure and obtained far more accurate predictions (1.1 meV MAE) relative to amorphous HfO2 (5.16 meV) without any further optimization. Molecular examples are also included in the code attached in supplementary materials (though augmented partitioning and strict locality are excessive in these contexts due to their small, isolated systems and information-poor atomic neighborhoods). Here, the small number of atoms per unit cell, plus the larger similarity between training and test datasets, makes higher accuracies much more achievable.
**3. Cutoff for graph and cutoff for slice**: The cutoff (rcut) used to define the graph connectivity is not related to the slice length (t) used to define the partitions. They can be independently chosen.
**Applicability of partitioning to other architectures**: Augmented partitioning can be applied to other strictly local architectures (e.g., Allegro [5]), since it is implemented during graph construction.
**How our work exceeds previous work:**
Firstly, for Allegro, the prediction task is for force and energies. The computational challenge they encounter is fundamentally different from ours, and comparisons are not possible as:
(1) Energy and force predictions involve only node embeddings, while Hamiltonian predictions also include edge embeddings to capture off-diagonal blocks, and edges far outnumber the nodes (e.g., 3000 atoms, 500,000+ edges)
(2) H predictions require higher degree tensors to capture all orbital interactions.
**Scalability comparisons can, however, be made with previous Hamiltonian models. We provide a summary in Table 2. There are two main categories:**
**SO(3) models (e.g. QHNet)** use direct tensor products, and scale poorly with the lmax (maximum spherical degree) dimension. Because of this, they cannot be scaled to a large number of atoms.
**SO(2) convolution models (e.g. DeepH2, SLEM)** are more scalable with respect to lmax, but their examples are still relatively simple (< 100-200 atoms per unit cell) as the size of the graph used during training is limited by memory. So far, only our method has managed to train on and predict H for unit cells containing thousands of atoms due to efficient parallelization through strictly locality + augmented partitioning.
| **Architecture** | **Approach** | **Max # Atoms per Unit Cell** |
|-----------------|-------------|-----------------------------|
| QHNet [6] | SO(3) | ~30 |
| WANet [7] | **SO(2)** | <200 |
| DeepH2 [3] | **SO(2)** | ≤150 |
| SLEM [8] | **SO(2)** | <100 |
| **This work** | **SO(2)** | 3000+ |
**Table 2:** Comparison of our model to state-of-the-art architectures on task, approach, and complexity of their respective training/testing datasets.
**For performance comparison**, we achieved errors (0.99-5 meV) within the range of the values obtained by DeepH2 (2.2 meV) and HamGNN (1.5-3.23 meV), despite our more challenging dataset. To further demonstrate what we can do with H predictions at such scale, **we provide a concrete application example in the response to Reviewer Kuqs.**
**Graph partitioning literature**: The paper brought up by the reviewer (Mostafa) focuses on reducing the size of the computational graph during fully-distributed training. However, in our case the memory limits can already be overcome when processing the input graph embeddings, even without tracking gradients. The approaches are thus entirely different. To the best of our knowledge, no previous work on large graph partitioning (e.g., GraphSAGE, ClusterGCN [9]) has ever targeted quantitative predictions of materials properties, for which the graph connectivity must absolutely be preserved. Our partitioning approach is therefore also original in this regard.
**Note that the list of references is found in the response to Reviewer 4 (nuFS)**
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional and clarifying details. Given this response, I am willing to increase my overall score to a 4. I would appreciate a deeper discussion on graph partitioning methods in the paper (can be in the appendix), which would encourage how works in adjacent fields relate to the present and potentially future work.
Claims And Evidence: The paper makes two primary contributions: (1) the development of a strictly local graph neural network (GNN)-based architecture for efficient prediction of the electronic Hamiltonian matrix; and (2) the introduction of an augmented partitioning method that enables large crystalline structures to be decomposed into smaller subgraphs while maintaining prediction accuracy comparable to global computation, thereby significantly reducing memory consumption.
For (1), the authors provide an extensive review of existing GNN-based Hamiltonian prediction methods in Section 2.2, but lacks a systematic comparison between the proposed architecture and previous methods, such as its advantages in accuracy, computational efficiency, and scalability.
For (2), I think the augmented partitioning method is well-supported with convincing evidence.
Methods And Evaluation Criteria: The authors construct three large-scale datasets of disordered crystalline materials and conduct a comprehensive analysis of prediction. Additionally, the authors employ three data augmentation strategies—atomic position perturbation, oxygen vacancy filling, and atomic substitution, to evaluate the model’s ability to capture both short-range and long-range interactions. The methodological design is robust.
Theoretical Claims: I have reviewed the theoretical claims in the paper and have not identified any apparent issues.
Experimental Designs Or Analyses: The authors’ experimental design and analysis are comprehensive and align well with general methodologies in Hamiltonian matrix prediction research. The experiments involve multiple large-scale disordered crystalline datasets, and the partitioning-based prediction method is rigorously evaluated for both model performance and effectiveness.
However, the paper provides little discussion on the practical effectiveness of the method or its impact on computational efficiency. In the final sentence of Section 4.5, the authors mention that the proposed method could serve as an initial guess for DFT packages to reduce computational costs. Given that one of the key applications of Hamiltonian matrix prediction is to expedite electronic structure calculations, it would be beneficial for the authors to include a more detailed quantitative analysis.
Supplementary Material: The authors provided code for visualization and for training and testing the model in the supplementary materials. No apparent issues are identified.
Relation To Broader Scientific Literature: In computational materials science, reducing computational time complexity while maintaining prediction accuracy has long been a key objective. In this work, the authors propose a partitioning-based method that addresses the challenge of directly handling large-scale disordered systems, offering promising applications in other disordered material systems.
Essential References Not Discussed: I think the authors have adequately cited and discussed prior work.
Other Strengths And Weaknesses: Strengths: The proposed partitioning algorithm achieves good computational performance without sacrificing accuracy, particularly for large-scale disordered material systems.
Weaknesses: The paper lacks comparative analysis with other existing Hamiltonian prediction methods and does not provide quantitative analysis, such as how many SCF iterations can be reduced or how much speedup can be achieved in actual calculations.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Application performance, see "Experimental Designs Or Analyses".
2. Model comparison, see "Claims And Evidence".
3. How would this method perform when applied to small crystal structures (maybe a slice of the unit cell)? Is it possible to provide a prediction result analysis of a small slice of one structure from the proposed dataset (compared with previous works)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Application performance**
Using the predicted H as an initial guess for SCF iterations is not our main focus, and the reduction in iterations also depends on the convergence threshold. We instead focus on key downstream applications where the Hamiltonian matrix H is the final product needed. Examples include ab-initio quantum transport (QT) simulations of nanoscale devices where electrical current is computed or the investigation of the solubility of doping atoms in semiconductors with the nudged elastic band (NEB) method [1,2]. In these cases, obtaining the required H from DFT is a crucial step that can consume more than 90% of the computational time, particularly in structures made of 1000 atoms or more.
**We provide here a concrete demonstration of our approach on a valence change memory (VCM) cell made of a TiN/HfO2/Ti/TiN stack and 5,268 atoms**. Normally, stoichiometric HfO2 is an insulator that blocks current flow. Applying a voltage introduces point defects (oxygen vacancies) that assemble into a conductive filament, changing the current by several orders of magnitude along the way.
We first train a model on slices of two HfO2 structures with randomly distributed vacancies, and use it to predict the H of structures where vacancies were arranged into filaments. **As these filament shapes were not in the training data, this is an out-of-distribution generalization task.** The predicted H can then replace its DFT counterpart in ab-initio quantum transport simulations to compute the electrical current, as summarized in Table 1.
| Device | Node Error [mEₕ] | Edge Error [mEₕ] | Current (Label H) [A] | Current (Predicted H) [A] |
|------------|------------|------------|----------------|----------------------|
| Filament 0 | 1.81 | 0.20 | 1.98E-09 | 1.47E-09 |
| Filament 1 | 1.74 | 0.20 | 1.12E-05 | 9.99E-06 |
| Filament 2 | 1.66 | 0.20 | 1.45E-06 | 1.96E-06 |
**Table 1:** Summary of prediction results and computed currents for devices with vacancy configurations forming different filament shapes.
The currents computed with predicted H are well within acceptable error for downstream applications, given that the current values of the three filaments are on different orders of magnitude (1e-9, 1e-6 and 1e-5 A). They provide qualitative insight on the spatial localization of current flow, and allow us to study the reversible dielectric-breakdown process in HfO2 oxides. This has important implications, considering that direct ML inference is much faster than DFT (**~2 seconds for forward pass vs 3.94 node hours** on GH200 superchips).
Firstly, we can enable previously unfeasible workflows involving repeated DFT updates. By replacing the DFT step with model inference, we can rapidly obtain the Hamiltonian matrices of different structures along, for example, a Molecular Dynamics (MD) trajectory, thus amortizing the training cost (<40 node hours) across 100s to 1000s of time steps. Besides filament formation in VCM, this can also be used to study electrical behavior during phase transition of GST in memory devices. Secondly, we can perform inference on structures far larger than what DFT can handle (10k+ atoms). **It is therefore not only an efficient substitute for DFT, but also an essential step to unlocking new research areas.**
**2. Model Comparison:**
**Accuracy**: Firstly, slices cannot be isolated from a large structure to test other models due to periodic boundary conditions imposed during DFT. Interactions with periodic images cause the H elements of the isolated slices to be entirely different from those of the same atoms in bulk structures (MAE of 837 meV). The use of slices for training/testing is uniquely enabled by our strictly local model + augmented partitioning approach. Regardless, we can still compare our values with the accuracy achieved by SOTA architectures on smaller structures. Despite the complexity of our dataset comprising disordered structures with thousands of atoms, we achieved errors of **0.99 to 5.16 meV** that are within the range of the values obtained by DeepH2 (**2.2 meV**) [3] and HamGNN (**1.5-3.23 meV**) [4] for simpler periodic structures.
**Scalability**: Note that many previous works use the term ‘scalable’ to refer to the l-max dimension, which can be tackled with SO(2) convolutions [3]. On top of this approach, our augmented partitioning method also enhances scalability with respect to the number of atoms/edges in the graph (1k+ nodes ~ 100k+ edges), by allowing large graphs to be broken down and trained independently. To the best of our knowledge, **no other work has trained on graphs with 1k+ atoms**. We also provide analysis of the speedup offered by our approach when compared to conventional full graph training used by other models in **Appendix H**.
**Note that the list of references is found in the response to Reviewer 3 (nuFS)**
Layer by Layer: Uncovering Hidden Representations in Language Models | Accept (oral)

Summary: The paper proposes a framework for analyzing representations throughout model layers. From the perspective of matrix-based entropy, they primarily measure properties like compression (e.g., prompt entropy), geometric smoothness (e.g., curvature), and augmentation invariance (e.g., LiDAR). They argue that for autoregressive models, intermediate layers outperform late ones, unlike bidirectional models. They demonstrate a similar trend for vision models. They also investigate trends related to model size, training progression, and chain-of-thought models.
Claims And Evidence: - Claim 1. There is a theoretical perspective using matrix-based entropy that unifies many metrics.
- While Sec. 3.4 presents the theoretical connections between matrix-based entropy and prompt and dataset entropy and InfoNCE, these results feel disconnected from the rest of the paper. Namely, dataset entropy and InfoNCE are not tested or discussed in the empirical experiments of the main text.
- Only a subset of metrics discussed in Sec. 3.3 appear in the main paper, while the full set appears in Figure 8 of the Supplementary. It would be nice to include a summary of these results in the main paper.
- Claim 2. The proposed framework reveals why intermediate layers outperform late ones.
- The paper does a reasonable job of validating this claim in Sec. 4.2 and Sec. 5, although the presentation could be improved.
- The discussion in L302 is too brief. Figures 1 and 2 indeed show a relationship between trends in performance and prompt entropy, which can be used to contrast autoregressive (Pythia) and bidirectional models (BERT). However, the curvature and LiDAR plots are not discussed. The interpretation in L317 is incomplete: Mamba exhibits conflicting trends across metrics (in prompt entropy, the layers at 80% depth are among the most compressed, while in LiDAR they are the least compressed). The result in Figure 1 where BERT's best representation is at 10% depth should be discussed; the "conventional wisdom" would suggest that the last representation would perform the best, but that is not the case there.
- L408 / Figure 13 is missing a key performance result. In Figure 13, it is unclear why AIM is not plotted. To complete the argument, the paper should validate, for autoregressive vision models, whether an intermediate layer performs best as well.
- Other Claims
- Compression increases as models scale. This is shown in Figure 10; I liked this result and it was clearly conveyed.
- Compression increases with more training steps. This is shown in Figure 4; I again thought this was a nice result that was clearly conveyed.
- Residual connections drive compression. I am not sure I am fully convinced; L346 could be more thoroughly and clearly explained. I would have liked a more detailed explanation on the setup and annotations for the Figure 15 legend (e.g., explaining “pre attention,” “attention patterns,” etc.), and a longer walkthrough on pairings in the figure (e.g., compare the “Post Mlp” and “Post Mlp Residual”) to better understand this analysis.
Methods And Evaluation Criteria: See “Claims and Evidence” above.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See “Claims and Evidence” above.
Supplementary Material: I reviewed Figures 10, 13, 14, 15 in the Supplementary as they were referenced in the main text.
Relation To Broader Scientific Literature: The framework of analysis is the main novel contribution of this paper. The paper correlates performance and various pre-existing compression metrics, for autoregressive vs. bidirectional models, and other settings.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- The presentation of matrix-based entropy in Sec. 3.2 is well-motivated and conveyed in an intuitive fashion.
- The paper evaluates not only on language models but also vision models, demonstrating the generality of its findings.
- The paper reveals compelling findings like larger models and more training steps lead to more representational compression.
Weaknesses
- The paper is missing some important discussion of state space models and autoregressive vision models (see “Claims And Evidence” above).
- The paper mentions a number of metrics in its unified framework and core theoretical results in Sec. 3, yet only a subset is discussed in the main text. The presentation could be improved with a more clear connection between the theoretical and empirical results (see “Claims And Evidence” above).
I am open to updating my score if the authors are able to address the listed weaknesses.
Other Comments Or Suggestions: In Figure 2 and other similar figures, it would be helpful to include some more intuitive semantic label for each metric. I have to spend some effort recalling that lower entropy means more compression, higher curvature means more abrupt changes, etc.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank Reviewer U8pM for the detailed feedback and for indicating openness to reconsidering the evaluation. We appreciate the positive comments on our work. We address your specific points below. Note that you may also be interested in the new experimental results provided for Reviewers 9n2s (generative tasks) and ctdG (unsupervised mechanism for selecting good intermediate layers).
## Addressing Framework Metrics and Theory-Empirical Connection (Claim 1 / Weakness 2)
You correctly noted that while Sec 3 outlined a framework connecting several metrics (including Dataset Entropy and InfoNCE) via theory, the main empirical discussion focused on a subset, primarily due to space constraints. Following your suggestion, we will use the additional page in the camera-ready version to:
- Explicitly discuss the empirical results for **all metrics shown in Appendix Figure 8** in the main text.
- Expand the discussion in Sec 4.2 to fully incorporate Curvature and LiDAR trends alongside Prompt Entropy when comparing model architectures (Pythia, BERT, Mamba), addressing the noted brevity in L302.
## Addressing Empirical Validation and Presentation (Claim 2 / Weaknesses 1 & 2)
***Missing AIM Performance / AR Vision Validation:*** Thank you for pointing out the missing AIM results in Figure 13. This validation is indeed key. We have now performed this analysis and present the results for AIM (and BEiT) linear probe accuracy layer-wise in Table 3 below.
As shown, the autoregressive vision model AIM exhibits a modest performance gain (+1.9\%) at an intermediate layer (75\% depth) compared to the final layer. This aligns with our findings in language models and provides the requested validation. BEiT, consistent with other non-AR models, peaks at the final layer. We will integrate these results into Figure 13 and the main text discussion.
### Table 3: Val@5 Linear Probe Accuracy on ImageNet-1k at Different Layer Depths
| Model | 0% | 25% | 50% | 75% | 100% (Final) |
|--------|:-----|:------|:------|:-------------------|:-------------|
| AIM | 3.8% | 13.7% | 28.5% | 82.0% (L18) | 80.1% |
| BEiT | 2.9% | 7.1% | 14.6% | 46.8% | 62.5% |
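To make the probing protocol concrete, here is a toy sketch in the spirit of Table 3 (synthetic feature matrices and a nearest-centroid stand-in for the linear probe; none of this is our actual evaluation code, and `intermediate`/`final` are fabricated, not AIM activations):

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_accuracy(feats, labels):
    """Nearest-class-centroid probe: fit on the first half, score the second."""
    n = len(labels) // 2
    mu0 = feats[:n][labels[:n] == 0].mean(axis=0)
    mu1 = feats[:n][labels[:n] == 1].mean(axis=0)
    d0 = np.linalg.norm(feats[n:] - mu0, axis=1)
    d1 = np.linalg.norm(feats[n:] - mu1, axis=1)
    preds = (d1 < d0).astype(int)
    return float((preds == labels[n:]).mean())

labels = rng.integers(0, 2, size=400)
# Fabricated stand-ins for layer activations: the class signal is strong at
# the "intermediate" layer and nearly erased at the "final" layer.
intermediate = rng.normal(size=(400, 16)) + 3.0 * labels[:, None]
final = rng.normal(size=(400, 16)) + 0.05 * labels[:, None]

for name, feats in [("intermediate", intermediate), ("final", final)]:
    print(name, round(probe_accuracy(feats, labels), 3))
```

The real evaluation swaps the synthetic matrices for frozen per-layer activations and the centroid rule for a trained linear classifier, but the layer-wise comparison logic is the same.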
**Mamba Discussion (Conflicting Metrics)** Regarding the observation at 80\% depth: Prompt Entropy indicates high compression, while LiDAR indicates high augmentation invariance. These are not conflicting but rather co-occurring properties. Thus, the interpretation is: "Mamba's most compressed layers (via entropy) are also its most augmentation-invariant (via LiDAR)". We will clarify this distinction and provide a more detailed discussion of Mamba's unique trends in the expanded Sec 4.2. We will also include a discussion of BERT, which we omit here due to character limits.
## Addressing Residual Connection Claim (Claim 2)
We appreciate the request for clarification on the "Residual connections drive compression" claim.
**Figure 15 Legend Details:** To clarify the setup, we used hooks to capture activations at key stages within each transformer block:
- **Initial Representations** Input to the block's residual stream.
- **Attention Patterns** Raw attention weights (query-key interactions).
- **Attention Outputs** Output of the attention mechanism value aggregation, projected back.
- **Attention Residuals** Result after adding attention output to the initial residual stream and applying LayerNorm.
- **MLP Output** Output of the MLP layers, projected back.
- **MLP Output + Residuals** Final block output (input to next layer), after adding MLP output to the **Attention Residuals** stream. This is the typical "layer output" we measure.
**Explanation**
Comparing MLP Output (step 5) with MLP Output + Residuals (step 6) in Figure 15 reveals the latter has a significantly lower effective rank (is more compressed). This occurs because the norm of the residual stream (Attention Residuals, step 4) is often much larger than the norm of the MLP Output (step 5). When added together, the high-norm residual component dominates the sum, effectively reducing the combined representation's rank/increasing compression compared to the MLP output alone. We will incorporate this detailed walkthrough into Sec 4 and add plots **showing the norms** of these components alongside Figure 15 to make this argument clearer and more convincing.
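The norm argument can be checked on synthetic matrices (a toy illustration with fabricated data, not actual model activations): adding a high-norm, low-rank "residual" to a full-rank "MLP output" sharply lowers the effective rank of the sum.

```python
import numpy as np

def effective_rank(X):
    """exp(Shannon entropy) of the normalized singular-value spectrum of X."""
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
tokens, dim = 64, 32

# "MLP output": full-rank Gaussian noise with modest norm.
mlp_out = rng.normal(size=(tokens, dim))
# "Residual stream": far larger norm, confined to only 4 directions.
basis = rng.normal(size=(4, dim))
residual = 20.0 * rng.normal(size=(tokens, 4)) @ basis

print(round(effective_rank(mlp_out), 1))             # near min(tokens, dim)
print(round(effective_rank(mlp_out + residual), 1))  # collapses far below
```

Because the few residual directions carry most of the total norm, the spectrum of the sum is dominated by them, which is exactly the compression effect we measure at the block outputs.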
## Conclusion
We hope these responses, clarifications, and new results address the reviewer's points. We will incorporate these changes into the camera-ready version. Thank you again for the feedback. We hope these comprehensive responses and improvements demonstrate the soundness and significance of our work. We would be grateful if you would consider these points in reassessing our submission, and we welcome any further questions.
---
Rebuttal Comment 1.1:
Comment: - I reviewed the missing experiment on AIM; the provided table indeed shows a peak in accuracy at an intermediate layer, in contrast with the non-autoregressive models.
- Thank you for the clarified discussion of the Mamba results. I would also encourage the authors to address this observation about BERT: *in Figure 1 [...] BERT’s best representation is at 10% depth.* However, this point does not affect my score.
- Thank you for the detailed walkthrough of the results in Figure 15. It is an intriguing explanation that the residual stream is behaving as a noise filter, where it dominates the representation with a higher norm.
Given the clarifications, I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your review, and we are glad that our clarifications addressed your concerns. We are grateful that you have raised your score of the paper.

Summary: This paper introduces a unified framework for evaluating representation quality in language models. The framework is based on information theory, geometry, and invariance to input perturbations. The authors analyze how each layer balances information compression and signal preservation, challenging conventional wisdom by showing that intermediate layers outperform the final layers in certain tasks.
The proposed method is tested across a diverse set of 32 text-embedding tasks, covering various model architectures and domains. The authors also explore when and why these findings hold by linking them to fundamental properties such as entropy, invariance, and geometry via matrix-based entropy.
Claims And Evidence: The main aim of this work is to propose a unified framework for evaluating the representations of layers in LLMs. One of the initial claims is that intermediate layers can surpass final layers in performance due to their superior balance of compression and signal retention.
To this end, this study provides both a theoretical analysis of matrix-based entropy and an empirical validation using a broad set of tasks and models that seems to confirm the authors' claims.
Methods And Evaluation Criteria: The study evaluates representation quality using metrics derived from information theory, geometry, and robustness to perturbations. Various measures are used in this context based on nominal works such as the InfoNCE, while considering not only different tasks but also architectural families of models.
To the best of my knowledge, even though the use of matrix-based entropy is not novel in the community, the interpretation of the arising properties combining the three different information theoretic, geometric and invariance properties, along with an empirical extensive evaluation, is.
Theoretical Claims: I checked the theoretical claims of the paper. I would like some clarifications about the appropriate conditions on the data distribution and model that are required for Theorem 2, and whether these hold in practice.
I would also like some clarifications on the prompt entropy and how we can construct prompts that are appropriate in the context of the provided analysis.
Experimental Designs Or Analyses: The experiments cover an extensive range of 32 text-embedding tasks, but additional details on how the prompts were constructed and how many were used per experiment, would help clarify the settings and improve reproducibility. Are these part of the considered datasets?
Supplementary Material: I read the supplementary material, mainly the experimental details, and the insights to some definitions and for further experimental results.
Relation To Broader Scientific Literature: I find that the proposed work is largely based on the previous works of (Giraldo et al., 2014; Skean et al., 2023) that are properly cited.
Even though this study does not constitute a groundbreaking contribution per se, it does provide a unified framework, concentrating and evaluating the results of previous works.
Essential References Not Discussed: In my view, the most essential works are appropriately discussed.
Other Strengths And Weaknesses: No further comments.
Other Comments Or Suggestions: No further comments.
Questions For Authors: Most of my concerns/questions have been expressed in the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback. We appreciate you recognizing the novelty in our unified interpretation and extensive evaluation. You may also find our new results relevant (details in responses to Reviewers 9n2s regarding generative tasks and ctdG regarding unsupervised layer selection). We address your specific points below:
### Clarification on Theorem 2 Conditions
You asked about the conditions required for Theorem 2 and their applicability in practical scenarios. Our theoretical framework relies on three main assumptions:
- **Orthogonal Equivariance:** We assume an orthogonally equivariant model. While transformers are generally permutation equivariant, this stronger condition enables theoretical tractability, though we acknowledge it's a simplification for standard models.
- **Gaussian Input Data:** We assume input data follows a Gaussian distribution, which simplifies analysis due to its tractable mathematical properties.
- **Representations on Hypersphere:** We assume token representations lie on a hypersphere. This is often approximated in practice due to Layer Normalization in high dimensions, which yields representations with nearly constant norms.
We believe these assumptions collectively provide a reasonable, albeit simplified, foundation for analyzing modern transformers. We will **explicitly state these assumptions** in the revised theorem statements and add a **discussion in the main text** outlining these conditions and the practical scenarios where deviations might occur.
### Clarification on Prompt Construction
Regarding your question on prompt construction, we give the following explanation and example. The Massive Text Embedding Benchmark (MTEB) framework we use provides standardized code to automatically generate prompts for each task. We used these default MTEB prompts without modification to ensure consistency and reproducibility. We recognize that examples were missing in our submission. We will add a discussion of the MTEB prompting strategy and include concrete examples (like the one below for EmotionClassification) in Appendix D.2.
- **Example Prompt Format (EmotionClassification):** "Classify the emotion expressed in the given Twitter message into one of the six emotions: anger, fear, joy, love, sadness, and surprise: {{SAMPLE GOES HERE}}"
- **Example Full Prompt:** "Classify the emotion expressed in the given Twitter message into one of the six emotions: anger, fear, joy, love, sadness, and surprise: Thank you, reviewer!"
### Prompt Entropy
You also asked for clarification on prompt entropy. We hope the explanation above clarifies the prompt construction aspect. Regarding prompt entropy itself (our metric measuring token-level information uniformity), we are happy to elaborate further on its calculation or interpretation if specific aspects remain unclear.
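For concreteness, here is a minimal sketch of the kind of computation we mean by prompt entropy, i.e., matrix-based entropy over one prompt's token representations (shown in the alpha -> 1 von Neumann limit, using the trace-normalized Gram-matrix formulation; this is an illustrative sketch on fabricated vectors, not our evaluation code):

```python
import numpy as np

def matrix_entropy(Z, alpha=1.0):
    """Matrix-based entropy of token representations Z (tokens x dim).

    Eigenvalues of the trace-normalized Gram matrix play the role of a
    probability distribution; alpha=1 is the von Neumann / Shannon limit.
    """
    K = Z @ Z.T
    lam = np.linalg.eigvalsh(K / np.trace(K))
    lam = lam[lam > 1e-12]
    if alpha == 1.0:
        return float(-(lam * np.log(lam)).sum())
    return float(np.log((lam ** alpha).sum()) / (1.0 - alpha))

rng = np.random.default_rng(0)
diverse = rng.normal(size=(32, 64))                    # varied token vectors
repeated = np.tile(rng.normal(size=(1, 64)), (32, 1))  # one token repeated 32x

print(round(matrix_entropy(diverse), 2))   # high: many active directions
print(round(matrix_entropy(repeated), 2))  # ~0: rank-1 Gram matrix
```

Intuitively, a prompt whose token representations collapse onto few directions yields a low-rank Gram matrix and hence low prompt entropy (high compression), while diverse token representations yield high entropy.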
### Conclusion
We have worked to address your points, particularly regarding the assumptions underlying our theory and the details of prompt construction. We hope these clarifications strengthen the paper in your view. Given these explanations and our commitment to updating the manuscript accordingly, we would be grateful if you might reconsider your assessment. Please let us know if any further questions or points require clarification.

Summary: This study investigates whether intermediate layers of language models offer more informative representations compared to the final layers. The authors propose a unified framework for assessing representation quality, employing seven evaluation metrics categorized into three groups: information-theoretic, geometric, and augmentation-invariance. The experimental design encompasses three model architectures: decoder-only transformers, state space models, and encoder-only transformers. The study evaluates each layer's embeddings across 32 tasks from the MTEB benchmark, covering classification, clustering, and reranking. Through comprehensive analysis, the research examines the performance of intermediate layers across different architectures, explores their variations under diverse training paradigms, and investigates the consistency of findings between textual and visual models.
Claims And Evidence: The claims made in this submission are supported by clear and convincing evidence. The authors conduct a comprehensive empirical investigation, encompassing diverse scenarios such as downstream task performance and parameter variations during training. Through systematic examination of layer-wise representations across various model architectures, the study shows that intermediate layers of language models can encode richer representations compared to their final layers.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are both appropriate and well-justified. The authors conduct a comprehensive evaluation across 32 MTEB tasks, systematically assessing representative models of various architectures through three distinct perspectives: information-theoretic, geometric, and augmentation-invariance. In addition, the authors provide a thorough explanation for their selection of specific tasks and evaluation metrics.
However, while the paper conducts experiments across 32 text embedding tasks, these tasks are predominantly focused on classification, clustering, and reranking. The study does not encompass other critical NLP tasks, such as machine translation, question answering, or dialogue generation, which are essential for a more comprehensive evaluation.
Theoretical Claims: I have reviewed the theorems in the Core Theoretical Results section and it seems right to me.
Experimental Designs Or Analyses: Most of the experimental design is well-aligned with the study's objectives, establishing a robust connection between the evaluation framework and the empirical findings.
However, the experiments "Finetuning Effects" and "Impact of Chain-of-Thought Finetuning" do not appear directly relevant to the main task. The reviewer is unclear about the rationale for including these experiments.
Supplementary Material: I have reviewed the supplementary material. It includes specific details of the evaluation (dataset details, prompt details), further derivations of the theorems on the main page, and experimental figures that could not fit on the main page.
Relation To Broader Scientific Literature: In the field of knowledge distillation, several studies explore the knowledge transfer based on the intermediate layers, which aligns closely with the core contributions of this paper.
Relevant works in this area include:
[1] Knowledge Flow: Improve Upon Your Teachers;
[2] MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities.
Essential References Not Discussed: The paper explores the performance of intermediate layers in language models, which is a topic of growing interest in the field. However, there are relevant studies addressing similar themes that have not been cited:
[1] Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers;
[2] Transformer Layers as Painters.
Other Strengths And Weaknesses: Other Strengths:
(1) The paper is well-written and well-organized, ensuring clarity for readers to grasp the core arguments and findings.
(2) The research topic is engaging and scientifically significant, with experimental results that offer valuable insights and implications for the broader research community.
(3) The experiments are comprehensive, offering substantial evidence to support the authors' claim that intermediate layers can, in certain cases, outperform the final layers.
(4) The authors have provided their codes for reproducibility.
Other Weaknesses:
(1) The experimental details and results for AIM are not presented in the paper.
(2) While the paper employs MTEB as its primary evaluation benchmark, it is important to note that MTEB may not encompass the full spectrum of text data types or tasks. A notable limitation is the absence of low-resource languages or specialized domains such as medicine, law, and others.
(3) It would be more rigorous to discuss the limitations of the paper.
Other Comments Or Suggestions: (1) Present the experimental details of AIM, such as its scale, along with the corresponding results.
(2) Expand the evaluation to include tasks like machine translation, question answering, or dialogue generation.
(3) Consider extending the dataset domains to areas such as medicine and law.
Questions For Authors: (1) What is the rationale for including the "Finetuning Effects" and "Impact of Chain-of-Thought Finetuning" in the main empirical results?
(2) Are there any plans to conduct experiments on more advanced LLMs or MLLMs, such as Qwen2.5 and Janus?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 9n2S for the detailed review, positive feedback on our claims, methods, and writing, and constructive suggestions. We are encouraged by the reviewer's assessment of our work as engaging and scientifically significant.
We address the reviewer's comments and questions below, including new experimental results motivated by the feedback. Note that the reviewer may also be interested in the new experimental results provided for Reviewer ctdG (unsupervised mechanism for selecting good intermediate layers).
## Addressing Evaluation Scope (Non-Embedding Tasks):
We appreciate the reviewer highlighting the importance of evaluating beyond embedding-centric tasks. To address this, **we conducted new experiments on generative and classification tasks.** Evaluating intermediate layers for generation requires obtaining logits, typically via an "unembedding" layer. We employed **the TunedLens technique** [1], which enables layer-wise analysis of generative capabilities. These results complement our MTEB findings, showing that (a) intermediate layers can outperform final layers on diverse tasks like QA, and (b) the optimal layer depth is task-dependent.
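To illustrate the idea behind the lens approach (this is a toy linear analogue with fabricated matrices, not the actual TunedLens implementation): a per-layer translator is fit from intermediate hidden states into the final-layer representation space, after which the model's own unembedding produces layer-wise logits.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, n = 8, 20, 200

# Fabricated stand-in for a frozen model: an unembedding matrix, plus paired
# (intermediate, final) hidden states related by an unknown linear map.
W_unembed = rng.normal(size=(d, vocab))
true_map = rng.normal(size=(d, d))
h_mid = rng.normal(size=(n, d))
h_final = h_mid @ true_map

# Lens idea: fit a linear translator from intermediate states into
# final-layer space, then reuse the model's own unembedding for logits.
translator, *_ = np.linalg.lstsq(h_mid, h_final, rcond=None)
logits_lens = (h_mid @ translator) @ W_unembed
logits_true = h_final @ W_unembed

print(np.abs(logits_lens - logits_true).max())  # tiny: translator recovered
```

In the real setting the translator is an affine map trained with a distillation-style objective per layer (see [1]); the point here is only that intermediate states become decodable once mapped into the final representation space.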
**New Results (QA & Classification)**
* On MMLU (Question Answering), using Llama3-8B with TunedLens, we found **Layer 28 achieves an average accuracy of 62.1\%, outperforming the final layer's 61.2\% (+0.9\%)**. This demonstrates that intermediate layers can offer benefits beyond MTEB tasks. (See Table 2 for aggregated results).
* We also performed layer-wise analysis on **ToxiGen** (toxicity detection) and **BLIMP** (grammaticality judgments) using LM-evaluation-harness. Interestingly, optimal performance varied: early layers excelled on ToxiGen, while late layers were best for BLIMP.
### Table 2: Average MMLU Accuracy with TunedLens at Different Llama3-8B Layers
| Layer Depth | 0% | 25% | 50% | 75% | 100% (Final) |
|---------------|:------|:------|:------|:---------------------------|:-------------|
| MMLU Score | 22.9% | 22.9% | 25.7% | 60.7% (L28 peak: 62.1%) | 61.2% |
## Responses to Questions & Other Points
**1: Rationale for Finetuning/CoT Experiments:** The primary goal of these experiments was not just performance evaluation, but to demonstrate the **utility and sensitivity of our proposed Matrix Entropy framework** for probing LLM behavior under different conditions (finetuning, CoT prompting), aligning with recent work like Seq-VCR [2]. We will reorganize the paper and incorporate relevant figures (currently in the appendix) into the main text.
**2: Experiments on Larger/Advanced Models:** We plan to experiment with the multimodal Llama3.2-11B-Vision model, which provides a valuable bridge between our language (Sec 4) and vision (Sec 5) experiments. Models like Qwen2.5 and Janus are excellent candidates for future work to further scale our findings.
**3. AIM Plots** We apologize for omitting the AIM details and results. These are included in our response to Reviewer U8pM and will be added to the main paper in the camera-ready version.
**4. Additional References** Thank you for suggesting these relevant references on knowledge distillation and layer analysis. They offer valuable perspectives on knowledge transfer and feature encoding in intermediate layers. We will incorporate and discuss these works in the related work section, strengthening our connection to the literature and helping address questions about intermediate features (raised also by Reviewer ctdG).
**5. Discussion of Limitations** We acknowledge the points raised regarding limitations and scope. As discussed above, our new results broaden the task variety beyond MTEB. Regarding limitations involving theoretical assumptions, as noted by Reviewer aJrj, our framework makes certain assumptions (detailed in our response to aJrj). We will clearly outline these in the revised manuscript's discussion section.
## Conclusion
We have put significant effort into addressing the points raised in your review, including conducting new experiments on non-embedding tasks (MMLU, ToxiGen, BLIMP) using the TunedLens approach, which directly addresses one of your main concerns and further demonstrates the generality of our findings. We hope these additions, along with our clarifications and planned revisions, strengthen the paper and demonstrate the value of our contributions. In light of these new results and our detailed responses, we would be grateful if you would consider raising your assessment of our work. Please do not hesitate to let us know if any further questions or concerns arise; we welcome the opportunity for continued discussion. Thank you again for the thoughtful feedback.
## References
[1] Belrose et al., "Eliciting Latent Predictions from Transformers with the Tuned Lens", 2023.
[2] Arefin et al., "Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning", 2024.

Summary: This paper investigates representation quality across different layers of large language models (LLMs), challenging the conventional wisdom that final-layer embeddings are optimal for downstream tasks. Through systematic evaluation on 32 tasks from the Massive Text Embedding Benchmark, the authors demonstrate that intermediate layers often outperform final layers by up to 16% in accuracy. They propose a unified theoretical framework integrating information-theoretic, geometric, and invariance perspectives to explain this phenomenon, showing how matrix-based entropy can measure how effectively each layer balances information compression and signal preservation. The authors compare multiple architectures (transformers, state-space models) and domains (language, vision), finding that autoregressive models exhibit a pronounced mid-layer "compression valley" while bidirectional models show more uniform patterns. Additional analyses examine how representation quality evolves during training, how chain-of-thought finetuning affects mid-layer entropy, and how similar patterns emerge in vision models with analogous training objectives.
Claims And Evidence: The paper's central claims are well-supported by substantial empirical evidence. The assertion that intermediate layers outperform final layers is convincingly demonstrated through comprehensive benchmarking across diverse tasks and model architectures. The authors provide both quantitative metrics and visualizations showing how representation quality varies with depth.
The theoretical framework connecting entropy, geometry, and invariance is supported by mathematical formulations and empirical correlations between these metrics and downstream performance. The correlation analysis (Figures 3, 6, 7) demonstrating strong relationships between their proposed metrics and task performance provides compelling evidence for the validity of their framework.
The architecture-specific claims about compression patterns are well-supported by consistent findings across multiple model scales and types. The extension to vision models provides additional credibility to their argument that the observed patterns are driven by training objectives rather than data modality.
Methods And Evaluation Criteria: The evaluation methods are appropriate and comprehensive. Using the MTEB benchmark with 32 diverse tasks provides a robust test of representation quality across different use cases. The authors' approach of testing every layer systematically on each task allows for direct, fair comparisons.
The set of metrics developed to assess representation quality (prompt entropy, dataset entropy, curvature, LiDAR, DiME, InfoNCE) collectively capture different facets of what makes representations effective. The combination of these metrics with downstream performance establishes a clear connection between theoretical properties and practical utility.
The inclusion of multiple architectures (Pythia, Mamba, BERT) at various scales strengthens the generalizability of the findings. The controlled experiments with extreme input conditions (repetition, randomness, varying length) provide additional insights into how different layers process information.
Theoretical Claims: The paper makes several theoretical claims that appear sound. I verified the connections between matrix-based entropy and effective rank (Theorem 4), which is correctly established. The relationship between InfoNCE and entropy (Theorem 7) is well-grounded in information theory.
The theorems connecting prompt entropy to dataset entropy (Theorems 5 and 6) provide useful insights into how local (token-level) and global (dataset-level) properties interact. These theoretical results help explain why certain compression patterns lead to more effective representations.
The unification of seemingly disparate metrics under a common framework of matrix-based entropy is a significant theoretical contribution that appears technically sound, though some of the proofs would benefit from more detailed derivations.
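As an illustrative aside, the matrix-based entropy and effective-rank connection discussed here can be sketched in a few lines. This is my own minimal sketch, not the paper's code: the function name and toy data are invented, and the paper's exact formulation (e.g., kernel choice, α value) may differ.

```python
import numpy as np

def matrix_based_entropy(X, alpha=1.0):
    """Matrix-based (Renyi) entropy of a set of representations.

    X: (n, d) array of embeddings from one layer. The Gram matrix is
    normalized to unit trace so its eigenvalues form a distribution;
    alpha=1 recovers the von Neumann (Shannon-like) entropy.
    """
    K = X @ X.T
    K = K / np.trace(K)
    eigvals = np.linalg.eigvalsh(K)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    if alpha == 1.0:
        return float(-np.sum(eigvals * np.log(eigvals)))
    return float(np.log(np.sum(eigvals ** alpha)) / (1.0 - alpha))

# exp(entropy) acts as an "effective rank": roughly how many directions
# the representations actually occupy (near the ambient dimension for
# isotropic noise, near 1 for collapsed representations).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 32))
H = matrix_based_entropy(X)
print(np.exp(H))
```

Under this sketch, a mid-layer "compression valley" would show up as a dip in `exp(H)` across layers.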
Experimental Designs Or Analyses: The experimental designs are sound and well-executed. The layer-wise evaluation approach provides a comprehensive view of how representation quality evolves across model depth. The analysis of model architectures is systematic and controls for relevant variables by normalizing layer depths as percentages to allow fair comparison across different model sizes.
The training progression analysis (Figure 4) effectively captures how representations evolve during training, providing valuable insights into the dynamics of representation learning. The sub-component analysis of transformer blocks (Figure 15) is particularly illuminating, isolating the effects of different components (attention, MLP, residuals) on representation quality.
One limitation is that while the authors demonstrate superior performance of intermediate layers, they don't provide a systematic way to identify which specific intermediate layer is optimal for a given model or task without empirical testing.
Supplementary Material: No
Relation To Broader Scientific Literature: The authors have discussed this carefully in their paper.
Essential References Not Discussed: The authors have discussed this carefully in their paper.
Other Strengths And Weaknesses: **Strengths:**
- The integration of multiple perspectives (information theory, geometry, invariance) into a unified framework is elegant and insightful.
- The cross-domain validation with vision models significantly strengthens the generalizability of the findings.
- The analysis of sub-components within transformer blocks provides valuable insights into the mechanisms driving the observed patterns.
- The practical implications for representation extraction are significant and could influence how embeddings are utilized in downstream applications.
**Weaknesses:**
- The paper doesn't establish clear guidelines for selecting the optimal layer for a specific task or model without empirical testing.
- The connection between the observed patterns and the actual content being represented at different layers remains somewhat abstract.
- The paper doesn't discuss potential connections to other observed phenomena in transformers, such as the attention sink effect, which might provide additional explanatory power.
- The theoretical justification for why autoregressive models develop stronger mid-layer compression than bidirectional models could be more thoroughly developed.
Other Comments Or Suggestions: - The presentation of results could be enhanced by providing concrete examples of what types of features or patterns are being captured by intermediate vs. final layers.
- It would be valuable to include a discussion of the computational implications of using intermediate layer representations rather than final layers.
- A discussion of how these findings might inform architectural design choices for future LLMs would strengthen the impact.
Questions For Authors: 1. Have you investigated whether there are consistent patterns in which specific tasks benefit most from intermediate versus final layer representations? This could help develop guidelines for which layer to use for different application types.
2. Did you observe any relationship between the optimal layer depth and model scale? While you show that compression patterns become more pronounced in larger models, does this shift where the optimal layer for downstream tasks is located?
3. Your framework suggests that intermediate layers achieve an optimal balance between compression and signal preservation. Have you explored whether this balance can be explicitly optimized during training to enhance representation quality further?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer ctdG for the detailed, insightful review and constructive feedback. We appreciate the positive assessment of our claims, methods, theory, and experiments. Below, we address the reviewer's comments and questions, incorporating clarifications and new results. You may also be interested in the new results provided for Reviewers 9n2s (generative tasks). Due to character limits, we could not fully address your suggestions about architectural and computational implications, though we will incorporate these suggestions into the manuscript.
## New Results: Systematic Way to Select Optimal Layer
We agree guidance on unsupervised layer selection is valuable. Our correlation results showed that unsupervised metrics (entropy, DiME, etc.) can act as proxies for downstream performance. To demonstrate this, **we present new results** (Table 1) evaluating unsupervised layer selection. We compare the naive last layer, the supervised best layer, and layers chosen by selecting those with minimum DiME, Dataset Entropy, or InfoNCE per task. As shown, for both Pythia-410M and LLM2Vec-8B, **specific unsupervised metrics yield better average MTEB performance than the last layer**, confirming their utility. We will clarify this procedure and its benefits in the revised manuscript.
### Table 1: Average MTEB Performance with Different Layer Selection Schemes
| Model | Naive (Last) | Supervised (Best) | Unsupervised (min-DiME) | Unsupervised (min-Dataset Entropy) | Unsupervised (min-infoNCE) |
|--------------|:------------:|:-----------------:|:------------------------:|:---------------------------------:|:---------------------------:|
| Pythia-410M | 45.5% | 52.0% | 48.5% | 48.1% | 46.2% |
| LLM2Vec-8B | 63.9% | 66.3% | 60.0% | 50.4% | 64.3% |
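The min-metric selection rule described in Table 1 is simple to state in code. The sketch below is illustrative only: the scores and layer count are invented, not taken from the actual experiments.

```python
def select_layer_unsupervised(metric_per_layer):
    """Return the index of the layer with the smallest unsupervised score.

    metric_per_layer: per-layer values of a label-free metric such as
    DiME, dataset entropy, or InfoNCE (lower taken as better, matching
    the min-DiME / min-entropy / min-InfoNCE schemes above).
    """
    return min(range(len(metric_per_layer)), key=metric_per_layer.__getitem__)

# hypothetical per-layer InfoNCE scores for a 6-layer model
scores = [2.1, 1.7, 1.2, 1.4, 1.8, 2.0]
print(select_layer_unsupervised(scores))  # -> 2, an intermediate layer
```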
## Responses to Points
**Detailed Proofs** We agree that additional detail in the derivations for Theorems 5 and 6, connecting local and global token-level properties, would be beneficial. We will include more comprehensive derivations in the camera-ready submission.
**Features in Middle Layers / Concrete Examples:** We concur that discussing intermediate features would strengthen the paper. While detailed analysis is ongoing, we will incorporate a discussion of related work [4] on layer-wise feature clustering and plan to add a brief analysis in the Appendix to provide more concrete examples of information captured by intermediate layers.
**Attention Sink** Thank you for highlighting the connection to the attention sink phenomenon [3]. We hypothesize that the mid-layer compression we observe could be related: attention sinks might channel contextual information through a few critical tokens, creating an information bottleneck. We will incorporate a brief discussion of this potential link in the revised manuscript.
**AR vs. Bi-directional:** Regarding the request for a more developed theoretical explanation for differing compression patterns, we acknowledge this is an interesting area. While we attribute the empirical difference to distinct training objectives, a full derivation is complex. We will refine our discussion and note this as an avenue for future theoretical work.
## Responses to Questions
**1: Best Layers for Different Task Types** We have looked at which layers are optimal for different task types. For Pythia-410M, optimal layers vary: ~50\% depth for Classification, ~75\% for Clustering and Retrieval. This suggests task-dependent preferences. We will include these breakdowns in the Appendix and discuss them.
**2: Effect of Model Scale on Optimal Layer Depth** Optimal depth tends to shift deeper with scale (first half for small Pythia models, ~70\% for 160M/410M). This ~70\% depth roughly corresponds to where entropy begins increasing after the dip (Fig 10a, Appendix), suggesting the optimal layer often lies where representational capacity expands post-compression. We will elaborate on this in the revision.
**3: Explicit Optimization During Training**
While we didn't optimize these metrics during training, we agree it's a promising direction. Concurrent work [2] explored this for finetuning (finding significant compression was less suitable for math tasks requiring full input detail). Optimizing during pre-training is an interesting future direction we will mention.
## References
[1] Sanyal, et al, "Inheritune: Training smaller yet more attentive language models", 2024
[2] Arefin, et al, "Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning", 2024
[3] Xiao, et al. "Efficient Streaming Language Models with Attention Sinks." 2024
[4] Chen, et al. "Is Bigger and Deeper Always Better? Probing Llama Across Scales and Layers." 2023 | null | null | null | null | null | null |
LightningDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos | Accept (poster) | Summary: This paper introduces LIGHTNINGDRAG, a high-speed, high-quality drag-based image editing framework that significantly outperforms existing methods in both efficiency and success rate. Unlike prior approaches that rely on Generative Adversarial Networks or large-scale diffusion models—often requiring over a minute per edit—LIGHTNINGDRAG completes edits in about one second. By redefining drag-based editing as a conditional generation task, the method eliminates the need for computationally expensive latent optimization or gradient-based guidance. Trained on large-scale paired video frames, the model captures diverse motion patterns, such as object translations, pose shifts, and zooming, leading to improved accuracy and consistency. Despite being trained only on videos, LIGHTNINGDRAG generalizes well to various local deformations beyond its training data, such as lengthening hair or twisting rainbows. Extensive evaluations demonstrate the superiority of this approach.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. I have checked more qualitative results and the demo video.
Relation To Broader Scientific Literature: This paper improves the efficiency and accuracy of drag-based image editing methods using samples from videos.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. The technical contribution is somewhat limited. The overall structure of LightningDrag is essentially the same as Wear-Any-Way[1], especially in the reference encoder and point embedding attention components.
2. The qualitative experiments compared to other methods are not very comprehensive. There are only seven comparative examples in the paper and supplementary materials.
3. The author claims in the paper P3L150 that 'We begin by curating videos with static camera movement,' but the example in Fig. 1(c) shows a forward-moving camera. Does the author keep such data during the filtering process?
[1] Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment. In ECCV 2024.
Other Comments Or Suggestions: 1. It is suggested that the author add a section at the end of the introduction to summarize the contributions of this paper.
Questions For Authors: The reviewer's primary concern is the novelty of this paper, and the score will be adjusted based on the author's response.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the insightful feedback and appreciation of our work. Please find our response to your questions below.
**Q1**: Architectural design similar to Wear-Any-Way.
**A1**: Thank you for pointing this out. While our architectural design indeed draws inspiration from prior literature, including Wear-Any-Way, we respectfully emphasize several critical differences that clearly distinguish our work:
1) **Broader Scope:** Our approach addresses the general setting of drag-based editing on **arbitrary images**, whereas Wear-Any-Way specifically targets editing human clothing images. The difference in scope significantly impacts the underlying challenges and method pipeline design.
2) **Novelty in Video as Supervision Signal:** A core novelty of our paper is the insight that natural videos serve as a scalable and effective source of supervision for training general-purpose drag-based editing models. In contrast, Wear-Any-Way does not utilize video data and is restricted to image-based supervision within a narrow human-clothing domain. Due to our broader and more challenging scenario, our data pipeline also requires additional sophisticated steps (e.g., filtering camera movements, filtering frames with object key-points go out-of-bound, etc.) to accurately and scalably construct training pairs from diverse video data.
3) **Out-of-domain Generalization Insights:** As discussed in introduction (L80-L86) and illustrated in our empirical evaluation, our model notably generalizes to editing tasks that involve motions unseen in training (e.g., local deformation of rigid structures or objects). This observation provides novel insights into how video-based supervision can effectively enable image editing across broader, unseen domains, which is absent from Wear-Any-Way. For a more detailed discussion on this aspect, please see **A2** of "Response to Reviewer oUVx".
4) **Additional Novel Test-time Techniques:** Given the nature of our task and approach, we introduce additional novel test-time techniques not covered by Wear-Any-Way, such as Point-Following Classifier-Free Guidance (PF-CFG), CFG scale scheduler for PF-CFG, point augmentation, and sequential dragging. These innovations significantly improve performance and reliability for general-purpose drag editing.
We will revise the paper to better highlight these contributions and clarify the novelty of our approach.
**Q2**: The qualitative experiments compared to other methods are not very comprehensive.
**A2**: Thank you for the suggestion. We will include additional qualitative comparisons in the updated version to further strengthen our empirical analysis. That said, we would like to respectfully clarify that the seven examples presented were carefully selected to be representative, spanning a diverse range of styles, objects, and editing intentions. Moreover, the quantitative results in Table 1 are based on DragBench, a widely adopted benchmark dataset with diverse samples, which provides statistically meaningful evidence of our method’s effectiveness.
**Q3**: Example in Fig. 1\(c\) seems to be a forward-moving camera. Does the author keep such data during the filtering process?
**A3**: Thank you for the question. We confirm that all videos, including the one in Figure 1\(c\), were filtered according to the criteria stated in the paper to ensure static camera movement. The example in Figure 1\(c\) may appear to involve a forward-moving camera, but it actually features a static camera observing a scene where syrup is being poured onto a surface. The apparent change in scale results from the growing size of the syrup pile, not from any camera motion. We will clarify this in the revised manuscript to avoid confusion.
**Q4**: It is suggested that the author add a section at the end of the introduction to summarize the contributions of this paper.
**A4**: Thanks for the suggestion. We will include a summary of contribution at the end of the introduction in our updated version.
---
Rebuttal Comment 1.1:
Comment: My concerns have been well addressed, so I am willing to increase my rating to 3.
---
Reply to Comment 1.1.1:
Comment: We’re glad to hear that our rebuttal helped address your concerns. Thank you again for your thoughtful review and constructive feedback. We truly appreciate the time and effort you put into evaluating our work. | Summary: The paper presents a new approach, LightningDrag, for drag-style editing problem. LightningDrag features in significantly faster inference speed and good generalization. LightningDrag is trained by watching paired video frames. Finally, the authors showcase the performance of LightningDrag by comparing with other drag-style editing baselines.
## update after rebuttal
The rebuttal resolved my concerns. Please include discussions accordingly to the revised paper. I've increased my rating to weak accept.
Claims And Evidence: I appreciate most of the claims from the authors, but there are some claims that may need further discussions:
- The training mechanism proposed by the paper seems to show a significant overlap with Magic Fixup [1], which also uses paired video frames for training. However, there is no discussion or comparison with it in the paper. It might be very helpful to discuss or show by experiments the main difference with Magic Fixup.
- It is mentioned in L080-L086 that LightningDrag generalizes well to out-of-domain editing even when the transformations involve deformations. However, there is no theoretical or empirical evidence for it. It would be helpful if the authors could provide some explanation for this part.
[1] Alzayer, H., Xia, Z., Zhang, X., Shechtman, E., Huang, J.B., and Gharbi, M. Magic fixup: Streamlining photo editing by watching dynamic videos. arXiv preprint arXiv:2403.13044, 2024.
Methods And Evaluation Criteria: The method part generally looks good to me. The authors design the model based on Stable Diffusion (SD)'s inpainting model, conditioned on features from IP-Adapter, appearance & reference image features from the additional appearance encoder, point control from point embedding. It might be a plus to visualize these feature maps for a clear understanding of the model, but it is not necessary.
The evaluation criteria also look good to me. The authors follow DragBench and use IF and MD as evaluation metrics. The time efficiency is evaluated by runtime under the same experiment settings.
Theoretical Claims: There is no theoretical claim mentioned in the paper. This paper is application-driven.
Experimental Designs Or Analyses: Authors compare with most of the previous drag-style editing methods, but there are some missing:
- [1] Alzayer, H., Xia, Z., Zhang, X., Shechtman, E., Huang, J.B., and Gharbi, M. Magic fixup: Streamlining photo editing by watching dynamic videos. arXiv preprint arXiv:2403.13044, 2024.
- [2] Mou, Chong, et al. "Dragondiffusion: Enabling drag-style manipulation on diffusion models." ICLR 2024.
- [3] Jiang, Ziqi, Zhen Wang, and Long Chen. "CLIPDrag: Combining Text-based and Drag-based Instructions for Image Editing." ICLR 2025.
Also, there's no ablation study in the paper. It might be helpful to include an ablation study to show the effectiveness of each component.
Supplementary Material: The supplementary material shows a demo for fast drag-style editing.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: One of the contributions, learning from paired video frames, is quite similar to Magic Fixup, as mentioned in "Claims And Evidence". However, it has not been well discussed in the paper.
Other Strengths And Weaknesses: Other strengths:
- The paper shows a promising result in fast drag-style editing. The quality of the end product is competitive to state-of-the-art methods.
- LightningDrag shows a good appearance preservation ability compared to other methods, as shown in Figure 7.
- The test-time strategies provide further refinement for users if needed.
Other weaknesses: N/A.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see my reviews above. My major concern for this paper is its similarity to Magic Fixup. More discussions or evaluations would be beneficial to differentiate two papers.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the insightful feedback and appreciation of our work. Please find our response to your questions below.
**Q1**: Need discussion on differences with Magic Fixup
**A1**: We acknowledge that our approach shares with Magic Fixup the general idea of leveraging video data; however, we respectfully highlight several key differences:
1) **Editing Instruction**:
Magic Fixup relies on user-provided "coarse edits", typically involving rough object translation or duplication. In contrast, our method employs drag-based editing, allowing users to specify precise points and exact movement trajectories. Thus, the editing intentions we target fundamentally differ from Magic Fixup. This further leads to differences in architectural design choices: our architecture directly injects the encoded point embedding into self-attention modules for precise spatial control, while Magic Fixup places the editing instruction at the input of the model.
2) **Complex Editing Capability**:
Magic Fixup excels at edits such as object translation/movement, but might struggle with intricate transformations involving rotations with complex local details. For example, in Figure 7 (row 3), the editing intention of significantly rotating a face from side-view to frontal-view would be challenging to deliver using coarse edits. On the other hand, our drag-based approach clearly conveys and effectively executes such complex editing intentions.
3) **Data Construction Pipeline**:
Our data preparation process differs significantly from Magic Fixup, involving precise point-tracking using state-of-the-art methods (e.g., CoTracker-2) and stringent filtering for static-camera videos. This pipeline provides accurate, scalable data specifically suited for general drag-based editing.
4) **Novel Test-Time Techniques**:
We further introduce novel test-time strategies, including Point-Following Classifier-Free Guidance (PF-CFG), dynamic scale scheduling for PF-CFG, point augmentation, and sequential dragging. These innovations enhance precision and robustness, which are absent from Magic Fixup.
5) **Simplified Real-World User Interface**:
Magic Fixup’s user interface inherently requires complex modules (e.g., segmentation models, rotation tools, zooming functions), making the interaction complicated. Conversely, our approach simplifies user interaction to just point-clicking and mask-painting, enabling intuitive editing and better user experience.
We will incorporate these clarifications into our revised manuscript.
**Q2**: Need more explanation for out-of-domain generalization edits.
**A2**: We would like to clarify our claim with both empirical evidence and further explanation.
Empirically, we demonstrate generalization to out-of-domain editing with deformation through a range of qualitative examples in Figures 3 to 7. In addition, our quantitative results are based on DragBench, which includes many samples involving local deformations (e.g., lengthening an object or deforming parts of a rigid structure). The strong performance on this benchmark supports our claim.
Conceptually, we view this generalization as a form of compositional generalization. While our training videos do not contain explicit editing instructions such as stretching hair or lifting mountains, they do include non-rigid deformations --- such as in Figure 1\(c\), where syrup accumulates and changes shape over time. By learning to model such non-rigid transformations, the model is able to generalize to similar deformation behaviors even in unseen or rigid-object scenarios.
Furthermore, our proposed Point-Following Classifier-Free Guidance (PF-CFG) plays a critical role in enabling this behavior. As shown in Figure 4, PF-CFG helps ensure that the semantic content around handle points is faithfully dragged toward the target, improving control and robustness. The benefit of PF-CFG is further confirmed in our ablation study (see **A1** of "Response to Reviewer RdPN").
We will incorporate these points into the revised manuscript to better support and clarify our claim.
**Q3**: Missing comparisons of CLIPDrag, DragonDiffusion, MagicFixup.
**A3**: Since our work is concurrent with CLIPDrag, we did not include it in this version of the submission. We will include its results in the updated version.
As for DragonDiffusion, we have compared with its improved version, DiffEditor. In addition, we have discussed DragonDiffusion in our introduction and related work. We will include the results of DragonDiffusion for the sake of completeness in our updated version.
As for MagicFixup, since it follows a very different editing instruction, it might be challenging to conduct a fair large-scale comparison with our approach. We will include qualitative comparisons with MagicFixup in our updated version and expand the detailed discussion of the differences between the two approaches.
**Q4**: Lacking in quantitative ablation studies.
**A4**: Please see **A1** of "Response to Reviewer RdPN" | Summary: This paper presents Lightningdrag, a diffusion model trained on video data, enabling accurate and consistent drag-based edits within seconds, leveraging source noise prior and point-following classifier-free guidance for improved accuracy and consistency.
Claims And Evidence: 1. Lightningdrag formulates drag-based image editing as a diffusion-based approach trained on video data, incorporating user-specified points via point embedding attention, which is novel.
2. The paper proposes novel strategies, including test-time refinements using noised source latents and inverse square classifier-free guidance, which significantly improve the quality of editing results.
3. As demonstrated in the supplementary videos and Table 2, Lightningdrag achieves notable improvements in time efficiency, completing edits within approximately one second, a substantial advantage over previous methods.
Methods And Evaluation Criteria: 1. The proposed methods and evaluation criteria make sense for the targeted application. Table 2 and supplementary videos effectively demonstrate the advantages of the method in terms of efficiency and editing quality, clearly outperforming previous approaches.
2. Figures and qualitative results illustrate the effectiveness of source noise prior, inverse square classifier-free guidance, and sequential dragging. However, the paper lacks quantitative ablation studies for each individual technical contribution, limiting detailed insights into their respective impacts.
Theoretical Claims: No Theoretical Claims
Experimental Designs Or Analyses: Overall, the experiment is extensive and thorough.Authors provide the detailed experimental settings and extra quantitate results in the supplementary.
Supplementary Material: Authors provide the detailed video results in the supplementary.
Relation To Broader Scientific Literature: Already discussed in the paper.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
- The presented approach achieves clear improvements in editing quality and efficiency, outperforming prior latent-diffusion and GAN-based methods.
**Weaknesses:**
1. Although the methods introduce beneficial components such as source noise prior and point-following classifier-free guidance, the paper lacks quantitative ablation studies, limiting the understanding of each component’s individual contribution.
2. The reason behind the significantly improved inference speed (as shown in Table 2) is not sufficiently explained, leaving the underlying factors contributing to high efficiency unclear.
Other Comments Or Suggestions: No
Questions For Authors: See weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the insightful feedback and appreciation of our work. Please find our response to your questions below.
**Q1**: Lacking in quantitative ablation studies.
**A1**: Thanks for the advice. We conduct the detailed quantitative ablation study as you suggested. Results are given below.
To start with, we conduct an ablation study to understand the effect of point-following classifier-free guidance (PF-CFG).
||No CFG|Constant CFG|Square CFG|Linear CFG|Inv. Square CFG|
|----|-----|-----|-----|-----|-----|
|MD(↓)|22.27|17.76|17.73|18.59|18.95|
|IF(↑)|0.9071|0.8486|0.8743|0.8829|0.8896|
In the above table, the first column shows the results without PF-CFG, while the rest are results with PF-CFG under different CFG scale schedules. Comparing the first column with the second, we show that adding PF-CFG significantly enhances point-following (better MD) while compromising overall appearance preservation (worse IF). As we explore different CFG scale schedules (from square to inv. square, the CFG value decreases increasingly faster throughout the denoising process), we show that decaying the CFG value during the denoising process can effectively reduce the degradation in IF. Among these schedulers, we select inv. square as our default CFG scale scheduler to attain the best overall appearance preservation while maintaining good point-following. Our numerical results here align well with the visual illustration in Figure 4.
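For intuition, the four CFG scale schedules compared in this ablation can be sketched as decay curves over the denoising process. The formulas below are plausible stand-ins chosen so that "inv. square" decays fastest early on, matching the ordering described; the paper's exact definitions and the `w_max` value may differ.

```python
def cfg_scale(step, total_steps, w_max=4.0, schedule="inv_square"):
    """Illustrative CFG scale at a given denoising step.

    All decaying schedules go from w_max at step 0 to 0 at the final
    step; "inv_square" drops fastest at the start, "square" slowest.
    """
    t = step / total_steps  # progress through denoising, in [0, 1]
    if schedule == "constant":
        return w_max
    if schedule == "square":
        return w_max * (1.0 - t ** 2)      # slow early decay
    if schedule == "linear":
        return w_max * (1.0 - t)
    if schedule == "inv_square":
        return w_max * (1.0 - t) ** 2      # fast early decay
    raise ValueError(f"unknown schedule: {schedule}")

# compare the curves at the start, midpoint, and end of denoising
for name in ("constant", "square", "linear", "inv_square"):
    print(name, [round(cfg_scale(s, 10, schedule=name), 2) for s in (0, 5, 10)])
```

A faster early decay means strong point-following guidance only in the first (structure-setting) denoising steps, which is consistent with the observation that it reduces the IF degradation.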
Furthermore, we conduct an ablation study on the choice of noise prior.
||Pure Noise|Mixed Noise Latents|Copy and Paste|Noise Source Latents|
|----|-----|-----|-----|-----|
|MD(↓)|19.76|19.01|18.53|18.95|
|IF(↑)|0.8878|0.8861|0.8876|0.8896|
From the results, one can observe that "Noise Source Latents" and "Copy and Paste" outperform the other two strategies. Compared to "Copy and Paste", "Noise Source Latents" has slightly better IF and slightly worse MD, so it is hard to discern which of the two is better. However, since "Copy and Paste" involves a much more complicated implementation, we select "Noise Source Latents" as our default strategy in the spirit of simplicity. Moreover, the numerical results presented here also align well with the visual illustration in Figure 3.
**Q2**: The reason behind the improved inference speed is not sufficiently explained.
**A2**: Thank you for pointing this out. The significant speedup arises from our reformulation of drag-based image editing as a conditional generation task, as mentioned in the abstract and introduction. Unlike prior methods such as DragDiffusion or DiffEditor, which rely on iterative latent optimization or gradient-based guidance (sometimes requiring up to 80 iterations of forward and backward passes through the diffusion UNet), our method avoids this costly process. Instead, we achieve drag-based editing via a single feed-forward pass through our conditional diffusion model, making the latency comparable to standard diffusion image generation (around 1-2 seconds). We will clarify and include this detailed explanation in the revised version of the paper.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. Additional results.
Relation To Broader Scientific Literature: Accelerate drag-based image editing.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. The idea of injecting drag instructions as conditions into an inpainting diffusion model is well-founded. This approach eliminates the need for heavy latent optimization, resulting in significantly faster editing speeds.
2. The authors explore various noise initialization strategies and implement point-following classifier-free guidance to enhance performance.
3. LightningDrag achieves much faster inference speeds compared to existing methods.
4. The presentation is clear and easy to follow.
Weaknesses:
1. The overall framework involves fine-tuning an inpainting diffusion model with additional conditions. Both the ID-preserving attention and point-following attention techniques are borrowed from previous work. Additionally, LightningDrag encodes dragging conditions as full-resolution image features, which is redundant since most values are set to zero. Although the authors implement several optimizations to accelerate the diffusion process, some module designs remain sub-optimal.
2. The authors mention additional operations, such as point augmentation, but more analysis and experiments are needed. For example, what is the success ratio without these augmentations?
3. SDEDrag outperforms LightningDrag on DragBench. While the authors claim that SDEDrag often results in undesired identity mapping, it would be beneficial to conduct a user study to evaluate this claim more thoroughly.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thanks for the insightful feedback and appreciations in our work. Please find our response to your questions below.
**Q1**: Limited novelties in architectural design.
**A1**: As discussed in the related work section, although we take inspiration from some previous literature for architecture design, we are targeting a more general setting: drag-based editing on **general images**, rather than specific domains like faces or objects.
One of the key novelties of our work is the insight that natural videos can serve as a viable and scalable training signal for general drag-based image editing. Moreover, as discussed in the introduction (e.g., Page 1 L80 to L86) and supported by empirical results (e.g., Table 1 and Figure 7), our approach generalizes well to out-of-domain manipulations that are not explicitly observed during training (e.g., hair lengthening, furniture deformation). This demonstrates a new perspective on leveraging video-based supervision for image editing with conditional diffusion models (See **A2** of Response to Reviewer oUVx for more discussion).
In addition, our training data construction pipeline can inspire following work on how to preprocess video frames to train image editing models. Also, our framework introduces several novel test-time techniques, including point-following classifier-free guidance (PF-CFG), point augmentation, and sequential dragging, which improve controllability and performance. Finally, we will release our code and models to support open research.
We will revise the paper to better highlight these contributions and clarify the novelty of our approach.
**Q2**: Encoding dragging conditions as full-resolution features might be redundant.
**A2**: For the dragging conditions, we would like to clarify that the point-embedding modules are relatively lightweight, accounting for only about 2% of the total model parameters (40M out of 2B). As such, encoding the point conditions as full-resolution feature maps does not introduce significant computational or memory overhead. More importantly, this design choice provides a clear spatial inductive bias, allowing the model to more accurately interpret and follow user-specified dragging instructions. We found this spatial encoding to be beneficial for learning precise and controllable drag-based editing behaviors. We will include this discussion in the updated version to better explain the motivation behind our design.
**Q3**: More analysis are needed for "Drag Engineering" operations. What is the success rate without them?
**A3**: All evaluation results --- both qualitative (Figures 7 and 8) and quantitative (Table 1) --- are obtained **without** using additional techniques such as point augmentation or sequential dragging. This demonstrates that our approach is robust and effective in many common cases without requiring extra operations. The additional techniques are designed to handle more challenging scenarios. Based on our experience with the DragBench dataset, our model produces satisfactory results on 162 out of 205 test samples without any additional operations. With point augmentation and sequential dragging, we are able to improve editing results on 21 more samples. We will clarify this in the revised version for better transparency.
**Q4**: Need more clarification on results of SDEDrag.
**A4**: A robust drag-based editing approach must achieve strong performance on **both** Image Fidelity (IF) and Mean Distance (MD). IF evaluates how well the overall appearance is preserved after editing, while MD measures how accurately the handle points are moved to their target positions. Although SDEDrag achieves a slightly better IF score, its MD (45.779) is significantly worse than ours (18.95), suggesting that it often fails to complete the intended dragging operation and instead produces results that remain close to the original image. This is also evident in the qualitative comparisons in Figure 7, where SDEDrag introduces minimal changes without effectively moving the handle points. We will include additional qualitative examples in the supplementary material to further support this point. | null | null | null | null | null | null |
LEVIS: Large Exact Verifiable Input Spaces for Neural Networks

Decision: Accept (poster)

Summary: The paper proposes an approach for the generation of verifiable input spaces for a neural network:
Instead of providing an answer on whether a given input space region is safe, the paper instead proposes to generate an input region for which the NN is verified. The paper claims that this technique can be used for model selection. The approach is evaluated on a set of benchmarks.
## update after rebuttal
I appreciate the authors' response which lifted all open questions and concerns I had.
I trust that the authors will update their paper as promised in the rebuttal.
In particular, regarding the approximative nature of the solution in Section 4.2 and regarding the convexity assumption.
Following the rebuttal, I have raised my score to "Accept" and stand with this score -- I am in favour of accepting the paper.
Claims And Evidence: See issues discussed below.
Methods And Evaluation Criteria: The proposed methodology seems to fit the problem well even though I have numerous concerns with the theory as currently presented (see "Theoretical Claims").
Issues with the evaluation are discussed in "Experimental Design Or Analyses".
Theoretical Claims: The paper seems to switch between notation where the property is satisfied if f*>0 and notation where the property is violated when f < 0 (only one of the two can be correct as 0 must be either safe or unsafe).
Importantly, this issue also shows up in the optimization problem formulated in (1a)-(1f):
In my understanding, since (1f) is a strict inequality, this optimization problem might not have a minimum, but only an infimum.
Consequently, it might be advisable to go for the formulation where f(x)=0 is already unsafe? However, in that case the mathematics of the paper should be uniformly phrased in this manner.
Concerning the time complexity analysis on page 4:
While there may be empirical advantages to the chosen polynomial NN encoding, it is not clear to me why the NLP solver would only have a runtime of O(N log N). The paper does not provide any detailed analysis on why the worst-case computational complexity of this optimization problem would be so low. In particular, it seems to me that the bi-active neurons, while apparently a problem for optimization, do not lead to spurious counterexamples as for $p_j^i=q_j^i=0$ we have a consistent assignment of a ReLU's input and output variables (namely $z^i=\hat{z}^i=0$). Consequently, since NN verification alone does not require any optimization, a low complexity would break the NP-completeness which I find implausible.
I hope we can resolve this misunderstanding through the rebuttal.
Concerning Theorem 4.1:
The guarantees on LEVIS-$\alpha$ also require that $\mathcal{C}$ is convex and closed, which is dropped in the paper version of Theorem 4.1 and only appears in the appendix version.
Notably, this is quite a strong assumption: Note that even the visualization in Figure 2 does not satisfy this assumption as the red shaded area is *not* a convex set. If this assumption is indeed necessary (which seems to be the case judging from the proof), this warrants further discussion.
Experimental Designs Or Analyses: In Section 5.2 the paper compares the efficiency of their new relaxation with existing approaches on one input region for MNIST. In B.2 you mention that for your evaluation you use as the initial point the first image in the dataset. How does the performance of your approach (in 5.1 but also more generally) change for other initial points? Do you observe similar speedups? Is there a difference in behaviour across MNIST classes? I realize that it might be too late now to run these experiments, but this information would be essential to get a better understanding of the approach. As NN verification performance tends to have drastic variation across benchmarks and inputs, it is not clear whether the observed speedups hold in general or are specific to this particular NN and input region.
Additionally, a comparison to the techniques from [AAAI24] would be interesting (see "Relation to Broader Scientific Literature").
In case their technique is not directly comparable, the approach should be at least discussed as related work, as it seems strongly related.
Supplementary Material: No.
Relation To Broader Scientific Literature: [AAAI24] proposed an approach for the enumeration of safe regions for a given NN specification. Even though the paper's main contribution is a probabilistic approach, the work seems strongly related to the paper at hand and should be discussed.
[AAAI24] https://ojs.aaai.org/index.php/AAAI/article/view/30134
Essential References Not Discussed: Minimization of radii where specifications are verified has previously been explored [CP20] (see Section 4.3) which in turn cites another well-known work that minimizes adversarial radii outside the verification context [arxiv13] (see 4.1).
[CP20] https://link.springer.com/chapter/10.1007/978-3-030-58475-7_50
[arxiv13] https://arxiv.org/abs/1312.6199
Other Strengths And Weaknesses: I like the idea of turning the problem of NN verification on its head and instead search for the regions where robustness is verifiable.
I also like the proposed approaches to search for large epsilon balls to this end.
Other Comments Or Suggestions: Page 2:
- $f^*$ is not explicitly defined in the background section.
- It might be sensible to cite relevant literature on the NP-completeness of NN verification ([CAV17] or [RP21])
Page 4:
- missing space before (Scholtes, 2001)
- As mentioned before $p_j^i=q_j^i=0$ seems to have a consistent assignment for the ReLU inputs and outputs via $z^i=\hat{z}^i=0$
Consequently, this seems to be an issue specific to the performed optimization. It would be good to emphasize this more.
Page 5:
- There is a recurring inconsistency between the usage of $b_i$ and the usage of $b^i$ in places that seem to mean the same variable (e.g. line 3 vs line 6 of Algorithm 1)
- Degree sign of 90° misses ^
Page 6:
- If my understanding is correct Line 12 and line 13-15 in Algorithm 2 should be the other way around?
Page 7:
- It is not clear what benchmark is used for Table 2 and the experiments described in Section 5.3
- "of five random implementations": I think this is supposed to read executions/runs?
General:
- Your paper does not explain how your proposed approach could aid model selection, even though the paper starts and ends with this suggestion. Can you provide more concrete details on how your approach could be helpful here?
[CAV17] https://link.springer.com/chapter/10.1007/978-3-319-63387-9_5
[RP21] https://link.springer.com/chapter/10.1007/978-3-030-89716-1_10
Questions For Authors: **(Q1)** Please respond to my concerns outlined in "Theoretical Claims".
If I am mistaken about them or you have answers on how to fix these issues, I would be willing to increase my score.
**(Q2)** How does your approach compare (empirically or conceptually) to the work in [AAAI24]?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful feedback. We address your concerns below.
---
### Theoretical Claims
**#1:** The paper inconsistently uses $f^* > 0$ for satisfaction and $f < 0$ for violation.
We leverage $f > 0$ to ensure satisfaction and $f < 0$ to ensure violation—these are distinct objectives. In Problem (1a)-(1f), we seek a verifiable ball by finding the nearest violating point $x^*$, which defines the ball’s radius $r = \|x^* - c\|_p$. This formulation guarantees a minimum is sought.
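As a toy illustration of this radius formulation (a brute-force sketch with a hypothetical property function, not the paper's MIP/NLP solver): scan a grid for violating points where $f(x) \leq 0$ and take the distance to the nearest one as the certified radius.

```python
import itertools
import math

def nearest_violation_radius(f, center, box=2.0, steps=201):
    # Brute-force stand-in for problem (1a)-(1f): find the violating
    # grid point (f(x) <= 0) closest to `center`; its distance is the
    # radius of a ball containing no (grid) violations.
    xs = [-box + 2 * box * i / (steps - 1) for i in range(steps)]
    best = math.inf
    for x1, x2 in itertools.product(xs, xs):
        if f((x1, x2)) <= 0:  # violating point
            best = min(best, math.hypot(x1 - center[0], x2 - center[1]))
    return best

# Hypothetical property: f > 0 holds exactly inside the unit disk, so the
# largest verifiable ball around the origin has radius 1.
f = lambda x: 1.0 - (x[0] ** 2 + x[1] ** 2)
r = nearest_violation_radius(f, center=(0.0, 0.0))
```

Here the recovered radius matches the analytic answer of 1; the paper instead solves an exact optimization over the network's ReLU constraints rather than scanning a grid.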
**#2:** Claim of $\mathcal{O}(N \log N)$ NLP runtime lacks justification.
Our solver guarantees global optimality when ReLU activation states (active/inactive) remain consistent with the given index sets $I_+$ and $I_-$. This introduces a constrained space where the optimization is exact. The empirical optimality gap is zero for DC-OPF and only 0.004 for MNIST, indicating high fidelity in practice.
**#3:** Theorem 4.1 omits convexity/closedness assumptions in main text.
We appreciate you catching this—our empirical results indicate most problems converge. A common failure mode occurs when the verifiable region is open and ray-restricted subproblems lack solutions. These unbounded directions correspond to insensitive model dimensions and are excluded when computing new centers. The result also holds for star-shaped regions and finite-unions of convex sets (up to a measure-zero set of oscillators). While Algorithm 1 could be modified to detect oscillations or constrain to regions around the origin to force convexity, in both cases, convergence follows from Theorem 4.1 or its generalization. Since these issues didn’t arise in practice, we omitted them. The empirical performance mirrors the convexity-based guarantee, which we should have included explicitly.
**#4:** $p^i_j = q^i_j = 0$ seems to imply $z^i = \hat{z}^i = 0$ for ReLU.
Setting $p^i_j = q^i_j = 0$ does not imply $z^i = \hat{z}^i = 0$. These values result from solving the nonlinear program in Equations (2a)–(2c), not from direct assignment. The solution reflects ReLU behavior.
---
### Experimental Evaluation
**#5:** Section 5.2 only evaluates MNIST's first image. How robust is performance?
We evaluated multiple initial points and classes. As long as the input is correctly classified, performance is consistent. Poor initial points (e.g., radius = 0) lead all solvers to converge quickly. Runtime scales with neuron count, not class count. With more neurons, our method outpaces MIP-based baselines.
Below, we show more experiments on MNIST and CIFAR-10:
| Dataset | Proposed (s) | MIP$_\text{CROWN}$ (s) | Opt. Gap |
|-----------|--------------|------------------------|----------|
| MNIST | 1.34 | 10.48 | 0.004 |
| CIFAR-10 | 5.96 | 17.47 | 0.0009 |
**Takeaway:** The approach scales to larger networks, and compute time is independent of the initial point.
**#6:** Clarify benchmark and setup in Table 2; the phrase “five random implementations” is unclear.
- **Benchmark:** NN regression for OPF, as in [Brix et al., 2023].
- **Baseline:** Lipschitz-based bound from [Fazlyab et al., 2021].
- **Setup:** We solve (1)-(2) for the NLP variant and compare radius to the baseline.
- **“Five random implementations”:** Due to training randomness, we repeat 5× and report intervals.
---
### Related Work
**AAAI24 Comparison:** AAAI24 proposes sampling-based guarantees; we focus on deterministic ones via LEVIS. Our regions give 100% coverage under fixed activations and stronger guarantees at lower cost. This is useful under adversarial risks [Goldwasser et al., 2022].
**Radius Works [CP20], [arxiv13]:** [CP20] finds nearest disagreements, [arxiv13] the closest confirmation point. We find the nearest *misclassified* point. These are complementary and will be cited.
**NP-Completeness [CAV17], [RP21]:** We acknowledge verification and reachability are NP-complete. Our solver yields exact results when ReLU states—active ($I_+$) or inactive ($I_-$)—are fixed. Though this may introduce an optimality gap, empirical results show it’s negligible (e.g., 0 for DC-OPF, 0.004 for MNIST).
---
### Other Clarifications
**Variables & Notation:** We used $\mathcal{P}$ (not $\mathcal{F}$) for the verification condition. Fixed citation spacing, 90° notation, and unified $b_i$ vs $b^i$.
**Algorithm 2 Logic:** Early termination ensures efficiency; the order is correct.
**Model Selection:** We estimate verifiable input space to assess robustness. In safety-critical settings (e.g., power grids), this supports model selection based on stability guarantees (Cui et al., 2023).
---
### References
- Brix et al. (2023), *arXiv*
- Cui et al. (2023), *NeurIPS*
- Fazlyab et al. (2021), *CDC*
- Goldwasser et al. (2022), *arXiv*
---
Rebuttal Comment 1.1:
Comment: # POST REBUTTAL:
Dear Authors,
thank you for your response.
**#1:** What I meant to say with my comment is that $f=0$ should probably be one of the two, i.e. either $f \geq 0$ is satisfaction and $f < 0$ is violation or vice-versa.
**#2:** It would be good to clarify this in the paper even more. Initially this read to me like you claim your algorithm can solve general NN verification in polynomial time.
**Response to Weakness #1 of h45y:** I think this clarification helps.
**Algorithm 2 Logic:**
My concern is not the early termination, but that you seem to add $m$ to $Q$ and subsequently modify it.
Since $m$ is not used anymore afterwards, this still seems to me like it's the wrong order.
Otherwise I am happy with your responses and will thus increase my score to Accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer wzBC,
Thanks for your constructive comments and feedback.
**#1**: Thank you for the additional clarification. We agree that the violation condition should include equality, and we will update Equation (1f) to $f \leq 0$.
**#2**: You're right—we will explicitly emphasize in Section 4.2 that our solver provides only an approximate solution in polynomial time.
**Algorithm 2 Logic:** You're correct—lines 12 and 13–15 should be swapped. In our implementation, we add m to Q after executing lines 13–15. We'll correct this typo in the paper. Thanks for catching it.

Summary: In Neural Network Verification, the goal is to verify that a certain input-output relation holds. E.g. all inputs in some local neighborhood should have the same classification. This is a challenging (NP-hard) task. The authors propose a new technique that can be used to compute an *underapproximation* of the preimage of an output set that satisfies some linear constraint. All inputs in this preimage are guaranteed to satisfy the constraint. To this end, they also propose to use complementary constrained optimization to speed up MIP problems. The authors evaluate their new technique on a variety of applications.
Claims And Evidence: *LEVIS-alpha can find "a single, large verifiable ball that intersects at least two boundaries of a bounded [output] region"*
The argument is sound and convincing: 1a-1f find the closest adv. ex, so every point closer to the center must not be an adv. ex.
*LEVIS-beta "captures the entirety of the verifiable space"* I find this claim to be too strong - according to the last sentence in 4.3.3, LEVIS-beta depends on the random initialization. So I fear there's a risk of not finding the "right" random seeds and therefore not identifying the entire verifiable space. Even in the best case, it would only approximate it up to $\epsilon$. "LEVIS-beta is an any-time algorithm that computes increasingly larger lower bounds to the verifiable space" may be a more appropriate claim.
*Complementary constrained optimization speeds up the MIP verification*
This seems to be confirmed by the experiments.
Methods And Evaluation Criteria: The benchmarks are appropriate in the sense that other preimage papers use similarly small network architectures (e.g. Katha et al.). But the paper would benefit from a discussion why larger networks cannot be processed this way - or if they can, the respective experiments.
Theoretical Claims: I checked Theorem 4.1 and did not find any issues
Experimental Designs Or Analyses: The experimental design seemed to be correct, but I did not check them in depth.
Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: The key contribution is LEVIS-alpha/beta, which is a novel idea.
Essential References Not Discussed: I'm not aware of uncited essential references.
Other Strengths And Weaknesses: The paper proposes two interesting algorithms (LEVIS-alpha/beta) that are novel in the literature. It's a promising approach to compute an *underapproximation* of the preimage of an output set, which has significant applications e.g. for safe controllers.
Other Comments Or Suggestions: 1) Could the MIP using 1a-1f be replaced by an adversarial search? This would not guarantee that there are no closer adversarial examples, but could be much faster. As long as the original MIP is solved *eventually* (as LEVIS converges to a center), this should be sufficient, right?
2) $p^i$ and $q^i$ are used in Section 4.2 but not properly introduced. Their meaning can be inferred, but a short description would be helpful. In what cases do they take which values? Currently, they are called the "complementary variables", but that term is not defined.
Questions For Authors: 1) In Section 4.2, you state that you use $p^i$ and $q^i$ to decide for which neurons to include integer variables. How do you first compute these values? Do you run NLP once, then extract them, and then create the MIP? Could this be replaced/improved by e.g. using IBP (interval bound propagation) instead?
2) If you sample verification queries (or use a benchmark from VNN-COMP, if there is one with a network small enough to support your procedure), how many of them could you immediately verify using your technique? If the query is about an input region that's covered by your preimage, it's known to be safe. How does this runtime compare to the time you need to compute the preimage?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and encouraging review. We address your comments and suggestions below.
---
### Clarifications on Claims
> **Claim Concern:** The statement that "LEVIS-beta captures the entirety of the verifiable space" is too strong, especially since the method depends on random initialization.
**Response:**
We agree and will revise the wording to more accurately reflect that LEVIS-beta computes increasingly larger lower bounds on the verifiable space. While LEVIS-beta often achieves broad coverage, we do not claim it captures the entire space. It provides a growing conservative approximation under limited compute.
---
### Questions
> **Q1:** In Section 4.2, how are the complementary variables $p^i, q^i$ computed? Do you first run an NLP to extract them, then create the MIP? Could this be improved with IBP?
**Response:**
We compute the complementary variables $p^i, q^i$ by solving the NLP in (2a)–(2c), where they are jointly optimized as decision variables. IBP is not used here directly, but it can help in MIP formulations by tightening bounds on ReLU outputs and reducing the number of integer variables.
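For reference, interval bound propagation (IBP) through one affine-plus-ReLU layer takes only a few lines (a generic sketch with made-up weights, not the paper's code); neurons whose pre-activation interval lies entirely above or below zero are stable and need no integer variable in a MIP:

```python
def ibp_affine(lo, hi, W, b):
    # Propagate elementwise input bounds [lo, hi] through z = W x + b
    # by picking the worst-case endpoint for each weight's sign.
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            lo_acc += w * (l if w >= 0 else h)
            hi_acc += w * (h if w >= 0 else l)
        new_lo.append(lo_acc)
        new_hi.append(hi_acc)
    return new_lo, new_hi

def stable_neurons(lo, hi):
    # "+": stably active (lo >= 0), "-": stably inactive (hi <= 0),
    # "?": unstable -- only these need binary variables in a MIP.
    return ["+" if l >= 0 else "-" if h <= 0 else "?" for l, h in zip(lo, hi)]

# Hypothetical 2-in / 2-out layer with inputs in [-1, 1]^2.
W = [[1.0, -1.0], [0.5, 0.5]]
b = [3.0, -2.0]
lo, hi = ibp_affine([-1.0, -1.0], [1.0, 1.0], W, b)
states = stable_neurons(lo, hi)
```

In this toy layer, neuron 0 has bounds [1, 5] (stably active) and neuron 1 has bounds [-3, -1] (stably inactive), so a MIP encoding would need binaries for neither.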
> **Q2:** If sampling verification queries (e.g., from VNN-COMP), how many can your technique verify immediately? What is the runtime comparison between verification and preimage computation?
**Response:**
Our solver is applicable to general ReLU-based networks but we have not yet evaluated on VNN-COMP queries due to time constraints. Since the problem formulations differ, direct runtime comparison between verification and preimage computation is nontrivial. Instead, we compare against a standard MIP baseline under similar conditions (Section 5.2).
---
### Minor Comments & Suggestions
> **S1:** Could MIP constraints 1a–1f be replaced by adversarial search? It would not guarantee completeness but may be faster.
**Response:**
Adversarial search is appealing but lacks full coverage of the adversarial space. Techniques based on gradients can be misleading due to local sensitivity. MIP (or its relaxations) is needed to map **global** bounds. Still, adversarial search could assist with initialization or guidance—this is a promising direction for future work.
> **S2:** $p^i$ and $q^i$ appear in Section 4.2 but are not clearly introduced.
**Response:**
Agreed—we will clarify that $p^i$ and $q^i$ are **complementary slack variables** enforcing the relaxed constraint $p^i q^i \leq \varepsilon$ in the NLP formulation.
> **S3:** Please discuss whether larger networks can be processed.
**Response:**
Yes, our method scales well. As shown below, it runs faster than MIP$_\text{CROWN}$ on CIFAR-10 while maintaining tight optimality gaps:
| Dataset | Proposed (s) | MIP$_\text{CROWN}$ (s) | Opt. Gap |
|-----------|--------------|------------------------|----------|
| MNIST | 1.34 | 10.48 | 0.004 |
| CIFAR-10 | 5.96 | 17.47 | 0.0009 |
CIFAR-10 has ~12.4K more neurons than MNIST and 4× the input size. Our solver remains efficient and scalable. For large networks, we first compute an approximate solution with a small optimality gap, then refine it using a reduced MIP (Section 4.2). Lowering $\varepsilon$ (e.g., to $10^{-5}$) tightens the gap while preserving tractability.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarification. I stand by my original rating.

Summary: The paper aims to find a verifiable input space for a NN, i.e., an input region where no adversarial example exists.
Claims And Evidence: C1. A MIP based verification framework that provides maximum verifiable input ball around a center c.
E1. The paper provides a clear formalization of this claim in eq. 1a-1f.
C2. Faster solver for the MIP based verification framework.
E2. They convert the ReLU function, into an alternative formalization --- however, the details of how this is obtained is not clear to me. A small example regarding this may help.
C3. The authors provide a novel search strategy based on dynamically updating the centers based on adversarial points along different directions.
E3. This strategy is quite clearly explained in section 4.3.1 and 4.3.2
C4. The paper provides another search strategy that provides a union of verified balls.
E4. The strategy is quite clearly explained in section 4.3.3 and Algorithm 2.
Methods And Evaluation Criteria: The paper builds a MIP based verification framework for getting certified input spaces.
They also empirically verify the computational superiority of their method, both in terms of time complexity and the radii of the verified input regions. The experimental setting is quite simple.
Theoretical Claims: The main theoretical claim (informally) states that the verified balls provided by their methodology intersect the boundary of the true (verified) region at least at two points.
I did not check the proof. But the claim seems quite reasonable given the paper's methodology.
Experimental Designs Or Analyses: The authors provide a rigorous set of experiments, analyzing computational time, and verified radii.
Supplementary Material: I did not review the supplementary material
Relation To Broader Scientific Literature: The paper is quite relevant to the general research in verifying NNs.
Essential References Not Discussed: None, to the best of my knowledge
Other Strengths And Weaknesses: Strength:
- Clarity: Although, I am not an expert in the field, I found most aspects of the paper to be quite clearly written. Except, the evidence for C2 (see above in claims and evidence).
- Novelty: The proposed MIP based search strategies are quite innovative.
Weaknesses:
- None, in my understanding. However, I could have missed some details. And I am not well-versed with the larger literature.
Other Comments Or Suggestions: I think section 4.2 after equation 2c, can be written more clearly for non-experts.
Questions For Authors: Is there a way to exactly compute the total volumes of the verified regions provided by LEVIS-beta?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful and encouraging review. We are glad to hear that you found our contributions clear and innovative. Below, we respond to your comments and suggestions.
---
### Comments on Claims and Evidence
> **Comment C2 / Evidence E2:** The conversion of the ReLU function into an alternative MIP-friendly formulation was unclear. A small example may help.
**Response:**
ReLU is a piecewise linear function: for $\hat{z} = \text{ReLU}(Wz + b)$, we have $\hat{z} = Wz + b$ if $Wz + b \geq 0$, and $\hat{z} = 0$ otherwise. To encode this in a MIP, we introduce a binary variable $a$, where $a = 1$ if $Wz + b \geq 0$, and $a = 0$ otherwise. Using the Big-M method, this conditional behavior can be captured with linear constraints, making the ReLU compatible with MILP solvers.
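The big-M encoding described above can be sanity-checked in a few lines (an illustrative sketch; $M$ is assumed to be a valid upper bound on $|Wz + b|$):

```python
def bigm_constraints(z_hat, v, a, M):
    # The four linear big-M constraints encoding z_hat = ReLU(v) with
    # binary a (a = 1 iff v >= 0) and a valid bound M >= |v|:
    #   z_hat >= v,  z_hat >= 0,  z_hat <= v + M*(1 - a),  z_hat <= M*a
    return (z_hat >= v and z_hat >= 0
            and z_hat <= v + M * (1 - a)
            and z_hat <= M * a)

M = 10.0
for v in (-3.0, -0.5, 0.0, 2.5):
    z_hat = max(0.0, v)       # true ReLU output
    a = 1 if v >= 0 else 0    # matching binary assignment
    assert bigm_constraints(z_hat, v, a, M)

# A spurious input/output pair violates at least one constraint:
assert not bigm_constraints(4.0, -3.0, 0, M)
```

Any (z_hat, v) pair that satisfies all four constraints with some binary a is exactly a ReLU input/output pair, which is what makes the encoding MILP-compatible.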
---
### Weaknesses
> **Weakness #1:** Section 4.2 (after Equation 2c) could be written more clearly for non-experts.
**Response to Weakness #1:**
Sure, here is a revised version that should be easier for non-experts to comprehend:
We introduce a small regularization parameter $\varepsilon > 0$ (recommended to be $10^{-5}$) to handle the nonlinear inequality $p^i q^i \leq \varepsilon$ for better convergence [Scholtes, 2001]. This simplifies the problem, allowing it to scale polynomially with the number of neurons $N$ instead of being NP-hard.
However, solvers for nonlinear programming (NLP) may still find only locally optimal solutions, particularly when both $p^i_j$ and $q^i_j$ are zero for some neuron $j$ (called the **bi-active set**). This set is defined as: $I_0 \equiv \{ j \mid p^i_j = q^i_j = 0 \}.$ To manage this, we categorize neurons into three groups: If $p^i_j > 0$, then $q^i_j = 0$ and $j \in I_+^i$. If $p^i_j = 0$ and $q^i_j > 0$, then $j \in I_-^i$. If $p^i_j = 0$ and $q^i_j = 0$, then $j \in I_0^i$.
We then construct a simplified MIP that includes integer variables only for neurons in $I_0^i$. This significantly reduces the number of binary variables—often to a small subset of neurons—making the optimization more tractable, as shown in Section 5.2. When the activation states are consistent with $I_+^i$ and $I_-^i$, the resulting solution is globally optimal.
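The neuron grouping described above can be sketched as a simple post-processing step on the solver's output (the numeric values of $p$ and $q$ below are hypothetical, and a tolerance stands in for exact zeros):

```python
def classify_neurons(p, q, tol=1e-6):
    # Partition neurons into I_plus (p > 0), I_minus (p = 0, q > 0) and
    # the bi-active set I_0 (p = q = 0), up to a numerical tolerance.
    # Integer variables are then needed only for the neurons in I_0.
    I_plus, I_minus, I_0 = [], [], []
    for j, (pj, qj) in enumerate(zip(p, q)):
        if pj > tol:
            I_plus.append(j)
        elif qj > tol:
            I_minus.append(j)
        else:
            I_0.append(j)
    return I_plus, I_minus, I_0

# Hypothetical values from an NLP solve for four neurons:
p = [0.7, 0.0, 0.0, 1e-9]
q = [0.0, 1.2, 0.0, 1e-9]
groups = classify_neurons(p, q)
```

Here only neurons 2 and 3 land in the bi-active set, so the reduced MIP would carry two binary variables instead of four.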
---
### Question For Authors
> **Question #1:** Is there a way to exactly compute the total volumes of the verified regions provided by LEVIS-beta?
**Response to Question #1:**
This is a very good question. Computing the volume of the union of balls may be difficult: since the balls overlap, the total volume is not the sum of the individual volumes. It is definitely a good suggestion to be considered in future work.

Summary: This paper presents an algorithm for computing inner-approximations of neural network preimages as a union of balls. The method is split into two sub-methods, one for maximizing an inner-approximating ball, and another for generating new balls to append to the overall approximation. Experiments are conducted on optimal power flow and MNIST digit classification examples to illustrate the performance and algorithmic properties of the proposed approach.
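Since the inner-approximation is a union of possibly overlapping balls, whose exact volume is hard to compute in closed form (as noted in the rebuttal's answer to Question #1 above), a simple Monte Carlo estimate is a practical alternative (an illustrative sketch, not part of the paper):

```python
import random

def union_volume(balls, box_lo, box_hi, n=200_000, seed=0):
    # Monte Carlo estimate of the volume of a union of (center, radius)
    # balls: sample uniformly in a bounding box and count samples that
    # fall inside at least one ball (overlaps are handled for free).
    rng = random.Random(seed)
    box_vol = 1.0
    for l, h in zip(box_lo, box_hi):
        box_vol *= h - l
    hits = 0
    for _ in range(n):
        x = [rng.uniform(l, h) for l, h in zip(box_lo, box_hi)]
        if any(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) <= r * r
               for c, r in balls):
            hits += 1
    return box_vol * hits / n

# Two overlapping unit disks in 2-D, centers one unit apart.
balls = [((0.0, 0.0), 1.0), ((1.0, 0.0), 1.0)]
vol = union_volume(balls, (-1.0, -1.0), (2.0, 1.0))
```

For this pair of disks the exact union area is $2\pi - (2\pi/3 - \sqrt{3}/2) \approx 5.05$, and the estimate converges to it as $n$ grows; the same scheme applies unchanged to any number of balls in any dimension.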
Claims And Evidence: For the most part, the claims are adequately supported. My two primary concerns which lack clarity and/or convincing evidence are:
1. Theorem 4.1 relies on the very strong (and impractical) assumption that the input set is convex, but this assumption is hidden away in the appendix and not mentioned at all in the main paper.
2. The experiments attempt to show that the preimage inner-approximations generated by the proposed method are larger (stronger) than prior baselines. However, the chosen baseline based on Lipschitz bounds is quite weak, which degrades the experimental support.
See additional discussion in Other Comments Or Suggestions below.
Methods And Evaluation Criteria: Proposed methods and evaluation criteria make sense, but could be made stronger by using more realistically-sized datasets like CIFAR-10 or ImageNet.
Theoretical Claims: The proof in the appendix seems to be rigorous, albeit missing the technical assumption that $\mathcal{C}$ is full-dimensional (in the sense that it contains an open ball). Otherwise, the inner-approximation generated by the proposed method would be trivially empty.
Experimental Designs Or Analyses: The experiments appear to be sound.
Supplementary Material: I reviewed the appendices, which appear to be good.
Relation To Broader Scientific Literature: This paper proposes a new method for neural network preimage approximation, which can be used for verifying the safety and reliability of neural networks employed in a variety of different scientific domains, e.g., control systems, power systems, medical diagnosis systems, etc., all of which require rigorous robustness guarantees. The proposed method is intimately related to prior works that focus on verifying the safety of fixed input regions.
Essential References Not Discussed: The paper focuses on preimage approximation, which is a relatively new area. The only highly related reference that I noticed was missing was [1]:
[1] Zhang, Xiyue, Benjie Wang, and Marta Kwiatkowska. "Provable preimage under-approximation for neural networks." International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Cham: Springer Nature Switzerland, 2024.
Other Strengths And Weaknesses: In general, I have a handful of questions, suggestions, and concerns regarding formatting and presentation, mathematical clarity and validity, and lack of convincing experimental baselines. Please see my specific comments in Other Comments Or Suggestions below.
Other Comments Or Suggestions: 1. Lines 55, 78, and 240 (Column 2): Do you mean "disconnected" instead of "discontinuous"?
2. Line 75, Column 2: "$\mathcal{B}(c)$ represents a ball centered at $c$." Of what radius?
3. Line 110: "Inputs that, when perturbed, result in erroneous neural network outputs..." I think this needs rephrasing: the perturbed input is the adversarial example, not the (nominal) input that gets perturbed.
4. It seems like from Line 86, Column 2, onwards, you are assuming that the output $f(x)$ is a scalar (so that, for instance, $\min_{x\in\mathcal{S}} f(x)$ is well-defined). This is a common assumption in the verification literature, since the scalar-valued specification being verified can often be absorbed into the final linear transformation. If this is what you are assuming, then please explicitly mention this to the reader.
5. Based on your optimization problem (1), it looks like you are computing a ball that inner-approximates the true verifiable input set. In the third listed contribution on Line 64, Column 2, you mention that your method "cover the entire verifiable space," which would imply that you are generating an over-approximation. I suggest that you change the "cover" wording in this listed contribution to more accurately reflect your inner-approximation approach.
6. Line 154, Column 2: "...complementary variables for the ith neuron." Previously, you used $i$ to index the layer number, not the neuron, and it still looks like you are doing so in problem (2), since you are still using the preactivation and activation vectors $z^i$ and $\hat{z}^i$. Therefore, do you mean to say "layer" instead of "neuron" here?
7. Line 162, Column 2: "...are the nonlinear equivalence of the ReLU function." The constraint (2a) is linear. It looks like the only nonlinear constraint is the bilinear inequality $p^i q^i \le 0$. You may want to reword your sentence here to more accurately reflect this fact.
8. It looks like you may want to remove the first "then" on Line 175.
9. If I understand your directional adversarial point optimization approach, you are first fixing a direction to find an adversary along, and then running your optimization (3) in order to find the closest adversarial example to the previous one ($b$), along the given direction. If this is the case, aren't (3d) and (3e) already imposed before the optimization takes place? In other words, it seems like (3d) and (3e) are not optimization constraints.
10. Section numbering: 4.3.2. (1) and 4.3.2. (2) look quite odd. I'd suggest formatting these sections differently to avoid the four-fold section numbering with parentheses.
11. Line 209, Column 2: There is some clash in your notation. Previously, you used $d$ to denote a point in $\mathbb{R}^n$ to set up your directional adversary optimization, but now you are using it to denote the number of pairs of boundary adversarial points. I'd suggest changing some of the notation to avoid this clash.
12. On a related note (notation), you previously used $R^n$ to denote $n$-dimensional Euclidean space, but then later you used $\mathbb{R}^n$. It is best to remain consistent throughout the paper.
13. In Section 4.3.2. (1) and Algorithm 1, you are using both subscripts and superscripts on the iteratively computed $b$ vectors. It seems like you should stick to one or the other and make all of your indexing notation consistent for these vectors.
14. Theorem 1: "...almost symmetric about the center $c$ with less than $\epsilon$ mismatch" These terms "almost symmetric" and "$\epsilon$ mismatch" really should be made mathematically precise.
15. Line 267, Column 2: Again, I think that it is more common to use "disconnectedness" here rather than "discontinuity."
16. Clarifying question: For the LEVIS-$\beta$ algorithm, it seems to me like you are creating your "new ball" by first determining a new center $m$, which is very close to the boundary of the previous ball, and then solving the directional adversary optimization (3) to find an adversarial example closest to $m$, which gives you a new ball centered at $m$. This would mean that two adjacent balls actually intersect. However, your Figure 4 suggests that the balls might not intersect. Am I missing something?
17. Experiments: Can your methods scale up to something more reasonably sized, even as large as CIFAR-10?
18. Have you thought of any methods for optimizing the search angle $\theta$? Choosing $\theta=0$ and $\theta=90$ degrees seems like it could be extremely limited in high-dimensional space. Even a comparison of these angles to randomly chosen angles would be an interesting ablation study.
19. Section 5.3 (comparison to baseline): It is well-known that network bounds based on the global Lipschitz constant are very overconservative. I would expect to see a comparison of your method versus a stronger method, one that is likely also a MIP-based or branch-and-bound-based technique. For example, [1] appears to be one of the current state-of-the-art methods for computing inner-approximations of neural network pre-images.
20. Where is the $\sqrt{2}$ coming from in line 378? I would expect to see a $\sqrt{n}$ if it were based on equivalence of norm inequalities.
21. The main theoretical result, Theorem 4.1, does not state any restriction about the convexity of the input space $\mathcal{C}$ in the main body of the paper. But then, in the appendix, the theorem statement is changed to be restricted to convex input spaces $\mathcal{C}$. This is a *major* restriction both theoretically and practically; it is not just a minor "condition." As you mention multiple times in the paper, the input spaces $\mathcal{C}$ are typically nonconvex for neural networks. This poses a significant limitation for your theoretical result, and, in my opinion, it is simply unacceptable (and seemingly dishonest) to hide this condition until the appendix.
[1] Zhang, Xiyue, Benjie Wang, and Marta Kwiatkowska. "Provable preimage under-approximation for neural networks." International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Cham: Springer Nature Switzerland, 2024.
Questions For Authors: See above Other Comments Or Suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive review. Below, we address the key concerns and questions you raised.
---
### Major Weaknesses
> **Weakness #1:** Theorem 4.1 critically assumes that the input space is convex...
**Response:**
Thank you for identifying this. Empirically, most problems converge. Failures typically occur when the verifiable region is open—some ray-restricted subproblems may have no solution. Since these directions correspond to model-insensitive inputs, they are excluded from new center computations.
We verified that convergence still holds for star-shaped regions and unions of convex sets, up to a measure-zero set of oscillations. Algorithm 1 could be extended to isolate oscillatory behavior via added constraints or by requiring the final ball to include the starting point. In such cases, Theorem 4.1 (or its generalization) ensures convergence. Since these cases did not arise in practice, we omitted this detail. We will clarify the convexity assumption in the paper.
> **Weakness #2:** The experimental comparison is weak... [Zhang et al., 2024] is not cited.
**Response:**
Our formulation differs from prior work, limiting directly comparable baselines. We compare with MIP-based methods in Section 5.2. Zhang et al. (2024) assumes axis-aligned input hyperrectangles and polyhedral output sets, which differ from our setting. We will cite the work and clarify the distinctions.
> **Weakness #3:** Experiments are conducted on small datasets...
**Response:**
In addition to MNIST, we report results on CIFAR-10 using our NLP solver from (2a)–(2c), compared against MIP$_\text{CROWN}$ on identical tasks:
| Dataset | Proposed (s) | MIP$_\text{CROWN}$ (s) | Optimality Gap |
|-----------|--------------|------------------------|----------------|
| MNIST | 1.34 | 10.48 | 0.004 |
| CIFAR-10 | 5.96 | 17.47 | 0.0009 |
Despite CIFAR-10’s added complexity, our solver is ~3× faster with a tighter gap. This confirms our method’s scalability to larger networks.
---
### Questions For Authors
> **Q1:** Can your methods scale to CIFAR-10?
**Response:**
Yes — as shown above, our method scales well to CIFAR-10 and outperforms MIP$_\text{CROWN}$ in runtime.
> **Q2:** Have you considered optimizing the search angle?
**Response:**
Yes, and we believe it could improve convergence by reducing overlap. We plan to explore this in future work.
> **Q3:** Do adjacent balls intersect? Figure 4 suggests otherwise.
**Response:**
They can. Figure 4 shows a simplified case. We will revise it to depict overlapping balls explicitly.
---
### Minor Comments & Suggestions
> “Discontinuous” → “disconnected”
**Response:**
Agreed — will revise.
> Clarify radius of ball
**Response:**
Defined as $r = \|x^* - c\|_p$, where $x^*$ is the nearest adversarial point, using the $l_p$ norm (see Section 2).
> Perturbed inputs phrasing is confusing
**Response:**
We will clarify: perturbed inputs are adversarial only if they lead to misclassification (Goodfellow et al., 2014).
> Is output scalar?
**Response:**
No — “scales” refers to scalability. We will revise for clarity.
> “Cover entire verifiable space” is misleading
**Response:**
We will change to “tightly underapproximate the verifiable input space.”
> Layer vs. neuron index
**Response:**
Index $i$ refers to the layer. We will clarify this.
> Constraint classification
**Response:**
Only (2b) is nonlinear — we will clarify.
> Redundant “then”
**Response:**
We will revise to remove “then.”
> Notation reuse (e.g., $j$)
**Response:**
We will rename $j$ in Section 4.2 to $s$ for clarity.
> Notation consistency
**Response:**
We will standardize all notation (e.g., $\mathbb{R}^n$).
> Algorithm 1 indexing style
**Response:**
We will fix inconsistencies.
> “Almost symmetric” / “$\varepsilon$-mismatch”
**Response:**
We will define “almost symmetric” precisely — angle > $\pi/2$ between $x_1 - c$ and $x_2 - c$.
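As a sketch of that clarified definition (our hypothetical helper, assuming Euclidean vectors), the angle condition "greater than $\pi/2$" reduces to a sign check on a dot product:

```python
import numpy as np

def almost_symmetric(x1, x2, c):
    """True if the angle between x1 - c and x2 - c exceeds pi/2,
    i.e. the two boundary points lie on roughly opposite sides of c.
    Illustrative sketch of the definition given in the rebuttal."""
    c = np.asarray(c, dtype=float)
    v1 = np.asarray(x1, dtype=float) - c
    v2 = np.asarray(x2, dtype=float) - c
    # cos(angle) < 0  <=>  angle > pi/2
    return float(np.dot(v1, v2)) < 0.0

print(almost_symmetric([1, 0], [-1, 0], [0, 0]))  # True  (opposite sides)
print(almost_symmetric([1, 0], [0, 1], [0, 0]))   # False (orthogonal)
```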
> “Disconnectedness” vs. “discontinuity”
**Response:**
We agree — will revise to “disconnectedness.”
> Source of $\sqrt{2}$ term
**Response:**
It is from the $l_2$ norm difference between one-hot vectors, as in Fazlyab et al. (2021), Section II.D.
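The constant can be checked directly: two distinct one-hot vectors differ in exactly two coordinates ($+1$ and $-1$), so their $\ell_2$ distance is $\sqrt{1^2 + (-1)^2} = \sqrt{2}$. For example:

```python
import numpy as np

# Two distinct one-hot (standard basis) vectors in R^10.
e_i = np.eye(10)[3]
e_j = np.eye(10)[7]

# Their difference has entries +1 and -1 in two coordinates, zeros elsewhere.
dist = np.linalg.norm(e_i - e_j)
print(dist)  # 1.4142135623730951
```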
> Section 4.3.2 formatting
**Response:**
We will revise the nested numbering.
> Missing citation: Zhang et al., TACAS 2024
**Response:**
We will include this citation and clarify how their assumptions differ from ours.
---
### References
- Fazlyab, M., Morari, M., & Pappas, G. J. (2021). *An introduction to neural network analysis via semidefinite programming.* IEEE CDC.
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). *Explaining and harnessing adversarial examples.* arXiv:1412.6572.
- Zhang, X., Wang, B., & Kwiatkowska, M. (2024). *Provable preimage under-approximation for neural networks.* TACAS.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough response. Most of my concerns have been addressed, aside from my desire to see a comparison against baselines that are stronger than Lipschitz-based methods. I understand that the problem focus and pre-image approximation form of Zhang et al. (2024) may be slightly different from yours, and that you are comparing radii in your current experiments, but I don't see why you couldn't compare, for instance, the volume of your union-of-balls preimage approximation to the volume of Zhang et al.'s union-of-hyperrectangles approximation, on a problem with a polyhedral output set. I encourage the authors to consider including such a comparison, as it would certainly strengthen the empirical evaluation.
That said, as the majority of my concerns are addressed and I still find value and novelty in the proposed methods, I am willing to raise my score to 3 (weak accept), *under the assumption that the authors explicitly mention and discuss the convexity assumption in their main theoretical result*.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Hx3K,
Thank you for your feedback and response. We will consider adding the comparison of the union of the volumes in the revised version. | null | null | null | null | null | null |
Causal Abstraction Inference under Lossy Representations | Accept (poster) | Summary: This paper introduces projected abstractions and an algorithm to compute them from a low-level SCM.
### Update after rebuttal
First of all apologies for an initial review that may have read harsh.
So I carefully read the other reviewers' opinions and the rebuttals. It didn't really bring much more clarity. My impression is that there is even some divergence in what the reviewers think the contribution of the paper is. In parts it seems that even the formalization is considered part of the contribution, where I think that, notation-wise, most is taken from "Neural Causal Abstractions", which I believe is the authors' previous work.
I really do not know what to do with this work. I am convinced that the general research direction of the paper is important, and maybe so is even the contribution. But, at least in the way how it is written down, I can hardly imagine that the paper will have a huge impact beyond a very limited audience that will really make the effort to understand the work in detail. I think that the paper might even be relevant not only for theoretical scientists, so the potential contribution is beyond mere theory I would say. I think it is interesting for me and wouldn't consider myself a theoretic researcher in this field.
So my concerns about clarity prevail and I am going to leave my assessment as is, but apparently I am the outlier. If for the other reviewers the paper not only makes a relevant technical contribution but also manages to properly convey it in their opinion, I have no trouble with the paper being accepted.
Claims And Evidence: I am unsure what significant claims this paper wants to make. There is the introduction of the computation of projected abstractions, which is probably a straightforward algorithm once one buys the ton of definitions of projections. There is also the claim that those projected abstractions "make an important step around the AIC", but it is not entirely clear to me what this is supposed to mean exactly, nor how the algorithm ensures it. I am not saying that the experiments don't deliver on a certain expectation, I am just saying that the paper fails to make the contribution understandable, even to someone with fairly solid knowledge in causal inference.
Methods And Evaluation Criteria: n/a
Theoretical Claims: There are some theoretical statements (even though some are just definitions, like Prop 1). Based on the density of the paper and the lack of clarity, I was not even able to assess what the theorem statements want to say. Sorry, I really made an effort.
Besides, I am not sure about the relevance of the whole question, because the approach starts from the premise to have a low-level SCM already available, which is of course usually not the case. I am partially willing to revise my review (even though I have spent quite a significant time on this paper already to understand what is going on) if the authors answer to my below questions.
Experimental Designs Or Analyses: n/a
Supplementary Material: I didn't check this, because the main paper was already incomprehensible.
Relation To Broader Scientific Literature: Good.
Essential References Not Discussed: None I am aware of.
Other Strengths And Weaknesses: The presentation of the paper is in my view not top-tier conference-ready. In general the paper is very difficult to read, even for someone who is fairly familiar with the field. In about 3 pages, the paper introduces over 100 symbols, which in itself is already a challenge. I think that it should be possible to somehow reduce the formal burden of the paper. But what is worse, I believe that there are mistakes in the notation. So when the reader arrives at Proposition 1 and the four-millionth symbols $W_i^o$ and $W_i^v$, then I think it should be $W_i^u$ instead. In that same proposition, there is a syntactic definition of $\delta(W^o, W^u)$ that has no semantics attached, or at least none that I would understand. Usually explanations follow the formal definitions, but the intellectual burden for the reader is really enormous, and the authors make little effort to wrap it into didactically well-elaborated portions.
Other Comments Or Suggestions: Let someone familiar with causal inference (and maybe abstractions) read your paper before you submit to ICML and check on clarity.
I beg the authors to try to make things simpler notation-wise, or at least to be more didactic and structured in the write-up. I understand that there are two layers of abstraction going on, and defining mappings between them considering all relevant details can be tough. But my experience tells me that probably some definitions could be merged and even some symbols omitted.
It is urgently necessary that language is used more carefully. In the motivation (line 70-72 left), the authors say "it may be desirable to have a formalism in which these kinds of ambiguous abstractions are well-defined". I strongly doubt that the word "well-defined" is the correct choice here. My current intuition is that with a different type of abstraction you have other *properties* that allow you to do something that you cannot do with previous approaches. If your contribution is not more than making something well-defined, then we seriously have an issue.
In this line, the contribution needs to become clearer. And with this I do not mean to make an explicit bullet list of 3 un-understandable points, but to very concretely formulate a limitation of previous approaches and how you overcome it. I believe that example 1 could help a lot in this regard if one introduces it on an intuitive (already somewhat formal) level without the need of having the AIC conditioned formalized already at that point. It is really weird that we have a 2.5 page introduction and the relevant example only follows in section 2.
Questions For Authors: - if I already had a low-level representation (input of your algorithm), I clearly am in danger to suffer from the potential problem of AIC violation if I map into the abstraction space. Which advantage can I gain from the projection to compensate this risk other than shortcuts (summarizing certain situations in low-level setup)?
- even if I am interested in summarizing variables, couldn't I avoid running into the AIC problem altogether by reasoning on the low level and doing an abstraction only post-hoc?
- in a similar line: If you admit dependency between U-variables, it seems that you start right away from the premise that even the low-level SCM is not described at such a level of detail that noises are independent (as they should be ideally). To me, the projections just seem to make a step into another SCM where there are possibly yet more of those dependencies, and we give it a special name (AIC). The question is: Why is the AIC violation more severe for reasoning than having dependencies among variables in U?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback and believe some misunderstandings may have led to a harsh evaluation. We respectfully ask for reconsideration based on the clarifications below.
> In general the paper is very difficult to read [...]
**Response:** We recognize that the reviewer’s primary concern is the paper’s density. While we aimed for clarity through diagrams and examples, some content had to be relegated to Apps. B and C, preventing us from elaborating as thoroughly as we would have liked. The density arises from building upon several works that already involve complex semantic systems, including:
1. Pearl Causal Hierarchy (PCH): Our approach is general, encompassing all 3 layers of the PCH. While we chose to present examples using simpler interventional quantities from Layer 2, we present all definitions generally to convey the full scope of our contribution (Layer 3).
2. Abstractions: We follow established abstraction theory, using concepts like inter/intravariable clusters, $\tau$-abstractions, AIC, and C-DAGs, with notation aligned to prior literature.
3. Causal Generative Modeling: Our work extends neural causal models (NCMs), forming the basis for Sec. 4 experiments.
In the main paper, we prioritized our new contributions over reviewing prior work, referencing established concepts rather than restating them.
That said, we are committed to improving clarity by:
- adding a notation summary table,
- expanding discussion on related works,
- using the extra page to elaborate on each definition.
We appreciate your willingness to work with us in evaluating the contributions presented.
>I am unsure what significant claims this paper wants to make.
**Response:** We emphasize in our paper that our primary goal is to develop a causal abstraction framework robust to violations of the abstract invariance condition (AIC). When two low-level interventions produce different effects but map to the same high-level intervention, existing frameworks fail to address the resulting ambiguity. This problem is particularly prevalent when working in the representation space after performing representation learning, as a learned, lossy representation is unlikely to avoid such ambiguity. Working in a representation space offers substantial advantages for high-dimensional causal inference, a direction we aim to advance in this paper. We hope this motivation is clearly conveyed in the introduction (lines 58–75, left), supplemented by the example on HDL and LDL cholesterol, as is our proposed solution (discussed from lines 76-95 (left)). We will refine the introduction for clarity and welcome any specific suggestions.
>the approach starts from the premise to have a low-level SCM already available [...]
**Response:** The relaxation of this requirement is the entire contribution of Sec. 3. Recognizing that we are unlikely to have the whole low-level SCM, we introduce identification results (Def. 9, Thm. 3) that specifically aim to infer causal quantities given limited data from the low-level SCM, such as observational data.
> if I already had a low-level representation (input of your algorithm) [...]
**Response:** The benefits of working in the abstraction space are similar to those motivating the use of representation learning in non-causal contexts: reducing dimensionality for improved tractability, transforming data into spaces with desirable mathematical properties (e.g., the linearity of Word2Vec), and better interpretability.
The AIC challenge is unique to the causal setting, as non-causal contexts do not require consideration of how causal relationships between variables are preserved or altered when the representation is lossy. Previously, the trade-off of working in the abstracted space, as you mentioned, meant accepting AIC violation risks for the benefits of representation learning. Our work resolves this dilemma by enabling the advantages of representation learning without incurring any drawbacks from AIC violations.
> even if I am interested in summarizing variables [...]
**Response:** Certainly, but doing so would prevent leveraging the computational benefits of representation learning in the causal modeling process.
>in a similar line: If you admit dependency between U-variables [...]
**Response:** This is a good question. Recognizing that AIC violations can be reinterpreted as projections into the exogenous space is one of the key insights of our paper. Previously, the prevailing view was that if a high-level intervention was ambiguous, performing the abstraction was futile since it meant ignoring details that were relevant to the setting (resulting in mathematical contradictions).
The natural connection to SCM partial projections enabled a generalization of previous frameworks, accommodating AIC violations by interpreting the lost details as exogenous variables. Establishing this formal connection and generalization constitutes a key and non-trivial set of contributions presented in Sec. 2. | Summary: This paper introduces a new notion of causal abstraction called "projected abstraction" that extends causal abstraction theory to handle lossy representations—situations where multiple low-level interventions with different effects map to the same high-level intervention. The authors show how to construct projected abstractions from low-level models, translate causal queries between levels, identify high-level causal relationships from limited low-level data, and demonstrate the effectiveness of their approach in high-dimensional image settings. Broadly, this work bridges causal reasoning and abstraction.
Claims And Evidence: The paper claims to address the limitation named the "Abstract Invariance Condition", a common scenario where two variables cannot be abstracted together because they have different downstream impacts. The proposed notion of projected abstraction deals with this issue by representing the loss of information from collapsing two variables that have different downstream impacts in terms of exogenous variables in the high-level causal model.
This is an important direction for causal abstraction theory to be developed in. Exact transformations or abstractions should be very rare, and understanding how to model lossy abstraction will be key in all practical efforts.
Methods And Evaluation Criteria: The MNIST task in this paper seems appropriate for the proposed methods
Theoretical Claims: I can confirm that the main text is coherent and the definitions make sense given prior work in the area. No obvious issues stick out. However, I cannot attest to the proofs or appendix material as it would take too much time as a reviewer to go through them all in detail.
Experimental Designs Or Analyses: The experimental section seems to be good to me, but it will be difficult for readers not familiar with NCMs. I would recommend trying to give the experimental section some more room to breath.
Supplementary Material: I went through the supplementary material, but not in much detail. Broadly, it seems thorough and thoughtful!
Relation To Broader Scientific Literature: This paper is written for theoretical researchers in the field of causality. Within that context, this paper is very strong and will be of great interest. However, I believe that researchers outside this field will find it difficult to work through.
I don't think this is a real issue, and I think ICML should accept theoretical work on causality!
Essential References Not Discussed: These aren't essential citations, I believe that the authors did a good job citing the relevant work in causality. However, given that causal abstraction has been applied to mechanistic interpretability, it might be good to situate the paper relative to that literature and explain to readers from that field how this relates.
https://proceedings.mlr.press/v162/hu22b.html
https://proceedings.neurips.cc/paper/2020/hash/92650b2e92217715fe312e6fa7b90d82-Abstract.html
https://arxiv.org/pdf/2502.20914v1
https://arxiv.org/abs/2106.02997
Other Strengths And Weaknesses: I think introducing the notion of identifiability into causal abstraction is an important aspect of this work, as it helps bridge existing theory on causal abstraction to other research in causality.
Additionally, developing methods for inferring abstract structure from limited concrete low-level data is crucial for causal abstraction to be applied in real world settings.
Other Comments Or Suggestions: N/A
Questions For Authors: How does this relate to causal feature learning? I don't think you cite any work from that area, but it seems deeply related to me.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and review. We are happy that the work was understood. To answer your concerns:
>The experimental section seems to be good to me, but it will be difficult for readers not familiar with NCMs. I would recommend trying to give the experimental section some more room to breath.
**Response:** Thank you for the suggestion. We will expand the experimental section to contextualize NCMs for unfamiliar readers, and we will include a section in the appendix discussing NCMs and their relevant results to the discussion in this paper.
>However, given that causal abstraction has been applied to mechanistic interpretability, it might be good to situate the paper relative to that literature and explain to readers from that field how this relates.
**Response:** Thank you for these citation suggestions; we will be sure to include them. Mechanistic interpretability is a prominent issue in the field of causal abstractions, and we will certainly add an appendix section to discuss it in more detail. Although this work focuses more on the foundational aspects of abstraction theory, particularly performing cross-layer inferences, we find that many of the results are strongly applicable to both problem settings. Relating this work to the mechanistic interpretability problem is a clear next step.
>How does this relate to causal feature learning? I don't think you cite any work from that area, but it seems deeply related to me.
**Response:** Actually, the problem of causal feature learning is highly relevant. Papers such as “Visual Causal Feature Learning” (Chalupka et al., 2014) leverage data-identifiable forms of the AIC to learn ideal representations and variables that best preserve the causal relationships between variables. This connection is briefly discussed in “Neural Causal Abstractions” (Xia & Bareinboim, 2024) Appendix D.2. We will also include a section in our paper complete with these citations. | Summary: This paper presents a theory of causal abstractions that generalizes them beyond the usual abstract invariance condition. It formalizes the idea of general causal abstractions through the concepts of partial SCM projections, soft interventions and generalized queries. An algorithm to construct these is general abstractions is obtained. Results regarding inference on the general causal abstractions (through a notion of project cluster causal diagram) are obtained. Some experimental validation of the theory is presented with inference tasks on coloured MNIST datasets.
Claims And Evidence: - The contributions of the paper are mainly theoretical. The theoretical framework is very well supported by useful and insightful results, clear explanations and examples.
- The experimental validation is somewhat, though not fully, convincing. I do not think that this is a significant weakness of the paper; it is known that the learning problem for causal models is difficult. However, I would like to see a more nuanced discussion of the experimental results (see below). I would also like to see a discussion of the limitations of the experimental setup.
- I would like to see a discussion of the limitations of the proposed theoretical framework, as well as future perspectives.
Methods And Evaluation Criteria: The proposed evaluation is appropriate.
Theoretical Claims: The definitions and statements are clear and sound. I however did not check the proofs in detail.
Experimental Designs Or Analyses: - "The abstractionless model has higher error than the projected C-DAG model since it operates in a higher-dimensional space." I don't find this explanation fully satisfying. As far as I understand, the fact that the baseline operates on a higher-dimensional space could be both beneficial and detrimental. The reduction in dimension is also not dramatic. It is necessary to understand better if this is indeed the cause of the improvement or if there is another explanation.
- In the coloured MNIST experiment one thing is not clear to me, should it even be possible to sample from the right distribution using a binary representation?
- We see an improvement with respect to the non-causal model, but the digits themselves don't look right compared to the 16-dimension RNCM model. I would like to see versions of both models with intermediate numbers of dimensions (for example, going from R^16 to R) to see how the performance degrades with fewer dimensions.
Supplementary Material: I reviewed only the experimental part in detail.
Relation To Broader Scientific Literature: As far as I know, this paper is a very significant contribution to the literature on causal abstractions. I am not aware of previous works that dealt with relaxing the AIC condition. The results of the paper open the door to applying causal abstractions to much wider problems.
Essential References Not Discussed: Not that I know of.
Other Strengths And Weaknesses: **Other strengths**
- The work tackles an important problem in an original way
- The paper is very well written and there is a nice effort to make it understandable
Other Comments Or Suggestions: I suggest nuancing or making more precise this type of statement:
"Combining these two modes of reasoning is vital for building more advanced AI systems."
The word "vital" is strong, so either replace with something weaker or elaborate on what is meant.
Questions For Authors: In definition 7, is there a consistent generalization of the PCH to soft interventions? If yes, would be interesting to discuss/include it.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and valuable insight in our experimental analysis. We address your comments below.
> I would like to see a discussion of the limitations of the proposed theoretical framework, as well as future perspectives.
**Response:** Indeed, one of the main limitations of the theoretical framework is that, as a tradeoff for violating the AIC, it is possible that more causal dependencies are introduced, as illustrated through the definition of the projected C-DAG in Def. 8 (more edges are added when there are AIC violators). We will emphasize this point further in the paper. This naturally implies that there is a challenge of determining how to balance the tradeoff between lossy representations and loss of causal constraints, and we will also make a point of this as a potential future direction of work.
> "The abstractionless model has higher error than the projected C-DAG model since it operates in a higher-dimensional space." I don't find this explanation fully satisfying [...]
**Response:** In this particular experiment, the dimensionality reduction is quite extreme. In the $\mathcal{G}$-NCM, the model is trying to sample the entire image $X$ as an intermediate step, while in the $\mathcal{G}_{\mathbb{C}}^{\dagger}$-NCM, the variable $X$ has been abstracted into a binary variable. Since the high-level model has much less information in the input dataset, its only serious advantage is the dimensionality reduction. That said, the error of the $\mathcal{G}$-NCM still trends downward with more data, since it is theoretically still a sound method of estimating the query. With many more parameters and more compute, it is possible that the approach could achieve similar errors. We will include more discussion on this in Appendix D, as well as discussion on the tradeoff of performing such a large dimensionality reduction.
> In the coloured MNIST experiment one thing is not clear to me [...]
**Response:** Indeed, we chose the binary representation as the most egregious example of an AIC violation, and we illustrate one of its clear flaws – one should not be able to achieve any good reconstruction of the image with such a lossy representation. This highlights the strength of our approach of projected sampling – whatever we lose in the representation, we can represent it in the exogenous space. An example analogy is this: suppose we have an image of 10 bits, but we represent it with only 1 bit. We certainly cannot reconstruct a 10-bit image with only 1 bit, but we can reconstruct samples by sampling the other 9 bits. The particular distribution of 9 bits is based on the theory discussed earlier in Sec. 2: we sample conditioned on the 1 available bit and the parents of the variable as if we are translating a high-level intervention to the low-level. We will include this discussion in the paper.
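To make the bit analogy concrete, here is a purely hypothetical toy sketch (our own illustration, not the paper's actual sampling procedure, and it ignores conditioning on parents for simplicity): the lossy abstraction keeps only one bit, and a full low-level sample is drawn from the conditional distribution given that bit.

```python
import random

random.seed(0)
tau = lambda bits: bits[0]  # lossy abstraction: keep only the first bit

# toy "10-bit images"
data = [tuple(random.randint(0, 1) for _ in range(10)) for _ in range(200)]

def projected_sample(dataset, high_bit):
    # draw a full low-level vector consistent with the high-level bit,
    # i.e., sample the remaining 9 bits conditionally
    pool = [x for x in dataset if tau(x) == high_bit]
    return random.choice(pool)

sample = projected_sample(data, high_bit=1)
assert tau(sample) == 1 and len(sample) == 10
```

The abstraction is non-invertible (many 10-bit vectors map to the same bit), yet every sample is still a complete low-level vector, mirroring the "sample the other 9 bits" idea.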
> We see an improvement with respect to the non-causal model, but the digits themselves don't look right compared to the 16 dimension RNCM model [...]
**Response:** Thank you for the suggestion, we will run the two methods at different representation sizes to see how the performance scales. Our expectation is that while the original RNCM degrades in performance as the AIC is more and more violated (translating to worse performance with lower dimensionality), the RNCM with projected sampling does not suffer from this issue. To continue the bit analogy from above, the reconstruction of a 10 bit image would look better if we had 9 of the bits compared to only having 1 bit. However, with projected sampling, we are sampling the remaining bits anyway, so we always have the full 10 bits. Generally speaking, performance of the projected sampling approach could potentially be improved by simply improving the base architecture of the model, although this is out of the scope of our work.
> I suggest to nuance/precise this type of statement: "Combining these two modes of reasoning is vital for building more advanced AI systems." [...]
**Response:** Thank you for the suggestion, we will change it to “Combining these two modes of reasoning unlocks great potential for building more advanced AI systems.”
> In definition 7, is there a consistent generalization of the PCH to soft interventions? [...]
**Response:** Yes, this is indeed an interesting point. We frame Def. 7 in the context where the high-level model is performing hard interventions to emphasize where the noise in the corresponding low-level soft interventions arises as a response to AIC violations. That said, the method could certainly be generalized to handle cases where the high-level model is performing soft interventions as well. Then, the corresponding low-level interventions would be soft interventions that aggregate the results of all of the possible sampled interventions in their high-level counterparts. We will add a discussion on this in the appendix, as you suggested. Thank you!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I hope to see an updated experimental section in the post-rebuttal manuscript, as well as a discussion of limitations.
Also to clarify the last question, I was also referring to a generalization of the ladders of causality from Definition 2 to the setting of Definition 7. | Summary: This work addresses the problem of causal abstractions, providing an intriguing approach that could extend traditional fine-grained causal applications to more general scenarios. It emphasizes higher-level causal relationships and inferences. A key contribution of this work is the relaxation of the function class limitation in abstraction functions, bridging low-level causal models to high-level ones, and enabling lossy representations known as projected abstractions. Specifically, the work introduces three rules, as outlined in Definition 8 (Partially Projected C-DAG), which play a critical role in handling violations of the abstract invariance condition. Experiments on color MNIST demonstrate the benefits of using projected abstractions.
Claims And Evidence: The proposed method aligns with the claims made in the paper.
Methods And Evaluation Criteria: Empirical evaluations on toy data validate the benefits of the proposed method. Here, in the case of color MNIST, which is relatively simple and serves as toy data, the significant advantages of the proposed method are not fully demonstrated. While the importance of relaxing the abstraction function is clear, especially for real-world applications, the simplicity of the dataset limits the verification of these benefits.
Theoretical Claims: I reviewed the theoretical aspects at a high level but did not rigorously verify the correctness of the theorems.
Experimental Designs Or Analyses: The provided analyses are generally well-conducted.
Supplementary Material: The appendix contains theoretical proofs and experimental details, which I reviewed briefly.
Relation To Broader Scientific Literature: The paper tries to work on an important problem.
Essential References Not Discussed: Related works are thoroughly discussed in this paper.
Other Strengths And Weaknesses: Strengths:
The organization is well-structured, and the presentation is relatively clear.
It addresses a very important theoretical problem, and the problem is also significant in practice.
I really like the three rules in Definition 8, as they play an important role in 'correctly' constructing high-level causal models.
Some examples are provided, which greatly aid in understanding from an intuitional viewpoint.
Problems:
1) The Abstraction Function, as defined in Definition 4, is required to satisfy certain assumptions to maintain specific structures for inter/intravariable clusterings. This implies that the function class of the Abstraction Function is constrained within an equivalent class. While this is clear from the definition, I am concerned about how such a function class is restricted.
2) The three rules in Definition 8 are key factors in the proposed method. I am curious about the underlying motivation for introducing these three rules. What is the reasoning behind their inclusion, and how do they contribute to the overall approach?
3) Overall, this work mainly addresses AIC violations, but I am concerned whether the results hold regardless of the degree of AIC violations. Does the method remain effective when AIC violations are more significant, or is its performance sensitive to the extent of these violations?
4) My final concern is regarding the experiment settings. While I understand that this work is primarily theoretical, the motivation for addressing lossy representations stems from a practical viewpoint. In this context, the experiments are conducted at a relatively simple level, which raises concerns about the effectiveness of the proposed method in more complex scenarios.
Suggestions:
1) While I can understand the overall story, I feel that the writing style may not be very accessible for those who are not familiar with this area. It would be helpful to provide some intuitive interpretations for the basic and key definitions. For example, a brief discussion of Definitions 3 and 4 from an intuitive standpoint could enhance clarity.
2) It would be better to introduce, from an intuitive (high-level) standpoint, why consistency estimation is still possible when faced with lossy representations, and how the proposed method addresses this issue.
Other Comments Or Suggestions: none
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and detailed suggestions. Addressing the reviewer's points:
> The Abstraction Function, as defined in Definition 4, is required to satisfy certain assumptions [...]
**Response:** Yes, broadly, the paper is focused on a specific family of abstractions called constructive abstractions. Most works found in this literature, including ours, focus on this family because there is a natural mapping between low- and high-level interventions and distributions, allowing for well-defined inferences. These abstractions are most easily realizable in practice as well, since the construction of the high-level model naturally follows from the declaration of the clusters, which are interpretable and can also sometimes be learned (see “Neural Causal Abstractions” (Xia & Bareinboim, 2024)). If one would prefer to allow unrestricted abstraction functions, the results of the paper may still be applicable, but one would have to define how low-level interventions correspond to high-level interventions on a case-by-case basis (i.e., it is not necessarily the case that an intervention $do(\mathbf{X}_L = \mathbf{x}_L)$ on the low-level is equivalent to an intervention $do(\tau(\mathbf{x}_L))$ on the high-level, or even well-defined).
> The three rules in Definition 8 are key factors in the proposed method. I am curious about the underlying motivation [...]
**Response:** Indeed, that’s a cool question, thank you. The three rules demonstrate the core insight of allowing lossy abstractions – by losing some information in the variables, we must make up for it by removing causal constraints (i.e., adding more edges). These rules are necessary for guaranteeing the validity of the downstream causal inferences, as shown through Thm. 2. Without adding these edges, one risks claiming that a causal query is identifiable when it is not. For example, in Fig. 1(a), if the edge from $Z$ to $Y$ is not included, one may mistakenly assume that $P(Y \mid do(X), Z) = P(Y \mid do(X))$, which is the mistake made in Ex. 2. The proof of Thm. 2 shows the math behind what could go wrong when each of these rules are ignored.
> Overall, this work mainly addresses AIC violations, but I am concerned whether the results hold regardless of the degree of AIC violations [...]
**Response:** This is a great question. In fact, this is precisely the issue that we are solving. In the past, approaches that rely on the AIC would deteriorate in performance as the AIC becomes more and more violated. In contrast, the projected abstraction approach does not care how much the AIC is violated and performs identically regardless. Note, for example, in Fig. 4 of our MNIST experiment, reducing the representation size to a mere binary variable strongly violates the AIC and therefore results in very poor performance from the RNCM (3rd row), but with projected sampling, we can produce images as usual (4th row).
> My final concern is regarding the experiment settings. [...]
**Response:** Our goal with the MNIST experiment was to provide a proof-of-concept of how one could potentially learn a causal generative model without worrying about AIC violations. More generally, this would allow for a unification of out-of-the-box representation learning methods and causal generative modeling methods since it would not matter how the representation is learned once we move to the causal generative modeling phase. With this unification understood, scaling to more complex datasets is a matter of scaling the architecture and representation learning methods using state-of-the-art techniques from general deep learning research, which can be done separately from the causal results guaranteed by our work.
> While I can understand the overall story, I feel that the writing style may not be very accessible [...]
**Response:** We aimed for a more rigorous analysis of this highly technical topic for the sake of concreteness, so its more mathematical nature is somewhat to be expected. We do appreciate the suggestion and will add some clarifying sentences in the final report and include an appendix section discussing some of the key prior works to help bring readers up to speed with the contents of the paper.
> It would be better to introduce, from an intuitive (high-level) standpoint, why consistency estimation is still possible when faced with lossy representations, and how the proposed method addresses this issue.
**Response:** Yes, thank you for the suggestion. We will include the discussion mentioned above on the tradeoff between the lossiness in the abstracted variables and the loss of constraints in the projected C-DAG and why inferences in the new model still apply.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Some points have been addressed in the rebuttal—I appreciate your effort. However, I also reviewed the comments from other reviewers, particularly Reviewer iNbN. To some extent, my concerns regarding the writing style are shared by Reviewer iNbN. For now, I maintain my current rating. I will participate in the next stage of discussion with the AC and the other reviewers, particularly with regard to the writing style. | Summary: Existing causal abstraction frameworks often struggle with lossy abstraction functions, where different low-level interventions produce distinct effects but map to the same high-level intervention. To address this, the authors propose projected abstractions, a new framework that extends previous definitions to handle lossy representations more effectively. The construction of projected abstraction and inference is shown.
Claims And Evidence: The claim that the Abstract Invariance Condition (AIC) can be easily violated and invalidate causal conclusion is well supported by the example and explanations. The discussion on when and how lossy transformation should be used is interesting. This highlights the importance of the problem that the paper is tackling.
Methods And Evaluation Criteria: The abstraction projection looks like a practical method to address the problem of AIC violation.
Theoretical Claims: I have checked until Def 6.
L110-118 could be put into Def 2, as I got confused by undefined notations when reading Def 2.
Def 5: $\tilde{pa}_V$ is not used.
Def 6: Definition of $\tau$-abstraction is missing in the main text (please move or point to Def 10 in Appendix A).
Experimental Designs Or Analyses: I checked the two experiments and they look correct to me.
Supplementary Material: I didn’t review supplementary materials.
Relation To Broader Scientific Literature: The key contribution seems to be a generalization from the prior literature. I think it is a useful addition to the direction of causal representation learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is notationally heavy and contains a number of definitions for the reader to digest, especially in Sec 1.1 and 2.1. They are not the easiest to follow. It would be helpful for the authors to state briefly why those definitions are needed and how they will be used. As an aside, it is hard to cross-reference the notations from different parts of the paper to Alg 1. I suggest the authors add explanations to it.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. The details of the theoretical contribution are unclear. It seems like projection has been proposed in Lee & Bareinboim (2019), but partial SCM projection seems novel in this paper. However, is Definition 6 original or discussed in prior work? Could the authors articulate what is novel in the paper in comparison to Lee & Bareinboim (2019)? Since the paper’s main contribution is a formal framework that generalizes previous literature, at a higher level, I think it would be better to have a separate paragraph summarizing the theoretical contribution.
2. For experiments, what does the variance in each method (Fig 5) come from? It seems like the projected C-DAG approach has a larger variance than the other methods. Is it some sort of bias-variance trade-off, and/or can the authors discuss it?
3. What is the scalability of the proposed method in a more complicated, real-world scenario? MNIST is a classic but relatively simple dataset.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful questions and positive review. We appreciate the multiple suggestions and will address them as follows:
> L110-118 could be put into Def 2, as I got confused by undefined notations when reading Def 2.
**Response:** We will adjust Def. 2 to accommodate this and make it clearer; thanks!
> Def 5: $\widetilde{\mathbf{pa}}_V$ is not used.
**Response:** Thanks for pointing this out. It was originally used in settings in which the AIC was not violated, but we do not consider such cases in the main body, so we will omit this from the definition to improve legibility.
>Def 6: Definition of $\tau$-abstraction is missing in the main text (please move or point to Def 10 in Appendix A).
**Response:** We will include this as suggested.
>The paper is notational heavy [...]
**Response:** Thank you for the feedback. We do assume some background of the previous literature, but indeed, our efforts to fit all of our content into the page requirement have unfortunately led to the paper being a bit dense. To remedy this problem, we have included Appendix B for further discussion and Appendix C for clarifying examples. If accepted, we will use the extra page to clarify each definition and make the theory easier to follow. We will also add a table in the appendix to clarify all of the notation we use throughout the paper.
Finally, to answer your questions:
> 1. The details of the theoretical contribution are unclear. It seems like projection has been proposed in Lee & Bareinboim, 2019 [...]
**Answer:** Yes, SCM projections were proposed in Lee & Bareinboim (2019). We draw inspiration from this result to generalize to partial SCM projections (one of our contributions). In contrast to Lee & Bareinboim (2019), which solves a causal RL problem to find optimal intervention sets in a causal bandits setting, we work in an entirely different space, applying projections to define a causal abstraction formalism. Def. 6 and Thm. 1 (both contributions of our paper) establish this intriguing duality between partial SCM projections and causal abstractions, which is surprising and may otherwise seem unrelated. The idea that a lossy abstraction that violates the AIC can be thought of as partially projecting away the abstracted variables is the main insight that allows us to generalize previous abstraction inference results to new settings where the AIC does not necessarily have to hold. This insight leads to further results about how to perform more general causal inferences across abstractions (Sec. 3) and a new type of sampling procedure for high-dimensional causal generative modeling (Sec. 4). In short, the generalization of SCM projections to the partial form is indeed a contribution of the paper, but the main motivation for this contribution is to generalize challenging instances of causal abstraction inference. We thank the reviewer for pointing this out, and we will include this discussion in the paper's appendix.
> 2. For experiments, what does the variance in each method (Fig 5) come from? [...]
**Answer:** The variance for each approach arises from the sampling of the training data and the randomness in the training process (e.g., parameter initialization). One explanation for the seemingly higher variance in the projected C-DAG approach may simply be that the plot somewhat exaggerates it due to the log-log formatting. Another deeper reason may be that there is a tradeoff between granularity and constraints when we convert from the original C-DAG to the projected C-DAG (as we are potentially adding more edges). With more edges, there are fewer constraints; therefore, converging may be more challenging despite the higher accuracy. Of course, this more nuanced view is just brought up by the new machinery developed in this paper. This paper is the first to study relaxed abstractions under violations of AIC, which are very likely to occur in almost any scenario.
> 3. What is the scalability of the proposed method in a more complicated, real-world scenario? [...]
**Answer:** One issue that is resolved by our paper is that the AIC can be a challenging thorn when attempting to apply out-of-the-box representation learning tools in causal generative modeling contexts. We would like to be able to learn an NCM on top of any representation, but the AIC prevents us from doing so. The goal of the MNIST experiment is to show a proof of concept that, regardless of how the representation arises, we can still perform causal inferences and sample causal images using the new projected-sampling approach. This is powerful as it means that the method can achieve robust performance regardless of the method of representation learning used. Scaling to even higher-dimensional settings may require larger architectures, but fortunately, one can leverage the vast research of the general deep learning community to accomplish this while still enjoying the causal benefits from this work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It addresses my concerns. I particularly like the point ``duality between partial SCM projections and causal abstractions'', so I'm increasing my score from 3 to 4. | null | null | null | null |
When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets | Accept (poster) | Summary: The paper rigorously proves that, when the training points are orthogonal and we use the MMSE denoiser, score flow converges to the vertices of a hyperbox, wherein the vertices of the hyperbox are the partial sums of training points. Similarly, they prove that probability flow converges to a vertex on this hyperbox or a point on the boundary of the hyperbox. Both proofs analyze the stable points of the flow under a "low-noise regime" showing that, for score flow, the only stable points are the vertices of the hyperbox, whereas for probability flow these stable point include the boundary. They proceed to show that these results hold using a shallow ReLU neural network with a minimal l2 norm with some error wherein some partial sums have become unstable due to the error between the MMSE and neural network denoiser. Finally, they show experimentally that as the number of training points increases, the diffusion process converges to a higher proportion of non-training points thus the generalization is increasing.
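The geometry described in this summary can be checked with a tiny enumeration (our own toy sketch, assuming exactly orthogonal training points): for $n$ orthogonal points, the partial sums over all subsets give the $2^n$ vertices of the hyperbox.

```python
import itertools
import numpy as np

# three orthogonal "training points" with different norms (illustrative)
X = np.eye(3) * np.array([1.0, 2.0, 3.0])[:, None]

# hyperbox vertices = partial sums of training points, one per subset
vertices = [X[list(S)].sum(axis=0)
            for r in range(4)
            for S in itertools.combinations(range(3), r)]
assert len(vertices) == 2 ** 3  # 8 vertices, including the origin
```

The empty subset gives the origin and the full subset gives the sum of all points; the summary's claim is that score flow converges to one of these vertices.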
Claims And Evidence: All claims are stated clearly and have been rigorously proved. Any assumptions made for the theory are well discussed. The theory is then backed by sufficient experiments which extend to a more practical setting.
Methods And Evaluation Criteria: The stable point analysis used to prove the claims gives a good intuition for convergence and the approximations used to simplify the analysis are justified.
Theoretical Claims: I did not check the proofs in the appendix.
Experimental Designs Or Analyses: The main experiments are with carefully constructed data and models in line with the theory proposed. There are supplementary experiments which relax the assumptions and demonstrate that the theory still holds with some error.
Supplementary Material: I read the appendix to some extent.
Relation To Broader Scientific Literature: - They move beyond Zhang & Pilanci's (arXiv 2024) work on univariate data by considering high-dimensional data belonging to a simplex and showing convergence using probability flow and score flow instead of Langevin Dynamics; thus showing different convergence guarantees depending on the inference procedure used.
- Carlini et al. (USENIX Security 2023) empirically showed that training data could be extracted by locating stable points. This work significantly advances this by providing theoretical guarantees that training points are stable points for certain arrangements of data (i.e., orthogonal data).
- There have been recent studies into the relationship between memorization and generalization. This work suggests that generalization improves with the number of training points, but memorization persists (training points are still stable but we are less likely to converge to them). This is a fascinating contribution as it is often thought that memorization implies a lack of generalization.
- The theoretical results rely heavily on Zeno et al. (NeurIPS 2023) which provides an explicit form of the minimum MSE estimator for particular arrangements of data points. This is unfortunate, as extending this work to alternative arrangements than those considered would require also extending the work of Zeno et al. This point is also noted below in weaknesses.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Strengths:
- The paper provides a rigorous analysis of the stable points of probability flow and the score flow, this contributes greatly to our understanding of where a diffusion process consisting of a sequence of denoisers converges to.
- The results focus on the impact that training data has on the final convergence points of the diffusion process; effectively explaining what forms the new data produced by the diffusion process can take on.
- The paper presents an interesting view that generalization in diffusion models is the result of having a larger proportion of stable points which are not the training data.
- The stable point analysis is nice and relatively easy to follow. The diagrams help demonstrate what is happening.
- Experiments align closely with the theory considered and help consolidate the conclusions.
Weaknesses:
- Analysis is restricted to the "low-noise regime", but this is not mentioned in the limitations section. Specifically, for every experiment the "low-noise regime" was supposedly enforced, but it is unclear how. Line 247 mentions 1/3 of the denoisers used are trained with this low-noise regime, but does not allude to how this might differ in practice. Are we guaranteed to have some denoisers in the "low-noise regime"?
- Experiments are restricted to synthetic, orthogonal data. Since diffusion models are often used on image data, some discussion of how the results apply to, or can be understood on, image data would be useful.
- The paper lacks discussion about the consequences of these results in a practical setting. The results may not precisely hold in such a setting, but what influence do they have? Do they provide intuition about the result of a diffusion process in general?
- The theoretical results rely heavily on Zeno et al. (NeurIPS 2023). This is unfortunate, as extending this work to alternative arrangements of data than those considered would require also extending the work of Zeno et al.
Other Comments Or Suggestions: - For the diagrams in Figure 3, if Virtual pts. were above Training pts., I think it would be easier to see that the generalization increases and that a larger percentage of samples converge to the virtual and boundary points.
Questions For Authors: 1. Do you have an intuition as to why we never converge inside the hyperbox? What would the data look like there? Is it not meaningful with respect to the training points?
2. What intuition can your results tell us about diffusion models used in practice? For example, what does the hyperbox look like for image data, can the results provide intuition about the images diffusion models converges to?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback and are glad that they find this a fascinating contribution. The interplay between memorization and generalization is indeed intriguing.
# Low-Noise Regime Details
Theoretically, the boundary of the low-noise regime is determined by the diffusion time scheduler, the number of timesteps, $T$, the training data, and the noise realizations. In the experiments, we approximate this boundary by considering a ball radius equal to 4 standard deviations. Specifically, we first calculate the variance schedule of the $T=100$ diffusion timesteps. Then, we check the minimal distance between training points. We consider the low-noise regime boundary to be an eighth of this distance. Once we have this boundary, we extend the variance schedule to include at least $50$ variance values below the boundary.
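A minimal sketch of this boundary computation (the geometric form of the noise schedule and its endpoints are assumptions for illustration; the 1/8 factor, $T=100$, and the requirement of at least 50 low-noise levels follow the description above):

```python
import numpy as np

def low_noise_boundary(X, T=100, sigma_max=1.0, sigma_min=1e-3, min_low=50):
    """Sketch of the procedure described above. The geometric noise
    schedule and its endpoints are assumptions; the 1/8 factor and the
    minimum of 50 low-noise levels follow the text."""
    # minimal pairwise distance between training points
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_min = dists[np.triu_indices(len(X), k=1)].min()
    boundary = d_min / 8.0

    # noise schedule over T diffusion timesteps (assumed geometric)
    sigmas = np.geomspace(sigma_max, sigma_min, T)

    # extend the schedule until at least `min_low` noise levels
    # fall below the low-noise boundary
    ratio = sigmas[-1] / sigmas[-2]
    while (sigmas < boundary).sum() < min_low:
        sigmas = np.append(sigmas, sigmas[-1] * ratio)
    return boundary, sigmas

# toy orthogonal training set: the standard basis of R^4
boundary, sigmas = low_noise_boundary(np.eye(4))
```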
Since we are discretizing an ODE, as long as we are using small enough timesteps, we would get the same results. We will clarify this in the revised manuscript.
In practical models, we are guaranteed to have some denoisers in the low-noise regime, since the final sampled point is obtained when $t \approx 0$, where the noise level is inherently small.
Additionally, the low-noise regime is practically important for diffusion sampling, as the "useful" part of the diffusion dynamics occurs primarily when the noise level drops below a certain critical threshold [R1].
# Implications for Practical Image Diffusion Models
In practice, natural images lie on a low-dimensional linear subspace due to their approximate low-rank structure (see Zeno et al. (2023), Appendix D3). As a result, the denoiser first contracts the data towards this subspace (see Theorem 2 in Zeno et al. (2023)). If the subspace has sufficiently high dimension, the data points become approximately orthogonal (following the $\exp(D) \gg N > D \gg 1$ argument in our response to Reviewer SMjF under “Orthogonal Dataset Choice”). Consequently, the geometric structure in high-dimensional image spaces might resemble the hyperbox studied in our work, though restricted to a subspace, and with slightly acute angles rather than strictly orthogonal ones.
# Reliance on Zeno et al. (NeurIPS 2023)
We acknowledge the reliance on Zeno et al. (NeurIPS 2023), but we believe that advancing the understanding of diffusion models remains valuable, even within these constraints. We believe that exploring alternative data arrangements is an interesting and challenging direction for future work.
# Fig. 3 Points Order
Thanks for this valuable suggestion, we will swap the order in the revised paper.
# Convergence Inside the Hyper-Box
We do not converge inside the hyperbox because, as shown in Fig. 8 (Appendix F), the score function trajectory, both inside and outside the hyperbox, always points toward its boundary. As a result, the probability flow ODE leads to convergence at the hyperbox boundary rather than within its interior.
[R1] Gabriel Raya & Luca Ambrogioni. Spontaneous symmetry breaking in generative diffusion models. NeurIPS, 2023. | Summary: Commonly, diffusion models use the probability flow ODE to generate high-quality images. The score (or reverse) SDE can, of course, be used to generate images as well, and its behavior is understandable since it is derived from the forward SDE. However, the distinction between the probability flow and the score SDE remains unclear, regarding their ability to generate high-quality images and to duplicate training points. To address this distinction, and to further elucidate the mechanism of generating from the probability flow ODE, this work studies the probability flow of shallow ReLU neural network denoisers trained with minimal $\ell_2$ norm on datasets where the actual minimizers can be solved for. Interestingly, the work finds that probability flow reaches more general manifold points, while score flow (SF) more often converges to training points or virtual points (sums of training points).
Claims And Evidence: ## Claim
The claims made by this paper are theoretical and backed by simulation results on simple, solvable datasets in conjunction with the proofs provided by the authors. These claims are:
(1) On clean and orthogonal data, both score and probability flow samplings follow a similar trajectory for a given initialization point.
(2) However, sampling with probability flow can reach more general manifold points due to the early stopping in the scheduler. In other words, sampling with probability flow can reach both training points and virtual points (which are sums of the training points). In the hyper-box data, such virtual points can exist on the boundary of the hyper-box. Although the score flow ODE can converge to training points, this rarely happens; it is more likely to converge to virtual points. Meanwhile, the probability flow ODE tends to converge to virtual points at the boundary.
## Evidence
(1) To establish the first claim, closely following the work of [Zeno et al. (2023)](https://openreview.net/forum?id=gdzxWGGxWE), the authors first define the problem setting and the representation cost $R(\mathbf{h})$, where $\mathbf{h}$ is the **predicted training point** given a perturbation $\mathbf{y}_t$ of a training point $\mathbf{x}_i \in \{ \mathbf{x}_n \}^{N}_{n = 1}$ which it tries to predict.
This representation cost $R$ focuses on the minimization of the $\ell_2$ regularization term $C(\theta)$ from the following MSE loss:
$\mathcal{L} (\theta) = \frac{1}{M N} \sum^M_{m = 1} \sum^N_{n = 1} \lVert \mathbf{h} (\mathbf{y}_{n, m}) - \mathbf{x}_n \rVert^2 + \lambda C(\theta)$
where $\mathbf{y}_{n,m} = \mathbf{x}_n + \epsilon_{n, m}$ in which there are $m = 1, \dots, M$ perturbations and $n = 1, \dots, N$ training samples.
The minimizer of $R$ is: $\quad \quad \mathbf{h}^* \in \underset{\mathbf{h}}{argmin} \ R(\mathbf{h}) \quad $ s.t. $\quad \mathbf{h}(\mathbf{y}_{n, m}) = \mathbf{x}_n \quad \forall n, m$
but for the multivariate (more general) case, the constraint $\mathbf{h}(\mathbf{y}_{n, m}) = \mathbf{x}_n$ is modified into $\mathbf{h}(B(\mathbf{x}_n, p)) = \{ \mathbf{x}_n \}$. $B(\mathbf{x}_n, p)$ denotes a ball centered at the training point $\mathbf{x}_n$ with radius $p$. Thus, the minimizer setup, which the analysis focuses on, is
$\mathbf{h}_p^* \in \underset{\mathbf{h}}{argmin} \ R(\mathbf{h}) \quad $ s.t. $\quad \mathbf{h}(B(\mathbf{x}_n, p)) = \{ \mathbf{x}_n \} \quad \forall n$
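To make the objective concrete, the loss $\mathcal{L}(\theta)$ above can be evaluated directly; a minimal sketch with a randomly initialized single-hidden-layer ReLU denoiser (the sizes and the squared-weight-norm form of $C(\theta)$ are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, M, H = 4, 3, 5, 16   # data dim, training points, perturbations, hidden units
lam = 0.1                  # regularization weight lambda

# orthogonal training points and perturbations y_{n,m} = x_n + eps_{n,m}
X = np.eye(D)[:N]                                          # (N, D)
Y = X[:, None, :] + 0.05 * rng.standard_normal((N, M, D))  # (N, M, D)

# shallow ReLU denoiser h(y) = W2 relu(W1 y + b1) + b2
W1 = rng.standard_normal((H, D)) / np.sqrt(D)
b1 = np.zeros(H)
W2 = rng.standard_normal((D, H)) / np.sqrt(H)
b2 = np.zeros(D)

def h(y):
    return np.maximum(y @ W1.T + b1, 0.0) @ W2.T + b2

# MSE term: (1 / MN) sum_{m,n} ||h(y_{n,m}) - x_n||^2
mse = np.mean(np.sum((h(Y) - X[:, None, :]) ** 2, axis=-1))
# C(theta) taken here as the squared l2 norm of the weights (an assumption)
C = np.sum(W1 ** 2) + np.sum(W2 ** 2)
loss = mse + lam * C
```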
As detailed in Section 4, using the hyper-box as the data, the authors demonstrate that the score flow converges to the vertex (or training point) of the hyper-box closest to its initialization point (see Theorem 4.3 and Appendix D.2). Based on Eq. (18), it is demonstrated that both flows share a similar trajectory, in which they can arrive at similar stationary points.
(2) In the case of probability flow on the hyper-box, there is a $p_t$ scheduler which removes the guarantee of landing on the closest vertex. Thus, it can instead more often land on a boundary point near that vertex. **This directly supports claim (2) that probability flow can reach more general manifold points due to early stopping in the scheduler**. The authors formalize this through theoretical analysis showing that while the score flow ODE tends to converge to virtual points (or sums of training points), the probability flow ODE has a higher likelihood of converging to points on the boundary of the data manifold.
The empirical evidence in Figure (2) clearly illustrates this distinction, showing that probability flow sampling more frequently lands on boundary points, whereas score flow predominantly lands on virtual points. This systematic difference in convergence behavior confirms the theoretical prediction that probability flow can reach both training points and a broader set of manifold points (including boundary points), while score flow is biased toward virtual points.
These aspects are also established in two more data settings, obtuse-angle dataset and equilateral triangle dataset.
(3) The fraction of samples converging to training points decreases for both flows as there are more training points. Partly, this is due to the creation of more virtual points. This fact is demonstrated in Figure (3), which details the fraction of training points, boundary points, and virtual points as the training data size $N$ increases. Although the figure only shows up to $N = 30$, as $N \rightarrow \infty$ it is likely that the models will generate more virtual points than training data.
## Strength
(1) The methodology of using solvable shallow ReLU networks allows for precise mathematical characterization of the sampling behaviors, leading to provable results rather than just empirical observations. This approach enables the authors to establish clear connections between score and probability flows and their trajectory in generation.
(2) The findings regarding virtual points and boundary points have significant implications for diffusion model's dynamics. Specifically, by revealing how probability flow can reach more general manifold points while score flow more often converges to training or virtual points, the paper provides some insights on memorization and generalization in diffusion models.
(3) Overall, the paper is presented in an accessible and engaging manner, which makes it approachable for both newcomers to the field and experts. Frankly, I did enjoy reading the paper.
## Weakness
(1) The paper's analysis is limited to simplified data distributions and a shallow ReLU network (with a single hidden layer), which may not generalize well to complex, high-dimensional data distributions used in practical applications. This restriction to tractable theoretical settings leaves questions about whether the conclusions apply to modern deep diffusion models which operate at a much higher dimension and learn more complex distributions.
(2) Furthermore, the analysis of this work relies heavily on the work done by [Zeno et al. (2023)](https://openreview.net/forum?id=gdzxWGGxWE), which assumes a full $d$-dimensional ball centered around each training data point. In real-world scenarios, the number of noisy samples is typically smaller than the input dimension $d$.
(3) Moreover, the paper's analysis does not include manifolds with curvature. Many real-world datasets, especially in high-dimensional spaces, exhibit complex geometric properties that cannot be captured by simple Euclidean geometry. It would be important to extend the analysis to account for such manifolds, as this could provide a more accurate and robust understanding of how score and probability flows behave in high-dimensional, non-Euclidean spaces.
## Questions
(1) Can this study be expanded to a low-dimensional hypersphere or a circle? Or a manifold with curvature?
(2) If dropout was to be included to the model, do you think that there will be more virtual points generated?
(3) In the empirical results section, Figure (2) demonstrates the behavior of probability and score flows for the hyper-box dataset. Would it be possible to provide similar visualizations for the obtuse-angle and equilateral triangle datasets to further highlight the distinctions between these flows and how they behave on different data?
Methods And Evaluation Criteria: The experiments used to confirm the theoretical claims are sound, and I do not see any problems with them. However, I would like to see experiments on manifolds with curvature.
Theoretical Claims: Yes, I did check the theoretical claims, which I mentioned above, made by the paper.
Specifically, I spent a lot of time checking proofs behind Theorems (4.3 and 4.4). Such proofs are located in entirety of Appendix D.
Overall, I do not see any problems with the proofs in the main text and appendix as they were formatted and written decently.
Experimental Designs Or Analyses: I find no problems with the experimentation, including the designs and analyses. However, I do find the experimentation still somewhat lacking. Also, the experimentation detailed in Figures (12) and (13) feels redundant to me.
Supplementary Material: Yes, I reviewed the entire Appendix. I focused mostly on Appendix D to understand the proofs for the theorems delineated in the main text. Additionally, I also spent a decent amount of time on sections (B, C, and F) of the Appendix to understand the additional experimentation on obtuse angle and equilateral triangle datasets.
Relation To Broader Scientific Literature: This work is quite valuable, in my opinion. Firstly, it provides an analysis which differentiates the PF and score flow ODEs regarding their dynamical behavior. This distinction can be related to memorization and generalization (if done correctly). Specifically, the work provides some results on the fractions of sample types (e.g., virtual points and training points) generated from the diffusion model, depending on which flow is utilized.
Moreover, with these findings, I believe the authors can relate them to the aspects of Hopfield models and theory as well. Specifically, both virtual points and boundary points could be seen as spurious patterns, but as the training data size increases, most virtual points become generalized patterns. See [Pham et al. (2024)](https://openreview.net/pdf?id=zVMMaVy2BY).
Essential References Not Discussed: A tangential study of memorization in terms of the manifold --- [Ross et al. (2024)](https://arxiv.org/pdf/2411.00113)
A tangential study of the dynamical trajectory in diffusion models --- [Biroli et al. (2024)](https://www.nature.com/articles/s41467-024-54281-3)
A tangential study of generalization in diffusion models via the aspects of convolution ---
[Kamb and Ganguli (2025)](https://arxiv.org/pdf/2412.20292)
A tangential study of memorization and generalization in diffusion models via the lens of Hopfield models --- [Pham et al. (2024)](https://openreview.net/pdf?id=zVMMaVy2BY)
A tangential study of loss in manifold dimension which leads to generalization in diffusion models --- [Achilli et al. (2024)](https://arxiv.org/abs/2410.08727)
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: See Above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s kind words. We are glad to hear that the paper is presented in an accessible and engaging manner and that it was enjoyable to read.
# Application to Higher Dimensional Data
Indeed, we make simplifications to allow tractability. Please note, however, that while $N>D$ makes it impossible for the dataset to be exactly orthogonal, in the realistic regime $\exp(D)\gg N>D\gg 1$ most pairs are nearly orthogonal for standard Gaussian data. We believe this is a main reason why the orthogonality assumption is reasonable and common in high-dimensional theoretical analysis.
Please see more details in our response to Reviewer SMjF under “Orthogonal Dataset Choice”.
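This concentration of pairwise angles is easy to check numerically; a minimal sketch (the dataset sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_abs_cosine(N, D):
    """Largest |cosine similarity| over all pairs of N standard
    Gaussian points in dimension D."""
    X = rng.standard_normal((N, D))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    G = X @ X.T
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

# pairwise cosines concentrate around 0 at rate ~ 1/sqrt(D),
# so for D >> 1 (even with N > D) most pairs are nearly orthogonal
worst_low = max_abs_cosine(N=20, D=10)
worst_high = max_abs_cosine(N=2000, D=1000)
```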
# Manifolds With Curvature
We appreciate this insightful suggestion. Our theoretical analysis is based on Euclidean geometry and does not easily extend to nonlinear manifolds with curvature. Investigating how score and probability flows behave in such spaces is indeed an important direction for future research. Additionally, identifying meaningful experimental quantities analogous to those we defined (e.g., virtual points) in curved manifolds poses an interesting challenge.
# Dropout Effect
The impact of dropout on the generation of virtual points depends on several factors, including whether it is applied to the input, hidden layers, or both, as well as the dropout rate. Could the reviewer kindly elaborate on the specific dropout setting they have in mind?
# Results for Additional Datasets
We thank the reviewer for the suggestion. We will add additional visualizations to the appendix and highlight these distinctions in the revised version of the paper.
# Connection to Hopfield Models
We appreciate this insightful suggestion. In the revised version of the paper, we will incorporate the Memorization, Spurious, and Generalization metrics into our experimental results.
# Essential References
We thank the reviewer for these references; we will discuss these related works in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your response. At first, I was a bit disappointed by your response to my comments, since it was brief. But I have since looked at your responses to the other reviewers. Although I do not like looking at responses to other reviewers, since I am afraid they might bias my opinion, I think your responses were good.
Thus, I will raise my score from 3 to 4 and provide my comments and suggestions below.
Firstly, I would like to justify my increase. I think this paper provides a valuable perspective on the trajectory behavior of the score SDE and PF ODE, regarding why one may work better than the other. With the current results alone, I think a score of 3 would be satisfactory, since the story they tell is still weak. **But I will take my chance with the authors and trust that they will add additional experiments and make connections with Hopfield models regarding memorization, generalization, and possibly the emergence of spurious points as a way to contrast boundary and virtual points. I think this would be very interesting!**
Regarding **dropout**, it would be interesting to see its effects applied on the hidden layers.
Best of luck.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s trust and the thoughtful suggestions. In the short time we had, we were able to produce the following additional results.
**Simulation Details**
We computed the metrics from Pham et al. (2024) using the same setup as in Figure 2: the training set contains 31 orthogonal points, and we use the same 500 sampled points as the evaluation set (so $|S|=31$, $|S^\text{eval}|=500$ in the notation of Pham et al.). We constructed the additional set $S'$ such that $|S'|=100\times|S|$, ensuring $S'$ is much larger than the training set (following the guidelines set by Pham et al.). Both $S'$ and $S^\text{eval}$ were sampled using probability flow. We used the $L_{\infty}$ metric as in Figure 2, and will include additional thresholds and distance metrics in the revised version.
Our goal is to compare Pham et al.’s classification of evaluation points into *memorization*, *spurious*, and *generalization* categories with our own categorization into *training points*, *virtual points*, and *hyperbox boundary points*. We set $\delta_m=0.2$, corresponding to the threshold used in our Figure 2 ($L_{\infty}<0.2$). We used $\delta_s=0.15$; additional values will be included in the revised paper.
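Our point categorization can be sketched as follows (a minimal sketch; function and variable names are illustrative, and the exhaustive enumeration of virtual points is feasible only for small $N$):

```python
import numpy as np
from itertools import combinations

def categorize(samples, X, thr=0.2):
    """Label each sample as a training point, a virtual point (a sum of
    two or more training points), or a boundary point otherwise, using
    the L_infinity metric with the threshold from Figure 2."""
    # enumerate all 2^N - N - 1 virtual points (small N only)
    virtual = np.array([X[list(idx)].sum(axis=0)
                        for r in range(2, len(X) + 1)
                        for idx in combinations(range(len(X)), r)])
    labels = []
    for s in samples:
        if np.min(np.max(np.abs(X - s), axis=1)) < thr:
            labels.append("training")
        elif np.min(np.max(np.abs(virtual - s), axis=1)) < thr:
            labels.append("virtual")
        else:
            labels.append("boundary")
    return labels

X = np.eye(3)  # toy orthogonal training set
samples = np.array([X[0] + 0.05, X[0] + X[1], [1.0, 0.5, 0.0]])
labels = categorize(samples, X)
```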
**Results:**
* **Points marked as memorized** by Pham et al.:
* 100% are marked by us as training points
* **Points marked as spurious:**
* 52.94% are marked by us as virtual points
* 0.98% are marked by us as training points
* 46.08% are marked by us as boundary points
* **Points marked as a generalization:**
* 11.35% are marked by us as virtual points
* 4.86% are marked by us as training points
* 83.78% are marked by us as boundary points
These results suggest that many virtual points are classified as *spurious* under the Pham et al. criteria.
This matches our analysis: virtual points are stable, stationary points of the score flow; therefore, given a large evaluation set, we will have a cluster of points near the virtual points, which matches the spurious definition.
However, due to the exponential number of virtual points, we cannot expect a cluster to form around each of them. As a result, some virtual points are also classified as "generalization" under these metrics. | Summary: The authors analyze the behaviour of score and probability flow ODEs when the score is estimated using min-cost shallow neural networks and restricted datasets. Under these assumptions, they derive theoretical results which find that the stationary points of the PF-ODE and score flow ODE consist of summations of training set elements. Empirically, they confirm their theoretical findings by demonstrating the stability of the predicted stationary points and that PF-ODE and score flow ODE sampling converge to these points.
## Update after Rebuttal
I'd like to thank the authors for performing additional experiments on my behalf. I sincerely appreciate the effort to produce these results in such a short amount of time.
I am fairly conflicted about this paper. I remain skeptical that these results are truly useful to the community. From the authors' latest reply, and simple linear algebra, for $N \geq d$ any sample can be decomposed into a linear combination of the training set points, so I struggle to determine what takeaways these findings have for practical diffusion applications. However, the theoretical results of this paper are sound and are empirically well supported by the paper's experimental results. I have decided to update my score to a 3.
Claims And Evidence: The only claim which I take issue with is the implied similarity between convergence to virtual and boundary points under these restrictive assumptions and the combination of semantic components of images as observed in Stable Diffusion outputs (see abstract, L236). This claim is made with little evidence, as to my understanding this work predicts convergence to hyperbox boundaries or vertices. It is not clear to me that "semantic sums" are equivalent to the dimension-wise sums of this work.
Methods And Evaluation Criteria: The proposed evaluation methods are reasonable tests of their theoretical predictions
Theoretical Claims: My main concern regarding theoretical claims is equation (17) - the minimizer of the constrained optimization problem when trained on orthogonal data, i.e., $x_i^\top x_j = 0$. The authors cite Theorem 3 of Zeno et al. 2023 as the source of this minimizer. However, Theorem 3 of Zeno et al. has the condition $x_i^\top x_j < 0$. Could the authors please clarify why this relaxation is acceptable?
Experimental Designs Or Analyses: A key assumption of this work is that the denoiser should fit the training data exactly. While this is completely reasonable for low noise levels where the modes of the dataset do not overlap, I am concerned about the training of the 100 other denoisers outside of the "low-noise regime". In this regime, a well trained denoiser should not exactly fit data points, but the posterior mean (ie eq. 2), which for a finite training set is a simple weighted average of the training points. A denoiser in this regime constrained to fit the training points exactly is therefore not a reasonable substitute for a well-trained diffusion model.
I am concerned that constrained optimization of the non "low-noise regime" denoisers may affect the conclusions reached from the empirical results. For example in Figure 6, with unconstrained optimization of the denoiser, no convergence to virtual points is observed in contrast to Figure 2 (b).
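For reference, the posterior mean for a finite training set mentioned above is a softmax-weighted average of the training points; a minimal sketch (names are mine, not the paper's):

```python
import numpy as np

def posterior_mean_denoiser(y, X, sigma):
    """MMSE denoiser E[x0 | y] when the prior is the empirical
    distribution over training points X and y = x0 + N(0, sigma^2 I):
    a softmax-weighted average of the training points."""
    logw = -np.sum((X - y) ** 2, axis=1) / (2 * sigma ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ X

X = np.eye(3)                 # toy orthogonal training set
y = X[0] + 0.01 * np.ones(3)  # slightly perturbed training point

# low noise: the denoiser snaps to the nearest training point;
# high noise: it approaches the mean of the training set
low = posterior_mean_denoiser(y, X, sigma=0.05)
high = posterior_mean_denoiser(y, X, sigma=10.0)
```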
Supplementary Material: I reviewed the supplementary material sections A, B, D and F
Relation To Broader Scientific Literature: The contributions of this paper are somewhat complementary to the emerging literature on memorization in diffusion models, ie Somepalli et al. 2023 and Kadkhodie et al. 2024. However, the restrictive assumptions made in this paper make it somewhat less relevant to the general case of memorization/generalization in large scale diffusion models.
Essential References Not Discussed: The related work presented in this manuscript is to the best of my knowledge not missing any essential references.
Other Strengths And Weaknesses: Although I believe that the contributions of this paper are novel and generally well done, its main weakness is the strength of assumptions required to allow analytic tractability.
## Strengths
The problem of memorization in diffusion models is extremely relevant, and this paper's analysis of the probability flow ODE provides an interesting clue as to how memorization may occur in larger models. In addition, the presentation and writing are generally good. Finally, the empirical results generally confirm the predictions made by the theory and are sensible
## Weaknesses
The main weakness of this work is its restrictive assumptions. Specifically, the assumptions regarding orthogonality seem especially restrictive. In most deep learning settings (e.g., Stable Diffusion), the number of data elements far exceeds the data dimensionality. The theoretical results also depend on a constrained optimization procedure which does not reflect general practice for diffusion training. While I understand these assumptions are required for theoretical analysis, they also limit the contribution. Although the probability flow ODE results are potentially relevant to general diffusion sampling, these results are questionable due to the optimization procedure of the denoiser.
Other Comments Or Suggestions: While first reading this work, I was confused by the term "scheduler" which is referenced in the abstract and introduction but is only defined on line 152. I suggest the authors introduce this concept earlier or perhaps replace it with "diffusion time scheduler" or similar to avoid confusion with other schedulers such as optimization schedules which can also have early stopping.
Questions For Authors: I've repeated the first two questions from prior sections of my review here for clarity
1. Regarding the min-cost solution of equation (17), why is the orthogonality assumption reasonable when Zeno et al. 2023 requires obtuse angles between data points?
2. When training denoisers outside of the low-noise regime, why should denoisers be constrained to exactly fit the training data?
3. On line 123(R) you state "Notably, in contrast to the probability flow ODE, the min-cost denoiser here is independent of t" Could you clarify why the min-cost denoiser is dependent on the choice of ODE you wish to solve?
4. In your empirical results, you use 0.2 as a threshold for matching. Does this threshold have significance?
5. Across all of your experiments you find that samples can be categorized as Boundary, Training Pts. or Virtual Pts. Did you find any samples which fell outside of these classes?
6. In Figure 3 you evaluate the fraction of virtual points for $N \in \{10, 15, 20, 25, 30\}$. How does the sample distribution change as you extend to $N > d$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback.
# Connection to Semantic Sums
We are sorry for the lack of clarity. We did not aim to claim that the virtual points in orthogonal data must be equivalent to combinations of semantic components of images as observed in Stable Diffusion. This is only maintained as a motivation for virtual points. We agree that the current phrasing in the abstract may be confusing, and will change this in the revised paper.
# Relaxation of Obtuse Angles
The same arguments used in the proof of Theorem 3 in Zeno et al. (2023) can be applied to prove equation (17). The requirement for strictly obtuse angles (i.e., $x_i^Tx_j < 0$ instead of $x_i^Tx_j \leq 0$) in Zeno et al. (2023) is only made specifically to ensure the uniqueness of the solution. The proof of that theorem, therefore, applies to the orthogonal case as well, just without uniqueness. In other words, the function in (17) is a minimizer of (8), but not necessarily the unique minimizer.
Specifically, the proof of Theorem 3 in Zeno et al. (2023) uses the fact that any unit active along two data points making an obtuse angle can be “split” into two units aligned with each data point with strictly lower representation cost. When the rays are orthogonal, the two “split” units may have the same representation cost as the original unit. So while the interpolant given by equation (17) is still a minimizer, it is no longer necessarily unique in the case of orthogonal data.
We will clarify this in the paper.
# Constraining Denoisers to Exactly Fit the Training Data Outside the Low-Noise Regime
Indeed, outside of the low-noise regime, the denoisers would not exactly fit the training data, and the min-norm denoiser in this case is not defined (as we can not get zero loss).
We use the same training regime for all denoisers for consistency. To complement that, we conduct additional simulations in App. E, where we train the neural denoisers using a standard training procedure using the Adam optimizer with weight decay, obtaining similar results. We hypothesize that this is because high-noise denoisers primarily influence the initial condition, before the critical noise threshold is crossed, and we start the “useful” final sampling phase [R1].
# Strength of Assumptions
Indeed, we make simplifications to allow tractability. Please note, however, that while $N>D$ makes it impossible for the dataset to be exactly orthogonal, in the realistic regime $\exp(D) \gg N>D\gg 1$ most pairs are nearly orthogonal for standard Gaussian data. We believe this is a main reason why the orthogonality assumption is reasonable and common in high-dimensional theoretical analysis.
Please see more details in our response to Reviewer SMjF under “Orthogonal Dataset Choice”.
# The Constrained Optimization Procedure
App. E holds additional simulations where we train the neural denoisers using a standard training procedure with the Adam optimizer, with or without weight decay regularization (which promotes min-norm results, but not forces it). Specifically, for weight decay, we tried $\lambda=0.25,0.5,1$. All values led to similar results, therefore we included in Fig. 6 only the case of $\lambda=0.25$. As can be seen, in the case of WD=0 we converge only to the training points or boundary points of the hyperbox, whereas for WD>0 we converge to virtual points as well, which aligns with the results achieved with the Augmented Lagrangian method.
# Diffusion Time Scheduler
Thanks for this suggestion, we will change the terminology in the revised paper.
# Min-cost Denoiser Independent of $t$
The time dependence of the min-cost denoiser is through $\rho$. Specifically, in Eq. 11 (probability flow ODE), the RHS includes a derivative w.r.t. $t$ of $\sigma_t^2$ times the score function, so by applying time re-scaling arguments we get a time-dependent $\rho$ and, therefore, a time-dependent denoiser. In Eq. 13, however, the RHS does not include a derivative with respect to $t$, so $\rho$ and the denoiser are time-independent.
# Significance of the 0.2 Threshold
We show the effect of changing the threshold values in App. F. Qualitatively, this does not drastically affect the results.
# Samples Beyond Training/Virtual/Boundary Points
No, all points converged to one of the 3 categories, which matches our theoretical analysis.
# Extending Fig. 3 to $N>d$
We thank the reviewer for this suggestion. In strictly orthogonal data, it is impossible to have $N>d$. If the reviewer has a different dataset in mind, we will be happy to train all our denoisers (150) on the relevant $N$ and $d$ on this new dataset, and include our findings in the revised paper. Note also that if we change the dataset, it can become challenging to define and find the virtual points.
[R1] Gabriel Raya & Luca Ambrogioni. Spontaneous symmetry breaking in generative diffusion models. NeurIPS, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you to the reviewers for their response to my initial review. Most of the items I raised have been reasonably addressed by your rebuttal. I am leaning towards increasing my score on this basis.
My only remaining item of concern is the applicability of your work to larger, non-orthogonal datasets. I have read your response to SMjF. I am satisfied that most pairs of data in this regime are nearly orthogonal and that this assumption is not as restrictive as I feared. However, I am still unsure of how to reconcile this near-orthogonality with your response to my question regarding Fig 3. for $N > D$. If you cannot define virtual points in this regime, I am concerned that the findings of this work are not useful for understanding generalization for general datasets. I would appreciate if the authors could expand upon how their findings can be understood in this setting.
In response to your request for a dataset, I would suggest perhaps augmenting your original orthonormal training set with additional samples randomly drawn from the surface of the hyperbox. Do you still find that training points exclusively converge to vertices and edges of the box in this case?
---
Reply to Comment 1.1.1:
Comment: **Defining Virtual Points**
We apologize for not being sufficiently clear. Virtual points are always easy to define conceptually, as well as for theoretical analysis, as "all points which are sums of clean training points".
The problem lies in how to decide whether a point 'x' observed in the result of a numerical simulation is "virtual", for non-orthogonal data. For example, if $N \gg d$ and the training points have acute angles, then to find if 'x' is a 'k'-order virtual point, we need to search over $\binom{N}{k}$ combinations of the training points, which can quickly become a prohibitively large number as k and N increase. For each combination, we need to decide if the point 'x' can be decomposed as a sum of these training points, up to some tolerances (which are required since we have a noisy simulation). Choosing good values for the tolerances is an interesting statistical problem when the angles are acute. Lastly, it is harder to visualize the virtual points in this case (we cannot simply project to the hypercube as we did here).
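To make the scale of this search concrete, here is a one-line check (our own arithmetic, with hypothetical N and k, not part of the original rebuttal):

```python
from math import comb

# Illustration of the combinatorial blow-up (hypothetical sizes): deciding
# whether 'x' is a k-th order virtual point requires examining all subsets
# of k training points out of N.
print(comb(100, 5))  # 75287520 subsets already for N=100, k=5
```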
**New simulation Details**
Following the reviewer’s suggestion, we augmented the original orthonormal dataset used in Figure 2 (which included 31 orthogonal data points in D=30) with additional random data points generated as follows:
* We sampled a random vector with i.i.d. elements from the uniform distribution [0.3,0.7]. This choice avoids the degenerate case where no denoisers are active in the low-noise regime.
* We then projected the vector on a random face of the hyperbox to ensure that the new random data points lie on the hyperbox surface.
We trained the same neural denoisers in Figure 2, using M=500 noisy samples. We considered two cases: N=40 and N=50. As in the original experiment, we used AL optimization with the $L_{\infty}$ metric and a 0.2 threshold.
**Simulation Results**
As can be seen from the results below, in these cases (when N>D), the probability flow almost exclusively converges to the hyperbox surface: to boundary points, to training points (the original orthogonal points or the new points on the surface), or to other vertices of the hyperbox (the original virtual points).
||$N=40$|$N=50$|
|----|----|----|
|hyperbox surface|99%|98.21%|
|original virtual points|2.4%|3.4%|
|orthogonal training datapoints|32%|19.6%|
|new random data points|14.6%|19%|
We will add to the revised paper the results for different metrics and thresholds. | Summary: The authors analyze memoization of diffusion models in shallow relu networks. To analyze this, they consider the probability flow ode, and obtain additional results by introducing a simpler score flow. Under some assumptions, they show that both ODE's have stationary points corresponding to the training points, and in some cases sums thereof. They verify their theoretical claims through various experiments on small orthogonal datasets and confirm their theory holds
## update after review
The authors addressed my concern about N > D well, and I have increased my score from 3 to 4 as a result
Claims And Evidence: It seems to me the claims match the evidence.
Methods And Evaluation Criteria: The evaluation is on simple datasets, but seems appropriate for the problem setting, i.e. a theoretical analysis of neural networks for diffusion modeling
Theoretical Claims: The theorems are well explained, and seem (to the best of my ability of judging) to be correct. I checked the correctness of the proofs in Section 3 in appendix A.
Experimental Designs Or Analyses: I think the choice of orthogonal dataset is interesting, but potentially understandable; please see my question under "strengths and weaknesses".
Supplementary Material: I checked supplementary material appendix A to review proofs.
Relation To Broader Scientific Literature: -
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Overall the paper is well written and good to follow, even for someone with limited knowledge of theoretical analysis of shallow neural networks. The theorems and proofs are well explained, and seem (to the best of my ability of judging) to be correct. While the problem set up is simple, it is understandable to the extent that a rigorous theoretical analysis of anything more complicated quickly becomes intractable, I do have some questions:
- The authors state that a standard normal distribution becomes more orthogonal under increasing dimension. While this is true, most machine learning datasets have N >> D, and can therefore not be orthogonal. Can the authors defend their choice of orthogonal dataset and why this would be relevant to actual datasets?
- Can the authors explain what they mean by "interpolates the training data"? Do the authors mean that in an open ball around a training point the denoiser maps to the training data? This seems to be what the definitions say but this is not what I would consider "interpolation".
- Can the authors elaborate whether virtual points correspond to generalization? If I understand the argument correctly, the authors interpret the training-data "copy-pasting" sometimes observed in diffusion models that way. Is it correct to understand this "copy-pasting" as the virtual training points discussed in the theory? Ultimately: are virtual points desirable, or not?
- Following up on this question, for an orthogonal dataset, what does it mean to "generalize"? Or perhaps a better question is, is the underlying distribution simply a categorical distribution?
- Is it possible to model the noise distribution as a true noise distribution rather than a fixed collection of points? Why did the authors choose to use a fixed set of points for the noise distribution?
Other Comments Or Suggestions: NA
Questions For Authors: See strengths and weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback.
# Orthogonal Dataset Choice
First, when $N>D$, it is indeed impossible for the dataset to be exactly orthogonal. However, for standard i.i.d. Gaussian data $x_n$, it is easy to show (e.g., using the analysis in Section 3.2.3 of [R1] and the union bound) that the largest cosine of the angle between two different datapoints is, with high probability,
$\max_{n\neq m} \frac{x_n \cdot x_m}{\|x_n\| \|x_m\|} \sim \sqrt{\frac{\ln N}{d}}$.
Therefore, in the realistic regime $\exp(D) \gg N>D \gg 1$, most pairs are nearly orthogonal. We believe this is a main reason why the orthogonality assumption is reasonable and common (e.g., [R2]) in high-dimensional theoretical analysis.
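As a quick illustration (our own NumPy sketch, not part of the original rebuttal; the sizes below are hypothetical choices in the regime $N > d \gg 1$), the claimed scaling of the largest pairwise cosine can be checked numerically:

```python
import numpy as np

# Sanity check: for i.i.d. standard Gaussian data, the largest pairwise
# cosine should scale roughly like sqrt(ln(N) / d).
rng = np.random.default_rng(0)
N, d = 200, 1000                  # hypothetical sizes, d >> 1
X = rng.standard_normal((N, d))
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
G = Xn @ Xn.T                     # pairwise cosines between datapoints
np.fill_diagonal(G, 0.0)          # exclude self-similarity
max_cos = np.abs(G).max()
print(max_cos, np.sqrt(np.log(N) / d))
```

The observed maximum cosine stays within a small constant factor of the predicted $\sqrt{\ln N / d}$, so most pairs are indeed nearly orthogonal.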
Second, to make sure we are on the same page, please note that our analysis goes beyond the orthogonal case and covers all the analytically solvable cases found in Zeno et al. 2023, though some of this analysis was relegated to the appendix. For example, in App. D (line 630), we extend our analysis beyond the strictly orthogonal case by considering the scenario where the convex hull of the training points forms an obtuse-angle simplex. We showed there that similar behavior emerges even when strict orthogonality does not hold, supporting the broader relevance of our findings. In the main paper, we indeed focused on orthogonal datasets: this was done to simplify the presentation, and since the solution is qualitatively similar to that in the obtuse angle case.
# Interpolation Meaning
Yes, by "interpolates the training data," we mean that the denoiser maps each noisy sample to its corresponding clean training point, which leads to a zero empirical loss. Solutions with zero empirical loss are commonly called interpolators in the ML theoretical literature (e.g., [R3]). As an approximation to this, we further assume that the denoiser maps the entire open ball centered around each clean data point to that clean point (this was also done and explained in Zeno et al.).
# Virtual Points & Generalisation
Yes, we view these virtual points as desirable, since they go beyond the empirical data distribution, and create new combinations not seen before in the training data. It is perhaps more accurate to view this on some scale: low-order combinations (e.g., which can be easily observed as “copy-paste”) may seem closer to memorization (“low creativity”), while high-order combinations show a higher degree of generalization (“high creativity”). We will discuss this interesting point in the paper.
# Generalisation in Orthogonal Datasets
In the case of an orthogonal dataset, we interpret the hyperbox boundary as an implicit data manifold, even though we do not assume an explicit sampling model that generates the training data (e.g., a distribution supported on the manifold). In this context, we define generalization as the ability to sample points from the hyperbox boundary.
# Noise Distribution Modeling
In practice, diffusion models are trained for a finite number of iterations using Gaussian noise. Consequently, during training, we effectively observe (different) noisy samples that all lie within an open ball around each clean data point (see additional details in sec. 3 in Zeno et al. 2023).
In the multivariate case, finding an exact min-cost solution for finitely many noise realizations is generally intractable. So, we assume that the denoiser maps each open ball to its corresponding clean data point.
[R1] Roman Vershynin. High-Dimensional Probability. 2019.
[R2] Andrew M. Saxe et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, 2013.
[R3] Siyuan Ma et al. The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. ICML, 2018. | null | null | null | null | null | null |
The Sparse-Plus-Low-Rank Quasi-Newton Method for Entropic-Regularized Optimal Transport | Accept (poster) | Summary: This paper is about solving entropic-regularized optimal transport problem via a quasi-Newton method. Entropic-regularized OT problem has applications in machine learning, but its solution is difficult to find. The paper first presents some theoretical analysis Hessian sparsification, which can be used to solve the OT problem in a more tractable way. The paper then presents a low-rank approximation to the Hessian matrix to facilitate computations. The paper later gives theoretical analysis of the method and validates its performance via experiments.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Not always, as the paper barely touches on machine learning.
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: It is related to relevant literature in using OT in machine learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The strength includes a comprehensive analysis of using sparse Hessian and a low-rank approximation in solving entropic-regularized OT problem and a large amount of experiments. Another strength is to set the \tau dynamically so when the solution is close to the minimum, \tau is updated more gradually.
Biggest weakness is the seeming lack of relevance of this paper to machine learning. While OT is indeed widely used in machine learning, this paper barely covers any machine learning techniques or applications. This makes the paper somewhat out of scope for this conference.
In Eq. (5), is the low-rank approximation always needed? If not, when is it needed?
In Eq. (6) and (7), how is u, actually u_k, determined? It seems it was never defined in the main text, though it was given in the supplementary material, and it seems u_k was a typo. And can the authors give some explanation about the meaning of a, b, u, v in Eq. (6) and (7)? That will help readers see the motivation behind the method.
The experimental results are not always clear to me. In Figure 1, especially second row, it seems the proposed SPLR method had similar run-time vs log10 Gradient norm curves as several existing methods.
In Figure 2, when the Hessian is not so dense, the advantage of SPLR method is more obvious. Considering Figure 1 and Figure 2 together, it seems to indicate the SPLR method works better if Hessian is not so dense in the first place.
Figure 4 seems to indicate that the regularization weight is critical in determining each method's speed of convergence. While SPLR gave better performance than some methods, it converged slower than a few other methods. If so, that means a careful selection of the regularization weight \eta is necessary, and this may increase the computational load of the proposed method (as well as the other methods). Figure 11 shows similar patterns.
Another concern is that SPLR seems to experience large oscillations in reducing the Log10 Gradient Norm; some other methods showed similar behavior, but this may raise the risk of early or incorrect stopping of iterations. Can the authors elaborate on this?
Other Comments Or Suggestions: Please see strengths and weaknesses.
Questions For Authors: Please see strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please see our point-by-point responses below.
---
> Not always, as ... on machine learning.
> Biggest weakness is ... of this conference.
We sincerely appreciate your feedback on our manuscript. Your observation regarding the limited discussion on machine learning is invaluable and has encouraged us to further explore this perspective to enhance our work.
To provide a more comprehensive overview, we have reviewed recent advancements in computational optimal transport (OT) within the machine learning community, particularly those presented at ICML. Our findings indicate a substantial body of literature [1-4] that focuses on this topic, underscoring its relevance within the field.
Furthermore, the [ICML 2025 Call for Papers webpage](https://icml.cc/Conferences/2025/CallForPapers) includes "Optimization (convex and non-convex optimization, matrix/tensor methods, stochastic, online, non-smooth, composite, etc.)" as a topic of interest, highlighting the strong connection between our research and ongoing discussions in the machine learning community.
Once again, we sincerely appreciate your insights, and we would add more discussions on the importance of OT optimization in the revised paper.
**References**
[1] Dvurechensky, Pavel, Alexander Gasnikov, and Alexey Kroshnin. "Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm." ICML, 2018.
[2] Lu, Haihao, Robert Freund, and Vahab Mirrokni. "Accelerating greedy coordinate descent methods." ICML, 2018.
[3] Lin, Tianyi, Nhat Ho, and Michael Jordan. "On efficient optimal transport: An analysis of greedy and accelerated mirror descent algorithms." ICML, 2019.
[4] Guminov, Sergey, et al. "On a combination of alternating minimization and Nesterov’s momentum." ICML, 2021.
> In Eq. (5), ... is it needed?
Thank you for your insightful question. We cannot guarantee that the low-rank approximation is always necessary. However, based on our numerical experiments, particularly in Figures 5 and 6, we have observed that adding the low-rank term generally leads to faster convergence of the algorithm. Specifically, when the curvature condition $(s^k)^T y^k > \varepsilon \|y^k\|^2$ holds, we consistently find it beneficial to include the low-rank term.
> In Eq. (6) ... behind the method.
Thank you for pointing this out! The correct notation should be $u = y_k$, and we will fix it in the main text.
As for the variables $(a, b, u, v)$ in Eq. (6) and (7), they are mostly notational: $u$ and $v$ represent two low-rank components, derived from the BFGS quasi-Newton method, and $a$ and $b$ are their respective scaling coefficients, determined by the secant equation (Eq. 6).
These terms collectively help ensure a more stable and efficient update, improving the convergence behavior of the method. We will expand the explanation in the main text to better highlight the motivation behind these choices.
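For readers unfamiliar with the BFGS update, the roles of $(a, b, u, v)$ can be sketched as follows (a minimal NumPy illustration of the classical rank-2 update, written by us rather than taken from the paper; variable names are ours):

```python
import numpy as np

# Classical BFGS rank-2 update: B_{k+1} = B_k + a*u*u^T + b*v*v^T,
# with the scalars a, b fixed so that the secant equation B_{k+1} s = y holds.
def bfgs_update(B, s, y):
    u = y              # first low-rank direction: gradient difference y_k
    v = B @ s          # second low-rank direction
    a = 1.0 / (y @ s)  # requires the curvature condition y^T s > 0
    b = -1.0 / (s @ v)
    return B + a * np.outer(u, u) + b * np.outer(v, v)

rng = np.random.default_rng(1)
n = 5
B = np.eye(n)
s = rng.standard_normal(n)
y = s + 0.1 * rng.standard_normal(n)  # keeps y^T s > 0 for this sketch
B1 = bfgs_update(B, s, y)
print(np.allclose(B1 @ s, y))  # True: the secant equation holds
```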
> The experimental results ... in the first place.
Thank you for your insightful observation. Indeed, our results confirm that the SPLR method is as good as alternatives in virtually all scenarios, and significantly outperforms others when the Hessian is sparse. This demonstrates the robustness of SPLR in worst cases and its high efficiency in favorable cases.
This behavior is due to the sparse-plus-low-rank design, which ensures stable and efficient performance across different Hessian structures.
> Figure 4 seems to ... shows similar patterns.
We appreciate your observation about the impact of the regularization weight $\eta$ on convergence speed. Indeed, as shown in Figures 4 and 11, the choice of $\eta$ significantly affects all methods, including ours. While our paper focuses on optimizing the OT problem *given* a fixed $\eta$, we acknowledge that selecting $\eta$ is critical in practice—though this lies outside our current scope. Investigating systematic strategies for choosing $\eta$ efficiently could be an interesting direction for future work.
> Another concern is ... elaborate on this?
Thank you for your valuable observation. We appreciate your concern regarding the oscillation in the Log10 Gradient Norm observed in SPLR. However, we would like to clarify that such oscillations do not pose a risk of early or incorrect stopping. In fact, the gradient norm serves as a rigorous stopping criterion. Specifically, when the gradient norm is reduced below a predefined threshold, we can theoretically prove that the optimality gap is also bounded by a corresponding threshold. This ensures that the stopping condition is reliable and provides confidence in the convergence of the method.
On the other hand, our method guarantees that the dual objective function value is nonincreasing across iterations, so if the stopping criterion is on the function value, it will not have any oscillation. | Summary: This paper proposes a Sparse-Plus-Low-Rank Quasi-Newton (SPLR)method for entropic regularized Optimal Transport (OT). The proposed algorithm improves the approximation of the Hessian matrix by adding a low-rank term, thus better solving the dense situation, effectively solving the entropic-regularized OT problem and additionally reducing the amount of computation. The theoretical analysis, experimental validation, and convergence guarantee in the paper are all sufficient, demonstrating the superior performance of the SPLR algorithm.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, all proofs.
Relation To Broader Scientific Literature: Quasi-Newton Method
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths: **
(1) The authors effectively enhanced the algorithm's efficiency by integrating a low-rank matrix into the sparse approximation framework for the Hessian, with rigorous theoretical analysis demonstrating both the convergence properties and the reduction in computational complexity.
(2) The empirical tests conducted on both synthetic and real datasets highlight the practical superiority of SPLR compared to conventional techniques such as Sinkhorn, L-BFGS, and SSNS. The ablation study, detailed in Appendix A.1, convincingly confirms the essential role of the low-rank component.
**Weaknesses:**
(1) It is mentioned that the algorithm has a lower time complexity, but it relies on the choice of sparse matrices during the computation process, and the paper does not provide specific theoretical analysis.
(2) The paper mentions that during the experimental process, the algorithm can achieve super-linear-like convergence speed, but in the theoretical proof, the algorithm only reaches linear convergence speed. What is the cause of this phenomenon? Is it possible to further derive corresponding results in theory?
Other Comments Or Suggestions: (1) It is mentioned that the algorithm has a lower time complexity, but it relies on the choice of sparse matrices during the computation process, and the paper does not provide specific theoretical analysis.
(2) The paper mentions that during the experimental process, the algorithm can achieve super-linear-like convergence speed, but in the theoretical proof, the algorithm only reaches linear convergence speed. What is the cause of this phenomenon? Is it possible to further derive corresponding results in theory?
(3) Although the paper provides a brief introduction to existing quasi-Newton algorithms, it does not compare their convergence rates with those of classical algorithms. It is recommended to add a comparison table to illustrate the superiority of the algorithm.
(4) How is the boundedness of H(x) ensured during the iteration process?
(5) In the article, a rank-2 approximation term was added to achieve a faster convergence rate. Can this approximation term be of a higher rank?
(6) Is there an error in line 16 of Algorithm 1?
Questions For Authors: Please see the above comments and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please see our point-by-point responses below.
---
### Weakness 1/Comment 1
We thank you for raising this important point about our complexity analysis. Let us clarify the theoretical aspects of our algorithm's computational efficiency:
1. **Theoretical per-iteration cost**: As noted in Section 5 (Lines 267-270), each iteration's dominant cost is the inversion $(H_{\Omega}^{k+1} + \tau_{k+1} I)^{-1}$ in the computation of $B_{k+1}^{-1}$. According to Chapter 10 in (Shewchuk, J. R., 1994), when it is solved using the conjugate gradient method, the time complexity is $O(\rho n m \sqrt{\kappa})$, where $\kappa$ is the condition number of $H_{\Omega}^{k+1}$.
2. **Convergence Guarantees**:
- Theorem 5.1 establishes linear convergence
- The convergence constant depends on $L$ and $U$
You are absolutely correct that the sparse pattern selection affects complexity, and we appreciate the opportunity to clarify this relationship. Our method provides explicit control over this through $\rho$ while maintaining theoretical convergence guarantees.
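To make the dominant per-iteration cost concrete, here is a minimal conjugate-gradient sketch (our own illustration, not the paper's code; a dense SPD matrix stands in for the sparsified Hessian, and all sizes are hypothetical):

```python
import numpy as np

# CG needs only matrix-vector products with the system matrix, and its
# iteration count grows like sqrt(kappa), matching the cited complexity.
def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
H = M @ M.T                      # SPD stand-in for H_Omega
tau = 1.0
g = rng.standard_normal(n)
x = cg(H + tau * np.eye(n), g)   # solve (H_Omega + tau*I) x = g
print(np.allclose((H + tau * np.eye(n)) @ x, g, atol=1e-6))  # True
```

In practice one would apply a sparse matrix-vector product instead of the dense `A @ p`, which is where the $O(\rho n m)$ factor comes from.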
### Weakness 2/Comment 2
We sincerely thank you for identifying this important gap between our theoretical and empirical convergence results. The observed superlinear-like behavior indeed requires deeper investigation, and we offer the following possible causes:
- *Approximation quality improvement*: As optimization progresses (especially near the solution), our SPLR approximation appears to capture increasingly accurate Hessian information. This is evidenced in Fig. 2 (columns 2-3) where the log gradient norm decreases sharply after certain iterations.
- *Hybrid behavior*: The method may initially follow linear convergence (as proved) but transition to superlinear once the approximation error becomes sufficiently small.
We greatly appreciate your suggestion to pursue this theoretically, as it could lead to significant new insights about Hessian approximation methods. The gap between our current theory and observations points to exciting future research directions.
### Comment 3
We thank you for this valuable suggestion. Below is a proposed comparison table we will include in the revised manuscript to clearly demonstrate the advantages of our SPLR method:
| Method | Convergence Rate | Per-Iteration Cost | Memory Usage | Hessian Approximation Type |
| --------------- | ------------------------------------------------ | --------------------------------------------------------------------------- | -------------------------- | -------------------------- |
| Newton's Method | Quadratic | $O(n^3)$ | $O(n^2)$ | Exact |
| BFGS | Superlinear | $O(n^2)$ | $O(n^2)$ | Dense low-rank update |
| L-BFGS | Linear | $O(nm)$ ($m$: memory) | $O(nm)$ | Limited-memory low-rank |
| SNS | Superlinear (conjectured) | $O(\rho(\varepsilon) n^2 \sqrt{\kappa})$ ($\varepsilon$: element threshold) | $O(\rho(\varepsilon) n^2)$ | Sparse only |
| SSNS | Quadratic | $O(\rho(\delta) n^2 \sqrt{\kappa})$ ($\delta$: error threshold) | $O(\rho(\delta)n^2)$ | Sparse only |
| SPLR | Theoretically linear;<br>empirically superlinear | $O(\rho n^2 \sqrt{\kappa})$ | $O(\rho n^2)$ | Sparse + Low-rank |
Your suggestion significantly improves our paper's ability to communicate the method's advantages, and we appreciate this constructive feedback.
### Comment 4
Thanks for raising this question. The basic idea is as follows: since our method guarantees that the objective function value is nonincreasing, the iterates $\{x_k\}$ are restricted to the level set $D = \{x: f(x) \leq f(x_0)\}$, which can be proved to be compact. Since $H(x)$ is continuous on $x$, it must be bounded on $D$. Of course, the bounds may depend on the initial value $x_0$.
### Comment 5
We thank you for this insightful technical question. Our rank-2 approximation is motivated by the rank-2 modification scheme in the BFGS quasi-Newton method, and can indeed be naturally generalized to higher ranks with all theoretical guarantees preserved. The question highlights an important implementation flexibility that deserves more explicit treatment, and we will adjust our manuscript accordingly.
### Comment 6
We thank you for pointing out this typo. It should be $auu^T + bvv^T$. | Summary: The paper concerns faster solver for the optimal transport (OT) problem, by proposing a new type of Hessian approximation within the quasi-Newton iterative solvers. Building on previous work that used sparse Hessian approximations, the authors introduce low-rank approximation added to the sparse format (hence sparse "plus" low rank in the title). The authors also provide convergence analysis, showing linear convergence.
Claims And Evidence: Authors claim rigorous proof for linear convergence, and demonstrate super-linear convergence in examples.
Methods And Evaluation Criteria: Authors demonstrate the performance of the new sparse plus low rank Hessian approximated quasi-Newton iterations by applying to benchmark problems. These make sense, although there is some room for more challenging problems.
Theoretical Claims: I only did a light check of the proofs, the claims seem to be reasonable and I did not spot any errors.
Experimental Designs Or Analyses: The experiments are standard benchmarks (MNIST, Fashion-MNIST, ImageNet, etc), I find them suitable for analyzing convergence rates.
Supplementary Material: I did a light review of the proofs.
Relation To Broader Scientific Literature: The existing methods have used low-rank approximations to the quasi-Newton iterations (e.g. BFGS) or sparse approximations, but not both. This work uses the sum of both types of approximations in a systematic way to accelerate convergence.
Essential References Not Discussed: I do not have any suggestions.
Other Strengths And Weaknesses: Strengths
The proofs rely on clever combinations of elementary linear algebraic results. Presentation is clear, the proofs are well-written.
Weaknesses
The overall motivation for seeking a sparse-plus-low-rank approximation could perhaps be explained for the benefit of the audience. Is there a more intuitive argument for why such an approximation can converge faster?
The authors do not discuss potential limitations of the method.
Other Comments Or Suggestions: minor notes:
- page 1: perhaps mention what a cost matrix is (or its form)?
- The notation $H^{k+1}_\Omega$ (e.g. page 5) is a bit confusing, since powers $(H_\Omega)^{k}$ also appear in the text. Perhaps there is a better version that keeps the iteration number as subscripts?
Questions For Authors: What is the overall motivation for using sparse approximations? I understand the low-rank Hessian updates are well-known to be effective in general, but sparse approximations seem special. Adding a sparse matrix to a low-rank one seems to resemble robust PCA; is there any insightful relation between robust PCA and this choice of Hessian approximation?
Should the addition of the low-rank part affect the choice of random selection criteria in sparse directions? I.e., is there a reason to stick with Algorithm 2 even though a low-rank term has now been added to the Hessian?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. We give our point-by-point responses below.
---
### Weakness 1
Thanks for raising this question on motivation. Our approach is motivated by two key insights:
1. **Problem-specific Sparsity**: The entropic-regularized optimal transport (OT) problem naturally exhibits an (approximately) sparse Hessian structure, particularly when $\eta$ is small. This sparsity is unique to OT and related problems.
2. **General Low-Rank Utility**: Low-rank approximations are successfully applied in quasi-Newton methods (e.g., BFGS, L-BFGS) because they efficiently capture dominant curvature information while maintaining computational tractability.
Our intuition is that the sparse-plus-low-rank (SPLR) structure may better preserve the problem's intrinsic geometry compared to pure sparse or pure low-rank approaches, leading to a faster convergence speed, as is shown in our empirical results. For example, if the Hessian "residuals" (i.e., the elements removed during sparsification) have some low-rank structure, then SPLR naturally captures this information.
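The computational appeal of the sparse-plus-low-rank structure can be sketched via the Woodbury identity, which reduces a solve with $S + UV^T$ to one solve with the sparse part plus a small $r \times r$ system (our own illustration, not the paper's implementation; a diagonal matrix stands in for the sparse part, and all names are ours):

```python
import numpy as np

def woodbury_solve(S_solve, U, V, g):
    """Solve (S + U V^T) x = g, given S_solve(.) applying S^{-1}."""
    r = U.shape[1]
    SiU = S_solve(U)                       # S^{-1} U, shape (n, r)
    Sig = S_solve(g)                       # S^{-1} g
    K = np.eye(r) + V.T @ SiU              # small r x r capacitance matrix
    return Sig - SiU @ np.linalg.solve(K, V.T @ Sig)

rng = np.random.default_rng(0)
n, r = 30, 2
S = np.diag(rng.uniform(1.0, 2.0, n))      # diagonal stand-in for the sparse part
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))
g = rng.standard_normal(n)
x = woodbury_solve(lambda M: np.linalg.solve(S, M), U, V, g)
print(np.allclose((S + U @ V.T) @ x, g))   # True: matches the direct solve
```

The only expensive step is the sparse solve; the low-rank correction costs $O(nr^2)$, which is negligible for small $r$.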
### Weakness 2
We thank you for highlighting this important consideration. Our primary limitation is that although superlinear-like convergence performance is observed, we currently lack formal theoretical justification for this observation. We will clarify such limitation in the revised manuscript.
### Note 1
We thank you for this helpful suggestion to improve clarity. In the introduction, we will add a brief explanation of the cost matrix in OT problems:
> The cost matrix $C \in \mathbb{R}^{n \times m}$ encodes the pairwise transportation costs between source and target distributions, where $C_{ij}$ represents the cost of moving one unit of mass from source location $i$ to target location $j$. For example, when comparing two discrete probability distributions over spatial positions, $C_{ij}$ might be the Euclidean distance $\| x_i - y_j \|^2$ between points $x_i$ and $y_j$ in the source and target domains, respectively.
We agree that further clarification will help readers, particularly those less familiar with OT, and will incorporate it into the revised manuscript.
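As a concrete illustration of the blockquoted definition, the cost matrix for toy 1-D points can be built directly (the point values here are hypothetical, chosen only for illustration):

```python
# Toy cost matrix per the definition above: C[i][j] is the squared
# Euclidean distance between source point x[i] and target point y[j].
# The point values are hypothetical, for illustration only.
x = [0.0, 1.0]        # source locations
y = [0.0, 2.0, 3.0]   # target locations
C = [[(xi - yj) ** 2 for yj in y] for xi in x]
print(C)  # [[0.0, 4.0, 9.0], [1.0, 1.0, 4.0]]
```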
### Note 2
We thank you for this observation, and appreciate the opportunity to clarify our notation system:
Indeed, we already use iteration number as subscripts:
- $H_k$: the true Hessian matrix at iteration $k$;
- $B_k$: the approximated Hessian matrix with SPLR at iteration $k$.
The point is that we use $H_{\Omega}^{k+1}$ to distinguish between the true Hessian and the sparsified ones. We acknowledge that better notations might be possible, but this is the clearest expression we have developed so far. When we need to express the power of $H$ (which only appears in **Assumption 3.2**), we have added a pair of parentheses around $H$. We will add a remark in the revised paper to avoid potential ambiguity.
### Question 1
We thank you for this excellent question. Here are our comments:
1. **Motivation**: See our response to **Weakness 1** above.
2. **Connection to robust PCA**: The reviewer astutely observes the conceptual parallel to robust PCA. Indeed, both approaches rely on a _sparse + low-rank_ decomposition to provide a structured approximation of a given matrix. However, while robust PCA—formulated as a _Principal Component Pursuit (PCP)_ problem [1]—aims to _recover_ the low-rank and sparse components through optimization, SPLR adopts a more _constructive_ approach, providing these components via closed-form solutions. Given that robust PCA has been shown to effectively capture the essential structure of a matrix, we argue that SPLR’s approximation should likewise preserve the key information in the Hessian, ensuring a well-justified representation. Importantly, this intuition is further supported by our experimental results, where SPLR shows a superlinear-like convergence speed.
Your comment has helped us recognize that this connection deserves more explicit treatment, and we appreciate the opportunity to elaborate on these important points.
### Question 2
We thank you for raising this important point. As noted, the low-rank term (Eq. 7) is computed from the sparsified Hessian $H_{\Omega}^{k+1}$, meaning the random selection in Algorithm 2 directly influences the low-rank approximation.
The rationale behind combining Algorithm 2 with the low-rank term is to strike a balance between computational efficiency and information retention. While Algorithm 2 deliberately sparsifies the Hessian to reduce computational cost—inevitably losing some structural information—the low-rank term serves as a compensatory mechanism. This hybrid approach ensures that we maintain a meaningful approximation to the Hessian while remaining computationally tractable.
[1] Candès, Emmanuel J., et al. "Robust principal component analysis?." Journal of the ACM (JACM) 58.3 (2011): 1-37. | Summary: The authors propose a quasi-Newton algorithm for solving entropic optimal transport (EOT) problems. The classical Sinkhorn algorithm enjoys linear convergence with a rate independent of the problem dimension but depends exponentially on the supremum norm of the cost function. There have been some recent works that aim to develop quasi-Newton methods for computing EOTs with super-linear convergence based on the idea of sparsely approximating the Hessian of the Kantorovich dual of EOT. The authors propose a flexible, theoretically justified sparsification scheme, and also combine the Hessian sparsification with low-rank Hessian approximation, similarly done as in L-BFGS. The authors deduce an asymptotic convergence guarantee and linear convergence rate for the proposed algorithm. The proposed algorithm seems to work very well in many synthetic and real-data experiments.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: **Thm. 3.3 and Cor. 3.4 on Hessian sparsification**
1. The authors state that these results are significant because they guarantee positive definiteness of the Hessian after sparsification. But since they only sparsify off-diagonal entries and all Hessian entries are positive, diagonal dominance (or Gershgorin's circle theorem) trivially implies that the Hessian remains positive definite after any sparsification pattern. It does not, however, directly imply that the condition number improves. I think this latter point should be the main emphasis of these results.
2. In the second and the third remark, the authors point out that their incremental sparsification decreases the Hessian's condition number, so it leads to numerical stability. While this is certainly true, there is no quantitative bound on the improvement of the condition number. If it is hard to justify theoretically, I recommend validating this claim by adding some experiments on plotting the condition number vs. iteration.
3. It is not clear why Assumption 3.2 is "very weak" at this point.
**Thm. 5.1, Cor. 5.2, Thm. 5.3 on the convergence guarantees**
1. These results are derived from standard analysis of quasi-Newton methods, as the authors indicate in the appendix. Namely, the authors take the quotient space of the dual variables by setting $\beta_{n}=0$ to get rid of the shift-invariance of the dual objective ($f(x)$ in their notation). While the dual objective is only convex and not strongly convex originally, after constraining on this subspace it becomes strictly convex. Further restricting this on the level set, one can obtain uniform bounds on the eigenvalues of the Hessian along the trajectory of the algorithm, provided that the objective values decrease monotonically.
However, this standard analysis does not give a linear convergence rate that is independent of the problem dimension. For instance, the rate $r$ in Thm. 5.3 may depend on $m$ and $n$, so the linear convergence stated there could effectively be sublinear for large-scale problems. The issue is that the condition number of the dual objective can be as large as order $n$. For instance, if at the current iterate the dual variables $\alpha_k$ and $\beta_k$ are the same (assuming $m=n$), then the $2n \times 2n$ full Hessian has a zero eigenvalue, since each diagonal entry is $Cn$, which equals the absolute row sum of the off-diagonal entries. Imposing $\beta_{n}=0$ removes one row from the (1,2) block and the corresponding column of the (2,1) block, and the resulting off-diagonal sum is $C(n-1)$. The maximum eigenvalue is still of order $n$, but the minimum eigenvalue is of order 1.
In the state-of-the-art analysis of Sinkhorn (e.g., Carlier 2022 SIOPT), one addresses such issue by directly showing that the Sinkhorn iterates are uniformly bounded (by using closed-form expression of Sinkhorn iterates) and strong convexity of the entry-wise dual objective (in this case the exponential function) on a compact interval. Can the authors adapt a similar strategy and get a dimension-independent linear convergence rate of their algorithm?
Experimental Designs Or Analyses: There are thorough synthetic and real-data experiments. Also, the additional experiment on the necessity of low-rank approximation provided in the appendix was very helpful.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The method for computing EOT provided in this work could potentially be very useful for other researchers and practitioners.
Essential References Not Discussed: Guillaume Carlier, "On the Linear Convergence of the Multimarginal Sinkhorn Algorithm", SIAM Journal on Optimization, Vol. 32, Iss. 2 (2022). DOI: 10.1137/21M1410634
Other Strengths And Weaknesses: **strengths**
1. The paper is relatively well-written and it was easy to follow.
2. The proposed algorithm seems to be performing very well in various tasks, outperforming many existing methods.
3. There are some theoretical guarantees on the Hessian sparsification and convergence guarantee.
**weakness**
1. Theoretical guarantees, especially for the ones for convergence, are weak. Please see the comments in the theory section.
2. L043 "Sinkhorn generality exhibits sub-linear convergence": I don't think this is quite right. Do the authors have a reference to back up this claim? Experimentally, in all plots Sinkhorn shows linear convergence. Theoretically, Sinkhorn enjoys linear convergence with a rate independent of the problem dimension but depending exponentially on the supremum norm of the cost function [Car 22]. Also, in the same paragraph the authors present optimizing the Kantorovich dual as a different approach from Sinkhorn, but Sinkhorn is just alternating maximization on the dual, which looks like matrix scaling in the primal space.
3. L125 and hereafter: Please just use $\nabla f(x)$ instead of $g(x)$.
4. L129: $\tilde{T}$ is not defined.
5. L163: The authors state that the existing Hessian sparsification by thresholding by values does not give a control on the density after thresholding. But one could easily threshold the top x % to get a desired density after thresholding, and this is essentially what the authors do.
6. Thm. 4.1: the set $\Omega^{*}$ is ill-defined.
7. The first paragraph in Sec. 4.3 is a repetition from the previous page.
8. Comparing the two plots in Fig. 1 in the second and the third column, it appears that BCD (Sinkhorn) is faster than SPLR in iteration number. Since it is the cheapest per iteration, it should be much faster than SPLR in wall-clock time. The plots in the second row are misleading since SSNS is too slow and jams all other curves.
9. L954: One needs to argue the sub-level set D is compact in order to obtain uniform bounds L,U on the eigenvalues of the Hessian over D. This is missing.
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. We give our point-by-point responses below.
---
### Thm.3.3 and Cor.3.4
1. We respectfully clarify that positive definiteness cannot be directly guaranteed via the Gershgorin circle theorem in this case. Specifically, for the sparsified Hessian $H_{\Omega^*}$, the Gershgorin disc centered at $h_{n+1,n+1}$ has radius $\sum_{j \neq n+1} |h_{n+1,j}|$, admitting $\lambda = 0$ as a possibility. Thus, the theorem alone does not preclude singularity. Moreover, you are correct that explicitly highlighting the improvement in the condition number would strengthen the impact of our results and we will emphasize this point more clearly.
2. Thanks for this insightful suggestion. Indeed, we have conducted numerical experiments on the condition number (this was actually done before we developed the theory), and we will add such experiments to the revised paper.
3. There are two key reasons:
1. The assumption is analogous to the concept of *irreducibility* in Markov chains. Irreducibility is considered a mild condition because it only requires connectivity, not quantitative bounds on probabilities and it is *necessary but not sufficient* for stronger properties (e.g., ergodicity).
2. Assum. 3.2 can be satisfied with *extremely sparse* matrices—_e.g._, retaining just one complete row and column (yielding a density lower bound of only $O(1/(nm))$).
We will add a brief remark in the revision to make this intuition clearer.
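To make point 1 above concrete, here is a minimal numerical sketch (a toy matrix, not an actual OT Hessian): with nonnegative entries and only weak diagonal dominance, a Gershgorin disc can touch zero, and the matrix can indeed be singular.

```python
# Toy symmetric matrix with nonnegative entries. For the second row the
# Gershgorin disc is centered at 1 with radius 1, so it contains 0; the
# circle theorem therefore does not rule out singularity, and in fact
# this matrix is singular.
H = [[2.0, 1.0, 1.0],
     [1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0]]

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along row 0.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(H))  # 0.0
```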
### Thm. 5.1, Cor. 5.2, Thm. 5.3
Indeed the current theoretical analysis still follows the classical framework of quasi-Newton methods, but we emphasize that one key ingredient of proving the linear convergence is to bound the condition number of the approximate Hessian matrix, and in our case it is highly related to Thm.3.3 and Cor.3.4, which are unique to OT problems. Your invaluable comments have encouraged us to actively explore novel technical tools to further optimize the linear rate constant.
We will revise the paper to acknowledge the significance of the theory in Carlier (2022).
### Weakness 1
See our responses above.
### Weakness 2
You are absolutely correct that the Sinkhorn algorithm has a linear convergence. Our phrasing was indeed misleading, and we will correct it in the revision. Our intended point was that, while the convergence is *theoretically* linear, the rate can *practically* resemble sub-linear behavior when the regularization parameter $\eta$ is small ($1 - e^{-24\\|C\\|_{\infty}/\eta}$ approaches 1).
Also, Sinkhorn is indeed an alternating maximization procedure on the dual. We will revise for precision.
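A quick numerical sketch of this point (using an illustrative value $\|C\|_\infty = 1$, which is our assumption and not taken from the paper): the contraction rate $1 - e^{-24\|C\|_\infty/\eta}$ is essentially 1 already for moderate $\eta$.

```python
import math

C_inf = 1.0  # illustrative sup-norm of the cost matrix (an assumption)
for eta in (100.0, 10.0, 1.0):
    rate = 1.0 - math.exp(-24.0 * C_inf / eta)
    print(f"eta = {eta:6.1f}: contraction rate = {rate:.6f}")
```

For $\eta = 1$ the rate is within $10^{-10}$ of 1, which is why theoretically linear convergence can be practically indistinguishable from sub-linear behavior.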
### Weakness 3
We will update the manuscript accordingly.
### Weakness 4
$\tilde{T}$ represents the first $(m-1)$ columns of matrix $T$, as defined in **Notation** (Line 68).
### Weakness 5
While thresholding the top x% of elements would indeed allow direct density control, we emphasize that existing Hessian sparsification methods do not employ this approach, nor do they provide theoretical guarantees for arbitrary density selection. Specifically:
1. SNS: Uses an elementwise threshold $\rho$. Though we can set the Hessian to a desired density, there are no theoretical guarantees on invertibility or convergence rate.
2. SSNS: Sets a Hessian error threshold $\delta_k = \nu_0 \\|g_k\\|^{\gamma}$ at each iteration, which depends on the gradient norm for theoretical guarantees. This scheme cannot be set to an arbitrary density a priori, as $\delta_k$ varies with optimization progress.
In contrast, our proposed scheme explicitly enables any desired density level.
### Weakness 6
The sparsification pattern $\Omega^*$ consists of all coordinates in the first row and the first column.
For example, if $n=4$ and $m=4$, then $\Omega^*=\\{(1, 1), (1, 2), (1, 3), (2, 1), (3, 1), (4, 1)\\}$.
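For concreteness, this pattern can be generated programmatically. This is only a sketch: the column index running to $m-1$ is inferred from the $n=m=4$ example (presumably reflecting the removal of the last dual coordinate) and is our assumption.

```python
def omega_star(n, m):
    # Sketch of the sparsification pattern Omega*: the full first row
    # (columns 1..m-1; the m-1 bound is inferred from the example) plus
    # the full first column, using 1-based index pairs.
    first_row = {(1, j) for j in range(1, m)}
    first_col = {(i, 1) for i in range(1, n + 1)}
    return sorted(first_row | first_col)

print(omega_star(4, 4))
# [(1, 1), (1, 2), (1, 3), (2, 1), (3, 1), (4, 1)]
```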
Could you please clarify in what specific way $\Omega^*$ appears ill-defined? We would be happy to provide additional explanations.
### Weakness 7
We sincerely apologize for this unintentional redundancy and we will carefully revise this section to eliminate repetition.
### Weakness 8
The discrepancy in our current plots likely stems from the fact that each Sinkhorn "iteration" in our implementation includes both the $\alpha$ and $\beta$ optimization steps (two full matrix scaling operations).
### Weakness 9
We thank you for this important technical observation. We will make the proof more rigorous in the revised manuscript. Below are the main ideas: we can first show that the optimal function value is finite, i.e., $f^*>-\infty$. Then the set $D^*=\\{x:f(x)\le f^*\\}=\\{x^*\\}$ is non-empty and bounded. Corollary 8.7.1 of [1] shows that $D_c=\\{x:f(x)\le c\\}$ is bounded for every $c$, which implies that $D = \\{x: f(x) \leq f(x_0)\\}$ is bounded.
[1] R Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 1970. | null | null | null | null | null | null |
KinDEL: DNA-Encoded Library Dataset for Kinase Inhibitors | Accept (poster) | Summary: The paper introduces a novel dataset containing DEL screen data for two different proteins, and benchmarks the performance of several supervised ML methods in the context of learning from the provided data.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: New dataset.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths of the paper include the importance of the problem (DELs seem to be a useful screening technique in practice, and have received relatively little attention from ML community), the rigorousness of the conducted experiments (I appreciate e.g. testing different splitting conditions, and the choice of benchmarked ML method, which to my knowledge are fairly good coverage of SotA), and realistic discussion of limitations of the data.
In my opinion, the paper has two main weaknesses:
1. I suspect it will be difficult to follow by people with ML background (it definitely was for me at times, and I have some experience with DELs - some things only became clear on 2nd pass, because they rely on information from later sections). To give a few examples (not exhaustive):
- In introduction: “Multiple rounds of washing are conducted to remove any weak binders, and the DNA tags of surviving molecules are sequenced as a measure of binding affinity” - it’s not clear at this point what is the result of this sequencing, and how is it a measure of binding affinity (as opposed to be a binary binding/non-binding indication). Counting is clarified in section 2, but this caused confusion until then.
- Figure 2 uses the term library capture which wasn’t immediately obvious to me, similarly it mentions biotinylation of protein which doesn’t seem to be described at all.
- “Biophysical Assay Validation”: it isn’t clear how this subset of data was selected (in the next section the Appendix is eventually referenced, but again it leaves the reader hanging for some reason until then). The number of samples seems very small, given the discussed noisiness of data, is this a common practice to use so few data points for validation?
- “Pre-selection Data”: it isn’t clear what causes differences in abundance of molecules. Also, from the ML point of view, it’s also not clear how this information is typically used - should it involve count normalization?
- I would personally love to see some high-level overview of the data from the ML perspective, perhaps as a graphical abstract-type figure: summarized in a single place number of molecules, distribution of counts (with a note that it correlates with binding affinity), number of targets, information about estimated noise (baseline correlations?), information about test set, perhaps also splits. There is a lot of detail in the paper separating this information, and finding the (in my mind) important pieces takes some effort.
2. Experimental evaluation:
- The description of held-out test sets is very vague in terms of value range - would it be possible to demonstrate Kd’s in more detail than saying it’s a “range of Kd values” (and doing so in the appendix)? This is fairly meaningless statement, and it impacts the interpretation of the results - do ML methods discriminate binders and non-binders? Is it ranking within binders?
- “all models were trained using the top 1M compounds with the highest counts” - this seems like a very confusing design choice, could the authors elaborate on this? It goes back to the previous point: if the test set contains only binders, it seems like cheating to train only on the data with highest counts. Perhaps there are good reasons for this (e.g. non-binders being distinguishable by things like docking), but in that case discussion would be warranted.
- Also going back to the previous points, “We ultimately wish to rank molecules by binding affinity” - again, it isn’t clear to me that’s the case, and that the goal is not discriminating binders / non-binders - discussion would be great.
Other Comments Or Suggestions: Minor remarks:
- Would it be possible to have a split in which different data partitions contain completely separate synthons? Would it make sense to do so?
- One source of DEL data that I think would be worth mentioning is https://www.aircheck.ai/; I am not sure if it has associated paper though.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful review of our work and your valuable feedback.
**Dataset description**:
We appreciate your specific comments on making our paper more accessible to machine learning researchers. We will revise Figure 2 by removing unnecessary technical details that do not contribute to the understanding of the data format. For instance, the biotinylation of proteins is detailed in the appendix and can be omitted from Figure 2 to avoid potential confusion. Additionally, we will add a new subplot that outlines the high-level data acquisition process and data format. This subplot will clarify the inputs and labels (pre- and post-selection count data) used in our models. We will also make sure to define all new terms that might be confusing for the machine-learning audience.
**Biophysical Assay Validation**:
We will relocate the appendix reference concerning the selection of validation compounds to an earlier section of the text for clarity. The limited number of validation points is due to the high cost associated with these experiments, as each molecule requires resynthesis, either on-DNA or off-DNA, along with separate biophysical assays for each.
**Pre-selection Data**:
During the synthesis of the DNA-encoded library, not all molecules are produced in equal amounts. Variability can arise from differences in synthesis efficiency, coupling yields, or synthesis errors. Additionally, some molecules may precipitate or adhere to storage containers, resulting in loss and lower observed counts. While this data is typically used to normalize counts, there are important caveats. Sequencing inaccuracies can lead to error propagation, especially for low-abundance molecules. DEL-Compose is a model that addresses these issues by incorporating pre-selection counts as inputs, which serve as a multiplier when modeling the zero-inflated Poisson distribution in the loss function. We will add this discussion in the appendix.
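As a sketch of the caveat above (with hypothetical counts, and a naive ratio normalization rather than the DEL-Compose model): dividing post-selection counts by pre-selection counts corrects for unequal abundance, but the ratio becomes unreliable when the pre-count is tiny.

```python
# Hypothetical pre-/post-selection read counts for three molecules.
pre = [10, 1, 200]
post = [50, 5, 100]

# Naive enrichment normalization (illustrative only): the second molecule
# receives the same enrichment as the first, but its pre-count of 1 means
# a single sequencing error could change its ratio drastically.
enrichment = [q / p for q, p in zip(post, pre)]
print(enrichment)  # [5.0, 5.0, 0.5]
```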
**Range of Kd values in the held-out test data**:
We will provide histograms of Kd values in Appendix C. Our approach involves selecting both binders and non-binders with varying Kd values to evaluate our model's ability to accurately rank these molecules using Spearman’s correlation. Although we propose a regression problem where molecules are ranked from most to least promising, the Kd values can also be categorized into two groups for testing classification models, as the dataset includes both binders and non-binders.
**Only 1M top molecules are used for training**:
We observe that the baseline models struggle with sparse DEL data and require modifications to the loss function or additional inductive biases to handle label sparsity. To simplify learning, we train on the top 1M compounds. This introduces some bias relative to the evaluation set, but since we aim to rank order the molecules rather than make binary predictions, we believe our model can effectively rank molecules in validation sets. Nonetheless, we provide access to the full dataset for researchers to explore better results with more training points.
**Ranking molecules by binding affinity**:
All the evaluated models predict continuous values, employing regression, and we use Spearman's correlation as the evaluation metric. Unlike Pearson's correlation, which identifies linear relationships, Spearman’s correlation effectively assesses ranking quality. We will add a brief discussion on the ranking issue and our rationale behind selecting this evaluation metric. In short, these models are often used to score available molecules to maximize their binding affinity. It is challenging to establish a firm threshold to define a binder because priorities change depending on the target and stage of drug development. Measuring model performance via ranking ensures that good models retrieve the best options for any given target and screening pool combination.
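To illustrate why Spearman's correlation suits this goal: it is Pearson's correlation computed on ranks, so any strictly monotone relation between predictions and affinities scores perfectly. A minimal pure-Python sketch with hypothetical, tie-free values:

```python
def ranks(values):
    # Rank each value (1 = smallest); assumes no ties for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

scores = [0.2, 1.5, 0.9, 3.0]     # hypothetical model predictions
affinity = [0.1, 2.0, 0.5, 10.0]  # hypothetical measured affinities

# The raw values are far from linearly related, yet the rankings agree
# exactly, so Spearman's rho would be 1 while Pearson's r would not.
print(ranks(scores) == ranks(affinity))  # True
```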
**Other data splits**:
Regarding the splitting of KinDEL, we explored several methods and determined that the disynthon split is optimal. Although we considered splitting datasets by single synthon positions, this approach presented two challenges. Firstly, only one part of each molecule is unique in the testing data, while the model encounters all possible disynthons at the remaining two positions. Secondly, some synthons may feature motifs essential for binding, such as kinase hinge-binding motifs, which could end up entirely in the testing data. We believe that the disynthon approach strikes a sensible balance. Please see Figure 4, where the visible line patterns relate to enriched disynthons, and the absence of clear planes indicates the lack of enriched (mono)synthons.
**AIRCHECK datasets**:
Thank you very much for this suggestion. We will reference this resource in the related work.
Thank you once again for your invaluable review and suggestions. Please let us know if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: I feel that my comments were addressed appropriately, and I decided to increase my review score. | Summary: The KinDEL paper provides a new dataset of laboratory measurements related to DNA-encoded libraries (DEL) together with baseline models that analyze these data. The experiments used a library of 81 million small molecules interacting with two kinases: MAPK14 and DDR1. The authors further extended the dataset with additional on-DNA repeat measurements of the top hits, binding affinity measurements for selected molecules off-DNA molecules, and predicted 3D structures of the interactions of these off-DNA molecules with the targets. This dataset opens up an opportunity for important work related to denoising DEL experiments and for incorporating the DEL experimental results in downstream applications.
Claims And Evidence: The lack of public datasets related to DEL experiments is a known issue that has plagued the public development of denoising algorithms for DEL datasets. The huge quantity of data generated from these experiments is ideally suited for downstream applications that incorporate machine learning. This is the first such dataset to link the raw DEL measurements with on-DNA repeats and proposed 3D structures. These additional off-DNA measurements are invaluable for testing the generality of the small molecule models. The chosen targets are well-studied proteins that have additional public datasets available, so one could extend this work in the future to test the generality and the limitations of DEL datasets. I found no problematic claims in the submission, though I haven't inspected the dataset itself carefully beyond the analyses presented herein.
Methods And Evaluation Criteria: Everything in this paper makes sense. The authors are certainly very familiar with the problem domain and provide strong baselines and well-documented datasets. The division of the training datasets into 3 different splits and the documentation of baseline models in all categories follow the best current ideas in this domain.
A totally minor, mostly aesthetic point: the numbers of molecules for each of the on-DNA, off-DNA, and extended on-DNA datasets are only mentioned at the top of the tables (presumably the n=30, 33, 41 entries in Table 1, and similarly in Table 2). The authors might consider listing these numbers more prominently or pointing to the entries in the table from somewhere in the text.
Theoretical Claims: There were no major or novel theoretical claims in this work. The ideas around possible combination of 3D structures with DEL datasets have been previously published. The choice of baseline models make sense and the most advanced of them was also previously published last year (Chen et al, 2024). The observation of improved behavior for the reasonable baselines on off-DNA data compared to the count-based analyses is also meaningful and is the main reason for the development of non-trivial analysis methods for datasets from DEL selection experiments.
Experimental Designs Or Analyses: The experiments were sound and valid. Depending on the library composition, the use of Avi tags and streptavidin beads might be suboptimal for immobilization during a DEL selection experiment. However, in this specific case, the additional measurements without targets, the fluorescence measurements (where streptavidin binding is an optimal choice), and the validation of the baseline models outside the DEL setup strongly suggest that the dataset is clean.
Supplementary Material: Yes, I did review sections, A, B, C.
Relation To Broader Scientific Literature: This work adds to a rather limited set of public datasets of DEL experiments. The authors did a good job of reviewing them in section 5.1.
Essential References Not Discussed: A detailed understanding of this paper requires understanding of the broader scientific literature related to DNA-encoded library selection experiments, and a reasonable understanding of the industrial process of drug discovery, some of which is unfortunately not well documented and differs in different companies.
The use of such data by academics and non-experts in the future is of course possible, which makes this particular contribution important enough to warrant consideration in this conference.
Other Strengths And Weaknesses: The paper is clear and concise. The dataset is valuable. It would have been useful to test the generalization of the models on other known molecules for these targets, as well as to evaluate the ability/suitability of these models to score decoys or discriminate molecules for these specific kinases vs other kinases. However, such extensions are best left as follow-up work to this contribution and would have needlessly extended the length and scope of this work.
Other Comments Or Suggestions: No additional suggestions.
Questions For Authors: No questions to the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for dedicating your time to review our paper and for your positive feedback and valuable suggestions. We will include the number of data points for the testing sets in the main text as recommended. Furthermore, we agree that evaluating the generalization of the models on external datasets, including compounds with activity against other kinases and decoys, is a compelling research direction. We see this as a promising avenue for future work. Once again, we appreciate your thorough review and insightful feedback! Please let us know if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you. I have no additional questions or comments. | Summary: This paper introduces a dataset of DNA encoded libraries for two kinases, MAPK14 and DDR1, with 81 million compounds. This dataset will be of use in drug discovery applications and modeling of related biological processes.
Claims And Evidence: The claims of the paper are well supported by the tables shown in the article.
Methods And Evaluation Criteria: The evaluation and benchmark of the data in terms of Kd predictions is sensible and the results look reasonable.
Theoretical Claims: There are no proofs.
Experimental Designs Or Analyses: I read through the experimental design and analysis and see no obvious issues.
Supplementary Material: I reviewed the appendices and they make sense.
Relation To Broader Scientific Literature: This article introduces a useful dataset that contains high chemical diversity, and will be useful as a reference for the development of new methods.
Essential References Not Discussed: I don't have any references to add.
Other Strengths And Weaknesses: This paper does not really fit within the scope of the main conference track. It would be better suited to a workshop, or a journal that is better suited to evaluate the experimental assays included in this study. There are no model or algorithm developments of significance.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the Reviewer's time and effort in evaluating our paper and are pleased with the overall positive feedback. The only concern raised pertains to our choice of venue for publication. We would like to address this by outlining our reasons for selecting this machine-learning conference over a journal.
Firstly, we believe that presenting our dataset and benchmark at such a conference maximizes its impact by directly reaching its target audience—machine learning researchers. The lack of high-quality DEL datasets is a bottleneck for developing advanced machine learning models. By contributing this work, we aim to advance research in analyzing the outputs of innovative compound screening technologies.
Furthermore, this year ICML has specifically called for application-driven submissions. The Call For Papers seeks “Application-Driven Machine Learning (innovative techniques, problems, and **datasets** that are of interest to the machine learning community and driven by the needs of end-users in applications such as healthcare, physical sciences, biosciences)”. We believe our paper aligns well with this focus, as it provides a valuable new dataset for an important area of science that currently suffers from a lack of high-quality data.
Thank you once more for your feedback. Please let us know if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors reply to my review.
I do not really see why a dataset would need to be published in the premier conference for machine learning practitioners for it to reach this audience. ChEMBL, PDB, UniRef, or more recently PLINDER, OAS, and PDBbind are all datasets used by ML researchers interested in applications to chemistry and biology, and none of these datasets has itself been published as a main-track ML conference paper.
Nonetheless, it is clear this is a valuable contribution to the community. Given the enthusiasm of some of the other reviewers as well, I have bumped the score to 3. | null | null | null | null | null | null | null | null |
Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale | Accept (poster) | Summary: This paper introduces a new agent environment, WINDOWS AGENT ARENA, for testing agents' capabilities within the operating system (exclusively focusing on Windows OS). The authors claim that WINDOWS AGENT ARENA is fully scalable and parallelizable, which enables rapid evaluations compared to other similar benchmarks. The experiments show that current LLM-based agents are still far from human-level performance, and WINDOWS AGENT ARENA serves as a valuable testbed for future work in testing agent systems on Windows OS.
Claims And Evidence: The claims made in the submission are supported by the experiments.
Methods And Evaluation Criteria: The evaluation criterion adopted in this paper is the success rate, which is reasonable for assessing agents’ performance in the OS environment and evaluating task completion.
Theoretical Claims: Considering this is an application-driven benchmark paper, there is no need to verify any theoretical claims.
Experimental Designs Or Analyses: As a benchmark paper, I have some concerns about the comprehensiveness of the experiments. Specifically:
1. The paper provides only global statistics on the success rates of different agents across several task types. It remains unclear what specific challenges the proposed benchmark presents, and why current agents perform poorly; this lack of detail limits the insights the benchmark offers for future work.
2. It would be beneficial to test and discuss additional agent frameworks. As the authors mention, the paper primarily relies on Chain-of-Thought reasoning. However, other widely-used agent frameworks, such as ReAct and Reflexion, should also be incorporated and explored.
Supplementary Material: I have reviewed the appendices referenced in the main text.
Relation To Broader Scientific Literature: This paper primarily focuses on Windows OS as the environment for testing agents’ performance, addressing a gap in the current research community. Its scalability and parallelizability enable rapid evaluations compared to other similar benchmarks, making it a significant contribution.
Essential References Not Discussed: No, the references appear to be sufficient.
Other Strengths And Weaknesses: **Weaknesses**
1. Some tasks overlap with those from *OSWorld*, as they are adopted from it. It would be beneficial to include more Windows OS-specific tasks, such as system settings changes or other Windows-specific activities, to make the work more applicable and realistic.
2. Given that Windows OS is fully closed-source, how can the authors ensure that future work will have sufficient access to modify settings in order to improve and adjust methodologies? If not, the impact of this benchmark could be problematic, as improvement strategies would be limited to a narrow scope.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comments/feedback. Please see our answers below.
**RE: Reasons for poor agentic performance and further insights for future work**
* Due to character limits in our reply, we refer to our responses to Reviewers 88jp and sjR1 where we detail reasons for agents’ general/common failures and by task domain/category, respectively. We plan on including these, along with further insights (as noted in our response to sjR1), in our revisions.
**RE: Other frameworks (ReAct/Reflexion)**
* We designed our agent to align with ReAct principles and will clarify this better in the paper. As described in Appendix D, we already:
* Explicitly instruct the agent to perform structured multi-step planning (reasoning)
* Ask it to describe why it chose a certain action (rationale)
* Require it to perform a single action per step, wait for the screen to update (observe), then iterate again.
* Explicitly instruct agent to "Verify at each step whether you're on track."
* Due to space constraints, we list a subset of Reflexion results, one of which comes close to best performance (19.5%):
| inputs | model | Office | Web Browser | Windows System | Coding | Media & Video | Windows Utils | Total |
|------------|--------|--------|-------------|----------------|--------|---------------|---------------|--------|
| a11y | gpt-4o | 0% | 9.4% | 30.4% | 0% | 0% | 8.3% | 7.2% |
| a11y + omniparser | gpt-4o | 0% | 18.9% | 28.3% | 25% | 16.7% | 0% | 14.3% |
**RE: More Windows-specific tasks/activities**
* To clarify, of our 150+ tasks, a considerable portion (~33%) indeed focuses on Windows-exclusive aspects like its system settings and utilities/tools.
* We also modified and tailored adapted tasks to specifically target/test aspects unique to Windows (Windows-specific UI conventions, interaction styles, native application workflows and context menus, ribbon interfaces, dialog windows, and keyboard shortcuts), thereby ensuring tasks are performed in a natural Windows-specific way. As such, our adapted/inspired tasks exclusively leverage Windows’ APIs, UI elements (e.g., Windows File Explorer, Taskbar, native menus), and common user actions on Windows (e.g., drag-and-drop interactions, native clipboard behaviors)
* Many computer activities/workflows are not specific to a single OS—which is why we substantially modify tasks to reflect realistic and common Windows workloads done in a Windows-specific/native way.
* To illustrate, for a "creating a pivot table from spreadsheet data" task, differences in native OS elements, distinct UI ribbons, iconography, etc. create different visual/interactive elements and different ways of performing the same task depending on the OS; as a result, agent performance—including visual perception/reasoning, planning, action—can differ considerably.
* Windows:
* Default icon theme ("Colibre") matches Microsoft's Fluent UI style and the ribbon-style toolbar (resembling Microsoft Excel’s ribbon).
* Steps for creating a pivot table involve selecting data, accessing "Insert" > "Pivot Table..." through a ribbon-like toolbar prominently positioned at the top without the need to navigate to anything.
* Dialogs for selecting ranges/placement within sheets use native Windows UI elements (Windows-specific dialog windows and controls)
* Linux (e.g., Ubuntu with GNOME):
* Default icon theme ("Breeze" or "Elementary") follows Linux's typical flat design conventions which diverge significantly from Windows’ UI; ribbon-style toolbars are not the default and a traditional toolbar with menu-driven interface is used instead.
* Creating a pivot table requires navigating via a menu ("Insert" > … > "Pivot Table..."), involving GTK-styled pop-ups/dialogs that differ visually/structurally from Windows.
* Native Linux dialogs, influenced by GNOME/KDE, display different visual+interactive characteristics (e.g., GTK-based dialog boxes) than Windows, altering how an agent parses visual cues, UI elements, plans/acts, etc on Windows vs. other OS.
* Lastly, we faced limitations in our task design due to the paywalled/closed-source nature of Microsoft programs (e.g., Office365) despite our efforts to make these accessible where we used popular open-source counterparts instead.
**RE: Ensuring access to Windows**
* We provide a way to use a free evaluation copy which can be used for benchmark deployment and can be easily/continuously renewed thereafter. Our benchmark also allows users to easily install their own programs, add/configure new tasks, etc. By default, it relies on open-source widely maintained applications, etc. so anyone can modify these as needed.
* Even if Windows changes system settings in future releases, the provisioning setup for the VM image and Docker can be easily updated to work with new OS versions. Additionally, our method for accessing screen information uses the accessibility (a11y) tree from Win32 apps that is consistently maintained.
---
Rebuttal Comment 1.1:
Comment: Thanks for your elaboration and complementary experiments. I have no further questions and will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for the thoughtful feedback as well as taking the time to evaluate our work. Please let us know if there are any further questions/feedback! | Summary: This paper introduces Windows Agent Arena, a benchmark for evaluating multimodal operating system agents. The benchmark contains over 150 diverse tasks on the Windows OS platform and leverages the Azure cloud environment for parallel evaluation.
Claims And Evidence: The benchmark supports multiple modalities and domains across the Windows operating system. Leveraging the Azure cloud environment, it enables rapid, parallel evaluations. Additionally, the authors provide unrestricted access to the Windows OS testing environment.
Methods And Evaluation Criteria: The evaluation is comprehensive, covering various closed-source and open-source models. The limited performance of existing models demonstrates the benchmark's potential and difficulty.
However, the baseline methods exclude agent systems like Camel or smolagent, instead focusing only on several MLLMs. The evaluation methodology for these models on the benchmark remains unclear. Additionally, while Appendix D mentions an agent system called Navi, its absence from the main paper creates confusion.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: Please check the Method section above.
Supplementary Material: Yes, I have read through the supplementary material.
Relation To Broader Scientific Literature: This paper can be seen as an extension of OSWorld to the Windows OS. More tasks are desired for a comprehensive evaluation. Please see the Weakness below.
Essential References Not Discussed: This paper has discussed the essential references.
Other Strengths And Weaknesses: A major weakness is the limited number of tasks. While this benchmark is important for the agent community given Windows OS's widespread global usage, it would benefit from additional tasks to enable more comprehensive evaluation of agent systems. Furthermore, the evaluation methodology could be more granular. Beyond measuring final success rates, analyzing intermediate process steps would provide valuable insights into why and how tasks fail.
Other Comments Or Suggestions: No other comments or suggestions
Questions For Authors: Does this benchmark provide desired plan steps for each task, such as the trajectory needed to successfully complete one task?
With humans achieving a 74.5% success rate, what are the common failure modes in this dataset? How many participants were involved in this test setting?
In Table 2, I don't see any functionality for audio files, such as listening to audio. Does the benchmark support this capability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback; we address each point below.
**RE: Agent systems like Camel/smolagent & evaluation for these model systems**
* Our focus was on single agent systems, particularly state-of-the-art/popular multi-modal LLMs commonly used as agent/reasoning backbones.
* However, multi-agent systems like Camel are also able to run on our benchmark as is. We believe this to be an important direction and plan on future work for multi-agent evaluations.
* Thank you for bringing smolagents to our attention; we were not aware of it, as it was released around the time of our submission (~late Dec 2024). smolagents is more of an agent wrapper, but it already has elements that strongly resemble our agents (e.g., memory, ReAct-like reasoning, writing Pythonic code as actions, tools/functions, etc.).
**RE: Appendix D mentions Navi**
* Thank you for pointing this out. This was an artifact from earlier experiments and has been removed along with similar mistakes.
**RE: Limited number of tasks**
* We prioritized creating fewer tasks, each representing distinct but realistic skills/workflows, to avoid trivial task variations and better control for (and understand) agentic failures on these categories; we also make it easy for users to create their own tasks.
* Nonetheless, we still have >3x as many Windows tasks (>150) as the next closest (~40 tasks in OSWorld).
**RE: More granular evaluation and intermediate steps on task failures**
* We agree that this is important. Due to character limits, we refer to our responses to Reviewer sjR1 and 88jp detailing general agent failures and task-domain specific ones, respectively. We will include the full set of failures and analysis in our revised paper.
**RE: Does this benchmark provide desired plan steps/trajectory to complete task**
* No, given the task complexity and open-endedness of the computing environment, there are multiple ways to accomplish any single task so many valid trajectories can exist.
* e.g., even a simple task like creating a new file and saving it to a location can be completed in multiple ways. The agent can: navigate to the location via File Explorer and create the file (creation itself can happen in multiple ways: keyboard shortcut, right-click dropdown menu, Explorer's create-file button, etc.), or create the file via PowerShell, or open a file in a text/code editor and then save it, etc.
* A desired/preset trajectory can also introduce potential bias; we use number of steps to track trajectory quality but allow the agent freedom otherwise.
**RE: With a 74.5% human success rate, what are common failure modes?**
* Among human evals, the most common failure cases were: (1) the inability to find the “correct” next step to progress (e.g., lack of knowledge/familiarity of how to perform certain functions within a program), (2) misinterpreting the task instruction to mean something else, and (3) carelessness and human error. Our human success rate appears similar/comparable to those on other benchmarks (e.g., 70-80% for OSWorld, AndroidWorld, etc.)
* Example of (1): the user could not figure out how to convert an MP4 to MP3 in VLC player (unable to find the relevant settings after several minutes), resulting in the user giving up and marking the task as failed.
* Example of (2): On a task setting Chrome to automatically delete all on-device site data, the participant tries to configure deleting history and cookies, but “on-device site data” actually refers to a separate/different concept under “privacy and security.”
* Example of (3): on a task that asks to change the first letter of each word in a document to uppercase, the participant forgets to capitalize 2-3 words.
* We will include the full set of details/examples in the revised manuscript.
**RE: Number of participants**
* 1 participant, a casual user of Windows, performed the tasks without any human or digital assistance (i.e., no internet); statistics and details are reported in Appendix B. There were originally 3 participants, but of the other 2, one attempted fewer than a quarter of the tasks and the other even fewer.
**RE: Audio capability**
* Yes, our benchmark does support audio as it was designed to be flexible in integrating additional modalities. We describe an implementation (our intended way) below.
* One way is to have the user provide audio recordings (e.g., a WAV file) of the spoken task instructions along with an audio-capable agent (e.g., giving a VLM an external tool like speech-to-text transcription, or using an agent with inherent audio understanding). The agent would need to understand the task via audio and match the STT transcription to the actual task JSON/instruction. Beyond that, nothing else would change and the benchmark would run as normal.
* Other ways are also possible so long as the corresponding task JSONs (see Appendix A.5 for details) are properly defined; however these JSONs are designed to be flexible and easily customizable.
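To make the shape of such a task definition concrete, here is a minimal, purely hypothetical sketch of a task dict in the spirit of the benchmark's task JSONs. The actual schema is described in the paper's Appendix A.5; every field name below ("id", "instruction", "instruction_audio", "evaluator") is an illustrative assumption, not the real format.

```python
import json

# Hypothetical task definition; field names are illustrative assumptions,
# not the benchmark's actual schema (see the paper's Appendix A.5 for that).
task = {
    "id": "audio-task-001",
    "instruction": "Convert video.mp4 to MP3 using VLC.",
    "instruction_audio": "audio_task_001.wav",  # assumed audio-modality field
    "evaluator": {
        "type": "file_exists",  # binary check on the final device state
        "path": "C:\\Users\\user\\Music\\video.mp3",
    },
}

def validate_task(t: dict) -> bool:
    """Minimal structural check: required fields present and JSON-serializable."""
    if not {"id", "instruction", "evaluator"}.issubset(t):
        return False
    json.dumps(t)  # raises TypeError if a value is not serializable
    return True
```

A runner could then load such a file with `json.load` and dispatch on `evaluator["type"]` to select the corresponding final-state check; an audio-capable agent would additionally transcribe `instruction_audio` and match it to the task instruction.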
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and most of my concerns are addressed and I will raise my score by 1.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time, questions, and feedback. If there are any further questions, comments, or concerns, please let us know! | Summary: The paper introduces **Windows Agent Arena**, a benchmark for testing multi-modal AI agents on real Windows OS tasks. It provides 154 tasks across everyday applications (office, web browsing, coding, etc.) and uses a scalable, parallel evaluation setup (e.g., using Azure) so that tests finish in about 20 minutes. The work is practical and motivated by the need for a realistic, open evaluation environment for Windows agents.
Claims And Evidence: **Yes.**
- **Claims**: The paper claims that current AI agents (including models like GPT-4V) are far below human performance (only ~19.5% success vs. 74.5% human).
- **Evidence**: They back this up with extensive experimental results and comparisons to other benchmarks. The evidence is solid, though some details (like the “free access” to Windows) could be clearer.
Methods And Evaluation Criteria: **Yes.**
- **Environment**: Defines a POMDP where agents see screenshots and accessibility trees, then act via mouse, keyboard, and OS commands.
- **Evaluation**: Uses binary rewards (success/fail) with automated checks on the final state. This is a reasonable choice for a practical benchmark.
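As a rough illustration of this POMDP setup (observation in, one action per step, binary reward computed from the final device state), the loop can be sketched as follows; `Observation`, `run_episode`, and the environment interface are hypothetical names for illustration, not the benchmark's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    screenshot: bytes  # raw pixels of the screen
    a11y_tree: dict    # accessibility-tree snapshot

def run_episode(env,
                policy: Callable[[Observation], str],
                success_check: Callable[[object], bool],
                max_steps: int = 20) -> int:
    """Roll out one task episode and return the binary reward (1 = success)."""
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(obs)      # e.g., "click(132)", "type('hello')", or "DONE"
        if action == "DONE":      # agent signals task completion
            break
        obs = env.step(action)    # executes a mouse/keyboard/OS command
    return int(success_check(env.final_state()))
```

Because the reward depends only on the final device state, any valid action trajectory that reaches a satisfying end state earns the same score.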
Theoretical Claims: No new theoretical proofs are presented. The paper uses standard formulations (POMDP) to describe the environment.
Experimental Designs Or Analyses: **Yes.**
- The experiments test several state-of-the-art models and different input parsing methods.
- Ablations show that combining UI accessibility data with pixel-based detectors improves performance.
- Overall, the design is solid and shows that the benchmark is challenging.
Supplementary Material: **Yes.**
The reviewer has checked the supplementary material and Appendix in the PDF.
Relation To Broader Scientific Literature: The paper builds on and extends ideas from previous benchmarks like **OSWorld**, **WebArena**, and **AndroidWorld**. It fills an important gap by focusing on the Windows OS.
Essential References Not Discussed: The reviewer thinks there is no further related work that needs to be discussed.
Other Strengths And Weaknesses: **Strengths**:
1. Practical, scalable, and fills a clear gap in current research.
2. The task set is diverse and realistic.
**Weaknesses**:
1. Current agent performance is very low and some setup details (e.g., Windows licensing, reproducibility) could be more clear.
Other Comments Or Suggestions: No further suggestions or comments.
Questions For Authors: 1. How exactly is the Windows environment provided to researchers (pre-built images, trial licenses, etc.)?
2. What are the main reasons for agent failures (vision, planning, or action execution)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your questions/feedback! We address your points in turn below.
**RE: Low agent performance**
* Yes, performance is low in absolute terms; however, in relative terms, it is largely in line with agent performance on other comparable benchmarks (e.g., best success rates of ~10-20% on OSWorld, Visual Web Arena, etc.). The gap we observe is consistent with the low performance of current large vision-language model (LVLM) agents on real-world computer tasks, along with some known problems of current agents (e.g., imperfect visual grounding/reasoning).
**RE: Benchmark setup and how the Windows environment is provided**
* We cover this in Sec. 3.3 (pgs 5-6), but we lay out more details below which we will include in the paper for further clarity.
* The Windows environment is setup using a two-step approach:
* First, users download a Windows 11 Enterprise Evaluation ISO (a free 90-day trial provided by Microsoft) and use our automated provisioning scripts to build what we call a "golden image." This image is a fully functional Windows 11 VM pre-configured with all the necessary software and settings for running our benchmark.
* Second, for Docker containers and VM provisioning, once the golden image is created, it’s integrated with Docker to streamline deployment and testing. This allows researchers to run the environment locally (via WSL/Linux) or via Azure for parallelization.
* In short, the environment is provided as a pre-built, reproducible VM image (derived from a trial license) coupled with Docker containerization, ensuring a consistent and ready-to-use setup for computer-use research. As a result, our benchmark is one of the few (if not the only) that provides users open access to the Windows OS and computing environment for agent research and development. Our benchmark also allows users to install their own programs/applications, add new tasks for their own needs, etc.
* This way, despite not being an open-source OS, users can access Windows via our benchmark as a self-contained docker image; users can then utilize a free evaluation copy for benchmark deployment that can be easily and continuously renewed thereafter.
**RE: main reasons for agent failures**
* We provide a more detailed list below which we will include in the revised paper:
1. *Vision-related failures.* In our experience, agents:
* Fail to recognize if the previous action did anything, especially when there's no image modality and the screen state must be inferred from the accessibility tree
* Trust captions from omniparser, resulting in wrong icons being clicked when the labels are wrong, misleading the agent into selecting incorrect UI elements.
* Hallucinate successful actions (i.e., clicking the wrong element, then hallucinating that the correct dialog/next state was reached even if the ground-truth computer state did not actually change).
* Click the wrong element with a similar bounding box ID, especially when the screen has dense bounding boxes, causing the agent to click incorrect but visually adjacent UI elements.
* Be “blind” to pop-ups, leaving them unable to exit out
2. *Planning-related failures.* The agent can:
* Fail to follow the output structure resulting in the actions failing to be parsed, causing downstream parsing and execution to fail entirely.
* Repeat actions, such as clicking a mislabeled icon multiple times
* Take the instruction too literally, and scroll/search for non-existent UI elements even if the correct UI element is already on the screen. Misinterpretation of instructions or overly literal plan formation causes unnecessary and incorrect actions.
* Run out of steps early (hitting the max step counter) due to some of the issues above
* Complete the task, but not output "DONE" (forgetting to signal task termination), failing to correctly recognize successful task completion or termination conditions.
3. *Action execution-related failures.* The agent can:
* Often forget to press the "enter" key (for example, when typing a URL into the address bar) to confirm an action
* Attempt to scroll without hovering the cursor over the scrollable area, resulting in multiple scroll attempts with no change
* Select semantically similar but incorrect element (e.g., selecting a button labeled "Start" when intending another "Start" button or selecting an item from a "Size" column instead of the header) due to ambiguous element identification.
* Try to output absolute coordinates to select cells in office apps, i.e., action execution incorrectly relies on absolute pixel-based positioning rather than omniparser's output (e.g., element IDs)
* Close a secondary window, incorrectly assume it’s still open, then attempt to click UI elements that no longer exist.
---
Rebuttal Comment 1.1:
Comment: The reviewer has read the rebuttal carefully.
The reviewer has no more questions and will maintain the score.
Claims And Evidence: The primary claims about the benchmark's value, scalability, and performance findings are well-supported by evidence. The parallel execution advantage is clearly demonstrated with timing data. The extensive evaluation of various agent configurations (20+ variants) provides compelling evidence for the benchmark's utility in comparing different approaches. The gap between the best agent performance (19.5%) and human performance (74.5%) convincingly demonstrates the current limitations of multi-modal agents in Windows environments.
Methods And Evaluation Criteria: The methods proposed for agent evaluation are appropriate and well-designed. The authors formalize agent behavior as a partially observable Markov decision process (POMDP) with clearly defined observation and action spaces. The evaluation based on device state after agent execution is a sound approach for determining task success. The benchmark includes a good variety of tasks across different applications, representing realistic user workflows in Windows OS.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design for evaluating different agent configurations is comprehensive and well-executed. The ablation studies comparing different visual parsing methods and model backbones provide valuable insights. The analysis of failure cases correctly identifies key challenges for current agents, particularly in visual-language alignment and precise Set-of-Marks identification. The Azure parallelization approach for benchmark evaluation is creative and effective.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper positions itself well within the growing body of research on agent benchmarks. The authors provide a comprehensive overview of related work, including other benchmarks such as OSWorld, AndroidWorld, WebArena, and GAIA. They clearly articulate how Windows Agent Arena addresses limitations in existing benchmarks, particularly for Windows OS evaluation and benchmark scalability.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Parallelizable evaluation: The Azure-based parallel evaluation infrastructure dramatically reduces evaluation time compared to serial execution.
- Comprehensive ablation studies: The extensive evaluation of different model and input parsing configurations provides valuable insights into the factors affecting agent performance.
- Practical tasks: The benchmark includes realistic tasks that actual Windows users would perform, increasing its practical relevance.
Open-source contributions: The authors provide infrastructure and tools to make agent research more accessible to the community.
Weaknesses:
- Limited discussion of optimizing agent prompts: There is limited analysis of how prompt design impacts performance in this paper.
- Incomplete explanation of the gap between human and agent performance: More analysis could be provided on which specific capabilities need to be improved to close this gap.
- Dependency on commercial cloud infrastructure for parallelization: The benchmark's scalability advantage depends on Azure, which may limit accessibility for some researchers.
Other Comments Or Suggestions: - The paper would benefit from a deeper analysis of the performance disparities across different task domains (e.g., why agents perform better on Web Browsing and Windows System tasks compared to Office tasks).
- It would be helpful to see more detailed failure analysis for specific task categories, rather than just general failure patterns.
- Some discussion of computational costs associated with the parallelization approach would help researchers better understand resource requirements.
Questions For Authors: - The gap between human and agent performance is substantial (74.5% vs. 19.5%). Based on your analyses, which specific capabilities would need to be improved first to make the most significant progress in closing this gap?
- You mention that UIA markers can take "from a few seconds up to several minutes to be queried depending on screen complexity." Could you elaborate on how this impacts the practical utility of UIA-based approaches and whether there are ways to optimize this process?
- The paper identifies visual-language misalignment as a common cause of failures. Have you considered pre-training or fine-tuning approaches specifically designed to improve this alignment for Windows OS interfaces?
- How sensitive is agent performance to the specific formulation of prompts? Did you experiment with different prompt designs, and if so, how much variation in performance did you observe?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments/questions. We address each point in turn below.
**RE: deeper analysis of performance disparities and failures across different task domains**
* Higher-quality accessibility trees in Chromium web browsers, less UI/element “clutter,” and cleaner UI ribbons/interfaces on Windows system and web-browser screens (i.e., a less dense concentration of icons/buttons with sparser, cleaner separation between icons) make it easier for the agent to visually parse and identify the screen accurately. LibreOffice programs have denser UI icons/elements on the screen (e.g., more words, cells, icons), resulting in much lower performance.
* We list some common failures by some task categories below and will include a full, more granular set in our revised manuscript (due to character limits here).
* **Office**: the dense UI ribbons of LibreOffice result in Set-of-Marks not bounding every icon, overlapping bounding boxes, etc., limiting the agent's screen understanding
* In Calc, the agent can’t sufficiently capture all spreadsheet cells/icons visually, limiting task understanding and available info. E.g., failing to properly set cell formats due to misinterpreting formatting menus (e.g., `12345` becoming `0012345`)
* In Writer, misinterpreting words' positions due to incorrect bounding box detection for UI elements, resulting in text not properly splitting for text alignment tasks. Failing to accurately localize sentences for certain tasks due to insufficiently granular visual parsing, causing incorrect highlighting or missed annotations
* **Web**
* Difficulty with UI slider controls and, at times, scrolling dropdown menus
* Inability to close or navigate out of pop-ups despite attempting to (sometimes pop-ups where the exit control is not made visually obvious)
* Failing to visually recognize numeric cues on webpages, causing navigation to incorrect sites (e.g., "find the most popular banter discussion thread" task)
* When setting a webpage as homepage, misinterpreting the config screen due to visually similar options (like "Startup pages" vs. "Home button")
* **System**
* With file explorer, compressing files using 7-zip can result in agent failing to correctly interact with 7-zip's interface due to misinterpreting fields visually, causing failed compression
* **Utilities**
* When using calculator to calculate differences between dates, the agent can fail to input the correct dates (correct planning but incorrect action mapping due to set-of-marks not recognizing the keys/digits)
* When counting word instances in a text file, the agent can fail to correctly parse the UI’s displayed count, resulting in incorrect counts
**RE: computational costs and resource requirements of parallelization**
* We provide this info in Appendix A.6 (pg 20) describing resources, time, and cost of parallelization.
**RE: specific capabilities to close this gap**
* One is to extend visual/screen understanding to text and tabular data (in addition to set-of-marks around icons/UI elements). This would help agents on tasks better represented as text (e.g., notepad, command line/terminal) or a mix of both UI icons and text (e.g., spreadsheet and word processor programs).
* Another is better self-verification: the agent sometimes believes its action changed the screen state even when it doesn't take effect (even with screen feedback). Unfortunately, instructing the agent to verify/check does not fully resolve this either.
**RE: impact of query time on UIA utility and optimizations**
* Since the UIA tree API wasn’t designed for high-throughput queries and supplies the tree piece-by-piece, extracting a full snapshot results in significant latency. The problem only worsens with more applications/windows open.
* Possible optimizations include: (1) Focusing only on the “active” window’s tree (but one would need to query a tree again when switching programs) or (2) Caching the tree for each program/app ahead of time, which would relieve latency but requires additional overhead.
**RE: pre-training/fine-tuning to improve Windows alignment**
* Yes, good point. We considered it; however, we realized that it required significant amounts of trajectory data on Windows which was not available at the time, serving as part of the inspiration for creating this benchmark (i.e., to help generate said data).
**RE: sensitivity of performance to prompts**
* We’ve observed variations in overall success rates (which we will add to the revised paper) from removing certain components from the agent’s prompt for GPT4V/o:
* w/o memory: (19.5% → ~10-12%)
* w/o in-context examples and provided functions for the agent to use: (19.5% → ~2-4%)
* The prompts also change depending on the different kinds of inputs (e.g., omniparser output that contains the set-of-marks and element IDs, accessibility tree or a11y, etc.). These variations and their impact on performance are in Table 4. | null | null | null | null | null | null |
Latent Variable Causal Discovery under Selection Bias | Accept (poster)
Summary: This paper investigates the problem of Latent Variable Causal Discovery in the presence of Selection Bias and proposes a new statistical tool, Generalized Rank Constraints, to simultaneously handle latent variables and selection bias within linear Gaussian models.
## update after rebuttal
I have decided to raise my score from 3 to 4. The authors have provided thorough and thoughtful responses to my concerns. For Q1, they addressed the challenge of distinguishing selection bias from latent confounding by discussing both the limitations of second-order information and the potential of higher-order information with additional assumptions. For Q2, they clarified the computational complexity of the generalized rank constraints compared to conditional independence tests, highlighting the efficiency of their method. For Q3, they acknowledged the omission of the RFCI reference and included it in the revised manuscript. These detailed and constructive responses demonstrate the authors' commitment to addressing feedback and improving the quality of their work, leading me to adjust my score upward.
Claims And Evidence: The claims submitted are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed method and/or evaluation criteria (e.g., benchmark datasets) are meaningful for the problem or application at hand.
Theoretical Claims: No
Experimental Designs Or Analyses: I believe the experimental design or analysis is reasonable and valid.
Supplementary Material: I have not read the supplementary material.
Relation To Broader Scientific Literature: This solves the problem of eliminating the influence of selection bias to discover hidden variables and restore causal relationships when hidden variables and selection bias exist at the same time.
Essential References Not Discussed: There are some latent variable causal discovery algorithms not discussed and compared in this paper, for example:
[1] Colombo D, Maathuis M H, Kalisch M, et al. Learning high-dimensional directed acyclic graphs with latent and selection variables[J]. The Annals of Statistics, 2012: 294-321.
Other Strengths And Weaknesses: Strength:
This solves the problem of eliminating the influence of selection bias to discover hidden variables and restore causal relationships when hidden variables and selection bias exist at the same time.
Weakness:
1. The paper mentions that certain latent variable structures and selection bias structures may be equivalent under rank constraints. How can they be distinguished?
2. There is a lack of discussion on the time complexity comparison between the generalized rank constraints method and conditional independence tests.
Other Comments Or Suggestions: No
Questions For Authors: 1. The paper mentions that certain latent variable structures and selection bias structures may be equivalent under rank constraints. How can they be distinguished?
2. There is a lack of discussion on the time complexity comparison between the generalized rank constraints method and conditional independence tests.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We are grateful for the reviewer's insightful comments and suggestions. Please see below for our response.
---
**(Q1)** The reviewer wonders how one might distinguish between selection bias and latent confounding in practice.
**A:** Thank you for this insightful question. We try to address it from two complementary perspectives:
1. **Using only second-order information (covariances), this is a hard problem:**
- Distinguishing selection bias from latent confounding based **solely on rank constraints is, in general, a hard problem.** Ideally, one would need to characterize the rank equivalence class–the set of all graphs (with potentially different latent and selection variables) that entail the same rank constraints. Then, the presence of selection or confounding could only be confirmed when all rank-equivalent graphs contain them.
- However, a general characterization of rank equivalence **remains open and challenging,** even in the setting without selection bias. Due to space limit, please kindly refer to our response to Reviewer nj2K's Q1 for detailed discussion.
- Moreover, even with a full characterization of rank equivalence, distinguishing between selection and confounding–as in our Section 3.3–may still **require algorithmic reasoning** over sets of rank constraints rather than straightforward graphical patterns. This is similar to the setting with only CI constraints. For instance, in our Example 4, the presence of latent variables or selection is implied by FCI's output, not by any obvious graphical patterns.
2. **With higher-order information and additional parametric assumptions, certain distinctions may become feasible:**
- Let us consider the example of two measured variables. Whether they are confounded ($X_1 \leftarrow L \rightarrow X_2$), directly causally related ($X_1 \rightarrow X_2$), or selected ($X_1 \rightarrow Y \leftarrow X_2$) is indistinguishable by rank or CI constraints, as all three imply $X_1 \not \perp X_2$.
- However, they may become distinguishable with more parametric assumptions. For the direct causal effect, for instance, in a linear non-Gaussian setting, it can be distinguished from the other two cases via independence of regression residuals.
- For the case of selection bias, suppose the selection acts through truncation; then the scatterplot of observed variables may exhibit a clear truncation pattern (e.g., as in Figure 2 in our draft). Reproducing this pattern under the other two cases **would require more complex**–and arguably less plausible–functional forms, often obtained by reverse-engineering rather than natural functions.
- Hence, with a proper **definition of simplicity and the preference for it**, the model can be further identified.
Note that observations in (2) fall outside the scope of this work, which serves mainly as a (first) proof of concept to show that rank constraints alone can remain informative under selection bias.
However, we sincerely appreciate the reviewer's question, which points to an important future direction: exploring which additional assumptions are needed, and **how the above two lines can be combined** to enable a more practical identification of selection bias and latent confounding in real-world scenarios.
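The two-measured-variable example above can be made concrete with a small simulation (threshold and coefficients are our own illustrative choices, not the paper's): under the selection structure $X_1 \rightarrow Y \leftarrow X_2$, two marginally independent variables become dependent once only the selected samples are kept.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X1 -> Y <- X2: X1 and X2 are independent causes of the selection variable Y
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + x2 + 0.5 * rng.normal(size=n)

# Before selection: X1 and X2 are (empirically) uncorrelated
corr_full = np.corrcoef(x1, x2)[0, 1]

# Selection by truncation: keep only samples with Y above a threshold
sel = y > 1.0
corr_sel = np.corrcoef(x1[sel], x2[sel])[0, 1]

print(f"corr before selection: {corr_full:.3f}")  # close to 0
print(f"corr after selection:  {corr_sel:.3f}")   # clearly negative (collider/selection bias)
```

The induced correlation is what makes the selected case observationally similar to confounding or direct causation at the level of second-order statistics.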
---
**(Q2)** The reviewer asks for the time complexity comparison between the generalized rank constraints and conditional independence tests.
**A:** Thank you for the question. Please let us note that the generalized rank constraints do not change the computational complexity of the statistical tests themselves–they simply **provide a characterization** of the covariance terms for recovering the graph.
Specifically, testing rank constraints (e.g., via canonical correlation analysis) and testing for conditional independence (e.g., via Fisher's Z-test) both involve computations on the covariance matrix. The underlying operations–matrix inversion, or eigenvalue decomposition–are **of comparable complexity.**
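As a minimal sketch (our own illustration, not the paper's implementation), both test statistics can be computed from the same sample covariance matrix: canonical correlations come from an SVD of the whitened cross-covariance block, while Fisher's Z uses a partial correlation obtained from a sub-precision matrix.

```python
import numpy as np

def canonical_correlations(cov, A, B):
    """Canonical correlations between variable sets A and B of a covariance matrix.

    The number of (significantly) nonzero canonical correlations estimates
    rank(Sigma_{A,B}), the quantity behind rank constraints.
    """
    Saa, Sbb, Sab = cov[np.ix_(A, A)], cov[np.ix_(B, B)], cov[np.ix_(A, B)]
    Wa = np.linalg.inv(np.linalg.cholesky(Saa))  # whitening transforms
    Wb = np.linalg.inv(np.linalg.cholesky(Sbb))
    return np.linalg.svd(Wa @ Sab @ Wb.T, compute_uv=False)

def fisher_z(cov, i, j, cond, n):
    """Fisher's Z statistic for X_i _||_ X_j | X_cond, from the covariance matrix."""
    idx = [i, j] + list(cond)
    P = np.linalg.inv(cov[np.ix_(idx, idx)])   # precision of the subvector
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])  # partial correlation given cond
    z = 0.5 * np.log((1 + r) / (1 - r))
    return np.sqrt(n - len(cond) - 3) * abs(z)

rng = np.random.default_rng(0)
n = 50_000

# One-factor model: all four X's share one latent parent, so every
# cross-covariance block has rank 1 and only one nonzero canonical correlation.
L = rng.normal(size=n)
X = np.outer(L, [1.0, 0.8, -0.6, 1.2]) + 0.5 * rng.normal(size=(n, 4))
cc = canonical_correlations(np.cov(X, rowvar=False), [0, 1], [2, 3])

# Chain A -> B -> C: Fisher's Z rejects A _||_ C but not A _||_ C | B.
A = rng.normal(size=n)
B = A + rng.normal(size=n)
C = B + rng.normal(size=n)
cov_chain = np.cov(np.c_[A, B, C], rowvar=False)
```

Both routines take the covariance matrix as their only data input; the underlying operations (Cholesky, SVD, inversion) are the "comparable complexity" referred to above.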
As for the algorithmic procedure used to recover the graphs, ours follows the same structure as FCI. Hence, like FCI, our algorithm has a worst-case complexity exponential in the number of observed variables. However, if the underlying graph is sparse–a common and reasonable assumption–**the runtime becomes polynomial.**
We also report our empirical running times. Though with a same complexity, our method is **consistently the fastest** among competitors, possibly thanks to some implementation speedup. Due to space limit, please kindly refer to our response to Reviewer nKfC's Q1 for the results.
---
**(Q3)** The reviewer notes that a relevant work (RFCI) was not discussed.
**A:** We thank the reviewer for pointing out this relevant reference [[C+12]](https://tinyurl.com/4mw7h5ku). We have now included it in the updated manuscript and revised our discussion accordingly.
---
Once again, we thank the reviewer for the insightful feedback, and hope the questions have been properly addressed.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the feedback. After re-assessing the manuscript and evaluating the revisions, I have decided to elevate the score from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: We are delighted to know that your questions have been well addressed. We sincerely appreciate your thoughtful feedback and recognition of our work. Thank you.
Summary: The authors address the problem of causal discovery with latent confounders when the data has a selection bias. Here, the authors address this via generalized rank constraints that extend beyond conditional independencies. The ranks of covariance submatrices in biased data can reveal information about the latent causal structure and the selection mechanism. The proposed method has been evaluated in artificial and real-world data.
Claims And Evidence: Under the given assumptions (linear and Gaussian), Theorem 1 supports the claim that ranks preserve structural information under selection bias. However, distinguishing between latent confounding and selection bias is only discussed through examples.
Methods And Evaluation Criteria: The run experiments make sense, although real-world validations rely heavily on qualitative assessments.
Theoretical Claims: Theorem 1 appears to be sound and Proposition 3 directly follows from applying the theorem to one-factor models.
Experimental Designs Or Analyses: The experiments are fair, but use a simplistic selection mechanism and the real-world data lacks quantitative validation.
Supplementary Material: Skimmed over the proof, but did not check the details.
Relation To Broader Scientific Literature: Fair discussion of how related literature is lacking in the area of latent confounders under selection bias. While some discovery methods that support latent variables are discussed, it is only a small (but sufficient) selection.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Good motivation
- Fair discussion of related work and pointing out the novel aspect
- Clear notation and figures
- Limited to linear Gaussian models and linear selection mechanism
- Distinguishing between latent confounding and selection bias is unclear
Other Comments Or Suggestions: See questions.
Questions For Authors: The paper addresses an interesting problem and is novel in this regard. While the theoretical contribution is great (even though it requires some limiting assumptions), the experiments could be more insightful. Some questions:
- It is unclear how robust your approach is against a violation of linearity and the Gaussianity assumption. Experiments with data that explicitly violates your main assumptions would be insightful.
- In practice, how could one determine if selection bias is present in a dataset versus just latent confounding?
- The complexity of the proposed method is unclear. It generally appears to be fast (especially compared to PC/FCI etc.). Can the authors briefly comment on this?
- How robust is the rank determination under small sample sizes, do you have some insights on this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments. Below, we provide detailed response.
---
**(Q1)** The reviewer suggests several additional experiments.
**A:** Thank you for the constructive suggestions. In light of them, we conducted new experiments to evaluate our method under violations of **Gaussianity** and **linearity**, under **small sample sizes**, and regarding **time complexity**. Results are summarized below and detailed in [this anonymous link](https://tinyurl.com/ydmznrxm).
1. **Non-Gaussianity**: We simulate linear SEMs with noise types: Gaussian, Exponential, Laplace, Uniform, and Gumbel. These choices test the sensitivity to skewness, tail behavior, and asymmetry. The numbers of edgemark differences on 10 and 20 nodes and 5 random seeds (same below) are reported. Despite the violations, our method consistently outperforms others:
|#Nodes|10||||20||||
|-|-|-|-|-|-|-|-|-|
|**Noise**|Ours|PC|BOSS|FCI|Ours|PC|BOSS|FCI|
|Gaussian|**19.0±8.3**|28.2±9.1|27.6±12.6|35.6±12.2|**39.4±6.4**|76.0±7.2|65.4±4.3|88.8±6.9|
|Exponential|**19.8±7.8**|27.4±8.1|28.4±10.4|35.0±11.8|**40.6±5.8**|69.2±6.2|60.0±9.6|80.2±11.4|
|Laplace|**18.0±7.9**|28.4±7.9|22.2±6.4|33.2±11.3|**39.6±3.0**|70.0±11.2|62.8±11.0|84.8±8.5|
|Uniform|**17.0±8.6**|28.8±9.6|30.6±11.3|34.6±11.9|**35.4±6.9**|64.0±9.8|72.6±12.7|77.4±10.7|
|Gumbel|**18.4±8.1**|25.8±6.2|24.6±7.4|33.8±10.5|**38.8±6.0**|69.6±10.3|64.0±10.8|80.4±11.7|
2. **Nonlinearity**: We simulate additive noise SEMs with functions: Linear, Leaky ReLU, Tanh, Cubic, Quadratic, and Sine. All but the last two are monotonic. We do see a performance drop on the last two, yet compared to other methods, ours remains the best:
|#Nodes|10||||20||||
|-|-|-|-|-|-|-|-|-|
|**Function**|Ours|PC|BOSS|FCI|Ours|PC|BOSS|FCI|
|Linear|**19.0±8.3**|28.2±9.1|27.6±12.6|35.6±12.2|**39.4±6.4**|76.0±7.2|65.4±4.3|87.6±4.2|
|Leaky ReLU|**20.0±7.7**|28.4±8.6|27.6±12.5|32.6±11.5|**42.4±6.1**|69.6±7.2|71.6±10.0|87.0±7.1|
|Tanh|**20.8±6.8**|28.8±9.0|24.0±8.8|30.0±13.3|**38.2±6.4**|66.6±9.1|50.6±9.8|77.8±4.6|
|Cubic|**25.0±4.3**|28.0±5.0|27.8±4.7|34.0±3.6|**54.0±5.7**|75.8±12.4|64.0±8.8|81.5±13.4|
|Quadratic|**25.8±3.5**|26.2±3.7|28.6±4.8|27.8±5.0|**55.2±3.6**|61.4±8.8|59.6±10.4|70.6±5.7|
|Sine|27.4±2.7|28.2±4.4|**25.8±6.4**|30.8±6.2|**47.6±6.5**|60.0±9.0|48.8±6.6|66.0±9.6|
3. **Small sample sizes**: Using linear Gaussian SEMs, we vary sample size. With fewer samples, our method continues to detect low-rank patterns reliably and outperforms others:
|#Nodes|10||||20||||
|-|-|-|-|-|-|-|-|-|
|**#Samples**|Ours|PC|BOSS|FCI|Ours|PC|BOSS|FCI|
|100|**24.6±4.5**|24.8±7.7|25.0±5.7|26.0±5.4|**53.8±4.2**|54.8±5.6|55.6±6.4|56.4±3.7|
|500|24.0±6.9|**23.8±6.5**|27.4±8.3|29.6±7.5|**46.2±11.7**|55.4±10.1|48.8±10.3|60.2±11.0|
|1000|**22.0±6.2**|26.8±8.1|22.4±7.5|27.4±8.2|**53.0±5.9**|62.2±7.0|61.4±6.2|74.2±10.1|
4. **Time complexity**: Our algorithm follows the same structure as FCI. Hence, like FCI, it has a worst-case complexity exponential in the number of variables. However, if the graph is sparse–a common and reasonable assumption–the runtime becomes polynomial. We report running time in ms below:
|#Nodes|10||||20||||
|-|-|-|-|-|-|-|-|-|
|**#Samples**|Ours|PC|BOSS|FCI|Ours|PC|BOSS|FCI|
|100|**12±1**|479±15|453±19|447±15|**43±4**|488±29|462±12|493±41|
|1000|**28±8**|493±19|475±35|500±30|**110±31**|542±29|503±13|606±53|
|10000|**110±48**|629±31|604±42|665±54|**434±113**|976±106|776±32|4364±3554|
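The linear-SEM simulations with swappable noise families described in (1) can be sketched as follows (coefficient ranges, edge density, and centering choices are our own assumptions, not the paper's exact protocol):

```python
import numpy as np

def simulate_linear_sem(n, d, noise="gaussian", seed=0):
    """Draw n samples from a random d-node linear SEM.

    Nodes are in topological order: x_j = sum_{i<j} W[i, j] * x_i + e_j,
    so W is strictly upper-triangular. The noise family can be swapped to
    probe robustness to non-Gaussianity; all noises are centered.
    """
    rng = np.random.default_rng(seed)
    W = np.triu(rng.uniform(0.5, 1.5, size=(d, d)) * rng.choice([-1, 1], size=(d, d)), k=1)
    W *= rng.random((d, d)) < 0.3  # keep the graph sparse
    samplers = {
        "gaussian": lambda s: rng.normal(size=s),
        "exponential": lambda s: rng.exponential(size=s) - 1.0,
        "laplace": lambda s: rng.laplace(size=s),
        "uniform": lambda s: rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=s),
        "gumbel": lambda s: rng.gumbel(size=s) - np.euler_gamma,
    }
    E = samplers[noise]((n, d))
    # x_j = sum_{i<j} W[i, j] x_i + e_j  <=>  X (I - W) = E  (rows are samples)
    return E @ np.linalg.inv(np.eye(d) - W)

X = simulate_linear_sem(5000, 10, noise="laplace")
```

Since I - W is unit upper-triangular, it is always invertible, and the same skeleton serves all noise families.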
---
**(Q2)** The reviewer wonders how one might distinguish between selection bias and latent confounding in practice.
**A:** Thank you for this insightful question. Below, we try to briefly address it from two complementary views. Due to space limit, for a more detailed discussion, please kindly refer to our response to Reviewer KB6X's Q1.
1. **Using only second-order information (rank in covariances), this is a hard problem**:
- Ideally, **one would need to characterize the rank equivalence class**, which is an open and challenging problem (see, e.g., our response to Reviewer nj2K's Q1).
- Even with a full characterization, distinguishing the two may still **require algorithmic reasoning** over constraint sets, rather than any obvious graphical patterns.
2. **With higher-order information and parametric assumptions, some distinctions become feasible**:
- Even for two rank-equivalent graphs, data generated from one graph with a natural SEM may require a much more complex, unnatural SEM to reproduce under the other graph.
- Such distinctions with a **preference for simplicity can guide model selection.**
Though (2) is beyond this work's scope, we sincerely appreciate the reviewer's question that highlights a valuable direction: seeing how the two lines can be merged to better identify selection and confounding in practice.
---
We want to thank the reviewer again for all the valuable feedback, and we hope the reviewer's questions are properly addressed.
Summary: In this paper, the authors propose the use of rank constraints to infer causal structure of the latent variables underlying a set of measurements, when these latent variables are themselves subject to selection bias (e.g. more conscientious individuals are more likely to fill out a full Big 5 questionnaire).
### update after rebuttal ###
I thank the authors for their answers. My assessment remains positive.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the proofs and they seem correct.
Experimental Designs Or Analyses: The setup of the empirical evaluation is sound, including experiments on synthetic and real-world data. As the authors primarily compare to basic rather than state-of-the-art methods, there is a risk for an illusion of progress.
Supplementary Material: I checked the proofs.
Relation To Broader Scientific Literature: The paper related its key contributions well with regard to the literature.
Essential References Not Discussed: The most relevant papers are discussed.
Other Strengths And Weaknesses: - Strengths
The approach is novel and interesting.
Few methods can effectively deal with both latent variables and selection at the same time.
The paper is clear and straightforward.
- Weaknesses
It's unclear how the results here can be used to deal with cases where we don't know how many latent variables there are.
It's not quite clear to me how Theorem 1 can be used when we don't know what the selection variables are.
Other Comments Or Suggestions: None.
Questions For Authors: - Can you explain why in Fig. 3, the graph for China contains more nodes than for Canada and Germany?
- In the experiment of the Big 5, can you explain to what extent the results depend on the "objective" existence of the measured traits? Big 5 often involve various preprocessing steps (such as varimax rotation), how do we know our results are not due to these preprocessing steps?
- In Definition 4 we assume that the parents of each $Y_i$ are among the latent variables $L$. Does this matter, when the paths from $L \to X \to Y$ would also all be linear?
- In Fig. 4 (or in a followup in the Appendix), it would be interesting to see the kinds of errors made by the algorithm. Does it mostly miss selection, confounding, or something else?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive comments. Please see below for our response.
---
**(Q1)** The reviewer wonders how to identify latent structure and selection bias when the structural assumption of one-factor models is violated–such as when the number of latent variables is unknown, or when selection acts on observed variables as $L \to X \to Y$.
**A:** Thank you for this insightful question. Moving beyond structural assumptions has long been a goal–but also a challenge–for rank-based methods. Below, we address this from two complementary perspectives: why the problem is inherently hard with rank constraints alone, and how additional information beyond ranks may help.
1. **Using only second-order information, this is a hard problem:**
- Distinguishing selection from confounding via ranks in arbitrary graphs requires characterizing the **rank equivalence class**–the set of all graphs (with possibly different latent and selection variables) that entail the same rank constraints.
- However, this characterization **remains open and challenging, even without selection.** Due to page limit, please kindly see our response to Reviewer nj2K's Q1 for details.
- In short, despite four decades of work on rank-based methods, no such characterization has been developed, which is why current methods, including ours, typically rely on structural assumptions (e.g., one-factor models).
- Even with such a characterization, distinguishing selection and confounding may still **require algorithmic reasoning** over constraint sets rather than direct graphical patterns (see Example 4).
2. **With higher-order information and additional parametric assumptions, certain distinctions may become feasible:**
- Consider two measured variables. Whether they are confounded ($X_1 \leftarrow L \to X_2$), directly causal ($X_1 \to X_2$), or selected ($X_1 \to Y \leftarrow X_2$) is indistinguishable by rank or CI constraints, since all imply $X_1 \not \perp X_2$.
- But they may be distinguishable under additional assumptions. For instance, in linear non-Gaussian models, direct causation can be identified via residual independence.
- For selection bias, suppose it acts through truncation, scatterplots may show clear cut-offs (see Fig. 2). Reproducing such patterns under the other scenarios would require **more complex**–and arguably less plausible–functional forms. With a **preference for simplicity**, such patterns can guide model selection.
Though (2) lies beyond this work's scope, we appreciate the reviewer’s question, which points to a valuable direction: merging the two lines to better identify selection and confounding in practice.
---
**(Q2)** The reviewer asks for clarification on several experimental details:
- **(Q2.1)** Why in Fig. 3, the graph for China contains more nodes than for Canada and Germany?
**A:** Thank you for your careful reading. As noted in Sec. 5.2, "some variables are missing in certain countries due to the low response rate."
- **(Q2.2)** In Big 5, to what extent do results depend on the "objective" existence of the measured traits?
**A:** Although traits are labeled for the personality they are designed for (e.g., "Openness"), **we do not use such side information.** Instead, we cluster traits using rank constraints: under one-factor models, two observed traits share a latent parent iff the rank between them and all others is one. These inferred clusters indeed align well with the labels.
- **(Q2.3)** How do we know our results on Big 5 are not due to preprocessing (e.g., varimax rotation)?
**A:** Thank you for raising this. While preprocessing like varimax rotation is common in factor analysis, here we directly use the raw, discrete data from [link](https://tinyurl.com/4brackf6)–**no preprocessing is applied.**
That said, directly verifying model assumptions is still difficult due to selection-induced distortions. However, from another view, the many low-rank patterns–which would typically be destroyed under model violations–are detected, and the recovered graph appears reasonable. This offers a **partial empirical validation** for model adequacy. See Reviewer nj2K’s Q3 for more.
- **(Q2.4)** The comparisons are mainly to basic methods.
**A:** We do have included BOSS [[A+23]](https://tinyurl.com/3saupn8x), a recent SOTA outperforming classical constraint and score-based methods. No other latent-variable methods are used, because they require different structural assumptions and the outputs are incomparable.
- **(Q2.5)** It would be interesting to see the kinds of errors made by the algorithm.
**A:** Thank you–this is a valuable point. In the 20-node setting of Fig. 4, our method greatly improves selection-edge ('--') recovery (F1: 0.12 to 0.77). Though '--' edges are few, their correct detection aids propagation, improving direct-edge ('->') recovery as well (F1: 0.52 to 0.80).
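The clustering rule described in Q2.2–two observed traits share a latent parent iff the rank between them and all other variables is one–can be sketched on synthetic data (loadings and the tolerance are our own illustrative choices, not the paper's procedure):

```python
import numpy as np

def rank_one(cov, pair, rest, tol=0.05):
    """Check whether the cross-covariance between `pair` and `rest` has rank one."""
    s = np.linalg.svd(cov[np.ix_(pair, rest)], compute_uv=False)
    return s[1] / s[0] < tol

rng = np.random.default_rng(0)
n = 100_000

# Two latent factors with three observed children each (loadings are illustrative)
L1, L2 = rng.normal(size=(2, n))
X = np.empty((n, 6))
X[:, :3] = np.outer(L1, [1.0, 0.9, 1.1]) + 0.5 * rng.normal(size=(n, 3))
X[:, 3:] = np.outer(L2, [1.0, -0.8, 1.2]) + 0.5 * rng.normal(size=(n, 3))
cov = np.cov(X, rowvar=False)

# Pair (Xi, Xj) shares a latent parent iff rank({Xi, Xj}, all others) == 1
clusters = [
    (i, j)
    for i in range(6)
    for j in range(i + 1, 6)
    if rank_one(cov, [i, j], [k for k in range(6) if k not in (i, j)])
]
print(clusters)  # pairs within {X0,X1,X2} and within {X3,X4,X5}
```

Within-cluster pairs give a rank-one cross-covariance because all dependence flows through a single latent parent, while cross-cluster pairs do not.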
---
We want to thank the reviewer again for all the valuable feedback.
Summary: This paper extends the rank constraint and t-separation criteria in latent variable causal discovery to scenarios involving selection bias. The authors demonstrate that the rank constraint retains its informativeness even under selection bias, despite the potential invalidation of the linear Gaussian assumption. Additionally, they generalize the t-separation criteria to the augmented selection graph, enhancing its applicability. Leveraging these developed tools, the authors investigate the problem of causal discovery in a one-factor model under conditions of selection bias.
Claims And Evidence: Yes. The two primary theoretical contributions of this paper are Theorem 1 and Proposition 4. Specifically, Theorem 1 extends the t-separation criteria introduced by Sullivant et al. (2010) to scenarios involving selection bias, thereby broadening its applicability. Proposition 4 builds on this extension to address the causal discovery problem in a one-factor model under selection bias. Both results are supported by rigorous, well-constructed proofs and are complemented by intuitive illustrations that enhance their clarity and accessibility.
Methods And Evaluation Criteria: Yes. The major proposal of this paper is that the rank constraint remains valid for causal models under selection bias. The authors have successfully demonstrated their points using intuitive examples and rigorious theorems. It is convincing that the proposed method can work under latent causal discovery with selection bias.
Theoretical Claims: Yes, I have checked the correctness of the proofs.
Experimental Designs Or Analyses: Yes, the experiment design and analyses are sound.
Supplementary Material: Yes, Appx. A and Appx. B mainly.
Relation To Broader Scientific Literature: This paper extends the rank constrain and t-separation criteria introduced by Sullivant et al. 2010 to cases with selection bias.
Essential References Not Discussed: No
Other Strengths And Weaknesses: A weakness of this paper is the lack of a comprehensive characterization of rank equivalence graph, as also acknowledged by the authors. This can affect the application of their theory to real world causal discovery problems.
Other Comments Or Suggestions: No
Questions For Authors: 1. I am particularly interested in Sec. 3.3. While a detailed characterization of the rank equivalence graph is not yet available, can the authors provide some intuition or potential ideas to this problem? Further, is it possible to delevep an algorithm like the FCI to recover the rank equivalent graph? How is this rank equivalent graph related to the causal graph over latent variables (which is our target)?
2. Can the rank-constraint theory be extended beyond linear Gaussian data? (I presume not.) How does this method work empirically when the linear Gaussian assumption does not hold, e.g., with linear non-Gaussian data or nonlinear data?
3. Does the personality trait example in the introduction section satisfies the linear Gaussian model?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive comments and helpful feedback. Please see below for our response.
---
**(Q1)** The reviewer asks about intuitions for the rank equivalence class.
**A:** Thank you for this constructive question. We fully agree that the following three problems are fundamental:
1. A graphical criterion to decide rank equivalence,
2. A formal object to describe the rank equivalence class,
3. An algorithm to enumerate all members of the class.
However, as the reviewer noted, **all three remain open and challenging.** To illustrate the hardness, consider, e.g., the spider graph and the (2,6)-factor analysis graph, and see how two seemingly very different graphs can still, counterintuitively, be rank equivalent [[SSK10]](https://tinyurl.com/5z8x7td5).
Indeed, despite almost four decades of work on rank-based methods since [[G+86]](https://tinyurl.com/2s48ba2v), no general characterization of rank equivalence has yet been developed, even in settings without selection bias. This is why these methods, including ours, typically rely on structural assumptions (e.g., one-factor models), rather than arbitrary graphs.
That said, we see two potential directions:
1. **Algebraic direction**: Rank constraints can be seen as encoding flow capacities between node sets. Reformulated, the problem becomes: _Given unknown graph structure, but known flow (e.g., min-cut) values between certain node groups, what can we recover?_ Algebraic geometry may offer tools to approach this.
2. **Algorithmic direction**: Even without a full characterization, one might still build a sound but incomplete algorithm first, by relaxing existing methods gradually. This also mirrors the trajectory of CI-based methods: while FCI was first introduced in 1993 [[SGS93]](https://tinyurl.com/yyzy32f3), the direct graphical criterion for equivalence came in 2002 [[RP02]](https://tinyurl.com/3afjpm98), the full class characterization in 2008 [[Zha08]](https://tinyurl.com/2s47c6yc), and enumeration algorithms are still under study [[WDZ24]](https://tinyurl.com/yeys53vb).
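To make the flow/rank intuition in direction 1 concrete, here is a small hypothetical sketch (illustrative numbers, not code from the paper): in a linear Gaussian latent model with independent measurement noise, the cross-covariance between two observed groups factors through the shared latents, so its rank is bounded by the number of latent variables connecting the groups (their "min-cut").

```python
import numpy as np

# Hypothetical loading matrices: each observed group is a linear function
# of the shared latent variables plus independent noise (which does not
# affect the cross-covariance between the two groups).
L1_one = np.array([[1.0], [2.0], [3.0]])   # group 1 loads on 1 latent
L2_one = np.array([[1.0], [0.5], [-1.0]])  # group 2 loads on the same latent

L1_two = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]])  # 2 shared latents
L2_two = np.array([[1.0, 2.0], [0.5, 0.0], [-1.0, 1.0]])

def cross_cov_rank(L1, L2):
    # With Cov(latents) = I and independent noise across groups,
    # Cov(X1, X2) = L1 @ L2.T, so the rank is at most the number
    # of latent variables the two groups share.
    return np.linalg.matrix_rank(L1 @ L2.T)

print(cross_cov_rank(L1_one, L2_one))  # 1 shared latent -> rank 1
print(cross_cov_rank(L1_two, L2_two))  # 2 shared latents -> rank 2
```

This is exactly the kind of low-rank constraint that rank-based methods test for on sample covariances.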
Finally, even with a full characterization, distinguishing latent variables from selection bias–as in our Section 3.3–may still require **algorithmic reasoning over sets of rank constraints rather than graphical patterns.** This mirrors the CI setting–such as in our Example 4, the presence of latent variables or selection is implied by FCI's output, not by any obvious graphical patterns.
We thank the reviewer again for highlighting this important and exciting problem.
---
**(Q2)** The reviewer wonders whether the rank-constraint theory can be extended beyond the linear Gaussian setting.
**A:** This is a very important question. We address it from three angles:
1. **Theoretical scope**: Our results indeed rely on properties of the linear Gaussian model. Specifically, Theorem 1 relies on the closure of Gaussians under point conditioning, so that linear structures in covariances are preserved under selection. This does not hold in general non-Gaussian or nonlinear settings.
2. **Empirical robustness**: Nonetheless, simulations on both linear non-Gaussian and nonlinear data show that our method remains empirically effective (consistently the best among competitors). Due to space limits, please kindly refer to our response to Reviewer nKfC for results.
3. **Broader outlook**: Finally, we note that this work serves as a (first) proof of concept showing that ranks can remain informative under selection bias. Just as original linear-Gaussian rank results were extended to, e.g., nonlinear, cyclic [[Spi13]](https://tinyurl.com/25y2wx4s), and discrete models [[Gu25]](https://tinyurl.com/n6jm5jry), we believe the insights here may generalize as well, though the tools may differ.
---
**(Q3)** The reviewer wonders whether the personality trait data satisfies the linear Gaussian model.
**A:** Thank you for this insightful question. **Strictly speaking, no**–they are discrete questionnaire responses. However, some literature interprets such ordinal variables as discretized versions of Gaussians, in which case covariance ranks can still be informative [[LN08]](https://tinyurl.com/2423ea4d).
More broadly, **model testing in our setting is difficult:** Even if the data were first generated by a linear Gaussian model, selection can then induce strong non-Gaussianity and nonlinearity. Hence, even with Gaussian tests like Mardia's, we cannot easily tell whether a non-Gaussianity arises from model violation or simply selection distortions.
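As an illustrative sketch of this point (our own toy example, not from the paper): the snippet below computes Mardia's multivariate skewness statistic on a bivariate Gaussian sample before and after selecting on $X + Y > 0$. The selected subsample is markedly skewed even though the generating model is perfectly Gaussian, so a large test statistic alone cannot distinguish model violation from selection distortion.

```python
import numpy as np

def mardia_skewness(X):
    """Mardia's multivariate skewness b_{1,d} of an (n, d) sample."""
    n, _ = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = Xc @ S_inv @ Xc.T  # matrix of Mahalanobis inner products
    return (G ** 3).sum() / n ** 2

rng = np.random.default_rng(0)
full = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=2000)
selected = full[full.sum(axis=1) > 0]  # selection on X + Y > 0

print(mardia_skewness(full))      # near 0: the raw data are Gaussian
print(mardia_skewness(selected))  # clearly positive: selection distortion
```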
However, from another view, note that our method detects many low-rank constraints in the Big Five dataset. Since major model violations would typically destroy such low ranks in the covariances, these low ranks, together with the meaningful causal structure recovered from them, offer a **partial empirical validation for model adequacy.**
---
Once again, we thank the reviewer for the insightful feedback. | null | null | null | null | null | null |
Advancing Personalized Learning with Neural Collapse for Long-Tail Challenge | Accept (poster) | Summary: This paper addresses a major challenge in personalized learning, where many existing methods assume high-quality, well-annotated benchmarks. In real-world settings, such benchmarks often exhibit long-tail distributions that negatively affect model performance. The authors propose a novel approach called Neural-Collapse-Advanced Personalized Learning (NCAL), which introduces a Textmodality Collapse (TC) regularization to optimize the distribution of text embeddings in large language models (LLMs). NCAL is model-agnostic, ensuring it can work with various architectures, and it is shown to improve performance significantly, surpassing previous state-of-the-art methods while mitigating class imbalance.
Claims And Evidence: The paper demonstrates convincing evidence that NCAL effectively addresses the long-tail distribution issue and enhances performance. The claim about improving generalization ability is well-supported by experiments. However, the text lacks detailed information on the underlying mathematical principles of the Textmodality Collapse regularization, and a deeper explanation would be beneficial.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense for the problem. Personalized learning is a significant area of interest, and addressing the limitations of the benchmark, especially their long-tail distributions. It is an important step in improving model performance. The introduction of NCAL, with its Textmodality Collapse regularization, is a reasonable approach to optimize text embeddings and improve generalization ability.
Theoretical Claims: I have specifically checked the correctness of the proofs for the theoretical claims in this paper. However, the TC loss in Equation 14 and the loss in Equation 13 should be further explained and clarified.
Experimental Designs Or Analyses: The authors conducted experiments on two real-world personalized learning datasets and compared various types of large language models as baselines. The overall experimental design is reasonable and sufficiently demonstrates the effectiveness of the proposed method. However, the authors should visualize the statistical analysis of these two real-world datasets.
Supplementary Material: I have reviewed the supplementary material from Section A to Section C.
Relation To Broader Scientific Literature: This paper presents NCAL, a model-agnostic method that addresses long-tail distributions in personalized learning by optimizing text embeddings with Textmodality Collapse (TC) regularization. NCAL improves model generalization, mitigates class imbalance, and achieves state-of-the-art performance, advancing data-driven methods in real-world scenarios.
Essential References Not Discussed: In my understanding, the important references have already been discussed.
Other Strengths And Weaknesses: ## Strengths
- NCAL enhances the generalization ability of models by addressing the challenge of class imbalance, improving performance on diverse datasets.
- By being model-agnostic, NCAL can be applied across different architectures, increasing its versatility and potential for wide adoption in various domains.
## Weaknesses
- The authors should provide a statistical analysis of the datasets used in the experiments and visually present the long-tail distribution of sample sizes in each category for a clearer understanding.
- In Figure 4, the authors show the performance differences between various methods based on different values of $\tau$. The rationale for selecting $\tau = 0.25/0.03/0.15/0.25$ is unclear. The authors should explain why these four specific values of $\tau$ were chosen for the experiments.
- In Table 3, two performance spikes are observed for the “w/o prompt” condition in green. The authors should provide an explanation for the possible reasons behind this unexpected result.
Other Comments Or Suggestions: In Equation 5, the symbol $\alpha$ is not explained.
Questions For Authors: See other comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Q1: However, the text lacks detailed information on the underlying mathematical principles of the Textmodality Collapse regularization, and a deeper explanation would be beneficial.**
**A1:** Thank you for the reviewer's insightful comment. In our paper, the goal of TC regularization is to promote a more balanced and structured text feature distribution, thereby addressing the long-tail distribution problem by leveraging constraints derived from inter-class information. In Section 4.3, we explain the mathematical principles behind TC regularization from the perspective of gradient optimization and its role in constraining the text feature space.
> **Q2: However, the TC loss in Equation 14 and the loss in Equation 13 should be further explained and clarified.**
**A2:** We further clarify the distinction between the TC loss in Equation (14) and the loss in Equation (13). The gradient of Equation (13) consists of an "intra-class cohesion" term and an "inter-class repulsion" term. In class-imbalanced learning, the gradient signal for underrepresented minority classes becomes dominated by the repulsion term, which biases the geometric configuration and can result in suboptimal feature-space organization for rare categories. Equation (14) explicitly shows that the TC loss adjusts text representations to ensure maximal angular separation, mitigating the adverse effects of imbalanced gradient updates.
> **Q3: However, the authors should visualize the statistical analysis of these two real-world datasets.**
**A3:** In the TMWPL dataset, there are seven classes: Recall, Formulate, Identify, Represent, Implement, Inferences, and Analyze, with sample sizes of 5045, 1283, 717, 711, 405, 235, and 216, respectively. On the other hand, the PMTD dataset consists of eight classes: R-FR, I-Q, R-SR, F-I, F-F, U, I-H, and R-RR, with sample sizes of 1014, 880, 737, 518, 510, 389, 104, and 62, respectively.
> **Q4: The authors should explain why these four specific values of $\tau$ were chosen for the experiments.**
**A4:** Firstly, $ \tau $ represents the imbalance ratio of the data, defined as the ratio of the number of samples in the least frequent category to the number of samples in the most frequent category. In our paper, 0.03 corresponds to the $ \tau $ of our original data, while 0.25 is the maximum $ \tau $ that our data can support for training without compromising the model's performance. Beyond this threshold, the data volume is insufficient for effective training.
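Concretely, with $\tau$ defined as the size of the rarest class over the size of the most frequent class, it can be computed as in this small sketch (the class counts here are hypothetical, chosen only so the example yields $\tau = 0.03$):

```python
def imbalance_ratio(class_counts):
    """tau = count of the rarest class / count of the most frequent class."""
    return min(class_counts) / max(class_counts)

# Hypothetical long-tailed label counts for illustration.
counts = [500, 120, 60, 15]
print(imbalance_ratio(counts))  # 15 / 500 = 0.03
```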
> **Q5:** The authors should explain the possible reasons behind this unexpected result.
**A5:** The reviewer raised an interesting question. In the ablation experiments, the performance improvement of the "w/o prompt" condition compared to the baseline method may stem from the model's ability to more effectively construct the text feature space in the absence of a prompt. This suggests that removing the prompt could allow the model to be more flexible in feature extraction, thereby enhancing performance. Although this phenomenon is intriguing, we believe further experiments are needed to validate its correctness, and we plan to explore this as part of future work.
> **Q6: More experimental designs for our method.**
**A6:** We followed the reviewer's suggestion and considered additional baselines to compare the effectiveness of our method. The experimental results are shown below. Compared with models of the same or even larger parameter counts, our method still demonstrates robust performance on long-tail distribution datasets. This further confirms that introducing neural collapse can constrain the learning of the model's class space, thus avoiding biases caused by classes with large numbers of samples.
| Model | Parameters | Dataset | Acc |
| --- | --- | --- | --- |
| glm-4-9b-chat | 9B | TMWPL | 67.71 |
| Ministral-8B-Instruct | 8B | TMWPL | 66.86 |
| Yi-1.5-9B-Chat | 9B | TMWPL | 63.14 |
| Baichuan2-7B-Chat | 7B | TMWPL | 61.14 |
| **NCAL-Qwen2.5-Instruct** | 7B | TMWPL | **74.86** |
> **Q7: In Equation (5), the symbol $\alpha$ is not explained.**
**A7:** In our paper, $\alpha$ is a constant scaling parameter.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. After carefully reviewing the authors' responses, all of my concerns have been adequately addressed, including a detailed rationale for the proposed technique and a more in-depth experimental analysis. Therefore, I am willing to raise my rating.
---
Reply to Comment 1.1.1:
Comment: We are glad to know that your concerns have been effectively addressed. We are very grateful for your constructive comments and questions, which helped improve the clarity and quality of our paper. Thanks again! | Summary: Personalized learning, particularly data-driven approaches, faces challenges due to long-tail distributions in real-world benchmarks, which affect model performance. To address this, the authors propose NCAL (Neural-Collapse-Advanced Personalized Learning), which leverages Textmodality Collapse (TC) regularization to optimize text embeddings in large language models (LLMs). NCAL is model-agnostic, compatible with various architectures, and improves generalization while mitigating class imbalance. Extensive experiments show that NCAL achieves state-of-the-art performance and enhances existing methods.
Claims And Evidence: The claims in the submission are generally well-supported by clear evidence, with experiments demonstrating NCAL’s effectiveness in addressing long-tail distributions and class imbalance. However, the claim about mitigating class imbalance would benefit from further discussion or visualizations to highlight the improvement in class distribution. More visual evidence could strengthen the claim of enhanced category representation.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem. The use of NCAL to address the long-tail distribution problem in personalized learning is a novel approach. The experiments on benchmark datasets (TIMSS and PMTD) are suitable for evaluating the effectiveness of the proposed method in real-world scenarios.
Theoretical Claims: Yes, I have reviewed the theoretical claims in the paper. The proofs for the claims related to the effectiveness of the NCAL method appear to be logically sound and well-supported by the experiments. However, the mathematical formulation and the justification behind the introduction of Textmodality Collapse (TC) regularization could benefit from further explanation, particularly regarding its impact on the feature representations and how it mitigates the long-tail distribution issue. More clarification in this regard would strengthen the theoretical foundation of the approach.
Experimental Designs Or Analyses: The experimental design in this paper is generally sound, with a clear setup and methodology. However, incorporating additional experiments to evaluate the method’s performance under varying imbalance rates and providing more details on the experimental setup, such as dataset characteristics and imbalance ratios, would offer deeper insights into the model’s robustness and generalizability.
Supplementary Material: I have checked the supplementary material.
Relation To Broader Scientific Literature: This paper builds upon prior research in personalized learning and long-tail distribution challenges, addressing limitations in existing benchmarks that assume high-quality, well-annotated data. By leveraging the simplex ETF structure and introducing Textmodality Collapse (TC) regularization, NCAL refines text embeddings within LLMs, aligning with recent advancements in feature representation learning.
Essential References Not Discussed: The paper provides a solid discussion of related works.
Other Strengths And Weaknesses: ## Strengths
- The paper introduces NCAL, which leverages the simplex ETF structure and TC regularization to mitigate class imbalance, improving model generalization in personalized learning.
- The proposed method is model-agnostic and demonstrates superior performance across multiple architectures and benchmark datasets, achieving state-of-the-art results.
## Weaknesses
- Could you further explain Equation (10)? Are $i$ and $j$ texts from the same category or different categories?
- Although the authors have validated the effectiveness of their method on two representative data-driven personalized learning datasets, it is recommended to conduct experiments on additional open-source datasets to more comprehensively assess the method’s generalization ability and robustness.
- Table 3 shows the impact of the presence or absence of prompts on performance. Have the authors considered the specific impact of different types of prompts on performance?
Other Comments Or Suggestions: Please refer to other reviews.
Questions For Authors: Please refer to the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Q1: However, the claim about mitigating class imbalance would benefit from further discussion or visualizations to highlight the improvement in class distribution. More visual evidence could strengthen the claim of enhanced category representation.**
**A1:** We appreciate the reviewer’s comments. Below, we present the statistical distribution of the data used in our paper. Please note that these two datasets were collected from real-world scenarios, with PMTD containing eight categories and TMWPL comprising seven categories.
| Dataset | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7 | Class 8 |
| ------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| PMTD classes | R-FR | I-Q | R-SR | F-I | F-F | U | I-H | R-RR |
| PMTD samples | 1014 | 880 | 737 | 518 | 510 | 389 | 104 | 62 |
| TMWPL classes | Recall | Formulate | Identify | Represent | Implement | Inferences | Analyze | — |
| TMWPL samples | 5045 | 1283 | 717 | 711 | 405 | 235 | 216 | — |
> **Q2: However, the mathematical formulation and the justification behind the introduction of TC regularization could benefit from further explanation, and how it mitigates the long-tail distribution issue.**
**A2:** The goal of NCAL is to learn a simplex ETF through neural collapse, so as to construct a uniformly partitioned category feature space. To achieve this, we introduce TC regularization, which constrains the feature distribution of the category partitioning learned from text data across different categories, effectively addressing the long-tail distribution problem.
> **Q3: However, incorporating additional experiments to evaluate the method’s performance under imbalance rates and experimental details.**
**A3:** On the TMWPL dataset, we conducted experiments based on the Qwen2.5-Math-Instruct architecture, evaluating the model's performance under different degrees of long-tail distribution, with imbalance ratios of 0.03, 0.08, 0.13, 0.18, and 0.25. The corresponding accuracies were 74.86, 75.01, 76.45, 78.06, and 76.86. The results indicate that as the degree of long-tail imbalance weakens, the model's performance generally improves.
> **Q4: Could you further explain Equation (10)? Are i and j texts from the same category or different categories?**
**A4:** Yes, we provide a more detailed explanation of Equation (10). The texts $i$ and $j$ come from different categories, and the goal is to constrain their feature representations so that the pairwise similarities across categories approach the uniform simplex-ETF target of $-1/(N-1)$, where $N$ represents the number of categories.
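As a minimal numerical sketch of this idea (our own illustration, not the paper's exact Equation (10)): for a simplex ETF over $N$ classes, the pairwise cosine similarity between class-mean features is uniformly $-1/(N-1)$ in the standard neural-collapse formulation, and a TC-style penalty can be read as the squared deviation of pairwise similarities from that target.

```python
import numpy as np

def tc_style_penalty(class_means):
    """Mean squared deviation of pairwise cosine similarities
    from the simplex-ETF target -1/(N-1)."""
    n = class_means.shape[0]
    M = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    cos = M @ M.T
    off_diag = cos[~np.eye(n, dtype=bool)]  # all i != j pairs
    return np.mean((off_diag + 1.0 / (n - 1)) ** 2)

# A perfect simplex ETF: centered standard basis vectors e_i - (1/N) * 1.
N = 5
etf = np.eye(N) - np.ones((N, N)) / N
print(tc_style_penalty(etf))  # ~0: the ETF attains the target exactly

rng = np.random.default_rng(0)
print(tc_style_penalty(rng.normal(size=(N, 8))))  # > 0 for random means
```

Minimizing such a penalty pushes the class means toward equiangular, maximally separated directions regardless of how many samples each class contributes.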
> **Q5: Although the authors have validated the effectiveness of their method on two representative data-driven personalized learning datasets, it is recommended to conduct experiments on additional open-source datasets to more comprehensively assess the method’s generalization ability and robustness.**
**A5:** As recommended by the reviewer, we have included a comprehensive study comparing three open-source datasets: [IITJEE NEET AIIMS Students Questions Data](https://www.kaggle.com/datasets/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data), [MathDial](https://github.com/eth-nlped/mathdial/tree/main), and [DialogID](https://github.com/ai4ed/DialogID/tree/main). The results, presented below, highlight the effectiveness of NCAL compared to additional baselines. The experimental results indicate that our method consistently outperforms others across all datasets.
| Model | Qwen2.5-Instruct | NCAL-Qwen2.5-Instruct | Llama3.1-Instruct | NCAL-Llama3.1-Instruct | Qwen2.5-Math-Instruct | NCAL-Qwen2.5-Math-Instruct |
|-------|-------|-------|-------|-------|-------|-------|
| Parameters | 7B | 7B | 8B | 8B | 7B | 7B |
| INASQD-Acc | 94.50 | **96.50** | 95.50 | **98.00** | 95.00 | **96.50** |
| MathDial-Acc | 51.50 | **57.00** | 51.00 | **53.50** | 52.00 | **57.50** |
| DialogID-Acc | 85.15 | **87.46** | 85.75 | **87.34** | 84.53 | **86.03** |
> **Q6:** The impact of different types of prompts on performance?
**A6:** Thank you for the reviewer’s suggestion. Based on the prompts considered in our manuscript, we have referred to related work and explored additional prompt forms, including base prompt (see Table 5 in our manuscript), w/o prompt, few-shot (introducing a few reference samples in the prompt), role play (personalized scene prompts), and only class name. The experimental results are shown below. The results demonstrate that our method maintains strong robustness and generalization across different prompt settings.
| Model | base prompt | w/o prompt | few-shot | role play | only classname |
| -- | -- | -- | -- | -- | -- |
| NCAL-Qwen2.5-Instruct | 74.86 | 75.00 | 74.86 | 74.86 | 74.00 |
| NCAL-Llama3.1-Instruct | 69.43 | 69.71 | 68.29 | 68.57 | 69.43 |
| NCAL-Qwen2.5-Math-Instruct | 74.86 | 71.43 | 72.57 | 74.86 | 72.00 | | Summary: Personalized learning has gained significant attention due to its ability to address individual student needs. Still, many methods rely on the assumption of high-quality benchmarks, which are often unrealistic. To overcome this, the authors proposed NCAL, which utilizes a TC regularization to adjust the distribution of text embeddings within LLMs. NCAL is designed to be compatible with multiple models and shows impressive results in improving generalization and overcoming class imbalance. Experiments confirmed its ability to achieve SOTA performance.
Claims And Evidence: While the paper provides evidence for the performance improvements and generalization ability of NCAL, the claim of it being “model-agnostic” remains somewhat unclear. More specific details on its application to different models and a deeper explanation of how it addresses class imbalance will help to reinforce these claims.
Methods And Evaluation Criteria: yes
Theoretical Claims: yes, no issues found
Experimental Designs Or Analyses: yes
Supplementary Material: yes, from Sections A to C
Relation To Broader Scientific Literature: The method contributes to the broader scientific literature by addressing the long-tail problem in data-based personalized learning.
Essential References Not Discussed: no
Other Strengths And Weaknesses: ## Strengths
- The proposed method can effectively tackle the problem of long-tail distributions in real-world data, which often affects the performance of models, making the method highly relevant for practical applications.
- NCAL shows strong results in improving existing models, achieving state-of-the-art performance while also enhancing the model’s ability to deal with class imbalance.
## Weaknesses
- The authors should add ablation experiments to further discuss the impact of different imbalance ratios.
- Different forms of prompts can significantly impact the performance of LLM models. In Section 5.3, the authors examined the effect of using or not using prompts on the experimental results but did not further analyze the performance differences across different prompt formats.
- I believe the authors are tackling an important and practical issue in the field of personalized learning. I recommend including more baseline methods, e.g., [1-3], for data-driven personalized learning to further strengthen the comprehensiveness of the study.
[1] Askarbekuly, N. and Aničić, N. LLM examiner: automating assessment in informal self-directed e-learning using ChatGPT. Knowledge and Information Systems, pp. 1–18, 2024.
[2] Ayeni, O. O., Al Hamad, N. M., Chisom, O. N., Osawaru,B., and Adewusi, O. E. Ai in education: A review of personalized learning and educational technology. GSC Advanced Research and Reviews, 18(2):261–271, 2024.
[3] Chang, L.-H. and Ginter, F. Automatic short answer grading for Finnish with ChatGPT. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 23173–23181, 2024.
Other Comments Or Suggestions: Please refer to Other Strengths And Weaknesses
Questions For Authors: In Section 6, the concept of equidistant text representation (ETR) regularization needs further explanation. Does it share the same meaning as Textmodality Collapse (TC)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Q1: While the paper provides evidence for the performance improvements and generalization ability of NCAL, the claim of it being “model-agnostic” remains somewhat unclear.**
**A1:** NCAL enhances feature learning by aligning with the simplex equiangular tight frame (ETF) structure through Text-modality Collapse (TC) regularization, which optimizes text distribution. This design ensures NCAL's compatibility with various models, making it a versatile plug-and-play framework. As evidenced in Table 2, we demonstrate its effectiveness across multiple LLM architectures, including Qwen2.5 and Llama3.1, further supporting its model-agnostic property.
> **Q2: More specific details on its application to different models and a deeper explanation of how it addresses class imbalance will help to reinforce these claims.**
**A2:** We have further clarified the learning mechanism of NCAL. Specifically, NCAL aims to learn features that conform to the Equiangular Tight Frame (ETF) structure by introducing Text-Modality Collapse (TC) regularization. This optimization improves the distribution of text embeddings within the representation space of large language models (LLMs), thereby enhancing the robustness and generalization ability of LoRA fine-tuned LLMs under long-tail data distributions.
> **Q3: The authors should add ablation experiments to further discuss the impact of different imbalance ratios.**
**A3:** As suggested by the reviewer, we conducted a comprehensive comparative study on different imbalance ratios. Specifically, we evaluated the model under imbalance ratios of 0.03, 0.05, 0.10, 0.15, 0.20, and 0.25. The experimental results, shown below, indicate that as the imbalance ratio increases from 0.03 to 0.25 (i.e., as the data become less imbalanced), the model's performance generally improves, from 74.86 at 0.03 to a peak of 78.29 at 0.20.
| Data Ratio (NCAL-Qwen2.5-Math-Instruct) | TMWPL-Acc |
| - | - |
| 0.03 | 74.86 |
| 0.05 | 75.29 |
| 0.10 | 77.14 |
| 0.15 | 78.00 |
| 0.20 | 78.29 |
| 0.25 | 76.86 |
> **Q4: Different forms of prompts can significantly impact the performance of LLM models. In Section 5.3, the authors examined the effect of using or not using prompts on the experimental results but did not further analyze the performance differences across different prompt formats.**
**A4:** Following the reviewer's suggestion and related work, we validate our method with more prompt forms, including the base prompt (see Table 5 in our manuscript), w/o prompt, few-shot (introducing a few reference samples in the prompt), role play (personalized scene prompts), and only class name (see Table 4). The experimental results are shown below:
| Model | base prompt | w/o prompt | few-shot | role play | only class name |
| ------- | ----------- | ---------- | -------- | ---------------- | -------------------- |
| NCAL-Qwen2.5-Instruct | 74.86 | 75.00 | 74.86 | 74.86 | 74.00 |
| NCAL-Llama3.1-Instruct | 69.43 | 69.71 | 68.29 | 68.57 | 69.43 |
| NCAL-Qwen2.5-Math-Instruct | 74.86 | 71.43 | 72.57 | 74.86 | 72.00 |
The experimental results show that different prompt forms have little impact on the model. The possible reason is that our method, using the TC regularization approach, can learn a good category feature space that is not overly influenced by the prompt during training and inference. This experiment further supports our motivation.
> **Q5: I recommend including more baseline methods, for data-driven personalized learning to further strengthen the comprehensiveness of the study.**
**A5:** We thank the reviewer for pointing out this issue. We conducted extensive experiments based on the papers recommended by the reviewer. Unfortunately, the code for the methods in [1] and [3] has not been released, making it difficult for us to reproduce these methods within the short rebuttal time frame. Additionally, [2] is a survey paper, from which we selected the most recent representative state-of-the-art (SOTA) works for performance comparison. The experimental results are as follows, showing that our method still achieves SOTA performance compared to other strong baselines on the more challenging TMWPL dataset.
| Model | Llama3.1-Instruct | gemma-2-9b-it | internlm3-8b-instruct | **NCAL-Qwen2.5-Instruct** |
| ------- | ---------- | --------- | --------- | --------|
| Parameters | 8B | 9B | 8B | 7B |
| Dataset | TMWPL | TMWPL | TMWPL| TMWPL|
| Acc |61.43 |72.29|72.86| **74.86**|
> **Q6: The concept of equidistant text representation (ETR) regularization needs further explanation. Does it share the same meaning as Textmodality Collapse (TC)?**
**A6:** We thank the reviewer for this comment. In our manuscript, ETR has the same meaning as TC, and we will correct this inconsistency in the final version.
---
Rebuttal Comment 1.1:
Comment: The rebuttal have addressed most of my questions and concerns regarding the method and experiments. I have raised my score to 4 and recommend that the authors incorporate the relevant experiments and discussions on model generalization into the paper.
---
Reply to Comment 1.1.1:
Comment: We’re pleased to hear that your concerns have been resolved. We sincerely appreciate your thoughtful feedback and insightful suggestions, which have greatly contributed to enhancing the clarity and overall quality of our work. Thank you once again! | Summary: The paper proposes a new method called Neural-Collapse-Advanced Personalized Learning (NCAL) to address the limitations of data-based personalized learning approaches that assume well-annotated benchmarks. In reality, these benchmarks often have long-tail distributions that affect model accuracy. NCAL introduces a regularization technique, Textmodality Collapse (TC), to optimize text embedding distributions in LLMs. The method is model-agnostic, allowing for its integration with various architectures. The authors demonstrate that NCAL improves performance across tasks and mitigates class imbalance, achieving new state-of-the-art results in the field.
Claims And Evidence: The claims regarding NCAL’s enhancement of model performance and its potential to address class imbalance are well-supported by experiments. However, the assertion that NCAL is model-agnostic would benefit from further clarification, particularly in terms of its practical application to various architectures. Additionally, more detailed examples of its impact on real-world class imbalance problems would provide further credibility to the claims.
Methods And Evaluation Criteria: The proposed method, NCAL, is well-aligned with the problem of data-driven personalized learning, particularly in addressing the long-tail distribution challenge in real-world benchmarks. The evaluation is conducted on two benchmark datasets. Extensive experiments demonstrate its effectiveness in improving model performance and mitigating class imbalance, making the evaluation criteria appropriate for the problem at hand.
Theoretical Claims: I have verified the theoretical claims in the paper, and the proofs are correct and well-founded.
Experimental Designs Or Analyses: The experimental design of this paper aligns with the standards of data-driven personalized learning in terms of data selection, baseline setup, and implementation details. However, incorporating additional personalized learning baselines or datasets would further enhance the completeness and comprehensiveness of experiments.
Supplementary Material: I have examined the supplementary material, from Section A through Section C.
Relation To Broader Scientific Literature: This paper builds upon prior research in neural collapse and representation learning by introducing NCAL, which optimizes feature representations through the simplex ETF structure and enhances text embedding distribution with TC regularization. However, the authors should improve the writing to enhance the clarity of the paper.
Essential References Not Discussed: As far as I understand, the key references have already been covered.
Other Strengths And Weaknesses: ## Strengths
- The introduction of Textmodality Collapse (TC) regularization is a unique contribution that optimizes the distribution of text embeddings within the LLM representation space, significantly enhancing performance.
- NCAL is model-agnostic, meaning it can be integrated with various architectures and methods, making it highly adaptable for different use cases and ensuring wide practical applicability.
## Weaknesses
- Personalized learning is a crucial research topic in intelligent education. However, due to various constraints in real-world environments, the collected data often exhibit severe long-tail distributions. The authors propose the NCAL method, which leverages the concept of neural collapse and introduces TC regularization to constrain the model, mitigating the impact of long-tail distributions. Would it be possible to include additional visualizations, rather than relying solely on numerical metrics in Tables, to more intuitively demonstrate the effectiveness of the proposed method in enhancing category representation space?
- This paper uses the TIMSS and PMTD datasets to evaluate the effectiveness of this method. However, more information should be provided, such as the number of samples in each category, to give a more comprehensive view of the data distribution.
- Recently, some works [1] have used data augmentation to mitigate the impact of data-related issues. It is recommended to include experiments comparing these methods with the approach proposed in this paper.
[1] Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation
Other Comments Or Suggestions: Please see other reviews
Questions For Authors: I believe the authors have addressed an important and interesting problem. I am particularly concerned about whether the proposed method can be better visualized to clearly demonstrate how it addresses the long-tail problem in the representations. If this can be done, I would be willing to increase my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: > **Q1: However, the assertion that NCAL is model-agnostic would benefit from further clarification.**
**A1:** NCAL is designed to learn features that conform to the same simplex equiangular tight frame (ETF) structure by introducing TC regularization to optimize text distribution. As a result, NCAL serves as a plug-and-play framework. To further demonstrate its versatility, we present experimental results in Table 2, showcasing NCAL’s performance when integrated with different LLM architectures, such as Qwen2.5 and Llama3.1.
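For concreteness, the simplex ETF geometry referenced above can be constructed in a few lines. The sketch below is our own illustration (the function name and QR-based construction are our choices, not the paper's code): it builds $K$ unit-norm class prototypes whose pairwise inner products all equal $-1/(K-1)$.

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """K equiangular class prototypes in R^dim (dim >= K - 1): the simplex
    ETF of neural collapse, with pairwise cosine similarity -1/(K-1)."""
    K = num_classes
    rng = np.random.default_rng(seed)
    # Orthonormal columns via reduced QR of a random Gaussian matrix.
    U, _ = np.linalg.qr(rng.standard_normal((dim, K)))
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return U @ M  # columns are unit-norm class prototypes

W = simplex_etf(4, 16)
G = W.T @ W  # Gram matrix: 1 on the diagonal, -1/3 off it
```

For $K = 4$ the Gram matrix has ones on the diagonal and $-1/3$ everywhere else, i.e., the maximally separated equiangular configuration that TC regularization is said to encourage.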
> **Q2: More detailed examples of its impact on real-world class imbalance problems would provide further credibility to the claims.**
**A2:** Thank you for your valuable suggestion. We visualized the class imbalance statistics of the personalized learning data collected from real-world scenarios used in this paper (see Q5). We also conducted experiments on other open-source datasets to evaluate NCAL’s performance under different degrees of class imbalance. These results further demonstrate its effectiveness in practical applications (see Q3).
> **Q3: However, incorporating additional personalized learning baselines or datasets.**
**A3:** As suggested by the reviewer, we added a comprehensive study comparing four LLM methods across three open-source datasets, including: [IITJEE NEET AIIMS Students Questions Data](https://www.kaggle.com/datasets/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data); [MathDial](https://github.com/eth-nlped/mathdial/tree/main); [DialogID](https://github.com/ai4ed/DialogID/tree/main). These results demonstrate the effectiveness of NCAL against more baselines across various long-tail personalized learning datasets. The results are shown below:
| Model | Parameters | Dataset | Acc |
| --------------------------- | ---------- | --------- | --------- |
| DeepSeek-R1-Distill-Qwen-7B | 7B | TMWPL | 60.86 |
| phi-4 | 14B | TMWPL | 72.00 |
| DeepSeek-V2 | 16B | TMWPL | 65.24 |
| Qwen2.5-Instruct | 7B | TMWPL | 61.14 |
| **NCAL-Qwen2.5-Instruct** | 7B | TMWPL | **74.86** |
| Model | Parameters | INASQD-Acc | MathDial-Acc | DialogID-Acc |
| --------------------- | ---------- | -------- | ----- | --------------------- |
| Qwen2.5-Instruct | 7B | 94.50 | 51.50 | 85.15 |
| **NCAL-Qwen2.5-Instruct** | 7B | **96.50** | **57.00** | **87.46** |
| Llama3.1-Instruct | 8B | 95.50 | 51.00 | 85.75 |
| **NCAL-Llama3.1-Instruct** | 8B | **98.00** | **53.50** | **87.34** |
| Qwen2.5-Math-Instruct | 7B | 95.00 | 52.00 | 84.53 |
| **NCAL-Qwen2.5-Math-Instruct** | 7B | **96.50** | **57.50** | **86.03** |
> **Q4: Would it be possible to include additional visualizations?**
**A4:** In Figure 1, we present the visualization to demonstrate the effectiveness of our method in improving the class feature space. The arrows indicate the direction of class centers, while the points represent sample features, with colors corresponding to the classes. In Figures 1 (a) and (c), it is visually evident that the baseline suffers from severe class representation imbalance. In (b) and (d), our method more uniformly learns the information for each class.
> **Q5: However, more information should be provided, such as the number of samples in each category, to give a more comprehensive view of the data distribution.**
**A5:** We thank the reviewer for pointing out this issue. We will include the statistical data in the final version. The data distribution is as follows, and it is clear that the number of samples in different categories exhibits a distinct long-tail distribution.
| Classes | Recall | Formulate | Identify | Represent | Implement | Inferences | Analyze |
| ------- | ------ | --------- | -------- | --------- | --------- | ---------- | ------- |
| TMWPL samples | 5045 | 1283 | 717 | 711 | 405 | 235 | 216 |
| Classes | R-FR | I-Q | R-SR | F-I | F-F | U | I-H | R-RR |
| ------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| PMTD samples | 1014 | 880 | 737 | 518 | 510 | 389 | 104 | 62 |
> **Q6: It is recommended to include experiments comparing these methods with the approach proposed in this paper.**
**A6:** We agree with the reviewer that data augmentation is indeed an effective approach for addressing long-tail distributions. However, data augmentation requires strict control over the augmentation strategy, as well as the quality and diversity of the generated samples. In contrast, NCAL has broader applicability. It is important to note that we cannot provide a baseline comparison with the paper "Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation," as the augmentation method used in that work is based on ID-type data (e.g., 0, 1, 2), which cannot be learned using LLM encoding.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. After reviewing your rebuttal along with the comments from other reviewers, I believe most of the key concerns have been satisfactorily addressed. I am therefore increasing my score to reflect an acceptance recommendation.
---
Title: Consensus Based Stochastic Optimal Control
Paper Decision: Accept (poster)
Summary: This paper proposes the Momentum Consensus-Based Optimization (M-CBO) and Adaptive Momentum Consensus-Based Optimization (Adam-CBO) methods to solve the stochastic optimal control problem. While the numerical results are nice, I do not think the theoretical analysis is significant. I will put details in “Theoretical Claims”.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I think the first item in Assumption 4.1.4 is too strong, excluding most of the optimal control problems of interest. The assumption $$J(\theta)-J(\tilde{\theta}) \ge \eta ||\theta-\tilde{\theta}||$$ implies that the landscape of $J(\theta)$ is a kink (very sharp) near $\tilde{\theta}$. Otherwise, if J is smooth, making a local Taylor expansion (note that $\nabla J(\tilde\theta) = 0$)
$$J(\theta) = J(\tilde\theta) + \nabla J(\tilde\theta)^\top (\theta - \tilde\theta) + o(\|\theta - \tilde\theta\|) = J(\tilde\theta) + o(\|\theta - \tilde\theta\|),$$
we arrive at a contradiction with the assumption.
While there might be examples satisfying this assumption, I think most commonly studied control problems, such as LQR, do not satisfy it. Therefore, the theoretical contribution is not that significant.
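The reviewer's argument can be checked numerically on a toy smooth objective (our own illustration, not from the paper): with $J(\theta) = \|\theta\|^2$ minimized at $\tilde\theta = 0$, the ratio $(J(\theta) - J(\tilde\theta))/\|\theta - \tilde\theta\|$ shrinks to zero as $\theta \to \tilde\theta$, so no $\eta > 0$ can serve as a lower bound.

```python
import numpy as np

def J(theta):
    # smooth objective with its minimum at the origin
    return float(theta @ theta)

theta_tilde = np.zeros(2)
u = np.array([1.0, 1.0]) / np.sqrt(2)  # unit direction of approach
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    theta = theta_tilde + t * u
    ratio = (J(theta) - J(theta_tilde)) / np.linalg.norm(theta - theta_tilde)
    print(t, ratio)  # ratio equals t here, shrinking to 0 with t
```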
Experimental Designs Or Analyses: I think the experiments are well designed.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper cites relevant literature.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The presentation of the paper is nice.
The weakness is already addressed before.
Other Comments Or Suggestions: None
Questions For Authors: Does the first equation in (1) lack a dt?
In sec 3.2, does the square refer to elementwise square (which result in a vector)? Please clarify.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable feedback.
1. **On “Assumption 4.1.4”**
Indeed, the assumption can be generalized as
$$\|\theta - \tilde\theta\| \leq \frac{(J(\theta) - J(\tilde\theta))^\mu}{\eta}$$
for some $\mu, \eta > 0$.
When $\mu=\frac{1}{2}$, the condition generally holds if $J$ is twice continuously differentiable near the minimum, because
$$
J(\theta) = J(\tilde \theta) + \nabla J(\tilde \theta)^T (\theta - \tilde \theta) + \frac{1}{2} (\theta - \tilde \theta)^T \nabla^2 J(\tilde \theta) (\theta - \tilde \theta) + o(\|\theta - \tilde \theta\|^2)
\\
= J(\tilde \theta) + \frac{1}{2} (\theta - \tilde \theta)^T \nabla^2 J(\tilde \theta) (\theta - \tilde \theta) + o(\|\theta - \tilde \theta\|^2)\\
\geq J(\tilde \theta) + C \|\theta - \tilde \theta\|^2,
$$
where $C$ is any constant slightly below half the smallest eigenvalue of the Hessian matrix $\nabla^2 J(\tilde \theta)$, which is positive at the minimum. The revised version of the proof can be seen [here](https://drive.google.com/file/d/1PZZXnyQ3sV7qyRchWjeFlDCUurYaliOB/view?usp=sharing)
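As a quick numerical sanity check of the relaxed condition with $\mu = 1/2$ (a toy example of our own, not from the paper): for the smooth scalar objective $J(\theta) = \theta^2 + \theta^4$ with minimizer $\tilde\theta = 0$, we have $J(\theta) - J(\tilde\theta) \ge \theta^2$, so the bound holds with $\eta = 1$.

```python
import numpy as np

def J(theta):
    # smooth scalar objective, minimum at 0, with J''(0) = 2 > 0
    return theta**2 + theta**4

theta_tilde, mu, eta = 0.0, 0.5, 1.0
for theta in np.linspace(-0.5, 0.5, 101):
    gap = J(theta) - J(theta_tilde)
    # quadratic-growth form of the assumption near the minimizer
    assert abs(theta - theta_tilde) <= gap**mu / eta + 1e-12
```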
**On the Clarification**
Thank you for noting this. We will add the missing dt in equation (1), and clarify that the square in Section 3.2 refers to elementwise squaring.
**The figure above is in Google Drive, if the reviewer cannot access it, we will appreciate if you could find it in Github https://anonymous.4open.science/r/Adam_CBO_Review-D1DB**
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the reply. The authors have addressed my major concern, so I increased my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We appreciate your thoughtful engagement with our work and your willingness to reconsider your evaluation following our rebuttal. Your suggestions on the proof strengthened the theoretical framework and have been invaluable in refining the manuscript. We are grateful for the time and care you dedicated to this process, and your insights have significantly elevated the quality of our research. | Summary: The paper introduces Consensus-Based SOC using the Adam-CBO framework ( gradient-free, model-free, and mesh-free approach) for solving high-dimensional SOC problems. It claims to overcome limitations of existing model-based and model-free methods by improving convergence and stability. The authors then present theoretical guarantees for convergence, derives a mean-field limit, and validates the approach on numerical benchmarks.
Claims And Evidence: The authors claim that Adam-CBO is superior to existing model-based and model-free methods by being gradient-free, model-free, and mesh-free, but empirical or numerical comparisons with established policy gradient or HJB-based methods are limited.
Methods And Evaluation Criteria: The methods used are aligned with stochastic optimal control problems, particularly in high-dimensional settings. The evaluation includes linear quadratic control, mean-field systemic risk models, and Ginzburg-Landau dynamics, which are reasonable test cases. However, benchmarks lacks direct comparisons to policy optimization, deep RL, or dynamic programming approaches, which are common in SOC research.
Theoretical Claims: The theory follows general SOC problem setting and are logical
Experimental Designs Or Analyses: The experiments are well-structured, covering low/ high-dimensional control settings. LQG & systemic risk problems are valid test cases, but real-world scalability remains unclear
Supplementary Material: While supplemental material provide useful background, the proof of Theorem 4.4 requires more explanation, especially regarding the assumptions on policy parameter evolution
Relation To Broader Scientific Literature: The work builds on prior work in stochastic control, reinforcement learning, and mean-field game theory.
Essential References Not Discussed: The authors could have included stochastic control works that use actor-critic methods or entropy-regularized RL methods, which also tackle similar problems.
Other Strengths And Weaknesses: Strengths
– Introduces a gradient-free, model-free, and mesh-free approach to high-dimensional SOC problems, which I found to be novel
- Provides rigorous convergence analysis, including mean-field limits.
- Covers diverse experiments ( LQG control, systemic risk, and Ginzburg-Landau models)
Weaknesses
- Lack of comparison to standard policy optimization methods (e.g., PPO, SAC, TRPO).
- Assumptions on convergence are restrictive
- Computational efficiency compared to existing SOC solvers is not analyzed.
Other Comments Or Suggestions: One minor issue is that the paper sometimes jumps quickly between the simpler M-CBO model and the Adam-CBO extension, making the distinction difficult to follow. Clarifying their differences and the roles of specific parameters earlier would help readers see which theoretical claims apply directly to which algorithm.
Questions For Authors: - Why comparison with existing SOC methods limited? The paper does not benchmark against standard stochastic control solvers, such as Dynamic Programming or HJB-based approaches.
- The authors proposes adam-CBO as an improvement over M-CBO. However, there is no clear theoretical or empirical comparison between the two. What specific scenarios or problem instances show a (significant) performance gap between these approaches? Are there cases where M-CBO performs comparably or even better?
- Is transition kernel here continuous or piecewise continuous? how does this method handle discontinuous state transitions?
- Why is no discount factor accounted, any risk in terms of instability because of the absense of discounting?
- The numerical experiments primarily focus on LQG problems and GL models. The paper claims that the method generalizes to large-scale SOC problems; were any large-scale, high-dimensional real-world control examples tested, or could you share your intuition on how this would scale?
- In Algorithm 1, what determines the stopping condition for iteration t_N? Is there a heuristic for choosing t_N, or does it require manual tuning?
- Could you clarify whether the stability constraints in pg 6,7 are explicitly enforced in the problem formulation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and thoughtful suggestions for improving our work.
1. **On "Why comparison with existing SOC methods limited?"**
We would like to clarify that the goal of our work is to demonstrate that **our approach is applicable in a more general setting**—specifically, **finite-horizon, model-free stochastic control problems**, we refer to the response to reviewer NtTj for more details. To the best of our knowledge, existing methods—such as those based on DP or HJB—cannot directly address this setting.
That said, we do compare with an HJB-based method and show that our approach achieves **better accuracy using significantly less information** (i.e., no model access) in the original version.
Following the reviewer’s suggestion, we added comparisons with **DDPG**, **PPO**, **SAC**, **TD3**, **TQC**, and **CrossQ** (using the stable-baselines3 implementation, https://github.com/araffin/sbx) on **Pendulum-v1**, as well as PPO and DQN on **CartPole-v1**. The numerical results can be found [here](https://drive.google.com/file/d/1ghqLfAgbxFtUICMz3AGSDFkr9JPyDn7d/view?usp=sharing) and [here](https://drive.google.com/file/d/10H1Pf1hOq-aYsQo_7EtRn9ziPUrGKixo/view?usp=sharing).
Below is the computational time for each method over 100,000 steps:
|Method|Time (s) for Pendulum-v1|Time (s) for CartPole-v1|
|-|-|-|
|DDPG|288.83||
|PPO|145.19|150.58|
|SAC|355.01||
|TD3|291.26||
|TQC|576.35||
|CrossQ|708.73||
|DQN||186.13|
|**Adam-CBO**|**1124.88**|**3444**|
While **Adam-CBO** has higher runtime, it **converges to the optimal policy much faster** in terms of learning efficiency.
However, we would like to stress that these results are **not directly comparable** in a strict sense. Most of the baseline methods optimize multiple components—for example, PPO jointly optimizes a policy and a value function, and SAC optimizes two Q-functions and a policy. In contrast, **our method optimizes only the policy**.
If we were to directly replace the gradient-based optimizer within an existing method like PPO with Adam-CBO, we **do not expect** it to outperform the full method in that specific setup. The **main advantage** of Adam-CBO lies in its **applicability to broader, more general settings**, particularly when gradients are unavailable or unreliable.
2. **On "Adam-CBO vs. M-CBO – performance gap?"**
We compare the two in the first experiment and find Adam-CBO consistently outperforms M-CBO. Later experiments focus on Adam-CBO for this reason.
However, M-CBO is more analytically tractable, so we base theoretical results on it. Analyzing Adam-CBO theoretically is challenging but remains an exciting direction for future work.
3. **On "Is transition kernel here continuous or piecewise continuous? how does this method handle discontinuous state transitions?"**
Our method assumes continuous state transitions. However, it supports discrete actions, as shown in CartPole-v1. There, we use a neural network to produce a real-valued score and select an action by comparing this score against a uniform random threshold, allowing the method to work with discrete actions.
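A hypothetical sketch of such a score-thresholding scheme (the sigmoid squashing and the function name are our own assumptions; the paper's exact mapping may differ):

```python
import numpy as np

def discrete_action(score, rng):
    """Map a real-valued network score to a binary action (e.g. CartPole's
    two actions) by comparing a squashed score to a uniform threshold."""
    p = 1.0 / (1.0 + np.exp(-score))  # sigmoid: score -> probability in (0, 1)
    return int(rng.uniform() < p)     # 1 with probability p, else 0

rng = np.random.default_rng(0)
print(discrete_action(10.0, rng))   # large positive score: almost surely 1
print(discrete_action(-10.0, rng))  # large negative score: almost surely 0
```

Because the threshold is random, the overall policy remains stochastic and differentiable in the score, which keeps the consensus dynamics applicable even with a discrete action set.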
4. **On "Why is no discount factor accounted, any risk in terms of instability because of the absence of discounting"**
Discounted infinite-horizon problem is a **special case** of our formulation. While our method can easily incorporate a discount factor, our focus is on **finite-horizon problems**, where **discounting is unnecessary**. In these settings, the finite time naturally bounds the reward accumulation, and the absence of a discount factor does not lead to instability.
5. **On "Real-world large-scale SOC examples/scalability"**
We added tests on 2, 4, and 50-agent control scenarios. Results are available [here](https://drive.google.com/file/d/1D_aORIODftmI5KXwIEf9lLh4vRPalz3H/view?usp=drive_link). We refer to our response to Reviewer 1fhy for more details on the numerical results.
We have not yet run any real-world experiments, but we are interested in exploring:
+ Controlling particle distributions via external fields without modeling distribution evolution
+ Identifying transition paths in chemistry without full potential surfaces
6. **On "Stopping condition for Algorithm 1"**
We stop when the standard deviation of policy parameters across agents falls below a threshold, indicating convergence. We will clarify this in the main text.
7. **On "Stability constraints on pages 6–7"**
Thank you for this question. We did not identify specific "stability constraints" on pages 6–7. If the reviewer could point to a particular equation or passage, we’d be happy to clarify or revise the relevant text to improve clarity.
**The figure above is in Google Drive, if the reviewer cannot access it, we will appreciate if you could find it in Github https://anonymous.4open.science/r/Adam_CBO_Review-D1DB** | Summary: The paper presents a scalable, gradient-free alternative for solving stochastic optimal control problems. By leveraging consensus-based updates and adaptive momentum, the proposed methods achieve efficient policy optimization without requiring explicit transition models or gradient computations. Theoretical guarantees and numerical results highlight their potential in high-dimensional control applications.
Claims And Evidence: I believe the claims made in the submission are mostly well-supported by clear and convincing evidence, including theoretical guarantees and nice numerical examples. The authors are able to scale their methods up to a few hundred dimensions in their example of systemic risk mean field control, which is nice. A suggestion would be to try scaling things up in the example of the Ginzburg-Landau model, which does not have a known analytical solution.
One claim that I cannot find the corresponding evidence of is about the efficiency of the proposed method. The authors claim that the method can efficiently scale to high dimensions because of the adjustable Gaussian noise that can help exploration. I am wondering if the authors can provide some evidence of that. A toy example would be enough.
Methods And Evaluation Criteria: The proposed method makes sense for solving the class of stochastic optimal control problems with many interacting particles/agents.
Theoretical Claims: I have not fully checked the proof provided in the appendix. I only checked the correctness of Theorem 4.1, 4.2, 4.3.
Experimental Designs Or Analyses: The experiments done in this paper are nice but still have some room for improvement. It will be more convincing if the authors can present more numerical results on high-dimensional examples that do not have explicit analytical solutions. For example, the authors may want to consider the experiments done in [1] with many agents.
Also, one thing I am concerned about is the lack of comparison to existing methods. The authors list many related methods in the section of introduction but have only compared their method to BSDE.
[1] Abdul, A. T., Saravanos, A. D., & Theodorou, E. A. (2024). Scaling Robust Optimization for Multi-Agent Robotic Systems: A Distributed Perspective. arXiv preprint arXiv:2402.16227.
Supplementary Material: I have checked the proof of Theorem 4.1, 4.2, 4.3 in the supplementary material and went over the details of numerical experiments provided in the appendices.
Relation To Broader Scientific Literature: The paper can be of interest in the community of particle physics, microbiology, and robotics.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Can the authors address the concerns I have about the experiments?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and thoughtful suggestions for improving our work.
1. **On the lack of comparison with existing methods and experiments on high-dimensional problems without analytical solutions**:
We would like to emphasize that our primary goal is to address a more general class of stochastic control problems than those typically handled by existing methods. As discussed in our response to Reviewer NtTj, most conventional approaches rely on access to either model gradients or infinitesimal time horizons, whereas our method operates in a model-free, finite-horizon setting—a scenario that is less explored in the literature.
That said, we appreciate the reviewer’s suggestion and have applied our method to control problems involving 2, 4, and 50 agents, similar in spirit to [1]. The corresponding numerical results can be found [here](https://drive.google.com/file/d/1D_aORIODftmI5KXwIEf9lLh4vRPalz3H/view?usp=sharing). We note, however, that our setup differs from [1] in important ways: our approach is based on closed-loop (feedback) control, while [1] employs open-loop control. Furthermore, [1] treats obstacles as hard constraints, whereas we model them as soft constraints through penalization in the cost function. Due to these differences, a direct comparison is not meaningful, but we believe our results still highlight the scalability and flexibility of our approach.
2. **On the adjustable Gaussian noise that can help exploration:**
We appreciate the reviewer’s suggestion and have added two experiments to explain the role of adjustable Gaussian noise.
First, we compare the results of Adam-CBO with and without Gaussian noise when optimizing a two-dimensional Rastrigin function. The numerical results can be found [here](https://drive.google.com/file/d/13auGZ3A0rmf5j8zdeXsMin43hGXGzv0V/view?usp=sharing). The starting points are drawn from normal distributions centered at [3,3] and [-3,-3] for the two methods, respectively. Adam-CBO with Gaussian noise escapes the local minimum and finds the global minimum, while Adam-CBO without Gaussian noise remains stuck in a local minimum.
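For intuition, a minimal consensus-based optimization loop with componentwise Gaussian noise on the Rastrigin function might look as follows. This is our own illustrative sketch, not the paper's Adam-CBO implementation: the uniform initialization, parameter values, and geometric noise-decay schedule are our choices.

```python
import numpy as np

def rastrigin(x):
    # x: (N, d) array of particle positions; global minimum 0 at the origin
    return 10 * x.shape[1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=1)

def cbo_minimize(f, n=256, d=2, alpha=100.0, lam=1.0, sigma=1.0,
                 dt=0.1, steps=400, decay=0.995, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n, d))
    for _ in range(steps):
        fx = f(x)
        w = np.exp(-alpha * (fx - fx.min()))       # stabilized Gibbs weights
        xbar = (w[:, None] * x).sum(0) / w.sum()   # weighted consensus point
        diff = x - xbar
        # drift toward consensus + componentwise (anisotropic) Gaussian noise
        x = x - lam * dt * diff \
            + sigma * np.sqrt(dt) * np.abs(diff) * rng.standard_normal(x.shape)
        sigma *= decay                             # adjustable noise schedule
    return xbar

xbar = cbo_minimize(rastrigin)
```

With the decaying schedule the particles explore early and contract to a consensus later, mirroring the exploration/exploitation trade-off described in this rebuttal.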
Second, we show the evolution of one neural network parameter when training the LQG problem under fixed and adjustable Gaussian noise. The numerical results can be found [here](https://drive.google.com/file/d/1szCYt1KLLk16ao5TbLGeB8C9wVNo06lr/view?usp=sharing). With a large fixed Gaussian noise, the parameter explores more (it moves farther from its starting point), but it retains high variance for a long time and cannot converge. With a small fixed Gaussian noise, the parameter converges very fast but does not explore much of the space (it stays close to its starting point). With adjustable Gaussian noise, the parameter explores more at the beginning and converges fast at the end. Adjustable noise thus gives us the flexibility to balance exploration and exploitation: if we know the parameter is close to the optimal solution, as in some fine-tuning problems, we can use a small noise to converge fast; if we are unsure, we can use a large noise to explore more and iteratively reduce it to converge.
Finally, our claim that adjustable noise scales effectively to high-dimensional problems is motivated by prior analytical results: by adding the Gaussian noise, [2] shows that the exponential-in-time convergence of the CBO method is guaranteed under parameter constraints **independent** of the dimensionality. Although we have not carried out the same analysis for our method, we believe the conclusions are similar.
[2] Jose A. Carrillo, et al. "A consensus-based global optimization method for high dimensional machine learning problems." ESAIM: Control, Optimisation and Calculus of Variations.
3. **On the concern about "the lack of comparison to existing methods"**
We would like to reiterate that our work is not intended as a direct comparison with existing methods, but rather to demonstrate the effectiveness of our approach in a more general setting. Nevertheless, we added comparisons of our method with PPO, SAC, TD3, TQC, CrossQ, and DQN. The numerical results can be found [here](https://drive.google.com/file/d/1ghqLfAgbxFtUICMz3AGSDFkr9JPyDn7d/view?usp=sharing) and [here](https://drive.google.com/file/d/10H1Pf1hOq-aYsQo_7EtRn9ziPUrGKixo/view?usp=sharing). We want to clarify that the results are not directly comparable, since the methods are designed for different settings; we refer to our response to reviewer yqHf for more details.
**The figure above is in Google Drive, if the reviewer cannot access it, we will appreciate if you could find it in Github https://anonymous.4open.science/r/Adam_CBO_Review-D1DB** | Summary: This paper considers a high-dimensional stochastic control problem and proposes two consensus based optimization algorithms to solve this problem. The proposed algorithms rely on Monte Carlo estimation to estimate the value function, which is used for choosing the optimal policy. Extensive simulation results are provided on different control tasks.
Claims And Evidence: The paper provides proofs for the theorems and lemmas. However, the problem formulation is a bit unclear. There is no explicit explanation of the multi-agent setting and the motivation of the consensus problem.
Methods And Evaluation Criteria: The simulation benchmarks are other value function estimation methods, but it would be helpful to also compare with gradient-based methods, since the major motivation of this paper is to outperform gradient-based approaches.
Theoretical Claims: The theoretical derivations seem correct after a quick read.
Experimental Designs Or Analyses: Yes, and see the box of Methods And Evaluation Criteria.
Supplementary Material: Yes, I went through the global convergence of mean field law and the simulation settings.
Relation To Broader Scientific Literature: This paper is related to mean field learning-based control and reinforcement learning.
Essential References Not Discussed: Not that I could think of.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-motivated. The scalability and high variances in gradient estimation are two major challenges in high-dim stochastic control.
2. The paper provides various simulation results to demonstrate the applicability of the proposed methods.
3. The convergence analysis in the continuous time setting using differential equations are valid and interesting.
Weaknesses
1. The paper is poorly written. There is no mention of multiple agents or consensus objective functions in the problem formulation of Section 2.
2. The algorithms need more explanation and intuition. Right now, they are followed by technical theorems, making it difficult to understand the intuitions behind the algorithm design.
3. There is no comparison with other gradient-based methods.
Other Comments Or Suggestions: See above
Questions For Authors: 1. Why does this paper try to solve a single agent stochastic control problem by multi-agent consensus? This connection is not made clear.
2. Why does the value estimation enjoy smaller variance than the gradient estimation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review and valuable feedback.
1. Regarding **the lack of comparison with gradient-based methods**, we would like to clarify that traditional gradient-based approaches typically fall into two specific categories:
+ Model-based methods: These methods assume access to an analytical model of the system, enabling the derivation and solution of the Hamilton–Jacobi–Bellman (HJB) equation. This is the classical setting of optimal control, where gradients can be computed explicitly.
+ Model-free methods with infinitesimal time horizons: In these cases, even without access to a model, the Bellman principle can be leveraged under an infinitesimal (or discounted infinite) time horizon, allowing for recursive value updates.
However, our work addresses a more general and challenging setting: finite time horizon and model-free stochastic control. In this regime, the assumptions required by typical gradient-based methods (either model knowledge or discount-based recursion) are not available. This makes a direct application of, or comparison with, such methods non-trivial or inapplicable.
That said, we do compare our method with an HJB-based approach and demonstrate that, despite using significantly less information, our method achieves better accuracy. Additionally, following the suggestion of Reviewer yqHf, we have included empirical comparisons with popular model-free policy gradient methods such as PPO, SAC, TRPO, and DDPG. While our method performs competitively, we would like to emphasize that it is designed for more general settings—specifically, finite-horizon, model-free stochastic control problems—which are not directly addressed by these existing methods.
2. **On the motivation for using multi-agent consensus in a single-agent control problem**:
Since gradients are not accessible in our setting, we rely on multiple agents to explore the state space collaboratively. By exchanging information and moving toward consensus, the agents are able to approximate the optimal policy in a robust manner. We have expanded the introduction to clarify this motivation more explicitly.
3. **On why value estimation exhibits lower variance than gradient estimation**:
Monte Carlo value estimation computes the expected return by averaging cumulative rewards over entire trajectories, which smooths out the variability from individual time steps. In contrast, gradient estimation requires sensitivity analysis at each step, and in finite-horizon problems, this leads to high variance due to sparse or noisy signal propagation through time. We will add this explanation to the main text for clarity.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and for revising your evaluation of our paper in response to our rebuttal. Your insights were instrumental in refining our work. We greatly appreciate your time and expertise in strengthening this manuscript. | null | null | null | null | null | null |
Communicating Activations Between Language Model Agents | Accept (poster) | Summary: This paper studies the multi-agent communication problem in the LLM scenario. Specifically, it proposes using hidden representations instead of natural language. Experiments on several multi-agent collaboration datasets demonstrate the effectiveness of the proposed method.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A no theoretical claims
Experimental Designs Or Analyses: Yes. The experiments on the multi-agent database scenario make sense and can be used to demonstrate the effectiveness of the proposed method.
Supplementary Material: Yes, I checked the case studies in the appendix.
Relation To Broader Scientific Literature: The multi-agent communication problem has been a long-term topic in the AI community. Recently, due to the development of LLMs, people have started using natural language as a communication medium. As discussed in the paper, previous work has already argued that natural language might not be the most efficient medium, and this paper follows that trend by using hidden representations, but with a more careful design (e.g., training a transformation matrix).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
The paper is well-motivated and clearly written.
Limitation:
1. The theoretical depth of this paper can be further enhanced. For example, the result shows that the direct replacement strategy is the most efficient way. Basically, it means that we should discard all previous representations and directly use the projected ones. Do we have any assumptions behind this phenomenon? (e.g., what is the relationship between the two models such that this conclusion holds?)
2. Many design choices need further clarification. For details, please see the question for authors section.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. What is the connection and difference of your model against the encoder-decoder model?
2. How to choose the hyperparameters of j,k
3. What is the effect of the scale and distribution of the aligning data for training the transformation matrix?
4. Are you trying to train a unique matrix for each j,k pair? Since you are using the MSE loss, I wonder if such a training objective might work well given that the capacity of a matrix is not very big.
5. Have you tried directly aligning the representations from two models and adding one more layer on top of that?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback, and address their comments below:
1. > The theoretical depth of this paper can be further enhanced. For example, the result shows that the direct replacement strategy is the most efficient way. Basically, it means that we should discard all previous representations and directly use the projected ones. Do we have any assumptions behind this phenomenon? (e.g., what is the relationship between the two models such that this conclusion holds?)
First, note that the direct replacement strategy only replaces $B$'s layer-$j$ activation of *the final token* with $A$'s last-token layer-$k$ activation. The embeddings at all other token positions remain the same. Hence, we are not actually discarding all previous representations: after applying masked attention in each of the previous Transformer layers, the last-token activation of $A$ attends to all tokens before it, hence incorporating information from the entire sequence; and the previous token activations of $B$ are retained and thus incorporate all of $B$'s "thoughts" regarding the full sequence.
Please refer to Appendix B.1 for additional discussion and comparison between the replacement, sum, and mean strategies for activation grafting.
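As a purely illustrative sketch (not the authors' code), the three grafting strategies discussed above, and the fact that only the final token position is modified, could look like the following, where `H_B` is a hypothetical matrix of $B$'s layer-$j$ activations:

```python
import numpy as np

# Candidate combination functions f for activation grafting.
def f_replace(a, b):   # direct replacement: discard b's last-token activation
    return a

def f_sum(a, b):       # sum strategy
    return a + b

def f_mean(a, b):      # mean strategy
    return (a + b) / 2.0

def graft_last_token(H_B, a, f):
    """H_B: B's layer-j activations, shape (seq_len, d);
    a: A's last-token layer-k activation, shape (d,).
    Only the final token position is modified; all earlier
    positions keep B's own embeddings."""
    H = H_B.copy()
    H[-1] = f(a, H[-1])
    return H
```

Note that the earlier rows of `H_B` are untouched, matching the point above that $B$'s previous-token activations are retained.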
2. > What is the connection and difference of your model against the encoder-decoder model?
This is an interesting point; the direct replacement activation grafting approach could be seen as running an encoder-decoder Transformer, where the first $k$ layers of $A$ are the encoder, $B$ is the decoder, and we have a "special cross-attention" where, from layer $j$ onwards, all tokens in $B$ attend to only the last-token embedding outputted by the first $k$ layers of $A$. However, $A$ and $B$ are language models used out of the box, with no joint training and no requirement of a shared tokenizer, aligned embedding space, similar training distributions, etc.
3. > How to choose the hyperparameters of j,k
**Please see point #3 in the reply to Reviewer VKxi above.**
4. > What is the effect of the scale and distribution of the aligning data for training the transformation matrix?
We propose training $\mathbf{W}$ to minimize MSE loss over a dataset of $N$ sentences
$\frac{1}{N}\sum_{i=1}^N \left\|\mathbf{z}^{(i)} - \mathbf{W}\mathbf{y}^{(i)}\right\|_2^2$, where each $(\mathbf{y}^{(i)},\mathbf{z}^{(i)})$ pair denotes the final-token activations of $A$ and $B$ at layers $k$ and $j$, respectively, given the same sentence as input.
Hence, the "aligning data distributions" are quite literally $A$ and $B$'s activation distributions; we do not scale or otherwise modify the activation vectors in any way before training.
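Since minimizing this loss over $\mathbf{W}$ is ordinary linear least squares, a minimal sketch of the training step (with small hypothetical dimensions and random stand-in activations; the paper's $\mathbf{W}$ is $4096\times3072$) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_A, d_B = 512, 64, 96      # hypothetical sizes; stand-ins for real activations

Y = rng.normal(size=(N, d_A))  # A's final-token layer-k activations, one row per sentence
Z = rng.normal(size=(N, d_B))  # B's layer-j activations on the same sentences

# Minimizing (1/N) * sum_i ||z_i - W y_i||^2 over W is linear least squares:
# lstsq solves Y @ X ~= Z for X, and W = X^T maps A's space to B's.
X, *_ = np.linalg.lstsq(Y, Z, rcond=None)
W = X.T                        # shape (d_B, d_A)

projected = W @ Y[0]           # projects one of A's activations into B's space
```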
5. > Are you trying to train a unique matrix for each j,k pair? Since you are using the MSE loss, I wonder if such a training objective might work well given that the capacity of a matrix is not very big.
Yes, we are; see point #4 above. However, note that for each model pair, we only need a single $(j,k)$ pair; indeed, we attain SOTA results with $k=j=26$ fixed.
Also, this training objective is quite standard in related literature. For instance, the "model stitching" paper [1] learns a 1x1 convolution between activations of two models at a specific layer, which is exactly equivalent to our method of training $\mathbf{W}$; [2] also learns a linear projection between Transformer layers which is shown to be quite expressive.
Indeed, we find our approach yields quite strong results; please see Appendix B.3 for additional discussion.
6. > Have you tried directly aligning the representations from two models and adding one more layer on top of that?
We kindly ask for clarification on this question. If by "directly aligning the representations from two models and adding one more layer on top of that" you mean learning a linear layer that projects activations from $A$'s activation space to $B$'s, that is exactly what we do in the AC ($\mathbf{W}$) method.
**We hope that with the provided responses & additional strong results, you consider raising your score to support clear acceptance. Please let us know if there are any additional questions or concerns; we'd be happy to address them.**
[1] Yamini Bansal, Preetum Nakkiran, Boaz Barak. Revisiting Model Stitching to Compare Neural Representations, 2021.
[2] Alexander Yom Din et al. Jump to Conclusions: Short-Cutting Transformers With Linear Transformations, 2024. | Summary: The paper proposes a novel method for inter-language model (LM) communication by directly exchanging model activations instead of using natural language.
Tested on two synthetic datasets (a coordination game and an investment decision task) and several reasoning benchmarks (GSM8k, MMLU subsets, Biographies), the method achieves up to 27% improvement over natural language communication while using less than one-fourth the compute. The paper compares activation communication with natural language debate, a common multi-agent method, showing superior efficiency and generalization across model sizes.
## update after rebuttal
I have considered the additional set of experiments and the preliminary analysis and I don't see any reasons why this paper shouldn't be accepted. I am increasing my score from 3 to 4
Claims And Evidence: The claims in the paper are partially supported by the experiments, which demonstrate that activation communication improves performance while reducing compute costs compared to natural language communication. The results across two synthetic datasets and multiple reasoning benchmarks provide some evidence for the claimed efficiency and effectiveness of the approach. The ablation studies on different activation combination functions further strengthen the argument that direct activation exchange can enhance inter-model communication. The application to LLaMA-only models, however, makes for only partial support of the general claim that the method can improve LLM performance.
Methods And Evaluation Criteria: The proposed methods make sense for studying inter-model communication. The use of synthetic datasets is a notable strength. The countries and coordination game allow the authors to isolate the impact of communication without confounding factors. This controlled setup provides clearer insights into how activation-based communication influences performance.
The reasoning benchmarks used are fairly standard in the field, making the evaluation relevant and comparable to prior work.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is generally sound, with well-structured comparisons between activation communication and natural language-based methods. The use of both synthetic tasks and reasoning benchmarks helps validate the approach across different settings. However, a notable weakness is that all tested models come from the LLaMA family, meaning the study does not evaluate communication between models with different architectures, tokenization schemes, or training distributions. This limits conclusions about the method's general applicability. The comparison with single agent is also a (mild) concern that I have (see more below in the additional questions section).
Additionally, the paper lacks an analysis of why the method work, e.g. inspecting the geometric properties of the activations being communicated. As an example, understanding whether activation similarity between agents correlates with downstream task performance could provide insights into when and why activation-based communication is effective. Metrics like cosine similarity or rank correlation between activations before and after communication could help determine if performance gains are linked to latent space alignment between models. This type of analysis would strengthen the theoretical grounding of the method.
The paper would be a clear accept with tests on a broader range of model families to confirm that activation communication is not specific to Llama's architecture or training paradigms, and additional insights on the effectiveness of the proposed approach.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper builds on prior work in multi-agent LLM communication by replacing costly natural language exchanges with direct activation transfer, reducing compute while improving performance. It extends activation engineering research by using intermediate activations as a communication channel rather than for single-model control. The method is related to model grafting but removes the need for learned routing or fine-tuning, making it more generalizable. Though not fundamental to position the paper, broader comparisons with cross-model embedding alignment techniques would strengthen the work’s positioning, e.g. Relative representations enable zero-shot latent space communication (Moschella et al.)
Essential References Not Discussed: Most essential references are properly discussed.
A link that I didn't see in the paper is the one to papers like Eliciting Latent Predictions from Transformers with the Tuned Lens (Belrose et al.), which, perhaps with a different end goal, use a similar method. A discussion of this should be added to the related work section.
Other Strengths And Weaknesses: The paper is clearly written and easy to follow, with well-structured explanations of the motivation, methodology, and experimental results. The key ideas are presented intuitively, making it accessible to both researchers familiar with multi-agent communication and those new to the topic. The figures and tables effectively support the narrative, particularly in illustrating the activation communication process and performance improvements.
Other Comments Or Suggestions: In the appendix I found several interesting insights that should be either moved or at least discuss and referenced in the main body of the paper.
For instance,
- the analysis on the interplay between CoT and AC: I would add it or at least discuss it in the main body of the paper. For me, it could be swapped with the compute cost analysis
- the fact that AC over multiple instances of the same model, like NLD, doesn't always outperform the single model setup
- the fact that AC is superior to NLD even with multiple rounds.
For instance, at the end of page 7, when mentioning the in-distribution training of matrix W, there is no reference to the experiment in Section B.3.
This additional information in the main body of the paper would make it even clearer and more informative
Questions For Authors: 1. NLD debate is found to improve with increased number of agents (Figure 10 in the paper), do you think your method can compare with that (stronger) setup? I am thinking of instances of your method like f = average or a regularized W matrix learned to transfer across multiple models
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback, and address their comments below:
1. > All tested models come from the LLaMA family...
**Please see point #1 in the reply to Reviewer VKxi above.** In summary, we test AC using models **across the LLaMA, Qwen, and Gemma families**, and find that **AC beats NLD across the board, and beats both individual models for 4/5 of the 6 model pairs** on Biographies/GSM8k respectively, demonstrating the efficacy of AC irrespective of model architecture, size, tokenizer, and training data.
2. > The paper lacks an analysis of why the method work, e.g. inspecting the geometric properties...
First, please see Section 3.3 for a theoretical grounding of this work.
Second, we conduct the following experiment: for each of the 6 pairs of models $A,B$ in the above experiment (see the table from response to Reviewer VKxi), we compute the increase in Biographies performance with AC relative to the average individual performance of $A$ and $B$. We also compute the matrix analog of squared cosine similarity between the models' activation spaces, $||Y^T X||_F^2/(||X||_F^2||Y||_F^2)$ where $X$ is the matrix of $A$'s activations on 3072 sentences from C4 (same dataset used to train $\mathbf{W}$), $Y$ is the same for $B$, and $||\cdot||_F$ is the Frobenius norm. This gives us the following plot (please click link if image not displayed):
https://i.ibb.co/3y6XF7Z7/model-comparison-plot.png
There is a clear positive correlation between similarity of the activation distributions and AC performance gain, as expected; the more aligned $A/B$'s activation spaces, the more semantically meaningful/useful the embedding we graft from $A$ to $B$.
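For concreteness, the similarity metric described above can be computed as follows (with hypothetical random matrices standing in for the real C4 activations):

```python
import numpy as np

def activation_space_similarity(X, Y):
    """Matrix analog of squared cosine similarity:
    ||Y^T X||_F^2 / (||X||_F^2 * ||Y||_F^2).
    Rows of X, Y are per-sentence activations of models A and B."""
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X, ord="fro") ** 2 * np.linalg.norm(Y, ord="fro") ** 2
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))  # stand-in for A's activations on 100 sentences
Y = rng.normal(size=(100, 16))  # stand-in for B's activations

s = activation_space_similarity(X, Y)
assert 0.0 <= s <= 1.0                                       # bounded like cos^2
assert np.isclose(activation_space_similarity(X, 2 * Y), s)  # scale-invariant
```

By submultiplicativity of the Frobenius norm, the value always lies in $[0,1]$, and it is invariant to rescaling either activation matrix.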
3. > Though not fundamental to position the paper, broader comparisons...
Thanks for sharing this. We've included a discussion of this and similar papers (e.g., Kornblith19, Similarity of Neural Network Representations Revisited) in the paper.
4. > A link that I didn't see in the paper...
We thank the reviewer for bringing this to our attention. The "tuned lens" and "logit lens" [1] are related to the theoretical intuition behind this approach that we share in Section 3.3; we have added a discussion of these papers to both Section 3.3 and related works.
5. > In the appendix I found...
Thanks for pointing this out, we have ensured all experiments in the appendix are either moved to or discussed in the main body of the paper.
6. > NLD debate is found to improve with increased number of agents...
First, we want to distinguish between **"agents"** (distinct model *instances*) and **"models"** (distinct *LMs*).
With debate, more agents can help because we are sampling different outputs from each agent (model instance), and this can yield diverse reasoning paths that are recombined to produce stronger final outputs.
This isn't true with AC, as one activation grafting step from $A$ to $B$ inherently communicates to $B$ all of $A$'s knowledge/beliefs about the prompt it was given. **We argue this is actually a benefit of AC over NLD, as we don't require increasing token budgets to extract more and more information out of the LMs.**
A similar argument can be made for the number of *rounds* in NLD. Indeed, as shown in Appendix B.5, for 5 of the 7 reasoning benchmarks, **AC beats NLD even with 3-4 rounds while using substantially less compute**, highlighting the superiority and robustness of activations as an alternative “language” for inter-LM communication.
AC could theoretically scale to more than 2 *models*. While letting $f$ be the average function would work out of the box, we saw that direct replacement is much more effective. So, for instance, consider a setup (using the notation from Section 3.1 of our paper) where for any $i, j, k, m$ with $j < k$, we:
1. run a partial forward pass $B_{\leq j}(x_B)$ to get last-token activation $\mathbf{b}_j$;
2. run a partial forward pass $A_{\leq i}(x_A)$ to get $\mathbf{a}_i$;
3. replace $\mathbf{b}_j \leftarrow f(\mathbf{a}_i, \mathbf{b}_j)$;
4. continue $B$'s forward pass till layer $k$ to get last-token activation $\mathbf{b}_k$;
5. run a partial forward pass $C_{\leq m}(x_C)$ to get $\mathbf{c}_m$;
6. replace $\mathbf{c}_m \leftarrow f(\mathbf{b}_k, \mathbf{c}_m)$;
7. then continue $C$'s forward pass till decoding is complete.
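As a control-flow sketch only, using tiny toy "models" (lists of random layer maps, all sharing one hidden size so no mapping matrix is needed; `make_toy_model`, `partial_forward`, and `graft_chain` are hypothetical helpers, not the paper's implementation), the seven steps could be chained as:

```python
import numpy as np

def make_toy_model(num_layers, d, seed):
    """Toy stand-in for an LM: a list of layer maps acting on a d-dim
    last-token activation (a real model runs full Transformer passes)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.1, size=(d, d)) for _ in range(num_layers)]

def partial_forward(model, x, start, stop):
    """Run layers start..stop-1 of the toy model on activation x."""
    for M in model[start:stop]:
        x = np.tanh(M @ x)
    return x

def graft_chain(A, B, C, x_A, x_B, x_C, i, j, k, m, f=lambda a, b: a):
    """Steps 1-7 above; f defaults to direct replacement."""
    b_j = partial_forward(B, x_B, 0, j)         # 1. B up to layer j
    a_i = partial_forward(A, x_A, 0, i)         # 2. A up to layer i
    b_j = f(a_i, b_j)                           # 3. graft A -> B
    b_k = partial_forward(B, b_j, j, k)         # 4. continue B to layer k
    c_m = partial_forward(C, x_C, 0, m)         # 5. C up to layer m
    c_m = f(b_k, c_m)                           # 6. graft B -> C
    return partial_forward(C, c_m, m, len(C))   # 7. finish C's forward pass

d = 8
A, B, C = (make_toy_model(6, d, s) for s in (0, 1, 2))
out = graft_chain(A, B, C, np.ones(d), np.ones(d), np.ones(d), i=3, j=2, k=4, m=3)
```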
This 3-model setup can extend to an arbitrary number of models. We leave this extension of our approach to future work.
7. > The paper would be a clear accept with tests on a broader range of model families to confirm that activation communication is not specific to Llama's architecture or training paradigms, and additional insights on the effectiveness of the proposed approach.
**We hope that with the provided responses & additional strong results, you consider raising your score to support clear acceptance. Please let us know if there are any additional questions or concerns; we'd be happy to address them.**
[1] nostalgebraist. Interpreting gpt: the logit lens, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for they work on the rebuttal.
The table you shared in the reply to reviewer VKxi is exactly what I had in mind. It's interesting that GSM8k benefits the most from AC, whereas improvements on Biographies, if any, are marginal. I am comparing the model $B$ vs. AC columns.
> First, please see Section 3.3 for a theoretical grounding of this work.
I had read Section 3.3 and do not see any strong theoretical grounding there. I'm not saying this as a weakness, especially given that I find in that section several intuitions on why your method works. What I don't find is formal or empirical evidence that traces back to why AC performs better. See for instance [1], which came out roughly around the time of my review. I also want to add that a similar intuition has been known at least since 2020 from work on self-supervised learning of visual representations.
The plot you share on similarity vs performance does not give us much more intuition, but it's a nice preliminary analysis, especially if we consider the short amount of time you had to run it. I suggest its inclusion in the paper together with the table above.
> This isn't true with AC, as one activation grafting step from to inherently communicates to all of 's knowledge/beliefs about the prompt it was given.
I disagree with this: AC communicates a rich representation of one reasoning trajectory of model A, or at least the trajectory up until that point and potentially many different ones from that point onwards. At the same time, I agree with you on the increased efficiency of AC over NLD.
Finally, I have considered the additional set of experiments and the preliminary analysis and I don't see any reasons why this paper shouldn't be accepted. I am increasing my score to 4.
[1] Skean et al, Layer by Layer: Uncovering Hidden Representations in Language Models, 2025
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response, and appreciate the additional valuable feedback! We will definitely include more extensive discussion of these points in the final paper if accepted. | Summary: The paper proposes an alternative method for communication between language models (LMs) that does not rely on natural language. Instead, the authors introduce a technique for LMs to communicate with activations. More specifically, intermediate activations from one model are injected into another's computation at an intermediate layer, allowing models to exchange information in a manner that the authors argue is more efficient than text. Through experiments, this method is shown to improve reasoning performance in multi-agent tasks, outperforming natural language-based communication across various datasets while using less compute as compared to natural language debate. The paper validates the approach through experiments on multi-player coordination games and reasoning benchmarks, demonstrating the method's robustness across different model sizes.
Claims And Evidence: Yes, no issue.
Methods And Evaluation Criteria: Yes, no issue.
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, no issue.
Supplementary Material: Yes, the supplementary material was briefly reviewed.
Relation To Broader Scientific Literature: The proposed approach contributes to the field of multi-LM agent framework which is of interest to a broad ML community.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
Innovation is timely with the popularity of multi-agent framework
Experimental results show better computational efficiency compared to natural language baseline.
Weaknesses:
Lack of comparison with other similar approaches, such as model merging and a single model with latent reasoning. These approaches, though not framed as multi-agent approaches, are closely related in that they take activations from intermediate layers of an LM and inject them into other intermediate layers of the same LM.
The approach needs access to the models’ parameters which is not feasible for state-of-the-art close-source LMs.
Need for model-model pair training of W to map representations from one LM to another LM if architecture is different.
Other Comments Or Suggestions: The work by Pham et al., 2024, though different, should also be compared as a baseline to better understand the effect of using intermediate activations versus embeddings in multi-agent frameworks.
There are also highly similar approaches that also rely on using intermediate activations from a language model to inject into intermediate layers of the same language model to improve reasoning (e.g. https://arxiv.org/abs/2412.06769 and https://arxiv.org/abs/2502.05171). These approaches do not need another training stage to train a separate W to map activations from one LM to another. More discussion about these similar works would help the reader better understand the difference.
Questions For Authors: NA
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback, and address their comments below:
1. > Lack of comparison with other similar approaches such as model merging
To adequately scope our paper, we chose to limit our focus to task-agnostic methods. Model composition/merging methods are extensively discussed in Sections 2 and 3.2, but they require a substantial amount of additional task-specific parameters and data, hence we do not compare against these. Furthermore, these methods require much more compute in the form of LM finetuning, layer or router training, etc.; our approach is far more compute-efficient.
2. > The approach needs access to the models’ parameters which is not feasible for state-of-the-art close-source LMs.
Exploring API-only approaches is highly limiting (see AC's meta review of [1] at ICLR last year). Furthermore, recent releases of powerful open-source models [2] merit the development of embedding-based techniques.
3. > Need for model-model pair training of W to map representations from one LM to another LM if architecture is different.
Note that learning $\mathbf{W}$...
- needs to happen **exactly once** for each *model pair*
- introduces zero additional task-specific parameters and data by virtue of requiring only general text, e.g. sequences from $A$ and/or $B$’s pretraining data mixes
- is quite sample-efficient: as mentioned in the paper, we use $3072$ sentences to train $W\in \mathbb{R}^{4096\times 3072}$, since linear regression with $d$-dimensional input has a sample complexity of $O(d)$ [3]
Furthermore, we find empirically that *even when models have different architectures, tokenization schemes, or training distributions, we do* **not** *need to train a mapping matrix $\mathbf{W}$ to attain SOTA results.* **Please see point #1 in the reply to Reviewer VKxi above.**
4. > The work by Phametal.,2024, though different, should also be a method to compare as a baseline to better understand the effect of using intermediate activations versus embeddings in multi-agent frameworks.
Pham24 propose communicating the *input (tokenizer) embeddings* between models, meaning the two models **must have the same tokenizer and embedding table** to even run their approach. This severely limits the applicability of their method; in particular, given that all our experiments use model pairs (e.g., LLaMA-3.2-3B and LLaMA-3.1-8B) with distinct tokenizers and/or embedding layers, we unfortunately cannot compare against Pham24. Also, note Pham24's approach still faces substantial information loss relative to the model activations and, more importantly, **does not save compute**, as the number of embeddings passed between models is *the same as the number of tokens passed between models in natural language communication*.
5. > There are also highly similar approaches that also rely on using intermediate activations from a language model to inject into intermediate layers of the same language model to improve reasoning (e.g. https://arxiv.org/abs/2412.06769 and https://arxiv.org/abs/2502.05171). These approaches do not need another training stage to train a separate W to map activations from one LM to another. More discussion about these similar works would help the reader better understand the difference.
Thank you for raising this point; we have added discussion of these approaches to our paper. In summary, such latent reasoning approaches involve spending extra compute by doing "CoT in activation space," e.g. by grafting LM activations into other layers/later forward passes through the same model; our approach can be viewed as doing exactly the same thing, but instead "outsourcing" the CoT to another model (and thus reaping benefits from greater diversity of thoughts/reasoning paths from distinct models).
Also, note that as shown above, we find that even when models have different architectures, tokenization schemes, or training distributions, we do *not* need to train a mapping matrix $\mathbf{W}$ to attain SOTA results.
**We hope that with the provided responses & additional strong results, you consider raising your score to support clear acceptance. Please let us know if there are any additional questions or concerns; we'd be happy to address them.**
[1] Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A. Plummer, Zhaoran Wang, and Hongxia Yang. Let models speak ciphers: Multiagent debate through embeddings, 2024.
[2] Abhimanyu Dubey et al. The llama 3 herd of models, 2024.
[3] Vapnik, V. N. An overview of statistical learning theory, 1999. | Summary: The paper considers this fundamental question "as LLMs are increasingly capable of handling larger, more complex tasks (sometimes with “super-human” ability), might they communicate more effectively in representations of higher dimension than natural language?". It proposed a simple technique where LMs communicate via activations.
Claims And Evidence: The primary claim in the paper is that LMs can communicate using activations. An LM B’s computation is paused at an intermediate layer, then its current activation is combined with another LM A’s intermediate activation via some function f, and then f’s output is passed into the next layer of B and the forward pass is continued till decoding is complete. The experimental evaluation is done over 7 reasoning benchmarks and 2 multiplayer games. The activation communication protocol exhibits up to 27.0 improvement over natural language communication across these datasets using <1/4 the compute.
Methods And Evaluation Criteria: The presented method is rather simplistic but the experimental evaluation demonstrates its value.
Theoretical Claims: There are no significant theoretical claims in the paper.
Experimental Designs Or Analyses: The paper considers rather simple datasets (2 multiplayer and 7 reasoning tasks) and two models (llama 3b and 8b).
Supplementary Material: Qualitative results were reviewed.
Relation To Broader Scientific Literature: The paper is an interesting contribution to multi-LLM inference.
Essential References Not Discussed: Yes, references are discussed. The paper demonstrates good familiarity with literature.
Other Strengths And Weaknesses: A rather simple idea of using activations is shown to have good empirical value on the considered experiments.
The weights learned for projecting activations from one model to another are task-independent.
The experimental evaluation can be improved with better baselines than single model and NL debate.
Other Comments Or Suggestions: It would be useful to expand the empirical evaluation to more and diverse datasets.
Questions For Authors: Why was k = j = 26 selected as the layer?
The explanation in line 25-255 is not convincing about why the experimental comparison with Pham24 was avoided. Could you please elaborate further?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback, and address their comments below:
1. > The paper considers rather simple datasets (2 multiplayer and 7 reasoning tasks) and two models (llama 3b and 8b).
In Appendix B.6, we display results on **the entire MMLU benchmark (57 datasets)**, spanning various domains and difficulty levels. We find that **AC matches/outperforms NLD on 48/57 datasets**, demonstrating our approach's value.
While our initial experiments were only with LLaMA models, note: (1) These LMs are among *SOTA open-source models*, meriting our focus; and (2) we extensively vary both the LLaMA suite (LLaMA-2, 3, 3.1, 3.2) and parameter count (1-70B) as shown in Figure 3 of the paper. This is already a broad coverage of models/model sizes.
**However, we share additional results using models from the Qwen-2.5 and Gemma-2 families below.** Each cell contains two results: Biographies score / GSM8k score.
| Model Pair ($A,B$) | $A$ | $B$ | NLD | AC |
| ------------------------------------------------ | -------------------------------- | -------------------------------- | -------------------------------- | ----------------------------------------- |
| LLaMA-3.2-3B, LLaMA-3.1-8B | $79.4\pm0.0/58.0\pm4.9$ | $83.9\pm0.0/60.0\pm4.9$ | $80.2\pm0.1/\mathbf{75.0\pm4.3}$ | $\mathbf{84.6\pm0.0}/64.0\pm4.8$ |
| Qwen-2.5-1.5B, Qwen-2.5-3B | $59.4\pm0.9/20.0\pm0.9$ | $85.5\pm1.1/35.0\pm1.1$ | $63.2\pm1.1/65.0\pm1.1$ | $\mathbf{89.6\pm1.0}/\mathbf{70.0\pm1.0}$ |
| Gemma-2-2B, Gemma-2-9B | $83.0\pm1.1/45.0\pm1.1$ | $\mathbf{94.6\pm0.9}/80.0\pm0.9$ | $70.3\pm1.0/70.0\pm1.0$ | $88.1\pm0.7/\mathbf{90.0\pm0.7}$ |
| Qwen-2.5-1.5B, LLaMA-3.2-3B | $59.4\pm0.9/20.0\pm0.9$ | $79.4\pm0.0/58.0\pm4.9$ | $75.4\pm1.0/75.0\pm1.0$ | $\mathbf{79.5\pm1.0}/\mathbf{75.0\pm1.0}$ |
| LLaMA-3.2-3B, Gemma-2-2B | $79.4\pm0.0/58.0\pm4.9$ | $83.0\pm1.1/45.0\pm1.1$ | $62.5\pm1.1/55.0\pm1.1$ | $\mathbf{84.0\pm0.1}/\mathbf{60.0\pm1.1}$ |
| Qwen-2.5-1.5B, Gemma-2-2B | $59.4\pm0.9/20.0\pm0.9$ | $\mathbf{83.0\pm1.1}/45.0\pm1.1$ | $49.3\pm1.1/50.0\pm1.1$ | $73.0\pm1.1/\mathbf{55.0\pm1.1}$ |
Note the following:
- **AC beats NLD across the board**, and **beats both individual models on 4 and 5 of the 6 model pairs** on Biographies and GSM8k, respectively, demonstrating the efficacy of AC irrespective of model architecture, size, tokenizer, and training data
- These results are obtained *without* training $\mathbf{W}$, meaning we do *not* need to train a projection layer between activation spaces to attain SOTA results, even for extremely distinct models! (We hypothesize this is because we are only replacing $B$'s *last*-token activation, hence $B$ can learn from $A$ without an extreme alteration to its activation distribution)
2. > The experimental evaluation can be improved with better baselines than single model and NL debate.
We limit our focus to task-agnostic methods; existing model composition/grafting methods require substantial task-specific parameters/data & much more compute in the form of LM finetuning, layer or router training, etc., hence we do not compare to these.
Regarding multiagent debate, the NLD setup that we evaluate against is the predominant method of NL communication. This is quite a strong/flexible NL approach, involving CoT and allowing varied numbers of agents/rounds. (In fact, as shown in Appendix B.5, we find **AC outperforms NLD even with 3-4 rounds of debate on 5 of the 7 reasoning datasets**.)
3. > Why was k = j = 26 selected?
See Section 3.3, lines 201-222. This reasoning is precisely why we choose a layer around halfway through the LM; indeed, Hernandez24 find that by around the halfway point of an LM's computation, it has developed "enriched entity representations" of the input that would be quite useful for communication compared to the next-token representations of later layers.
We validate this empirically; Figure 2 shows 2D contour plots of accuracy over different $k,j$ values. For computational reasons we only do this hyperparameter sweep on the Countries and Tip Sheets datasets and simply cross-apply the optimal values here, $k=j=26$, to the reasoning benchmarks and find that the values seem to generalize well across datasets, which is quite interesting in its own right.
4. > The explanation in line 25-255 is not convincing about why the experimental comparison with Pham24 was avoided. Could you please elaborate further?
**Please see point #4 in the reply to Reviewer gYwD below.**
**We hope that with the provided responses & additional strong results, you consider raising your score to support clear acceptance. We are happy to address any additional questions.** | null | null | null | null | null | null |
UniSim: A Unified Simulator for Time-Coarsened Dynamics of Biomolecules | Accept (poster) | Summary: This paper introduces UniSim, a model for simulating time-coarsened molecular dynamics of small molecules, peptides, and proteins. UniSim employs a multi-stage approach: First, a unified atomic representation model is pretrained on a diverse collection of molecular datasets (both equilibrium and off-equilibrium conformations) using a multi-task approach. Then, a vector field model is trained using the stochastic interpolants framework to predict the probability transition kernel for large timesteps (time-coarsening). Finally, a force guidance kernel is trained with other model parameters frozen to adapt (fine-tune) the model to specific chemical environments and force fields. The authors evaluate UniSim on benchmark datasets for each of the three molecular domains and compare to existing methods for time-coarsened molecular dynamics.
## update after rebuttal
Please see comments below.
Claims And Evidence: The main claims are:
1. UniSim is the first time-coarsened dynamics model exhibiting transferability across diverse molecular domains (small molecules, peptides, and proteins)
2. A multi-task pretraining approach learns a unified atomic representation
3. The stochastic interpolant framework and force guidance allow for rapid fine-tuning to different chemical environments, learning the state transition patterns over long time steps
Regarding the first claim, even though the authors write that their model is transferable, in practice, they train separate force guidance kernels for specific tasks. In my understanding, a truly transferable model should work "out-of-the-box" without any need for fine-tuning or output heads trained from scratch. It would thus be more accurate to say that they train a "reusable backbone", or "model that can be efficiently finetuned" (or similar formulations). Whether UniSim is the first model to achieve this level of transferability is also unclear: The authors should compare to existing models that are finetuned in a similar manner to UniSim, and then compare the performance of different finetuned backbones.
For the second claim, the "multi-task pretraining" is actually a single-task (force matching) pretraining in disguise. The off-equilibrium loss $L_o$ (Eq. 7) is the standard squared-error force loss that is also used to train machine-learned force fields: $H_{\mathrm{out}}[i]$ can be regarded as the atomic energy contribution of atom $i$ to the total predicted energy $\sum_i H_{\mathrm{out}}[i]$. Then, $\nabla_X \sum_i H_{\mathrm{out}}[i]$ just computes the gradient of this predicted energy, i.e., the associated forces. By minimizing $L_o$, the predicted forces are matched to the reference forces. It turns out that the equilibrium loss $L_e$ (Eq. 10) is actually the same objective in disguise (i.e., predicted forces are matched to a reference force). To see this, recall that any potential energy surface is well-approximated by a harmonic potential close to an equilibrium (take the Taylor expansion and truncate after the second-order term; the first-order term vanishes because the forces are zero at equilibrium structures, which are minima of the potential). Then, the term $-(X'-X)/\sigma_e$ is just the force of a harmonic potential given by $V(X') = \frac{1}{2\sigma_e}\lVert X'-X\rVert^2$. The authors should make this connection more transparent, consider changing "multi-task pretraining" to "force matching pretraining" (or a similar term), and clearly explain that $L_e$ is just a clever trick to generate synthetic data with estimated force labels from equilibrium structures.
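The harmonic argument in this paragraph can be made explicit (a sketch in the review's notation; identifying the Hessian with $\sigma_e^{-1} I$ is the approximation being made):

```latex
% Taylor expansion of the potential around an equilibrium structure X;
% the first-order term vanishes because forces are zero at a minimum:
V(X') \approx V(X) + \tfrac{1}{2}\,(X'-X)^{\top} H\, (X'-X)
% Approximating the Hessian as H \approx \sigma_e^{-1} I yields the
% harmonic potential
V(X') = \tfrac{1}{2\sigma_e}\,\lVert X'-X \rVert^{2},
% whose force is exactly the reference term appearing in L_e:
-\nabla_{X'} V(X') = -\frac{X'-X}{\sigma_e}.
```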
For the last claim, I think the shown data does not support the statement that the state transition patterns for long-time steps are learned. In fact, the TIC0/TIC1 plots shown in several figures clearly show that the sampled distributions do **not** closely approximate the probability distributions sampled during MD. Perhaps UniSim is competitive with other models that attempt time-coarsened dynamics, and I acknowledge that it is a step in the right direction, but the task is definitely not solved in a satisfactory manner.
Finally, there is another claim that is hidden in the conclusion. The authors write: "Experiments conducted on small molecules, peptides and proteins have fully verified the superiority of UniSim in distribution similarity compared to MD trajectories and transferability to out-of-distribution molecular domains." However, as I already stated above, the agreement to distributions sampled in MD trajectories looks actually quite poor. The metrics the authors look at (e.g., distance histograms, contact maps, etc.) are certainly informative, but they are not sufficient. The ultimate measure of quality must be how well actual thermodynamic observables are reproduced by the time-coarsened dynamics. After all, accelerated dynamics are useless if they give the wrong results, and the experiments that are done currently are not sufficient to "fully verify the superiority of UniSim", in particular with respect to regular MD.
Methods And Evaluation Criteria: The proposed methods are appropriate for the problem. However, the evaluation criteria are not sufficient to show that the method works well and is practically useful. As mentioned above, to truly measure the quality of a time-coarsened dynamics, it is necessary to check how well it reproduces thermodynamic observables extracted from regular MD simulations. I think the authors should at least try a very simple example, e.g., reproducing the free energy surface of alanine-dipeptide (a well-studied small model system) with UniSim, so readers are better able to judge whether the presented model is useful for practical studies or not.
Theoretical Claims: The paper includes theoretical justifications, particularly for the stochastic interpolant framework (Section 3.3 and Appendix B). I have not checked the presented proofs carefully, but at a first glance, they seem sound to me.
Experimental Designs Or Analyses: The experiments that were performed and the analyses that were done seem sound to me. However, there is an aspect of the work, which should be discussed more prominently: In Appendix C.2., the authors mention that every prediction of their model is followed by an energy refinement with OpenMM to prevent error accumulation. This is a very important detail that should be discussed in the main text, and it is also necessary to state how many optimization steps need to be performed on average. Every evaluation of OpenMM used for the minimization is roughly equivalent to a single regular MD time step, so this greatly affects the actual speedup offered by UniSim. For example, if UniSim can leverage a time step that is, say, 1000 times larger than a regular MD time step, but then on average 1000 optimization steps with OpenMM are necessary to stabilize the structure, there is no net gain in terms of efficiency (on the contrary, this would be less efficient than just running regular MD).
Supplementary Material: I have read the Supplementary Material, but I have not carefully checked the presented proofs.
Relation To Broader Scientific Literature: The paper appears to cite the most relevant prior work. However, when discussing classical MD methods, quantum mechanics, and empirical force fields in the introduction, the provided references, while not outright wrong, are somewhat questionable. It almost makes it appear as if the cited works introduced the corresponding concepts. Perhaps it would be more appropriate to cite earlier seminal work here.
Essential References Not Discussed: I did not spot any essential omissions.
Other Strengths And Weaknesses: **Strengths**
+ The core idea of a unified simulator for diverse biomolecules is valid.
+ The paper is well-written and easy to follow.
**Weaknesses**
- Lack of a direct comparison of computational efficiency with baselines and classical MD (especially considering that minimization with regular force fields seems to be necessary to stabilize trajectories).
- Lack of experiments that investigate the prediction of thermodynamic observables with the proposed framework, assessing the practical usefulness of the approach
- Some aspects of the work are unnecessarily "obfuscated" (e.g., the multi-task pretraining is really a single task in disguise), or hidden in the appendix (e.g., the need for minimization with a classical force field for stabilization).
- Some design choices are questionable; for example, the energy normalization described in Appendix C.1 effectively introduces a dataset-dependent unit conversion. This may not cause a large issue if all datasets are sampled at roughly comparable temperatures, but it seems unjustified from a theoretical standpoint.
Other Comments Or Suggestions: I think the TIC0/TIC1 plots could be improved: Instead of showing a scatter of the time-coarsened sampling over a contour of the true probability distribution, it would be better to do a side-by-side comparison of the true probability distribution and the distribution sampled by UniSim's time-coarsened dynamics (both as contours). This would allow a more direct comparison and allow readers to better judge the agreement of time-coarsened dynamics with the ground truth.
Questions For Authors: 1. Can you provide a quantitative comparison of the computational cost (e.g., wall-clock time or speedup factor) of UniSim (including the time needed for structure refinement via minimization with OpenMM) compared to classical MD?
2. Have you performed any experiments to assess the stability and accuracy of UniSim on longer timescales? How do the cumulative prediction errors affect the results over longer simulations?
3. How sensitive is the performance of UniSim to the choice of hyperparameters? Did you perform any hyperparameter optimization, and if so, what were the key findings?
4. Could you elaborate on the motivation behind the atomic embedding expansion technique? How does it specifically help in distinguishing different chemical environments compared to using just the periodic table? Did you do ablation studies?
5. Could you elaborate a bit more on the intuition and advantage of using the "gradient-environment subgraph" approach? What is the expected impact of different choices of delta_min and delta_max?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: - **Q1:** The first claim & its transferability
Firstly, we strongly agree that the term "reusable backbone" is more suitable to describe UniSim, which will be used in our revised manuscript.
Secondly, we argue that the force kernel is not strictly necessary for cross-domain generalization. Figure 4 shows that UniSim/g reproduces TIC-2D distributions for MD22 small molecules without any fine-tuning procedure. This zero-shot transfer capability confirms the learned physical representations transcend specific training domains.
Lastly, we are **unable** to apply identical experimental setups to the baselines for validating transferability, likely due to:
1. Architectural Constraints: Methods like FBM incorporate protein-specific atomic representations (e.g., residue index) that are fundamentally incompatible with small molecules.
2. Pretraining Deficiency: None of them provides pretraining interfaces, preventing knowledge transfer to novel atomic species after training solely on peptide/protein data.
- **Q2:** The term "multi-task pretraining"
We would like to clarify that "multi-task pretraining" (line 220) does **not** refer to different loss formulations here. In particular, different "tasks" are defined as the different force fields from which the force labels come (lines 220-229). Such multi-task pretraining addresses label inconsistency issues arising from varying force field parameters across datasets. We hope this resolves the misunderstanding.
- **Q3:** The practical usefulness in longer timescales
To specifically address your concerns and validate UniSim's practical usefulness, we have performed long-timescale simulations on a well-studied system, alanine dipeptide. Details of the experimental setup can be found in our response to **Q1, Reviewer KWXR**, where we show that UniSim successfully reproduces the free energy landscape ([Figure 1](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_ramachandran.png)), with its stability verified via the RMSD to the initial state ([Figure 2](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_rmsd_over_time.png)).
- **Q4:** The energy minimization technique & efficiency
Firstly, we emphasize that the energy minimization is **almost only** applied to hydrogen atoms with heavy atoms constrained by predefined harmonic potentials (line 682). For this reason, we placed the details in the appendix instead of the main text.
Regarding the computational efficiency, we provide [Table 5](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ess_pepmd.png), which demonstrates that UniSim maintains a 1–2 order-of-magnitude advantage in ESS/s (used in Timewarp) over MD, with refinement step included.
- **Q5:** The energy normalization trick
Firstly, we claim that UniSim focuses on capturing relative energy differences rather than absolute values, as the weight of the Boltzmann constraint can be further modulated through the guidance strength $\alpha$. Secondly, we confirm that all trajectories in PepMD and ATLAS datasets were sampled at 300K, while MD17 data was collected at around 500K. Thus the temperature consistency within each dataset is guaranteed.
- **Q6:** The visualization of TIC plots
Given that UniSim's trajectories are relatively short (1,000 frames), contour plots fail to show clear distributions and yield suboptimal visualizations. In addition, Ramachandran plots in the same format as MD are shown in ([Figure 1](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_ramachandran.png)) for long-timescale simulations.
- **Q7:** Hyperparameter sensitivity
In this experiment, the primary hyperparameters we tuned were: (1) the SDE inference steps $T$ and (2) the guidance strength $\alpha$. Due to space limitations, please refer to **Q2, reviewer mRvK** for our ablation studies and further discussions.
- **Q8:** Intuition of the atomic embedding expansion
Here we will elaborate on the rationale behind our design of atomic embedding expansion.
Since we aim for cross-domain representation learning, using fine-grained vocabulary specific to proteins would conflict with small-molecule representation. While using the periodic table is the simplest alternative, it will result in significant information loss on **regularly occurring patterns**. Therefore, we adopt the periodic table as the base vocabulary $A_b$ and extend each element with a supplementary vector of length $D$, thereby modeling recurring chemical environments while maintaining a unified representation.
Specifically, we validate the effectiveness of atomic embedding expansion with the ablation on PepMD shown in [**Table 6**](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ablation_Ae_pepmd.png).
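As a concrete reading of this design, one possible realization is a flat expanded lookup table (a toy sketch; the flat-index layout, sizes, and pattern-assignment rule are illustrative assumptions, not UniSim's actual implementation):

```python
# Toy sketch of the "atomic embedding expansion" idea as described above:
# each element in the base (periodic-table) vocabulary is extended to D
# discrete chemical-environment patterns, so atoms of the same element in
# different environments can receive distinct embeddings.
import numpy as np

n_elements, D, dim = 118, 4, 8
rng = np.random.default_rng(0)
# One row per (element, pattern) pair in the expanded vocabulary.
table = rng.normal(size=(n_elements * D, dim))

def embed(element_id, pattern_id):
    # Flat index into the expanded vocabulary; in practice the pattern
    # would be inferred from the atom's neighbors in the molecular graph.
    return table[element_id * D + pattern_id]

# Two carbons (element 6) in different environments get distinct vectors,
# whereas a plain periodic-table lookup would collapse them to one.
ca = embed(6, 0)
cb = embed(6, 1)
```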
- **Q9:** Intuition of using the "gradient-environment subgraph" approach
Thank you for your valuable question. Due to space limitations, we kindly direct the reviewer to our response to **Q1, Reviewer Cnw4** for a detailed discussion.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their replies
> Firstly, we strongly agree that the term "reusable backbone" is more suitable to describe UniSim, which will be used in our revised manuscript.
Thank you.
> Lastly, we are unable to apply identical experimental setups to baselines for validating transferability probably due to:
> * Architectural Constraints: Methods like FBM incorporate protein-specific atomic representations (e.g., residue index) that are fundamentally incompatible with small molecules.
> * Pretraining Deficiency: None of them provides pretraining interfaces, preventing knowledge transfer to novel atomic species after training solely on peptide/protein data.
I don't understand these arguments. If FBM cannot be applied to small molecules, why not apply your method to larger systems instead? And what do you mean by "none of them provides pretraining interfaces"? Why would it not be possible to pre-train on one dataset, and finetune on another? Why do you need an "interface" for that? This should be possible with any architecture.
> We would like to clarify that "multi-task pretraining" (line 220) does not refer to different loss formulations here. In particular, different "tasks" is defined as different force fields where the force labels come from (line 220-229). By using such multi-task pretraining, we address label inconsistency issues owing to varying force field parameters across different dataset. We hope the misunderstanding can be resolved.
In this case, I wonder even more whether the term "multi-task" is accurate here. After all, it's the same task (force prediction), you just want to reconcile the fact that different datasets use different levels of theory (and therefore have inconsistent force labels), or they don't have force labels at all (so you approximate them). A different term/description would make much clearer what is actually happening.
> Firstly, we emphasize that the energy minimization is almost only applied to hydrogen atoms with heavy atoms constrained by predefined harmonic potentials (line 682). For this reason, we placed the details in the appendix instead of the main text.
This does not really answer my question. Can you provide any statistics on how many optimisation steps are required on average (at most)? Also, that you almost only need to optimise hydrogen atoms does not change the fact that this seems to be necessary to make the method stable. It's an important detail that should not be hidden in the appendix.
> Firstly, we claim that UniSim focuses on capturing relative energy differences rather than absolute values, as the weight of the Boltzmann constraint can be further modulated through the guidance strength. Secondly, we confirm that all trajectories in PepMD and ATLAS datasets were sampled at 300K, while MD17 data was collected at around 500K. Thus the temperature consistency within each dataset is guaranteed.
I don't really understand the point the authors want to make here. I never doubted that UniSim captures relative energy differences...
> Given that UniSim's trajectories are relatively short (1,000 frames), contour plots failed to demonstrate clear distributions and yielded suboptimal visualization. By the way, Ramachandran plots in the same format as MD are shown in (Figure 1) for long-timescale simulations.
Thank you for this additional experiment. However, I still don't find this result particularly convincing (the agreement is not very good). For example, compare to [this paper from 2019](https://www.nature.com/articles/s41524-019-0261-5), which shows much better agreement to the ground truth.
> Since we aim for cross-domain representation learning, using fine-grained vocabulary specific to proteins would conflict with small-molecule representation. While using the periodic table is the simplest alternative, it will result in significant information loss on regularly occurring patterns. Therefore, we adopt the periodic table as the base vocabulary $A_b$ and extend each element with a supplementary vector of length $D$, thereby modeling recurring chemical environments while maintaining a unified representation.
Thank you, but again, this does not answer my question. I apologise if I was unclear: I am not wondering why you use the atomic table as the "basis" (that makes sense, obviously), I am asking why it is necessary to "extend" the vocabulary with chemical environments. What deficiencies does the model have if you use *only* elements as the vocabulary?
---
Reply to Comment 1.1.1:
Comment: - **Q1**: If FBM cannot be applied to small molecules, why not apply your method to larger systems instead...
We sincerely appreciate your question and apologize for any confusion caused. By mentioning the lack of a "pretraining interface," we meant that baselines trained only on peptide data cannot **directly** handle unseen atom types, a capability that could be enabled by cross-domain pretraining, as evidenced by UniSim/g in Figure 4.
We further finetune baselines (pre-trained on PepMD) on ATLAS (excluding Timewarp, which requires unavailable atomic velocity data). As shown in these [results](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/result_atlas.png), UniSim outperforms baselines across all metrics, proving stronger transferability.
- **Q2**: I wonder even more whether the term "multi-task" is accurate here...
We sincerely appreciate your suggestion. In our revised version, we will replace "multi-task" with more appropriate alternatives such as "multi-head" to avoid any misunderstandings.
- **Q3**: Statistics on optimisation steps.
We appreciate your valuable suggestion. Here we provide the visualization of UniSim's efficiency in [Figure](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/efficiency_stats.png) as well as statistics of the number of optimization steps required per inference step as follows: (1) mean: 69.3, (2) median: 55 and (3) maximum: 2,075. For each inference step, the average inference time is 0.120 s and the average optimization time is 0.152 s. Therefore, the computational overhead remains within the same order of magnitude with the refinement step.
Moreover, since the refinement step **does** contribute to stabilizing the generation, we agree with your suggestion and will include a discussion of the refinement procedure in the main text of our revised manuscript.
- **Q4**: I don't really understand the point the authors want to make here...
We sincerely apologize for any confusion caused. First, we have verified that the simulation temperature is almost the same for each dataset, which could address one of your concerns. Further, after training to fit the normalized potential, we can adjust the guidance strength $\alpha$ to approximate the unnormalized one during inference. Therefore, the normalization scheme just needs to maintain the distribution characteristics of potentials, where the min-max normalization proves to be a good choice.
- **Q5**: I still don't find this result particularly convincing...
We sincerely appreciate your question. First, we would like to clarify that the Ramachandran plots in [Figure](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_ramachandran.png) successfully reproduce all five metastable states of AD mentioned in [this paper](https://pubs.acs.org/doi/abs/10.1021/ct400993e). UniSim also shows a close match of representative conformations for each state as illustrated in [Figure](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_metastable.png). While complete coverage may require more sampling steps, our current results could already capture the essential conformational landscape.
In addition, regarding the cited work (this paper from 2019), we highlight the following key differences from our UniSim:
1. The cited work employs a coarse-grained (CG) molecular representation, whereas UniSim adopts an all-atom representation.
2. The cited work trains networks to replace potentials in classical MD simulations, without generating trajectories directly like UniSim.
3. The Ramachandran plots in the cited work are generated via i.i.d. sampling from a VAE decoder. In contrast, UniSim produces trajectories autoregressively, which inherently involves error accumulation and thus presents a distinct challenge.
4. The cited model requires system-specific tuning of the CG resolution parameter, which limits its generalizability.
That said, we are pleased to include the discussion of this work in our revised manuscript.
- **Q6**: I am not wondering why you use the atomic table as the "basis"...
We apologize for not addressing your question directly in our earlier response and here we give a clearer explanation.
In domains like proteins, atoms follow **regular chemical patterns** (e.g., CA, CB in residues). Wet lab experiments have shown that atoms of the same pattern have stable properties (e.g., bond lengths). Considering the constraints on bond lengths and angles, these patterns exhibit discrete characteristics, which GNNs alone are not sufficient to handle.
This implies that an effective embedding approach must capture these patterns. Using only the periodic table as vocabulary would yield low-resolution representations, missing those domain-specific regularities. Instead, our solution extends each element in the vocabulary to $D$ discrete patterns and maps each atom to the most probable pattern based on its neighbors in the graph, thus simplifying the understanding of complex but highly regular structures. | Summary: The paper introduces UniSim, a deep learning framework designed to simulate biomolecular dynamics over coarse-grained time steps. The method unifies the treatment of small molecules, peptides, and proteins by first learning a unified atomic representation through multi-task pretraining on a diverse set of 3D molecular datasets. Building on this representation, the authors employ a stochastic interpolant framework to model the transition probability of molecular states over long timesteps, thus enabling time-coarsened dynamics simulation. A force guidance kernel is further introduced to adapt the model to various chemical environments and force field conditions. The approach is evaluated on several datasets (e.g., PepMD, MD17/MD22, ATLAS) and compared against SOTA methods, with experiments demonstrating competitive performance in capturing both the structural and distributional properties of MD trajectories.
Claims And Evidence: - Main Claims:
The authors claim that UniSim is the first simulator capable of transferably modeling time-coarsened dynamics across diverse biomolecular domains. In addition, they assert that the multi-task pretraining yields a robust unified atomic representation, and that the incorporation of a force guidance module significantly enhances sample validity and adherence to underlying physical constraints.
- Evidence:
The claims are supported by extensive experimental results. Quantitative metrics such as Jensen–Shannon (JS) distances are reported on multiple datasets. Comparative analyses and ablation studies (e.g., UniSim with and without force guidance) further corroborate the improvements brought by the proposed components. While the evidence is comprehensive, the paper also acknowledges limitations (e.g., relatively short generated trajectories and suboptimal validity in protein simulations), which helps contextualize the claims.
Methods And Evaluation Criteria: - Methodology: The approach comprises three main stages:
1. Unified Pretraining: A multi-task pretraining strategy is used to learn an atomic representation from diverse 3D molecular datasets. An SO(3)-equivariant GNN is employed along with novel techniques like gradient-environment subgraph construction and atomic embedding expansion.
2. Vector Field Model: A stochastic interpolant framework is used to fit the probability transition kernel of MD trajectories over a coarse time step. This involves training neural networks to predict both a drift term and a noise term, thereby defining an SDE that governs state evolution.
3. Force Guidance Module: To adapt to different chemical environments, a force guidance kernel is introduced. This module adjusts the learned transition dynamics using “virtual” forces derived from MD potential labels.
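As a rough illustration of stage 2, the learned drift and noise terms define an SDE that can be integrated with a simple Euler–Maruyama scheme. This is a minimal sketch with stand-in callables (`drift`, `noise_scale`), not the paper's actual networks:

```python
import numpy as np

def euler_maruyama(x0, drift, noise_scale, n_steps=100, rng=None):
    """Integrate dx = b(x, s) ds + sigma(s) dW over s in [0, 1].

    `drift` and `noise_scale` stand in for the learned networks
    predicting the drift and noise terms of the transition SDE.
    """
    rng = rng or np.random.default_rng(0)
    ds = 1.0 / n_steps
    x = np.asarray(x0, dtype=float)
    for i in range(n_steps):
        s = i * ds
        dW = rng.normal(size=x.shape) * np.sqrt(ds)  # Brownian increment
        x = x + drift(x, s) * ds + noise_scale(s) * dW
    return x
```

With the noise scale set to zero this reduces to plain Euler integration of the drift, which makes the scheme easy to sanity-check.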
- Evaluation Criteria:
The method is evaluated using metrics that assess both the physical plausibility (e.g., VAL-CA, contact map errors) and the distributional similarity (JS distances on various projections) of generated trajectories relative to ground-truth MD data. These criteria are well chosen for the problem of capturing the complex free energy landscapes and metastable states in biomolecular dynamics.
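For concreteness, a JS distance between generated and reference projections (e.g., a TIC component) can be computed from shared histograms along the following lines. This is an illustrative sketch; the bin count and range handling are assumptions, not the paper's exact protocol:

```python
import numpy as np

def js_distance(samples_p, samples_q, bins=50):
    """Jensen-Shannon distance between two sample sets via shared histograms."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # avoid 0 * log 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    # JS divergence is bounded by log 2; the distance is its square root.
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

Identical sample sets yield a distance of zero, and well-separated distributions approach the upper bound of sqrt(log 2).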
Theoretical Claims: The paper states theoretical propositions (e.g., Proposition 3.1) that connect the learned vector field model with the force guidance adjustments. The proofs—provided in the appendix—are built on established frameworks from stochastic interpolants and score matching techniques. They derive closed-form relations between the drift and noise components of the SDE under both the baseline and force-guided settings. Although this is not novel, giving proof enhances the completeness of the paper.
Experimental Designs Or Analyses: - Design:
The experimental design is extensive and multifaceted. The authors conduct evaluations across three molecular domains (small molecules, peptides, proteins) using multiple datasets. They perform ablation studies comparing the full UniSim model with a variant that omits the force guidance module (UniSim/g) to isolate its effect.
- Analyses:
Quantitative comparisons against competitive baseline methods (FBM, Timewarp, ITO, Score Dynamics) are provided via tables and figures that report JS distances, validity scores, and contact errors. Visualizations (e.g., TIC plots, contact maps) illustrate the capability of UniSim to capture the free energy landscape and the transitions between metastable states.
- Potential Issues:
Although the experiments are comprehensive, the generated trajectories are relatively short, which may limit insights into long-term dynamics. Adding experiments on the exploration of conformational space, such as those in F$^3$low, could further illustrate this point.
Supplementary Material: I have read all supplementary materials.
Relation To Broader Scientific Literature: This paper combines pre-training methods, stochastic interpolant methods from generative modeling, and the classifier-guidance approach that is popular in generative models.
Essential References Not Discussed: This paper is a good summary of this field.
Other Strengths And Weaknesses: I think the concepts presented in this paper are not novel, but it is a very solid piece of work that blends the appropriate concepts seamlessly and achieves SOTA on the various task metrics, which is very helpful to the research community.
Other Comments Or Suggestions: No other comments or suggestions.
Questions For Authors: I don't have a specific question.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: - **Q1**: Potential Issues: Although the experiments are comprehensive, the generated trajectories are relatively short, which may limit insights into long-term dynamics. Adding experiments on the exploration of conformational space, such as those in F3low, could further illustrate this point.
Thank you for your insightful suggestion! To further validate the stability and applicability of UniSim in long-timescale simulations, we conducted additional experiments on a well-studied molecular system, alanine dipeptide (AD), which consists of only 22 atoms while exhibiting a rich free energy landscape.
Specifically, following the task setup in Timewarp [1], we attempt to finetune our vector field model trained on PepMD (i.e., UniSim/g) to AD before performing long-timescale simulations. Firstly, we obtained three **independently** sampled MD trajectories of alanine dipeptide (AD) with a simulation time of 250 ns from [mdshare](https://markovmodel.github.io/mdshare/ALA2/#alanine-dipeptide), which were assigned as the training/validation/test set. The coarsened timestep $\tau$ was set to 100 ps, and 200,000 data pairs were sampled from the training and validation set, respectively. Then UniSim/g was finetuned on AD with a learning rate of 1e-4 for at most 300 epochs, after which a force guidance kernel was trained based on the finetuned model with a learning rate of 5e-4.
After we obtain the best checkpoint of UniSim evaluated on the validation set, we perform long-timescale simulations for a chain length of 100,000 to explore the metastable states of AD. We show the Ramachandran and TIC-2D plots of UniSim and the test MD trajectory in [Figure 1](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_ramachandran.png). Building upon previous research [2], UniSim has demonstrated robust performance in long-timescale simulations by effectively exploring key metastable states of AD, including C$5$, C$7_{eq}$, $\alpha_{R}'$, $\alpha_{R}$ as well as $\alpha_{L}$. Moreover, the relative weights of generated conformation ensembles across different metastable states show good agreement with MD, confirming that UniSim accurately reproduces AD's free energy landscape in long-timescale simulations.
Furthermore, to investigate the stability of UniSim in long-timescale simulations, we selected the root-mean-square deviation (RMSD) of heavy atoms relative to the initial state as the validation metric. The plots of RMSD over time compared to the MD trajectory are presented in [Figure 2](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_rmsd_over_time.png). As evident from the figure, the RMSD variations and ranges of the trajectory generated by UniSim exhibit strong consistency with that of MD, demonstrating the model's stability in long-timescale simulations.
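The RMSD-over-time check described above can be reproduced with a standard Kabsch superposition. The following sketch assumes plain NumPy arrays of heavy-atom coordinates; it illustrates the metric rather than the exact analysis script:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations (N x 3) after optimal superposition.

    Centers both point sets, finds the optimal rotation via SVD of the
    covariance matrix (Kabsch algorithm), then computes RMSD.
    """
    P = np.asarray(P, float) - np.mean(P, axis=0)
    Q = np.asarray(Q, float) - np.mean(Q, axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))
```

Applied frame-by-frame against the initial state, this yields the RMSD-over-time curve used to compare the generated trajectory with MD.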
**Reference**
> [1] Klein, L., Foong, A., Fjelde, T., Mlodozeniec, B., Brockschmidt, M., Nowozin, S., ... & Tomioka, R. (2023). Timewarp: Transferable acceleration of molecular dynamics by learning time-coarsened dynamics. Advances in Neural Information Processing Systems, 36, 52863-52883.
> [2] Wang, H., Schütte, C., Ciccotti, G., & Delle Site, L. (2014). Exploring the conformational dynamics of alanine dipeptide in solution subjected to an external electric field: A nonequilibrium molecular dynamics simulation. Journal of Chemical Theory and Computation, 10(4), 1376-1386.
---
Rebuttal Comment 1.1:
Comment: I thank the author for conducting more experiments, my concerns were largely resolved and I think this paper deserves a 4 out of 5 and I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive evaluation and valuable comments, which have helped us strengthen the experiments and improve our manuscript. We sincerely appreciate your time and insightful feedback! | Summary: The paper introduces UniSim, a deep learning-based unified simulator for time-coarsened MD simulation. The framework aims to improve the transferability and efficiency of long-timescale molecular simulations across different biomolecular domains (small molecules, peptides, and proteins), and consists of a pretraining module with a unified atomic representation, a state transition module using stochastic interpolants, and an optional force guidance kernel. Evaluations on small molecules, peptides, and proteins show competitive results w.r.t. several baselines.
Claims And Evidence: Mostly yes, the novelty of combining atomistic pretraining with stochastic-interpolant-based dynamics modeling is backed by previous literature. The effectiveness of the force guidance kernel is supported by ablation, and cross-domain transferability is shown on datasets such as MD22.
Methods And Evaluation Criteria: Yes, see above.
Theoretical Claims: Proofs are included in the supplement which I didn't check.
Experimental Designs Or Analyses: The designs are largely consistent with the claims, see above.
Supplementary Material: No.
Relation To Broader Scientific Literature: The work enables a broader, cross-domain application of time-coarsened MD simulation and also promotes the combination of pretraining with dynamics modeling.
Essential References Not Discussed: Not that I found.
Other Strengths And Weaknesses: Strengths
1. The pretraining + stochastic interpolant framework is novel.
2. Force guidance kernel further improves flexibility in different chemical environments.
3. Results showing good cross-domain transferability.
Weaknesses:
1. Writing clarity in the methods needs improvement. For example, the gradient-environment subgraph part was a bit confusing and needs some more intuition on the design.
2. Effect of predefined timestep τ on model accuracy is unexplored. Interesting to discuss how τ would affect model performance. No major experiment is required, but would like at least some discussion on the intuition.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1:** Writing clarity in the methods needs improvement. For example, the gradient-environment subgraph part was a bit confusing and needs some more intuition on the design.
Thanks for the question. We will elaborate on the design rationale of the gradient-environment subgraph as clearly as possible.
**Scale Invariance:** A crucial challenge in training unified representation models arises from the vast scale discrepancy between molecular systems: small molecules typically contain ~10^1 atoms, while proteins often comprise ~10^3 or more atoms. Conducting direct full-atom representation learning without proper processing could lead to non-transferable representations across different molecular domains.
**Physical Faithfulness:** We notice that, from Newtonian mechanics, atomic motion is governed by the forces acting on each atom. Crucially, long-range intermolecular forces (e.g., van der Waals interactions) decay exponentially with distance, becoming negligible beyond ~10 Å empirically. This implies that the force acting on any atom predominantly originates from its local environment within a 10 Å radius sphere. This physical insight motivates our design of the environment subgraph paradigm, where atomic force computation can be localized to such spherical subgraphs. This approach makes it feasible to decompose large biomolecules into manageable subgraphs for training.
**Computational Efficiency:** Furthermore, during training, we hope that the number of atoms that contribute to loss computation should be the same order of magnitude as that of small molecules or peptides (10^1~10^2 atoms). We denote those atoms as $G_g$. From the above, the complete force-determining environment for $G_g$ is the union of the spherical subgraphs centered on each atom in $G_g$ with radius of around 10 Å, denoted as $G_e$. By using $G_e$ as the training input, we effectively constrain the number of atoms involved in gradient computation to $|G_g|$, which aligns with our intention. Specifically, for the sake of convenience, we adopt Eqs. (2-3) for constructing training data, thus avoiding computing the union of hundreds of point sets.
Accordingly, our hyperparameters should satisfy:
- Scale Matching: $G_g$ contains $10^1$-$10^2$ atoms (aligning with small molecule sizes)
- Physical Consistency: $\delta$\_max - $\delta$\_min ≥ 10 Å ensures force computation completeness
Moreover, regarding the impact of the selection of different $\delta$\_min and $\delta$\_max, we argue that the choice of $\delta$\_min can be arbitrary as long as the spherical region with radius $\delta$\_min contains an approximately appropriate number of atoms. In contrast, the selection of $\delta$\_max must be grounded in physical priors. Crucially, if $\delta$\_max - $\delta$\_min is so small that it shields some strong interactions, it will inevitably degrade model performance. On the contrary, once $\delta$\_max - $\delta$\_min exceeds a certain threshold (e.g., 10 Å), further increasing $\delta$\_max yields negligible benefits.
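The spherical-subgraph construction described above can be sketched as follows. The radii below are illustrative placeholders; the actual construction follows Eqs. (2-3) of the paper:

```python
import numpy as np

def environment_subgraph(coords, center_idx, delta_min=4.0, delta_max=14.0):
    """Split atoms around a center into a gradient set G_g and its
    force-determining environment G_e.

    Atoms within delta_min contribute to the loss (G_g); atoms within
    delta_max are fed to the network so that forces on G_g are computed
    completely (the rebuttal argues delta_max - delta_min >= ~10 A).
    """
    d = np.linalg.norm(coords - coords[center_idx], axis=1)
    g_g = np.flatnonzero(d <= delta_min)   # atoms contributing to the loss
    g_e = np.flatnonzero(d <= delta_max)   # atoms used as network input
    return g_g, g_e
```

By construction G_g is a subset of G_e, so gradient computation is limited to the small inner set while the outer shell supplies the interaction context.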
- **Q2:** Effect of predefined timestep τ on model accuracy is unexplored. Interesting to discuss how τ would affect model performance. No major experiment is required, but would like at least some discussion on the intuition.
Thanks for the question. Regarding the impact of $\tau$ on the model performance, we provide the following explanations:
- If $\tau$ is extremely small (comparable to the MD integral timestep), the conformations of $(x_0,x_1)$ would be very similar. While this may intuitively improve model accuracy, the efficiency could become comparable to or even worse than classical MD, contradicting our goal of accelerating MD simulations.
- If $\tau$ falls within a reasonable range, increasing $\tau$ reduces the time correlation between $(x_0,x_1)$, making the learning task more challenging but enabling the model to explore more state space with a shorter simulation.
- If $\tau$ exceeds a certain threshold, $x_0$ and $x_1$ can be considered as independent samples from the Boltzmann distribution, rendering the model incapable of learning dynamic transition features between states.
In conclusion, we believe that a moderate $\tau$ is a reasonable choice.
**Reference**
> [1] Kong, X., Huang, W., & Liu, Y. Generalist Equivariant Transformer Towards 3D Molecular Interaction Learning. In *Forty-first International Conference on Machine Learning*. | Summary: The paper presents UniSim, a unified simulator for time‐coarsened dynamics of biomolecules. It proposes a multi-task pretraining method to learn a unified atomic representation across small molecules, peptides, and proteins. The simulator uses a stochastic interpolant framework to learn long-timestep state transitions and introduces a force guidance module to adapt the generated trajectories to different chemical environments. Experiments are reported on peptides, small molecules, and proteins, with comparisons against baselines such as FBM, Timewarp, ITO, and Score Dynamics.
Claims And Evidence: The paper claims that UniSim is the first unified model for time-coarsened dynamics across diverse biomolecular domains and that its multi-task pretraining and force guidance improve distributional similarity and validity. While the theoretical formulations are supported by derivations, the experimental evidence is less convincing. In particular, the validity metrics for protein simulations remain unsatisfactory, and the improvements over existing methods are modest. These factors weaken the claims regarding robust performance and cross-domain transfer.
Methods And Evaluation Criteria: The proposed methods include:
- A unified atomic representation using an SO(3)-equivariant GNN.
- A stochastic interpolant-based generative framework to bridge long simulation timesteps.
- A force guidance module to enforce Boltzmann-like behavior during sampling.
The evaluation criteria (JS distances, VAL-CA, CONTACT, etc.) are standard and appropriate for MD simulations. However, some aspects (e.g., sensitivity to hyperparameters and error accumulation over long rollouts) are not thoroughly explored.
Theoretical Claims: The paper presents a proof (Proposition 3.1) linking the stochastic interpolant framework with the modified dynamics when force guidance is applied. A cursory check suggests the derivations are consistent. No major errors were detected in the presented theoretical claims, though a detailed line-by-line verification is challenging within the review context.
Experimental Designs Or Analyses: The experiments span multiple molecular domains with comparisons against several baselines. The design is standard, and the benchmarks are appropriate. However, the results—particularly for proteins—do not show significant improvements in key validity metrics. The lack of rigorous statistical analysis (e.g., confidence intervals) and limited ablation studies on the effect of force guidance and timestep choices reduce confidence in the claimed advantages.
Supplementary Material: The supplementary material includes proofs for theoretical claims and detailed training/inference setups. The additional details provided are useful and do not raise any immediate concerns.
Relation To Broader Scientific Literature: The work is positioned well within the recent literature on deep learning for MD simulations. It builds on approaches such as FBM, Timewarp, ITO, and Score Dynamics, aiming to integrate aspects from each—namely, unified representations, time-coarsened simulation, and force-guided sampling. The paper accurately cites recent advances and situates its contributions relative to them.
Essential References Not Discussed: While the paper covers most key works, it would benefit from a brief discussion of methods that integrate reinforcement learning or adaptive sampling techniques for MD acceleration, as well as approaches from enhanced sampling (e.g., metadynamics) that address similar problems in different ways.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Statistical Significance: Can the authors provide statistical analysis (e.g., confidence intervals or p-values) for the reported improvements, especially in the protein validity metrics?
2. Hyperparameter Sensitivity: How does the choice of the coarse timestep ($\tau$) affect the simulation accuracy and validity? Have experiments been conducted to assess sensitivity to this parameter?
3. Training Data Balance: Was the unified model trained on a balanced dataset across small molecules, peptides, and proteins? If not, could the imbalance affect the reported transferability claims?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: - **Q1**: The validity metrics for protein simulations remain unsatisfactory.
We claim that the unsatisfactory validity metric of ATLAS stems from two primary factors:
1. We pursue a unified modeling framework across small molecules and proteins, which necessitates certain compromises in protein-specific modeling (e.g., MSA).
2. The number of parameters matters: we conduct the ablation analysis by increasing the hidden dimension $H$ from 128 to 256. As demonstrated in [Table 1](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ablation_hidden_dim_pepmd.png) and [Table 2](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ablation_hidden_dim_atlas.png), the modification yields substantial improvements in validity metrics across both the PepMD and ATLAS test sets, suggesting that parameter scaling can effectively address certain performance deficiencies.
- **Q2:** Some evaluation criteria are not thoroughly explored.
We sincerely appreciate your valuable suggestions regarding the evaluation criteria. We address your concerns through the following systematic investigations:
1. **Hyperparameter Sensitivity**: we have now conducted the ablation study for the inference step parameter $T$ and the guidance strength $\alpha$ on the PepMD test set, which are shown in [Table 3](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ablation_T_alpha_pepmd.png). From the table, we can summarize two key observations:
- The validity metric improves as $\alpha$ increases, while other metrics generally exhibit deteriorating trends. This suggests that the force guidance kernel enhances the comprehension of physical priors, but may constrain the exploration of the state space to some extent.
- As $T$ increases, most metrics show degrading trends. This is likely because excessive discretization of SDE leads to greater error accumulation.
2. **Error accumulation over long rollouts**: we have performed long-timescale simulations (100,000 rollouts) on a well-studied system, alanine-dipeptide, where the detailed setup is elaborated in our response to **Q1,** **Reviewer KWXR**. The stability of UniSim in long-timescale simulations has been verified by the alignment of RMSD relative to the initial state ([Figure 2](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/ad_rmsd_over_time.png)).
- **Q3:** The lack of rigorous statistical analysis.
We have now updated the statistical analysis for protein validity metrics ([Table 4](https://anonymous.4open.science/r/uni-sim-3BBB/rebuttal/statistical_analysis.png)), which provides rigorous statistical validation of our performance claims.
- **Q4**: a brief discussion of methods that integrate reinforcement learning/enhanced sampling/adaptive sampling.
In response to your valuable feedback, we will further enhance the Introduction section with a brief discussion on those methods.
- **Q5**: Regarding the choice of the coarse timestep $\tau$.
Thanks for the insightful question. We provide the following responses:
1. First, we did not conduct ablation experiments on the coarsened timestep $\tau$. That's because selecting different $\tau$ values would require retraining the model on entirely new datasets, which is not worthwhile.
2. Second, we explain our methodology for selecting $\tau$ across different datasets. Following Timewarp [1], where $\tau$ was set to 500 ps, we opted for values of $\tau$ of the same order of magnitude. We then selected an appropriate $\tau$ to ensure the variance of the distance between training data pairs was close to 1, thereby avoiding numerical instability during training.
3. Finally, we briefly discuss the impact of different $\tau$ on the performance.
1. If $\tau$ is extremely small, the conformations of $(x_0,x_1)$ would be very similar. While this may intuitively improve model accuracy, the sampling efficiency may become the bottleneck instead.
2. If $\tau$ falls within a reasonable range, increasing $\tau$ reduces the time correlation between $(x_0,x_1)$, making the learning task more challenging but probably enhances the exploration of the state space.
3. If $\tau$ exceeds a certain threshold, $x_0$ and $x_1$ can be considered as independent samples from the Boltzmann distribution, rendering the model incapable of learning dynamic transition features between states.
In conclusion, we believe that a moderate $\tau$ is a reasonable choice.
- **Q6**: Regarding the training Data Balance.
We employed several engineering tricks to ensure the pretraining dataset remains as balanced as possible:
- During training, we require molecules of each batch originate from the same dataset.
- We ensure that the total number of atoms per molecule in a batch is similar using dynamic batch size.
- We impose the same upper limit on the number of batches for each dataset in a single epoch.
This approach ensures that the total number of atoms contributed by each dataset during a single epoch remains approximately balanced. | null | null | null | null | null | null |
Large Language Diffusion Models | Reject | Summary: This work presents the large language diffusion model (LLaDA), a masked diffusion model that shows strong scalability, outperforming self-constructed autoregressive large language models. In particular, LLaDA achieves performance comparable to SOTA LLMs on in-context learning and instruction-following, and outperforms them on a reversal reasoning task. Furthermore, the paper showcases the chat capability of LLaDA, supporting multi-round dialogue, which was previously thought possible only for autoregressive models. While a few papers have studied the scalability of diffusion models, LLaDA is the first to show that diffusion language models can be comparable with autoregressive language models on multiple benchmarks while supporting multi-round dialogue.
Claims And Evidence: Yes, the claims on the scalability and other capabilities like in-context learning, and instruction following are supported by experimental results.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria including experiment design, baseline selection, and benchmark datasets and tasks are all appropriate and well-designed.
Theoretical Claims: While there are no propositions or theorems in this work, training and inference algorithms are sound.
Experimental Designs Or Analyses: Yes, the experimental designs and analyses are valid and appropriate.
Supplementary Material: Yes, I read the supplementary material that contains algorithms and additional details of training, inference, and experiments. In particular, there are sufficiently many examples of experimental results.
Relation To Broader Scientific Literature: This work demonstrates the strong scalability of the diffusion language models on diverse benchmarks compared with self-constructed autoregressive language models and SOTA LLMs. This aligns with the previous findings on the scalability of the diffusion language models, for example [Nie et al., 2024].
Nie et al., Scaling up masked diffusion models on text, ICLR 2025
Essential References Not Discussed: To the best of my knowledge, most of the relevant works on diffusion language models and their scalability are addressed in this work.
Other Strengths And Weaknesses: **Strength**
- Comprehensively demonstrated the scalability of diffusion models and their capabilities in in-context learning, instruction-following, and reversal reasoning on benchmark datasets.
- First to show the chat capability of the diffusion language model, and the multi-turn dialogue cases are very interesting.
**Weakness**
- While one could argue that the methods used in this work, including masked diffusion models and the pre-training & SFT pipeline, are widely studied, I find value in training the 8B-scale model with comprehensive comparisons against autoregressive models, including the self-constructed AR model. Further, the showcased chat capability is a promising direction for future research on diffusion language models.
- While one could also argue that LLaDA currently underperforms SOTA LLMs, this is not a fair comparison, as the training datasets for those models are not available and the training resources are highly incomparable.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: Is the inference time for LLaDA faster than LLaMA with the same number of parameters? I would appreciate an inference time comparison and analysis of it, for example, if it is faster, which component (e.g., parallel generation of tokens) makes it faster.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer 8p3t
We thank Reviewer 8p3t for the recognition of our contributions and the thoughtful comments. Below is our point-by-point response.
## Q1: Contribution
Like most research, our work builds upon prior studies. We sincerely appreciate your recognition of our unique contributions. If you have any further questions or concerns, we would be happy to address them.
## Q2: Comparison with SOTA LLMs
We appreciate your observation that the training data and compute used in our work are not comparable to those of SOTA LLMs. This is indeed a primary reason why LLaDA lags behind SOTA models in overall performance. We will further scale diffusion language models to better explore their full potential in future work.
## Q3: Efficiency
We include an inference time analysis showing that **LLaDA enables a trade-off between generation quality and inference efficiency, which stems from its ability to generate multiple tokens in parallel.**
We evaluate three representative benchmarks on 8 A100-80G GPUs: Math (mathematics), HumanEval (code), and MMLU (general). To highlight the efficiency potential of LLaDA, we adopt shorter generation lengths—specifically, 1 for MMLU, 128 for Math, and 256 for HumanEval. We compare LLaMA3 with and without KV-Cache, while LLaDA operates without any inference optimization techniques. Both LLaDA and LLaMA3 have 8B parameters. In the table, the numbers in parentheses indicate the number of sampling steps. **Overall, when inference time is comparable, LLaDA achieves performance similar to LLaMA3 with KV-Cache. Notably, on the Math benchmark, LLaDA even outperforms LLaMA3 with KV-Cache while using less inference time.**
In addition, recent studies [1, 2, 3] have shown that distilling MDMs can greatly accelerate both text and image generation, which holds potential for improving LLaDA. We also plan to explore inference optimization techniques similar to KV-Cache to further enhance efficiency.
||LLaDA-Base(32)|LLaDA-Base(64)|LLaDA-Base(128)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-|-|-
Time(min)|31|61|122|79|307
Math|12.6|18.9|22.7|15.1|15.1
||LLaDA-Base(64)|LLaDA-Base(128)|LLaDA-Base(256)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-|-|-
Time(s)|56|110|220|342|354
HumanEval|12.8|23.8|31.7|34.2|34.2
| |LLaDA-Instruct(1)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-
Time(s)|231|235|334
MMLU|64.5|68.4|68.4
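The parallel generation underlying this quality-efficiency trade-off can be illustrated with a generic low-confidence remasking sampler. This is a sketch, not LLaDA's exact sampler; `predict_logits` stands in for the mask predictor and the mask id is a hypothetical sentinel:

```python
import numpy as np

def unmask_sample(predict_logits, length, n_steps, mask_id=-1, rng=None):
    """All positions start masked; each step commits roughly 1/n_steps of
    the remaining positions (the most confident ones), so several tokens
    are produced per forward pass, unlike one-at-a-time AR decoding."""
    rng = rng or np.random.default_rng(0)
    x = np.full(length, mask_id)
    per_step = int(np.ceil(length / n_steps))
    for _ in range(n_steps):
        masked = np.flatnonzero(x == mask_id)
        if masked.size == 0:
            break
        logits = predict_logits(x)                     # (length, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        conf = probs.max(-1)                           # per-position confidence
        pick = masked[np.argsort(-conf[masked])[:per_step]]
        x[pick] = probs[pick].argmax(-1)               # commit confident tokens
    return x
```

Fewer steps means fewer forward passes (faster) at the cost of committing more tokens per pass, which matches the step-count/quality trade-off reported in the tables above.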
[1] Hayakawa et al. Distillation of Discrete Diffusion through Dimensional Correlations.
[2] Zhu et al. DiMO: Distilling Masked Diffusion Models into One-step Generator.
[3] Xu et al. Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation.
If you have any more questions, we are happy to discuss them and will do our best to address them. | Summary: This paper focuses on the architecture of large language models (LLMs) and discusses the effectiveness of non-autoregressive training for large models. Inspired by the approach of masked language models, it designs a Masked Diffusion Language Model and proposes a diffusion model-based generative architecture with a mask predictor. The paper also explores autoregressive model architectures like LLaMA and presents a nearly identical large model training pipeline (pre-training, SFT). Through extensive experiments comparing with different AR LLMs, some pioneering conclusions are drawn. Additionally, scaling experiments demonstrate the advantages of non-autoregressive architectures in training. Finally, the paper explores some challenges encountered with the Masked Diffusion Language Model through extensive experimentation.
## Update after rebuttal
Based on the review comments from all reviewers and the author's rebuttal, I believe the paper still has shortcomings. I agree with reviewer TqxC that the writing needs improvement due to an overstatement of the work. Moreover, the masked diffusion method primarily builds on existing work, lacking sufficient innovation and formula derivation. The paper primarily relies on GPU-intensive experiments for scaling, yet it does not achieve the depth expected of a technical report.
**Reasons for Acceptance**: As an experimental report on scaling, this paper indeed resolves many underlying issues and provides useful conclusions. From the perspective of community contribution, the insights generated are valuable and could justify acceptance.
**Reasons for Rejection:** As a framework for a new method, the paper falls short in efficiency and comparative experiments. The writing style does indeed exaggerate the impact, necessitating revisions.
Claims And Evidence: Yes, the paper proposes that non-autoregressive architectures can train LLMs and have advantages in inference generation and scaling.
Methods And Evaluation Criteria: The dataset and metrics employed in this study are compliant, with a discussion on both general generative tasks and those requiring stronger reasoning capabilities. Experiments were conducted to explore the potential of masked diffusion language models. The proposed method also demonstrates that autoregressive models are not the sole approach to training large language models (LLMs), offering a new direction for training large models and uncovering their potential.
Theoretical Claims: This paper is an experimental study; the proposed masked diffusion language model is theoretically supported, and its formulation is relatively simple and complete.
Experimental Designs Or Analyses: The scaling experiments seem to be conducted on models that are not sufficiently large. Observing the scaling effects on even larger models (if feasible) would provide more convincing evidence of the model's scalability. During the sampling process, the authors introduced a semi-autoregressive remasking strategy, which resulted in a noticeable decline in performance on the Instruct version of LLaDA. This aspect lacks a thorough and reasonable analysis.
Supplementary Material: I have reviewed everything in it.
Relation To Broader Scientific Literature: The authors have summarized the applications of discrete and continuous diffusion models in text generation, and the proposed scaling and training of an LLM from scratch are deemed reasonable.
Essential References Not Discussed: Currently, there is none, considering that this is the first non-autoregressive large language model and there are no non-autoregressive baselines for comparison. It is hoped that the citation section will provide a more detailed description of how existing masked diffusion language models (non-LLM) are modeled, or summarize some general modeling formulas.
Other Strengths And Weaknesses: Strengths:
1. This article is the first to conduct extensive experiments to establish a non-autoregressive large language model based on diffusion, comparing its scaling advantages with existing autoregressive LLMs. It demonstrates that masked diffusion language models can scale better within a certain range, showing potential.
2. Through experiments comparing the effects of pre-training and SFT against different autoregressive LLMs, and by using a mask predictor as the denoising network, the paper presents an alternative generative approach. The experiments also highlight the model's advantages on some reasoning tasks.
3. This paper represents a pioneering effort in training a non-autoregressive LLM, with its experimental design and analysis of results offering groundbreaking significance and research insights. It introduces a "new" alternative for the training of large models.
Weaknesses:
1. The experimental results, on the whole, do not significantly outperform the existing AR-LLM baselines. Although there are advantages in reasoning capabilities, it is still insufficient to claim that the proposed method is an "excellent" architecture for building LLMs. It can only be considered as one of the "effective" modeling and training options.
2. The length of the generated text is still too short and of fixed length, making it difficult to demonstrate an advantage over existing autoregressive LLMs in handling long texts. Alternatively, the potential of the proposed method could be illustrated by balancing parallel reasoning performance and generation length.
Other Comments Or Suggestions: I have no comments and suggestions for the authors.
Questions For Authors: 1. Comparison of Inference Efficiency. Firstly, could you supplement the comparison with the inference speed of autoregressive LLMs? Since diffusion can perform parallel inference without the need for KV cache, they should theoretically be more efficient. Secondly, since the number of sampling steps in the appendix is also a hyperparameter, I would like to ask whether the proposed sampling method is compatible with algorithms designed to optimize and accelerate sampling for diffusion models, as well as reduce the number of sampling steps?
2. Performance Comparison of Non-LLMs. Could you provide the performance gap between LLaDA and previously proposed discrete or continuous diffusion LMs under the same parameter? Through comparison, it can be shown that the LLADA architecture has advantages over other non-autoregressive models, rather than previous works being able to achieve more effective scaling than LLADA.
3. Effectiveness of the Low-confidence Remask Strategy. The significant drop in LLaDA-Instruct performance in Table 6 seems somewhat unreasonable. I speculate that this strategy might disrupt the continuity of the sampling process (in other words, the diffusion path), for instance by increasing the difficulty of denoising for the mask predictor. Directly applying this strategy to the SFT model may not necessarily be effective, and I hope that some theoretical evidence can be provided to demonstrate that the Low-confidence Remasking Strategy can be effectively applied to LLaDA. Moreover, since the remask strategy is used every time during text generation sampling, I wonder what impact this has on generation speed.
4. Discussion on Semi-Autoregressive and Block Experiments. Semi-autoregressive methods are only used in the remask strategy; could blocks be input to the mask predictor for generation? Additionally, due to the limitations on generation length, the size and number of blocks also require more experiments to substantiate.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: # Responses to reviewer wFKs
We thank Reviewer wFKs for the recognition of our contributions and the thoughtful comments. Below is our point-by-point response.
## Q1: AR baselines
We agree that the current results may still be insufficient to claim an "excellent" architecture for building LLMs, and we appreciate your acknowledgment that our method is one of the "effective" modeling and training options.
We believe that challenging the long-held assumption that LLMs must rely on autoregressive training is a meaningful contribution in itself. Our work is an early attempt to scale diffusion language models, with many design choices—such as data and architecture—borrowed from autoregressive settings. In future work, we aim to develop designs specifically tailored to diffusion models to further enhance their performance.
## Q2: Long texts
Since LLaDA is built upon the Transformer architecture, many existing techniques [1, 2] for long-context processing in Transformers are applicable to LLaDA, and [1] has already been integrated. We leave a more thorough exploration of LLaDA’s long-context capabilities to future work.
[1] Su et al. RoFormer: Enhanced Transformer with Rotary Position Embedding.
[2] Press et al. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation.
## Q3: Efficiency
Please refer to our response to Q1 from Reviewer TqxC for efficiency analysis. **We show that LLaDA enables a trade-off between generation quality and inference speed, and achieves performance comparable to LLaMA3 with KV-Cache when inference time is similar. Notably, on the Math benchmark, LLaDA even outperforms LLaMA3 with KV-Cache while using less inference time.**
Recent studies [3, 4] have demonstrated that distillation can significantly accelerate both text and image generation with discrete diffusion models. These approaches hold strong potential for improving LLaDA, and we leave their integration to future work.
[3] Hayakawa et al. Distillation of Discrete Diffusion through Dimensional Correlations.
[4] Zhu et al. DiMO: Distilling Masked Diffusion Models into One-step Generator.
## Q4: Comparison of Non-LLMs
LLaDA is built on RADD [5], one of the best-performing discrete diffusion models [6, 7], which has been shown to outperform continuous diffusion models at comparable parameter scales (~0.3B). Due to limited computational resources, we did not extend this comparison to larger model sizes.
However, these previous works use small models, lack key LLM capabilities such as in-context learning and instruction-following, and are evaluated only on perplexity rather than downstream tasks.
In contrast, our work is the first to scale diffusion language models to unprecedented 8B parameters, demonstrating better scalability than autoregressive models while supporting these core LLM abilities. As you noted, *“this is the first non-autoregressive large language model and there are no non-autoregressive baselines for comparison.”*
[5] Ou et al. Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data.
[6] Sahoo et al. Simple and Effective Masked Diffusion Language Models.
[7] Shi et al. Simplified and Generalized Masked Diffusion for Discrete Data.
## Q5: Lowest-confidence remasking
Applying lowest-confidence remasking directly to LLaDA-Instruct may lead the model to generate excessive |EOS| tokens, resulting in overly short answers. This is the primary cause of the observed performance degradation.
Lowest-confidence remasking is a heuristic strategy similar to the widely used annealed sampling in LLMs, which emphasizes high-probability tokens. During SFT, we pad |EOS| tokens within each mini-batch to align sequence lengths, increasing their frequency in the training data. As a result, the remasking strategy tends to select |EOS| more often. We plan to address this in future work by improving the SFT data processing pipeline.
Regarding generation speed, the remasking strategy has negligible impact on runtime efficiency. It introduces no additional neural network computation and only involves lightweight indexing operations with minimal overhead.
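As a rough illustration of why the overhead is small, one step of lowest-confidence remasking can be sketched as follows (a minimal sketch; the function and argument names are ours, not code from LLaDA):

```python
import numpy as np

def remask_step(logits, is_masked, n_keep):
    """One lowest-confidence remasking step (illustrative sketch).

    logits: (seq_len, vocab) array from the mask predictor.
    is_masked: boolean array marking currently masked positions.
    n_keep: number of masked positions to finalize this step.
    Returns the greedy token predictions and the updated mask.
    """
    # Softmax confidence of the greedy prediction at every position.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    tokens = probs.argmax(axis=-1)
    confidence = probs.max(axis=-1)

    # Only masked positions compete; unmasked ones are already fixed.
    confidence = np.where(is_masked, confidence, -np.inf)

    # Keep the n_keep most confident predictions; the rest stay masked.
    # This is pure sorting and indexing, with no extra network compute.
    keep = np.argsort(-confidence)[:n_keep]
    new_mask = is_masked.copy()
    new_mask[keep] = False
    return tokens, new_mask
```

Each sampling step still requires one forward pass of the predictor; the remasking itself is only the argsort and boolean indexing above.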
## Q6: Semi-AR
Thank you for your suggestion regarding feeding blocks into the mask predictor. This would require modifying the training procedure of LLaDA, and while we believe it could be beneficial for inference, we leave it as future work.
As shown in our response to Q5 from Reviewer OmMG, we conducted experiments with block lengths of 4, 8, 16, 32, and 64. The results are consistently robust across these configurations.
## Q7: Scaling to larger models
We agree that scaling to larger models is a valuable direction and we leave large-scale experiments for future work.
## Q8: Summarizing previous MDMs
We will include a summary of the general modeling formulations of previous MDMs in the revision.
If you have any more questions, we are happy to discuss them and will do our best to address them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. I have carefully read them, and they address the vast majority of my concerns. Given that my score is already the highest, and that it seems appropriate, I'm keeping it for now.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer wFKs for acknowledging our contributions since the beginning. We are glad that the vast majority of the concerns have been addressed.

---
Summary: The paper introduces LLaDA, a large language model based on diffusion models instead of autoregressive models (ARMs), which currently dominate large language modeling. The model is trained from scratch using a pre-training and supervised fine-tuning (SFT) paradigm with a mask diffusion loss, achieving competitive performance with strong LLMs like LLaMA3 8B. The authors show LLaDA's scalability, in-context learning, and instruction-following on various tasks.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I've checked all the equations in the paper and didn't notice any significant errors.
Experimental Designs Or Analyses: Yes. The benchmarks used to evaluate an LLM and the experimental analyses are reasonable.
Supplementary Material: No supplementary material is uploaded.
Relation To Broader Scientific Literature: Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs), while this paper tries to challenge this by using a diffusion modeling objective to train LLMs.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The paper introduces a distinct approach to large language modeling using diffusion models, which is a significant departure from the dominant autoregressive paradigm.
- The trained LLaDA with 8B parameter size demonstrates competitive performance with state-of-the-art LLMs like LLaMA3 8B, and excels in math, infilling, and reversal reasoning tasks particularly.
Weaknesses:
- The method part seems to mostly follow [1] and [2], which makes this work appear to merely scale up the model parameters without providing new methods or insights, though scaling alone is already a valuable contribution in some way.
- The text diffusion model seems to require significant computational resources during inference, which may limit its practical applicability compared to more efficient ARMs with KV cache.
[1] Shen Nie et al. Scaling up masked diffusion models on text. arXiv preprint arXiv:2410.18514, 2024.
[2] Subham Sekhar Sahoo et al. Simple and effective masked diffusion language models. arXiv preprint arXiv:2406.07524, 2024.
Other Comments Or Suggestions: No.
Questions For Authors: - I'm curious about how a diffusion model does code infilling as the middle length seems to need to be decided in advance per my understanding. Could the authors elaborate more on how HumanEval-FIM is evaluated in LLaDA?
- In Table 6, why the semi-autoregressive remasking is significantly effective on the instruct model but less effective for the base model?
- In Table 8, how do we decide the block length for each task in advance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: # Responses to Reviewer OmMG
We thank Reviewer OmMG for the recognition of our contributions and the thoughtful comments. Below is our point-by-point response.
## Q1: Contributions
Thank you for recognizing our efforts in scaling masked diffusion models to an unprecedented 8B scale. Like most prior work, our research builds on established foundations. We believe a key open question for masked diffusion models is whether they can scale comparably to autoregressive models while exhibiting core capabilities such as in-context learning and instruction following—abilities previously considered unique to autoregressive approaches. We view this line of investigation as equally important as algorithmic innovation.
## Q2: Efficiency
Please refer to our response to Q1 from Reviewer TqxC, where we provide a sampling efficiency comparison with LLaMA3. **We show that LLaDA enables a trade-off between generation quality and inference speed, and achieves performance comparable to LLaMA3 with KV-Cache when inference time is similar. Notably, on the Math benchmark, LLaDA even outperforms LLaMA3 with KV-Cache while using less inference time.**
While our work focuses on exploring the upper bound of masked diffusion models for language modeling, we believe there is still significant room to improve sampling efficiency through further optimization.
Recent studies [1, 2, 3] have demonstrated that distillation can significantly accelerate both text and image generation with masked diffusion models. In addition, [4] shows that masked diffusion models can leverage KV-Cache. These techniques hold potential for improving LLaDA.
[1] Hayakawa et al. Distillation of Discrete Diffusion through Dimensional Correlations.
[2] Xu et al. Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation.
[3] Zhu et al. DiMO: Distilling Masked Diffusion Models into One-step Generator.
[4] Arriola et al. Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models.
## Q3: Evaluate on HumanEval-FIM
Similar to autoregressive models, for HumanEval-FIM, we provide the prefix and suffix as the prompt and require the model to complete the missing middle part at the end of the sequence. We will include a detailed explanation in the revision.
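As a concrete illustration of this prompt layout (the sentinel strings below are placeholders we chose for illustration, not the actual tokens used in our evaluation):

```python
def build_fim_prompt(prefix, suffix, pre="<PRE>", suf="<SUF>", mid="<MID>"):
    """Fill-in-the-middle prompt sketch: the model is shown both
    context halves and generates the missing middle part at the end
    of the sequence."""
    return f"{pre}{prefix}{suf}{suffix}{mid}"
```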
## Q4: Semi-AR
As detailed in Appendix B.3, for LLaDA-Instruct, semi-autoregressive remasking helps prevent overly short outputs caused by padded |EOS| tokens in the SFT data. In contrast, the pre-training data used for LLaDA-Base does not include padded |EOS| tokens, so semi-autoregressive remasking is unnecessary.
Specifically, during SFT, we pad |EOS| tokens within each mini-batch to align sequence lengths. When using the lowest-confidence remasking strategy, which is similar to annealed sampling in LLMs and places greater weight on high-probability tokens, LLaDA-Instruct tends to overgenerate the |EOS| token due to its frequent occurrence in the SFT data. This often results in outputs that are too short.
Semi-autoregressive remasking mitigates this issue by prioritizing the prediction of content tokens in the earlier part of the answer, rather than allowing the model to prematurely focus on predicting the |EOS| token. This helps produce longer, more complete responses. In future work, we plan to address this issue by optimizing the SFT process.
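To make the block-wise procedure concrete, the outer loop of semi-autoregressive remasking can be sketched as follows (an illustrative sketch; `denoise_step` stands in for the mask predictor plus a remasking rule, and all names here are our own):

```python
MASK = -1  # placeholder id for a masked position

def semi_autoregressive_sample(denoise_step, prompt, gen_len, block_len):
    """Split the response into blocks and decode them left to right.

    Within each block, an ordinary diffusion-style unmasking loop runs
    until the block is fully decoded before sampling moves on.
    denoise_step(seq, lo, hi) fills in some masked positions of
    seq[lo:hi] and returns the updated sequence.
    """
    seq = list(prompt) + [MASK] * gen_len
    for lo in range(len(prompt), len(seq), block_len):
        hi = min(lo + block_len, len(seq))
        # Repeatedly denoise until no masks remain in this block.
        while any(tok == MASK for tok in seq[lo:hi]):
            seq = denoise_step(seq, lo, hi)
    return seq
```

Setting block_len equal to gen_len recovers pure diffusion sampling, while block_len of 1 degenerates to fully autoregressive decoding.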
## Q5: Block length
We can choose the block length from {4, 8, 16, 32, 64}, and the results remain highly robust across these settings. The corresponding GSM8K results for different block lengths are as follows:
| Block length | 4 | 8 | 16 | 32 | 64 |
|-|-|-|-|-|-|
| GSM8K results | 78.5 | 78.6 | 78.2 | 77.5 | 77.8 |
If you have any more questions, we are happy to discuss them and will do our best to address them.

---
Summary: This paper scales up discrete diffusion models to a larger regime than has been seen in prior work (8B params, 2.3T tokens) and additionally performs supervised instruction tuning. They present discrete diffusion models as an alternative to autoregressive language modeling. They evaluate their discrete diffusion model across natural language understanding, coding, and mathematics benchmarks and show strong results compared to autoregressive models at a similar scale. They also present unique benefits of discrete diffusion modeling, such as breaking the "reversal curse" observed with causal language models. For generation, they explore different decoding algorithms such as semi-autoregressive unmasking and lowest-confidence unmasking.
Claims And Evidence: I do not think the central claim that diffusion models are "a viable and promising alternative to ARMs" is well-supported by the presented evidence, primarily due to the lack of any inference-time analysis.
The paper shows that LLaDA achieves competitive performance on certain tasks, but the authors completely omit any analysis of generation time or inference efficiency. Diffusion models can naturally trade-off compute time for quality (the authors use between 64-1024 steps as shown in Figure 5). Understanding this trade-off, and where autoregressive models of a comparable scale fall on that tradeoff curve, is critical for evaluating ther primary claim.
The authors do demonstrate that their pure diffusion sampling (random remasking) achieves reasonable performance on some tasks. The pre-trained model achieves 52.3% accuracy on GSM8K with this approach. However, their generation results for their final model typically come from their "Lowest confidence & semi-autoregressive remasking" strategy, which incorporates autoregressive elements by proceeding left-to-right in blocks.
It's not clear whether diffusion models can match autoregressive models on these tasks without borrowing autoregressive elements. Given the framing in the introduction challenging autoregressive generation, I would expect pure diffusion generation to be evaluated comprehensively. They do present a full ablation on GSM8k, but not the other generation datasets.
For the NLU benchmarks, the authors apply classifier-free guidance for LLaDA but not autoregressive baselines. The same technique has been shown to be beneficial to autoregressive models as well [1]. Result should be reported without such techniques for both methods. If the guidance strength is swept over for the diffusion model, then the same should be done for the autoregressive model.
Because of these issues, the evidence falls short of supporting the claim that diffusion models represent a generally viable alternative to autoregressive approaches.
[1] Sanchez, Guillaume, et al. "Stay on Topic with Classifier-Free Guidance." International Conference on Machine Learning. PMLR, 2024.
Methods And Evaluation Criteria: The proposed method of scaling up masked diffusion models is reasonable for exploring alternatives to autoregressive language modeling. The authors build on existing masked diffusion techniques and successfully scale them to 8B parameters.
The evaluation criteria generally make sense, as the authors use standard benchmarks (MMLU, GSM8K, HumanEval, etc.) that allow for comparison with existing autoregressive LLMs. Their inclusion of reversal reasoning tasks is appropriate given the potential advantages of bidirectional modeling compared to autoregressive modeling.
However, there is no inference-time efficiency analysis in this work. For a paper proposing an alternative to autoregressive models, understanding the computational requirements during generation is critical. Without quantifying this tradeoff, the evaluation does not clearly present the tradeoffs between approaches.
Additionally, the ablation studies would have been stronger if they had compared pure diffusion approaches consistently across all tasks, rather than primarily for GSM8K.
While I recognize the value of comparing on various downstream tasks instead of focusing the evaluation on validation likelihood/perplexity, I think the inclusion of those results for the autoregressive and discrete diffusion models would be informative.
Theoretical Claims: The paper does not present any novel proofs.
Experimental Designs Or Analyses: The experimental design has several methodological issues that affect the validity of the comparisons:
1) The authors apply classifier-free guidance to improve LLaDA's performance but don't apply the technique to autoregressive baselines, creating an uneven comparison. Results without such techniques should be reported.
2) While the paper ablates their remasking strategies for GSM8K (Table 6), the results across generation tasks rely on their "Lowest confidence & semi-autoregressive remasking" strategy, which incorporates autoregressive elements. A more comprehensive comparison of pure diffusion approaches across all tasks would have strengthened the experimental design, especially given the claim that autoregression is not necessary for the capabilities of current LLMs.
3) For a study proposing an alternative to autoregressive models, the absence of generation time comparisons makes evaluating the viability of their model challenging.
Supplementary Material: I reviewed all of the supplementary material.
Relation To Broader Scientific Literature: This paper primarily scales up existing masked diffusion models (MDMs) to 8B parameters. The approach builds on prior work in discrete diffusion model (eg. Austin et al. (2021), Ou et al. (2024)).
The authors apply the established pre-training and supervised fine-tuning (SFT) paradigm that has become standard for autoregressive LLMs to discrete diffusion models.
The paper's main contribution is demonstrating that MDMs can scale effectively and achieve competitive performance with similar-sized autoregressive models on standard benchmarks. This extends earlier work by Nie et al. (2024), which explored the scaling behavior of masked diffusion at a much smaller scale.
They evaluate whether discrete diffusion perform better on reversal tasks which is related to prior work on the "reversal curse" in autoregressive models (Berglund et al. (2023)). They provide empirical evidence that bidirectional modeling may address this limitation.
Essential References Not Discussed: The discussion of related work is reasonable.
Other Strengths And Weaknesses: Strengths:
1. Successfully demonstrates that diffusion models can scale to 8B parameters and achieve competitive performance on standard language tasks
2. Shows potential advantages in bidirectional reasoning, particularly in reversal tasks
3. Provides a comprehensive empirical evaluation across multiple benchmark tasks
Weaknesses:
1. The paper uses a grandiose writing style that detracts from the scientific content. Literary quotes from William Blake ("What is now proved was once only imagined") and Albert Einstein ("In the middle of difficulty lies opportunity") are inappropriate for a scientific paper. Similarly, the use of ill-defined language such as "intelligence" harms clarity.
2. The paper places strange emphasis on their use of maximum likelihood training as though it represents a novel contribution. The authors simply train their model to maximize the likelihood of the data (or a lower bound on it). This is the approach used by most generative models, including recent discrete diffusion models and autoregressive LLMs. The discussion of "Fisher consistency" and connection to data compression is spurious.
3. Although this paper is primarily an engineering effort, the paper provides limited documentation of things such as the pre-training data composition, SFT data curation, etc. that would enable others to reproduce or build upon this work.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: 1. What is the inference-time of your method compared to the baseline autoregressive model? For instance, across the sampling step sweep you present on GSM8k in Figure 5.
2. What is the performance of your approach on the NLU benchmarks without the use of classifier-free guidance?
Code Of Conduct: Affirmed.
Overall Recommendation: 1

---
Rebuttal 1:
Rebuttal: # Responses to Reviewer TqxC
We thank Reviewer TqxC for the thoughtful comments. Below is our point-by-point response.
## Q1: Efficiency
We include an inference time analysis showing that LLaDA enables a trade-off between generation quality and inference efficiency.
We evaluate three representative benchmarks on 8 A100-80G GPUs: Math (mathematics), HumanEval (code), and MMLU (general). To highlight the efficiency potential of LLaDA, we adopt shorter generation lengths: 1 for MMLU, 128 for Math, and 256 for HumanEval. We compare LLaMA3 with and without KV-Cache, while LLaDA operates without any inference optimization techniques. In the table, the numbers in parentheses indicate the number of sampling steps. **Overall, when inference time is comparable, LLaDA achieves performance similar to LLaMA3 with KV-Cache. Notably, on the Math benchmark, LLaDA even outperforms LLaMA3 with KV-Cache while using less inference time.**
Recent studies [1] have shown that distilling MDMs can greatly accelerate both text and image generation, which holds potential for improving LLaDA. We also plan to explore inference optimization techniques similar to KV-Cache to further enhance efficiency.
||LLaDA-Base(32)|LLaDA-Base(64)|LLaDA-Base(128)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-|-|-
Time(min)|31|61|122|79|307
Math|12.6|18.9|22.7|15.1|15.1
||LLaDA-Base(64)|LLaDA-Base(128)|LLaDA-Base(256)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-|-|-
Time(s)|56|110|220|342|354
HumanEval|12.8|23.8|31.7|34.2|34.2
| |LLaDA-Instruct(1)|LLaMA3 w/ Cache|LLaMA3 w/o Cache
-|-|-|-
Time(s)|231|235|334
MMLU|64.5|68.4|68.4
[1] Hayakawa et al. Distillation of Discrete Diffusion through Dimensional Correlations.
## Q2: Semi-AR
We clarify the misunderstanding on *"their generation results for their final model typically come from their 'Lowest confidence & semi-autoregressive remasking' strategy"*. As detailed in Lines 1072-1087, **20 out of 24 tasks in Tables 1&2 used pure diffusion strategy without Semi-AR for likelihood evaluation and sampling.** Further, we add pure diffusion sampling results for the remaining four tasks:
||GSM8K|GPQA|HumanEval|MBPP
-|-|-|-|-
w/o Semi-ar|62.9|30.3|43.9|28.0
w/ Semi-ar|78.6|31.8|47.6|34.2
**Our main conclusions remain unchanged without Semi-AR.** The evaluation of LLaDA-Base involves no Semi-AR and achieves performance comparable to LLaMA3-Base. And the conclusion that LLaDA-Instruct slightly underperforms LLaMA3-Instruct (Line 266) also holds. This relative underperformance is attributed to the absence of alignment with RL, which we leave for future work.
We also compared pure diffusion and Semi-AR evaluation across all 24 tasks, and found that Semi-AR yielded improvements only on the above four tasks. Due to space constraints, both sets of results will be included in Tables 1&2 in the final version.
## Q3: CFG
As detailed in Lines 1081–1087, 18 out of the 24 benchmarks did not use CFG. We add the results without CFG for the remaining 6 benchmarks:
||ARC-C|Hellaswag|TruthfulQA|WinoGrande|GPQA|PIQA
-|-|-|-|-|-|-
LLaDA-Base w/ CFG|47.9|72.5|46.4|74.8|26.1|74.4
LLaDA-Base w/o CFG|45.9|70.5|46.1|74.8|25.2|73.6
After removing CFG, all performance drops are within 2 points. These changes do not affect our conclusion that LLaDA-Base is comparable to LLaMA3-Base. We will include both with and without CFG results in the revised version and provide a discussion of the reference you mentioned.
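For reference, the logit-space form of classifier-free guidance commonly used with diffusion models can be sketched as follows (a sketch of the general technique; the exact formulation used for LLaDA may differ):

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, w):
    """Classifier-free guidance, common logit-space form (sketch):
    extrapolate from the unconditional prediction toward the
    conditional one. w = 0 recovers the purely conditional model;
    larger w strengthens the dependence on the prompt."""
    return (1.0 + w) * cond_logits - w * uncond_logits
```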
## Q4: PPL
We add the zero-shot perplexity results:
||WikiText2|WikiText103|LAMBADA
-|-|-|-
LLaDA-Base|11.4|11.3|26.3
LLaMA3-Base|8.1|8.2|19.9
LLaMA2-Base|37.3|36.6|63.2
## Q5: Writing style
We appreciate your suggestions and we will carefully revise the paper to improve clarity.
## Q6: MLE
We clarify that **maximum likelihood estimation (MLE) is not our contribution but rather our motivation.** As stated in the introduction, we consider MLE a key factor in the success of LLMs, which motivates us to scale MDMs and develop LLaDA. Further, we will revise the discussion on data compression and Fisher consistency, and include their theoretical foundations below.
As detailed in the background of [1], MLE is equivalent to lossless data compression under Shannon's source coding theorem.
As detailed on page 39 of [2], Fisher consistency refers to an estimation method that recovers the true parameter when applied to the entire population and MLE satisfies Fisher consistency.
[1] Delétang et al. Language Modeling Is Compression.
[2] R. H. Norden. A Survey of Maximum Likelihood Estimation.
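For completeness, in our notation the masked diffusion training objective takes the following form (a sketch following the masked diffusion literature, with symbols defined below rather than copied verbatim from the paper):

```latex
\mathcal{L}(\theta) \;=\;
-\,\mathbb{E}_{t,\,x_0,\,x_t}\!\left[
  \frac{1}{t} \sum_{i=1}^{L}
  \mathbf{1}\!\left[x_t^{i} = \mathrm{M}\right]
  \log p_\theta\!\left(x_0^{i} \mid x_t\right)
\right]
```

Here t is uniform on [0, 1], x_t is obtained by independently masking each token of x_0 with probability t, M denotes the mask token, and L is the sequence length; minimizing this objective maximizes a lower bound on the expected log-likelihood of the data, which is the MLE connection referenced above.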
## Q7: Data
Our pre-training data is sourced from the public Common Crawl, comprising approximately 11% Chinese, 61% English, and 28% code. Our SFT dataset includes 1M open-source samples and 3.5M synthetic samples. We will provide details of our data filtering and synthesis pipeline in the revision, with a level of detail comparable to that of LLaMA3.
If you have any more questions, we are happy to discuss them and will do our best to address them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and the additional results.
Q1. Thank you for providing the timing comparisons. Given the centrality of inference efficiency to the motivation of discrete diffusion models, a comprehensive comparison, beyond what can fit in a rebuttal, would be needed to support the strong claims put forth in the paper about the competitiveness of discrete diffusion generation with autoregressive generation. When using the same number of "steps" as an autoregressive model, discrete diffusion models are much slower due to the lack of KV-caching (e.g. see [1, 2]). Therefore it is difficult to understand how LLaDA-Base(256) is faster than LLaMA3 w/ Cache given that LLaDA-Base(256) is using one timestep per token in the generation sequence. I understand that full details about the timing comparison likely did not fit in the rebuttal response. However, this is why presenting a detailed analysis in the original submission is critical.
The model as currently presented (i.e. employing bidirectional attention at every timestep) is inherently incompatible with KV caching. The use of such techniques could be enabled by explicitly training the model in a block-wise autoregressive fashion (as was done in past work [3]), but that is a significant departure from the current work.
[1] Sahoo et al. "Simple and effective masked diffusion language models." NeurIPS (2024).
[2] Zheng et al. "Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling." arXiv preprint (2024).
[3] Arriola et al. "Block diffusion: Interpolating between autoregressive and diffusion language models." arXiv preprint (2025).
Q2. When looking at the sampling configurations for the final LLaDA-Instruct model (Table 8), 3/4 open-ended generation tasks (meaning not multiple choice QA) employ semi-autoregressive generation with a block size significantly smaller than the overall sequence length. I am referring specifically to GSM8K, HumanEval, and MBPP. I acknowledge that the MATH benchmark is open-ended generation as well and does not employ semi-autoregressive generation.
The NLU benchmarks only require computing likelihoods and therefore do not involve actual generation. As a result, I think that my original statement was accurate. I recognize that numerous benchmarks that are not open-ended generation are also reported, but I think placing emphasis on the generation behavior of the language model is reasonable given that is how they are used in practice.
While I appreciate these additional results, they actually reinforce my original concern rather than addressing it. Your updated results do show meaningful performance drops from disabling semi-autoregressive generation (GSM8K: 78.6 -> 62.9, HumanEval: 47.6 -> 43.9, MBPP: 34.2 -> 28.0) which conflicts with the presented claim in the paper that the autoregressive formulation is not essential for the success of language models.
Q3. Thank you for providing the additional results. I acknowledge that the model performance is still reasonably strong without the use of CFG, although it did provide some benefit.
Q4. Perplexity is not directly comparable across models with different vocabularies. As a result, it's not clear that these results are meaningful.
Q6: As written, it is presented as something unique to this work. For instance, the introduction states (emphasis mine):
> This design enables LLaDA to construct a model distribution with bidirectional dependencies and optimize a lower bound of its log-likelihood, offering an **unexplored** and principled alternative to existing LLMs.
Again in the conclusion:
> We introduce LLaDA, a principled and **previously unexplored approach** to large language modeling based on diffusion models.
This is simply untrue. There is a significant body of work exploring exactly this class of generative models for language generation. The contribution of this work is in scaling up an existing approach, not introducing a fundamentally new paradigm.
Q7. I appreciate the additional details about the data curation process. For the engineering effort to be a meaningful contribution to the research community in itself, the level of detail and commitment to open-source by the OLMo [4] line of models would be a better benchmark. The level of detail presented by the Llama3 technical report is still relatively limited.
[4] OLMo: Accelerating the Science of Language Models (Groeneveld et al., ACL 2024)
In summary, I think that this is an impressive scaling effort of an interesting approach to language generation. However, the claims in the paper, as written, are very strong and the reported results do not support those claims. A rigorous analysis of inference-time efficiency is necessary and the claims about the unessential nature of autoregression should be softened as long as the strongest results require semi-autoregressive generation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. Although we **received it less than two days before the rebuttal deadline**, and the guidelines allow authors to skip the final response if insufficient time is given, we still did our best to address your comments as follows:
Q1. As detailed in Lines 103–105 (left) and 55–67 (right), we claim that LLaDA is comparable to large-scale (8B) ARMs in scalability and other performance metrics, with sufficient validation. We **disagree, with our highest respect**, with the implication that comparable performance implies comparable efficiency. We make no claims suggesting LLaDA outperforms ARMs in efficiency.
For completeness, our **original submission** already included a trade-off analysis between NFE and performance (Figure 5). In this rebuttal, we added an additional analysis on sampling time vs. performance **in response to your suggestion**. The works you cited also conducted similar comparisons with ARMs—specifically, Figure 2 in [1], Figure 9 in [2], and Table 7 in [3].
**We clarify that LLaMA3’s output length is not controlled due to its autoregressive nature.** On HumanEval and MMLU, its average output lengths are 433 and 2, compared to LLaDA’s 256 and 1, contributing to LLaDA’s faster performance. The references [1, 2, 3] you mentioned evaluate generation quality only using perplexity. We go a step further by using three downstream tasks as evaluation metrics.
In summary, **we believe that the inference efficiency of LLaDA has been thoroughly analyzed**, even though this does not affect our main conclusions. Besides, **we kindly ask that you not assume we are unable to address your concerns within the rebuttal period**.
Q2. We emphasize that **we follow existing LLMs (e.g. LLaMA series) to comprehensively evaluate LLaDA-Base/Instruct on common benchmarks** and our claim remains **the overall performance (not just open-ended generation) of LLaDA is competitive to ARMs**.
**As you stated in the rebuttal comment, your concerns refer specifically to open-ended generation tasks,** so we conducted a more in-depth investigation into the sampling strategy for such tasks. We set the probability of the |EOS| token to zero during LLaDA-Instruct's sampling. Note that in LLaDA-Instruct, |EOS| is used for padding purposes, while the actual end-of-sequence is indicated by a different token |EOS-ids|. Please refer to our response to Q4 from Reviewer OmMG for the motivation. **This adjustment shows the potential of pure diffusion sampling, outperforming semi-AR on three of four open-ended generation tasks.**
||GSM8K|HumanEval|MBPP|Math
-|-|-|-|-
w/o Semi-AR|66.0|49.4|37.6|26.6
w/ Semi-AR|78.6|47.6|34.2|23.8
For the remaining 20 tasks, pure diffusion is competitive with semi-AR as follows:
LLaDA-Base
||MMLU|BBH|ARC-C|HellaSwag|TruthfulQA|WinoGrande|PIQA|GSM8K|Math|GPQA|HumanEval|HumanEval-FIM|MBPP|CMMLU|C-Eval
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
w/o Semi-AR|**65.9**|**49.8**|45.9|**70.5**|**46.1**|**74.8**|73.6|**70.7**|**27.3**|**25.2**|**33.5**|**73.8**|38.2|**69.9**|**70.5**|
w/ Semi-AR|**65.9**|48.6|**46.2**|45.2|45.2|74.3|**76.3**|70.6|27.2|24.1|33.1|73.7|**39.0**|**69.9**|**70.5**|
LLaDA-Instruct
||MMLU|MMLU-pro|Hellaswag|ARC-C|GPQA
-|-|-|-|-|-
w/o Semi-AR|**65.5**|**37.0**|**74.6**|**88.5**|30.3
w/ Semi-AR|65.4|34.8|73.3|85.4|**31.8**
**We emphasize that these experiments are conducted only following your suggestions and do not change the claim of our submission.**
Q3. We're glad the CFG issue is resolved.
Q4. We added perplexity experiments for LLaDA and LLaMA, as your initial comment didn’t specify the experimental setup. Line 155 explains the vocabulary difference from LLaMA. If needed, we’re happy to compare with our ARM baseline under the same vocabulary in the final version.
Q6. We disagree with our highest respect. Our claim refers to the full statement: **a principled and previously unexplored approach to large language modeling**. Previous works have not explored this approach for **large language models.** We believe this aligns with your comment that *this is an impressive scaling effort of an interesting approach to language generation*. If you have any constructive suggestions on how to improve the wording, we will carefully consider them.
Q7. We will open source the data collection process, model weights, evaluation code, and core training code. **We believe our level of openness is no less than that of most leading LLM papers**, including representative works such as the LLaMA series.
We are confident that we have thoroughly addressed your concerns regarding semi-AR sampling and inference efficiency, as we did with your comments on CFG. In light of your assessment of our work as **impressive and interesting**, we earnestly ask you to reconsider whether a score of 1 is still warranted.
**Provided that it does not violate the conference policy, we intend to publicly release the full review history after the final decision.** | null | null | null | null | null | null |
The Role of Randomness in Stability | Accept (spotlight poster) | Summary: This paper studies the notion of stability, which is defined (through various definitions) as the probability that an algorithm gives the same result on two datasets sampled from the data distribution. More precisely, the authors aim at quantifying the amount of randomness needed to achieve stability, as randomness has been commonly seen as an important factor for stability.
The amount of randomness is considered in two settings (replicability and differential privacy) through two notions of "randomness complexity", namely certificate complexity and DP complexity, which have been introduced in prior works.
The two main results are to provide tight guarantees of these two notions of randomness complexity in terms of the globally-stable complexity, which quantifies the probability that a deterministic algorithm outputs replicable results.
The authors then apply their framework to PAC-learning, for which they characterise the certificate complexity and the sample complexity of an algorithm achieving at least $\frac1{2}$-replicability.
Claims And Evidence: The main claims are (i) tight bounds of certificate complexity and DP complexity in terms of globally-stable complexity and (ii) a characterisation of the certificate complexity of PAC learning in terms of the Littlestone dimension.
These claims are supported by proofs in the appendix and sketches of proofs are provided, which improve the readability of the paper.
However, several concepts such as Littlestone dimension or shared random bits are not defined formally in the paper (please correct me if I am wrong). As I am not an expert in this literature, it makes the statements look a bit informal and it is hard to assess their correctness.
Methods And Evaluation Criteria: The proposed proof strategies, based on boosting theorems for stability, make sense for the problem at hand.
Most result are rather abstract, the clarity could be greatly improved by providing an instantiation of the results on a more practical/explicit example.
Theoretical Claims: The authors provide in Section 4 an overview of the technical elements involved in their proofs. However, I did not check the proofs in the appendix carefully, as it is a bit far from my expertise.
I find it confusing that several concepts, such as Littlestone dimension, are not formally defined in the text. It makes it hard to check the results and to get intuition about them. Adding a definition in the appendix could improve the clarity for readers that are not used to these notions.
I had one general remark about the framework: The complexity notions are defined via statements of the form $\mathrm{Pr}(\mathcal{A}(S) = \mathcal{A}(S')) \geq \eta$. Does this kind of definition implicitly assume that $\mathcal{A}$ takes values in a finite or countable space? For instance, if the hypothesis class is $\mathbb{R}^d$ (eg, for stochastic optimizers such as SGD), then $\mathcal{A}$ could have a continuous PDF and the previous probability would always be 0. In general, I think the introduction of these notions could be made more formal (the notion of shared random bits is not clearly introduced in my opinion).
Minor remark: the notation $\mathcal{X}^\star$ is not defined.
At line 156 (second column), shouldn't the denominator be $\eta^3 \epsilon$?
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the details of the proofs in the appendix.
Relation To Broader Scientific Literature: This paper is strongly related to existing literature on stability, certificate complexity and DP complexity, which have all been relatively recently introduced.
Essential References Not Discussed: I think a comparison with the literature of algorithmic stability (eg, Bousquet & Elisseeff, 2002) could be beneficial, as it is also quantifying the similarity of the output of the algorithm on similar dataset. Algorithmic stability is a major tool in the modern study of generalization error in learning theory and it would be beneficial to discuss it quickly, if it is relevant.
Other Strengths And Weaknesses: The writing could be largely improved in my opinion, here are a few comments:
- the first two paragraphs of the abstract and of the introduction are almost copy past of each other.
- The abstract is very long (half a page), reducing it would improve clarity
- Some citations, especially in the abstract, are not written using the standard bibliographic tools in latex but instead written in plain text in a format that is not consistent with the other citations. It makes it look like the paper is not finished.
- some formulations are a bit informal, e.g., "taking a minute to explain", "we'd also like", "an astronomical number of"
- the paper has no conclusion
- the section "further related works" seems to repeat several things already discussed in the introduction, in general a small part of the paper is dedicated to the presentation of the main result. I believe it would improve the clarity to compress a bit the related works to include more technical background on the mathematical setup.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In the certificate complexity bound of theorem 2.4, which term is typically dominating in practice?
2. The results, while interesting, are presented in a quite abstract way, could it be possible to apply them on more explicit examples, like a specific classification setting for PAC-Learning?
3. Are there formal links between your stability notions and the algorithmic stability that is classically studied in learning theory.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our work, comments, and questions. Below we clarify a few possible misunderstandings about our definitions, explain our writing choices, and discuss several minor modifications we will make to address the reviewers concerns:
**Q1: Missing Definitions** We respectfully disagree that there are substantial missing definitions in the paper (outside of Littlestone dimension which we chose to exclude for reasons explained below, but will include in the next version).
The term “shared randomness” refers informally to the random string $r$ in the definition of replicability — it is not a standalone definition. Replicability promises that over independent samples $S$ and $S’$ and the *same random string* r:
$Pr_{r,S,S’}[A(S;r)=A(S’;r)] \geq 1-\rho$
$r$ is usually called the “shared randomness” since it is used in both runs of A. This is the standard way to express this notion in the literature — it is not usually defined on its own. There is, however, a confusing typo after this equation regarding the domain of $r$ for which we sincerely apologize. The sentence “the smallest $\ell$ s.t. there exists a better than $\frac{1}{2}$-replicable algorithm solving $\mathcal{M}$” should say “solving $\mathcal{M}$ *using $\ell$ random bits*”. We will fix this and clarify the domain in the text.
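As an illustrative aside (not part of the rebuttal), the role of the shared random string $r$ can be made concrete with the standard randomized-rounding routine from the replicability literature: $r$ fixes a random grid offset, and two runs on independent samples agree whenever both empirical means land in the same grid cell. A minimal Python sketch, with all names and the parameter `tau` our own choices:

```python
import random

def replicable_coin_estimate(sample, r, tau=0.1):
    """Estimate a coin's bias and snap it to a grid whose offset is
    derived from the shared random string r. With the same r, two runs
    on independent samples agree whenever both empirical means fall in
    the same grid cell, which happens with high probability once the
    sampling error is small relative to tau."""
    rng = random.Random(r)        # shared randomness: same r => same offset
    offset = rng.uniform(0, tau)  # random grid offset in [0, tau)
    mean = sum(sample) / len(sample)
    # round the empirical mean to the nearest grid point k*tau + offset
    return round((mean - offset) / tau) * tau + offset
```

Here the grid width `tau` trades accuracy for replicability: a coarser grid makes disagreement between runs rarer but the estimate less precise.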
*Regarding Littlestone:* the connection between Littlestone dimension and stability is very involved and largely orthogonal to our work. We do not use the definition in any of our proofs, so excluding it should not make any of our results harder to check. We use as a blackbox that any finite Littlestone class has a globally stable realizable learner (see Thm E.2). Our contribution thus has very little to do with Littlestone dimension, and we thought including the definition would only confuse readers as its connection to stability is fairly mysterious without deep knowledge of prior work.
This said, we absolutely see the reviewer’s counterpoint that stating the definition and (perhaps more importantly) giving concrete examples would help the unfamiliar reader understand what types of classes Thm 2.4 applies to. We will include these in the text (see Q2 below) as well as add references to prior work discussing the above in more detail.
**Q2: Concrete Examples** There are many nice examples of classes with finite Littlestone dimension to which Thm 2.4 applies. Two classical examples are affine subspaces in $\mathbb{R}^n$ (or more generally algebraic varieties), and half-spaces with margin (semialgebraic varieties with margin). Thm 2.4 gives the first randomness-efficient stable learners for both of these families. We will add discussion of this in the text.
**Q3: Thm 2.4, Dominating Term**. This depends on how much accuracy you want, but if one only wants (say) 99% accuracy over the data, the first term will always dominate (e.g. this term would be polynomial in the dimension of the affine subspace).
**Q4: Continuous Output Spaces** we do not make any assumptions about the output domain of $\mathcal{A}$. However, it is true that if the output distribution of $\mathcal{A}(\cdot)$ is continuous, then $\mathcal{A}$ is not globally stable for any $\eta>0$. The definitions we consider measure the performance of the *best* algorithm over the domain/output, so this is not an issue for working over uncountable and non-discrete domains.
**Q5: Relation to [BE02]** Replicability and Differential Privacy are only loosely related to the classic stability notions of [BE02], which are generally much weaker (e.g. these notions can be satisfied for any VC class, whereas DP and replicability cannot). It is true there are some connections: e.g. replicability implies generalization by a similar Hoeffding-style argument, and uniform stability [BE02] is related to a substantial relaxation of DP known as private prediction. To the best of our knowledge, however, there is no real formal comparison between [BE02] and the models we study, though the work is certainly worth mentioning as a seminal investigation into stability in learning.
**Other Writing Comments**: Our citation style in the abstract is a conscious choice ensuring all authors of the major relevant papers are listed (the ICML citation format does not seem to allow this directly). This is because theory papers have alphabetical author order and all authors should be credited equally. We also note it is relatively common for theory papers to have a “discussion/open-problem” section as we do rather than a traditional conclusion and respectfully disagree this is an indicator of poor writing.
This said, we do agree the abstract is on the long side for the ICML format and repetitive with the intro, so we will shorten it as the reviewer suggests and ensure the early definitions are as clear as possible.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to clarify several points.
I have now a better understanding of how the paper is written and I believe that the proposed changes will improve the clarity of your work.
For this reason, I will increase my score by one point.
Good luck! | Summary: This paper addresses the randomness complexity of an algorithm: how many random bits an algorithm requires to satisfy certain property. Two properties are studied: replicability and differential privacy. It is shown that the randomness complexities of these properties are closely connected to the global stability of the algorithm. The latter can be formulated as a property of a deterministic algorithm. Two main results follow from the boosting from deterministic globally-stable algorithms to replicable or private (necessarily) randomized algorithms.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I checked results in Appendix B in detail. Modulo clarity of expositions, the theoretical claims are correct.
Experimental Designs Or Analyses: Not applicable
Supplementary Material: I review Appendix B in detail, appendices C-D in less detail.
Relation To Broader Scientific Literature: This work contributes to the recent line of research on randomness complexity of stable algorithms. While prior results (e.g. Canonne et al., 2024) look at particular algorithmic tasks, this work provides a general equivalence between stability properties, such as global stability, replicability and differential privacy.
Essential References Not Discussed: Not to the best of my knowledge.
Other Strengths And Weaknesses: The paper is well-written and provides an important contribution by showing a sharp relation between fundamental algorithmic properties.
A weakness: Theorem 2.3 only considers the high-privacy regime ($\varepsilon = O(1 / \sqrt{n' \log n'})$).
Other Comments Or Suggestions: Line 774: should be $C_{\mathrm{Glob}} \leq \log \frac{1}{\eta}$
Questions For Authors: Please clarify the end of the proof of Theorem B.2. For example, what is $L$ in line 771?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our work and helpful comments. We will make the suggested minor fixes.
*Regarding the end of the proof of Thm B.2:*
The $L$ in line 771 is a typo and should be $L_D$, our apologies. The point here is to bound the measure of the set of samples $S$ on which $A_{\text{List}}$'s majority output is not in $L_D$; in particular, we want to show this bad set has measure at most $\frac{2\gamma}{\eta}$.
To see this, we observe that for any *fixed* sample S whose majority output is not in $L_D$, it must be the case that $A_{\text{List}}(S;\cdot)$, over the randomness of $r$, outputs a hypothesis *outside of* $L_D$ with probability at least $\eta/2$. But then if there is a $\frac{2\gamma}{\eta}$ measure of such samples S, the overall probability that $A_{\text{List}}$ outputs something outside of $L_D$ (now over the randomness of both S and r) is more than $\gamma$. This contradicts $A_{\text{List}}$’s list-replicability (the guarantee on Line 752) and completes the proof. | Summary: The authors study the randomness complexity of private and replicable algorithms, showing that this complexity is tightly related to the best global stability parameters of an algorithm for the same task. Letting $\eta_M$ be the best achievable replication probability for a deterministic algorithm solving problem $M$, they define the globally-stable complexity of a problem, $C_{Glob} = \log \tfrac{1}{\eta}$. They then show that the certificate complexity of $M$ is lower-bounded by $C_{Glob}$ and upper-bounded by $C_{Glob} + 1$. The similarly give upper and lower bounds on the DP complexity of a problem $M$ by a function of $C_{Glob}$ and privacy parameters $\varepsilon, \delta$. Finally, they show that the certificate complexity of agnostic PAC learning for classes with finite Littlestone dimension (ensuring they are replicably learnable) is $poly(d) + O(VC(H)\log(1/\alpha))$. Thus, the certificate complexity for agnostic PAC learning resembles that of realizable learning, up to O(VC(H)) factors.
Claims And Evidence: All claims are supported with proof.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not go carefully through the proofs in the appendices, but the explanation of proof approach is clear enough that they imply the stated results.
Experimental Designs Or Analyses: N/A
Supplementary Material: I read the formal theorem statements and skimmed the proofs. I did not read most of appendices A and B.
Relation To Broader Scientific Literature: The authors do a very thorough job of connecting their results to the existing literature, highlighting how their work extends prior work on DP complexity, certificate complexity, and answers open questions regarding agnostic learning raised in prior work.
Essential References Not Discussed: All essential references are discussed.
Other Strengths And Weaknesses: The paper was very well-written and a pleasure to read. The results represent a significant contribution to our understanding of the randomness complexity of stable algorithms.
One weakness (acknowledged by the authors) is the fairly significant tradeoff between sample and randomness efficiency in the results. The randomness complexity is likely to be a secondary or tertiary consideration in most applications, after sample and computational complexity, so understanding the randomness complexity of sample and/or computationally efficient stable algorithms seems like the kind of result we’d really love to have. This is of course a much more significant ask (particularly the computational efficiency piece).
Other Comments Or Suggestions: Page 3 - Double close parens in the first paragraph after Theorem 2.1
In the text preceding Theorem 2.3, I was a bit confused about what happened to the confidence parameter in moving from DP to user-level DP, until I got to the appendix.
Page 4 - “Not only is such a bound is possible”
Page 7 - “occurring probability”
Questions For Authors: No questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our work and helpful comments (we will fix the minor issues noted).
The trade-off between sample and randomness efficiency is a very interesting question, and is not well understood even for the most basic examples (see e.g. “Geometry of Rounding: Near Optimal Bounds and a New Neighborhood Sperner's Lemma” and “The Randomness Complexity of Differential Privacy”). Our transformations have an overhead scaling polynomially in the stability parameter, which can be efficient for low-dimensional problems but is indeed costly in settings like PAC learning. In the PAC case, this can be circumvented when one does not care about randomness by looking at certain variants of global stability, and it would be interesting to understand if there is an inherent trade-off here. However, given the substantial difficulty in resolving this problem even for, say, estimating the bias of d coins, we feel this is reasonably outside the scope of our work. | Summary: This paper studies a type of stability for algorithms and its connections to differential privacy (DP). It is argued that such stability requires randomization of the algorithm, and results are given for translating between different measures of complexity. The theoretical results are explained with intuition, and proof sketches are given in the main text with complete proofs in the appendices.
I am not familiar with this field, so my lower score mostly reflects a lack of confidence.
Claims And Evidence: Most claims are theoretical; see below.
Methods And Evaluation Criteria: N/A
Theoretical Claims: Overall, the results sound fairly reasonable to someone who isn't an expert. I did not check the correctness of the proofs in the appendices.
A few things stood out to me, however:
* The idea that randomness is required for stability is not self-evident, even after reading footnote 1. In many cases data are discrete (say inputs are binary strings) so I don't understand how an interpolation argument like that given here would apply.
* Thm 2.2 gives a result in terms of failure probability $\beta$ versus $\beta'$. Since the confidence takes the form of a probability $1-\beta$ (line 071), it would seem to me that your results could easily become vacuous if that grows exponentially as in Thm 2.2 (parts 1 & 2). In other words, the blowup in lack of confidence doesn't seem so mild to me.
Experimental Designs Or Analyses: The authors did not perform any numerical experiments.
Supplementary Material: I did not.
Relation To Broader Scientific Literature: I am not familiar with this literature, so I don't have much to add. However, it would be good for the authors to address other kinds of stability, e.g. that used in numerical analysis and its differences from the current form in the introduction.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The claim that "stability," in the colloquial sense, is important for algorithms seems self-evident. However, I have a philosophical quibble with the form of stability that the authors study here. My background is applied mathematics and I'm familiar with the type of stability studied in numerical analysis where the output of an algorithm is required to be close for close inputs. In the current work, the authors require the output to be the *same* for different inputs, which seems a very strong requirement. It would help convince people with backgrounds like mine if the authors would motivate better why they've chosen such a strong type of stability. In my opinion, requiring the exact same outputs for different datasets is more than anyone practically would want, but I'm willing to admit I could be wrong.
Other Comments Or Suggestions: * $\rho$-replicable definition: the distribution for $r$ is never stated
* line 084 col 2: $\ell$ is never defined
* line 088 col 2: "$\ell$-random" should be "$\ell$ random"
* explicit formulae for $C_\mathrm{Rep}$ and $C_\mathrm{Glob}$ should be given in the main text
* Thm 2.2: In item 2 (DP to stability) it seems like the prime on $\beta$ is swapped from where it should be; is this a typo?
* line 240 col 2: "tower-type dependence" this wasn't familiar to me, you might want to explain what you mean by this
* line 348 col 1: there is a typo with the first argument missing from $\mathcal{A}_\mathrm{Rep}(;r)$
Questions For Authors: * Can you better motivate the particular definition of stability for people who are not familiar with this area?
* Could you shorten the abstract? It is very long.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and questions and will make all suggested fixes
We hope the below clarifications (and positive impressions of the area expert reviewers) raises the reviewer’s confidence in our work, and respectfully ask they consider using the confidence score rather than a lower main score to reflect any remaining lack of confidence due to field familiarity.
**Q1:** “The idea that randomness is required for stability is not self-evident. I don't understand how an interpolation argument would apply.”
**A1:** Good catch. The footnote only formally makes sense for DP (where “similar” means trading out one example in your discrete data-set, so one can always interpolate by trading out points one by one). In replicability, the argument is similar but one interpolates in *parameter/distribution-space*, not over samples. We will clarify this in the text.
For example, consider estimating the bias of a coin. The data is discrete (coin flips), but the underlying set of distributions (biases $p \in [0,1]$) is continuous. Our interpolation is over p. A deterministic replicable algorithm has a single canonical output w.h.p for every $p \in [0,1]$, but cannot distinguish between close p and p’. Since we can interpolate between any $p,q \in [0,1]$, the algorithm must have the same canonical solution everywhere so is essentially constant.
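An informal rendering of this chaining step (our notation, constants suppressed, not taken from the paper):

```latex
% Informal sketch of the interpolation argument over bias p.
\[
  |p - p'| \lesssim \frac{1}{\sqrt{n}}
  \;\Longrightarrow\;
  d_{\mathrm{TV}}\bigl(\mathrm{Bin}(n,p),\,\mathrm{Bin}(n,p')\bigr)
  \ \text{is small},
\]
% so a deterministic algorithm that outputs a canonical value c(p) with
% high probability under bias p must satisfy c(p) = c(p') for such close
% pairs. Chaining p = p_0, p_1, ..., p_k = q across [0,1] in steps of
% size O(1/sqrt(n)) then forces c(p) = c(q) for all p, q, i.e. the
% algorithm is essentially constant.
```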
**Q2:** “Thm 2.2 gives a result in terms of failure probability $\beta$ vs $\beta’$… the blowup in lack of confidence doesn't seem so mild to me”
**A2:** This type of blowup is considered mild in the literature because almost all statistical problems have complexity scaling in $\log(1/\beta)$ due to Chernoff. This means one does not really lose significant generality assuming the initial algorithm has small $\beta$. More generally, we can usually amplify a constant-success algorithm to $1-\beta$ success just by repeating it $\log(1/\beta)$ times on independent samples and taking the best output. We will clarify this in the text.
It is also worth remembering $C_{\text{glob}}$ is log scale, so, combined with the above, the blowup we incur is only, e.g., $\log\log(1/\eta)$ for $\eta$-stability, which is mild even when $\eta$ is exponential (as in PAC Learning).
**Q3:** “Can you better motivate the particular definition of stability for people who are not familiar with this area?”
**A3:** Absolutely. This critical question has been discussed extensively in prior work (e.g. in Impagliazzo et al.). We’ll summarize here and add discussion in the paper as well as on weaker stability notions. We’ll focus on replicability, since presumably DP (widely adopted in industry and hugely impactful in theory) does not need more motivation.
Replicability comes with a substantial list of advantages due to requiring truly equal outputs. Here is a representative sample:
1) Replicability is *closely related to other core notions of stability in computer science* (DP, adaptive data analysis, and more, see e.g. “Stability is Stable: Connections between Replicability, Privacy, and Adaptive Generalization” or “The Bayesian Stability Zoo”). Several open problems in DP (sample-efficient user-level DP, amplification of DP) were resolved through studying replicability.
2) Replicability is *testable*. It is not always easy to test whether an algorithm is stable (e.g. testing DP is known to be computationally hard). By requiring equivalence, replicability is trivially verifiable.
3) Replicability is *preserved under arbitrary post-processing*. One can apply any desired algorithm to the output of a replicable procedure, and the result will always remain replicable. This is an extremely useful property in algorithm design.
4) Replicability is *easily amplified*. Given an algorithm that is replicable with constant probability, there is a simple and computationally efficient procedure to amplify it to an arbitrarily high replication probability.
5) Replicability is *model independent*. Different statistical tasks naturally have different notions of weak stability and closeness, or, in the case of testing problems with binary outputs, may have *no* natural notion of weak stability (see “Replicability in High Dimensional Statistics” or “Replicable Uniformity Testing” for a discussion of replicability in these settings). Replicability gives a **unified definition** for studying stability for all statistical tasks.
It is also critical to understand when replicability is possible, and its overhead compared to weaker notions of stability. This has been the topic of a substantial number of works, and efficient replicable algorithms are known for many tasks (statistical queries, hypothesis testing, mean estimation, RL, learning large margin halfspaces…). This is not the main focus of our work, but we feel combined with the above more than justifies the study of replicable algorithms.
**Q4:** Could you shorten the abstract?
**A4:** We’ll shorten the abstract in the next version to be less repetitive with the intro.
---
Rebuttal Comment 1.1:
Comment: Re: Q1
Your argument makes more sense to me now, but if the only replicable algorithm for estimating coin bias is "essentially constant", then is that definition of stability useful? I think what you would want, in practice, is a definition that allows for slightly different estimates of the bias $p$ for slightly different input data. In any case, this kind of issue is what I think you should discuss more when motivating the definition (when you address Q3).
Reviewer 3UmF also had issues with the clarity of the random bit distributions (these need to be fully defined in the main text).
Also, I'd like to hear your responses to my "other comments" section to be sure I was interpreting things right.
Unfortunately, this year's review process for ICML doesn't allow for confidence scores, only a single score. If you can clarify how you'll address the above comments, I'll be willing to raise my score by 1 point.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply -- we ran out of space above (new ICML rebuttal limit), so we can finish responding to your comments here.
**Re Q3 and motivation:** We will definitely include a discussion of the advantages and disadvantages of replicability compared to other stability notions in the text. The notion suggested by the reviewer for the coin problem is natural and worth studying, but it turns out in this setting the stronger notion of replicability is not only possible, it's *asymptotically free!* We apologize if our previous explanation regarding the coin problem was confusing here — the interpolation argument is only against **deterministic** replicable algorithms. There is an elementary *randomized* (say) $\frac{3}{4}$-replicable algorithm for the coin problem running in time $O(\log(1/\beta)/\alpha^2)$ (for $\alpha$-accuracy and $\beta$-confidence), the same asymptotic cost as standard bias estimation. This was one of the initial motivations of Impagliazzo et al. [ILPS22] in defining the notion. As we mentioned before, efficient replicable algorithms are now known for a vastly larger class of problems than just bias estimation.
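To illustrate the role of shared randomness, here is a toy sketch (our illustration of the randomized-rounding idea behind such algorithms, not necessarily the exact construction of [ILPS22]): the shared random bits fix the offset of a rounding grid, and two executions on fresh samples then round their empirical estimates to the same canonical grid point with high probability.

```python
import random

def replicable_bias_estimate(samples, alpha, r):
    """Round the empirical bias onto a grid of spacing 2*alpha whose
    offset 2*alpha*r is fixed by the shared randomness r in [0, 1)."""
    p_hat = sum(samples) / len(samples)
    offset = 2 * alpha * r
    return offset + 2 * alpha * round((p_hat - offset) / (2 * alpha))

random.seed(0)
p, alpha, n = 0.37, 0.05, 20000
r = random.random()  # shared across runs; the coin flips are drawn fresh each run
run1 = replicable_bias_estimate([random.random() < p for _ in range(n)], alpha, r)
run2 = replicable_bias_estimate([random.random() < p for _ in range(n)], alpha, r)
# The two runs disagree only if their empirical means straddle a grid
# boundary, which happens with probability ~|p_hat1 - p_hat2| / (2*alpha).
```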
That said, in general there are interesting tradeoffs between weak notions of stability like the one the reviewer suggests and strong notions like replicability. For instance, if one wants very high probability stability or is handling very high dimensional data, weak stability will likely be computationally and statistically cheaper. On the other hand, for settings where the output of the algorithm may then be post-processed by a highly sensitive function, replicability will preserve stability while weaker notions will not. This is the reason weak notions typically don’t lead to (differential) privacy; it’s possible one can use these small variations to extract private user data. The type of stability one should choose is based on the circumstances (there are of course many more reasons to use one vs the other, e.g. those described in our previous response), and developing a better understanding of both weak and strong stability can help us decide when to use which notion.
**Response to “Other Comments” Section**:
— We thank the reviewer for catching our fairly embarrassing typo regarding $\ell$. The sentence should read “the smallest $\ell \in \mathbb{N}$ s.t. there exists a better than $\frac{1}{2}$-replicable alg solving $\mathcal{M}$ *using $\ell$ random bits*”. In other words, a replicable algorithm *whose randomness $r$ is drawn uniformly from* $\{0,1\}^{\ell}$. We will fix this in the text.
— *Regarding the domain of r.* We hope the above (at least partially) clarifies this. In the original definition of replicability, the authors are not careful about specifying the domain of the randomness (usually one just assumes access to an infinite stream of $\mathrm{Ber}(1/2)$-distributed bits the algorithm can query). In our setting (and that of Dixon et al.), the randomness is drawn uniformly from $\{0,1\}^{\ell}$, where $\ell \in \mathbb{N}$ may be taken arbitrarily large and can be viewed as a parameter of the algorithm. We will ensure this is formalized in the main text, along with the definitions of $C_{\text{glob}}$ and $C_{\text{rep}}$.
We remark that, technically, prior works even drew randomness from continuous domains such as the uniform distribution over $[0,1]$, but these methods can easily be discretized to work over $\{0,1\}^{\ell}$ for large enough $\ell$.
— The $\beta$ vs $\beta’$ in Thm 2.2 is a typo, thanks again. We’ll also fix the missing argument in 348 and typo at 088.
— “Tower type dependence” refers to iterated exponentials, i.e. of the rough form $T(n) = 2^{T(n-1)}$, we’ll add this. | null | null | null | null | null | null |
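For concreteness, a minimal sketch (ours) of a tower-type function, which iterates exponentiation and so blows up even for tiny arguments:

```python
def tower(n: int, base: int = 2) -> int:
    """T(0) = 1 and T(n) = base ** T(n - 1), i.e. iterated exponentiation."""
    t = 1
    for _ in range(n):
        t = base ** t
    return t

print([tower(n) for n in range(5)])  # [1, 2, 4, 16, 65536]
```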
Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts | Accept (poster) | Summary: This work provides an in-depth analysis of the ability of pretrained language models to generalize from specific facts to broader implications.
The authors focus on understanding the underlying mechanisms that allow pre-trained language models to make such generalizations after being finetuned on particular facts.
The paper introduces the concept of extractive structures, a novel framework that describes how different components within Transformer-based models, such as MLP layers and attention modules, work in concert to enable this generalization.
The authors suggest that extractive structures consist of three kinds of components: informative components, and upstream and downstream extractive components.
The paper presents two main predictions based on the extractive structures hypothesis: the data ordering effect and the weight grafting effect.
Empirical evidence supporting these predictions is provided through experiments conducted on various large-scale language models.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I check the design of extractive structures framework. I think it makes sense.
Experimental Designs Or Analyses: The experiments part has good soundness for supporting the predictions about data ordering and weight grafting.
Supplementary Material: No.
Relation To Broader Scientific Literature: This work contributes to the interpretability of the learning process and knowledge storage of neural language models.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the review! We're happy that you believe our extractive structures framework makes sense and that our experiments are sound! | Summary: This paper introduces extractive structures—model components that store, retrieve, and process facts—to explain how LMs generalize to implications of fine-tuned facts. The authors show these structures emerge during pretraining when models encounter implications of known facts. Experiments on multiple LMs confirm a data ordering effect (OCR fails if implications precede facts) and a weight grafting effect (extractive structures transfer to counterfactuals), offering insights into LM generalization and robustness.
Claims And Evidence: The claims made in the paper are well supported.
> Claim 1. The structures consist of *informative components* that store training facts as weight changes, and *upstream* and *downstream extractive* components that query and process the stored information to produce the correct implication.
The structures are intuitive. To operationalize this, the authors define scores to identify the roles of each LM module. The score visualizations support the proposed structure and align with findings from prior works analyzing pretrained models.
> Claim 2. Our technique reveals that fact learning occurs in both early and late layers, which enable different forms of generalization.
This claim is supported by layer-freezing ablation in Table 2, where freezing either early or late layers does not hurt fine-tuning fact performance, indicating these facts are distributed across the model. Freezing early layers impairs first-hop implications, while freezing later layers impairs second-hop implications, suggesting that facts in early layers as part of the first-hop informative components enable first-hop reasoning, and facts in later layers enable second-hop reasoning.
> Claim 3. We next study how extractive structures are learned during pretraining and propose a mechanism by which this occurs.
The extractive structures are hypothesized to emerge as models strategically generalize from facts to implications during pretraining, rather than memorizing both simultaneously. The paper supports this by designing synthetic pre-training settings, showing that the model's OCR ability is only non-trivial when facts precede their implications during pretraining.
Methods And Evaluation Criteria: I find the proposed evaluations well-designed and effective in demonstrating extractive structures as a plausible mechanism for how LMs generalize to implications of fine-tuned facts. See "Claims And Evidence" for details.
Theoretical Claims: I have not checked the proofs for the extractive scores in the Appendices in great detail, but the score definitions seem intuitively reasonable to me.
Experimental Designs Or Analyses: I find the experimental designs and analyses both sound and compelling. See "Claims And Evidence" for details.
Supplementary Material: I have scanned through all the supplementary material.
Relation To Broader Scientific Literature: Previous work on out-of-context reasoning shows inconsistent results—fine-tuned facts sometimes enable generalization, but not always.
This paper introduces extractive structures—model components that store, retrieve, and process facts—to explain why out-of-context reasoning *occurs*. It also analyzes how these structures are learned during fine-tuning, shedding light on why certain forms of generalization *may fail*, particularly when pretraining data lacks the necessary conditions for these structures to form.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written, easy to follow, and presents a clear, intuitive mechanism for how LMs generalize to implications of fine-tuned facts, backed by thorough experiments and analysis.
2. The proposed extractive structure provides a timely framework that reconciles inconsistencies in the OCR literature (see "Relation to Broader Scientific Literature").
3. The experimental design is novel and engaging, with results and visualizations that are clear, interpretable, and strongly support the claims.
Weakness:
I did not find major weakness in this work.
Other Comments Or Suggestions: This paper is well-written, but I have a few suggestions to improve clarity:
1. On line 24, changing "training" fact to "finetuning" fact would better emphasize that the fact in question is introduced during the fine-tuning stage.
2. In the setup paragraph of Section 6.1, it is initially unclear why dax represents facts and wugs represents implications until Appendix D.2 clarifies that there are more names (100) than animals (20). While this distinction may seem minor, moving some dataset design details to the main text could make the synthetic setting clearer from the outset.
Questions For Authors: 1. Evaluation Details: In Appendix B.1, you mention using the log-probability of the first continuation token for scoring. Could you clarify whether the mean rank is calculated based on this first token alone or on the entire continuation sequence?
2. Training Details: Out-of-context reasoning (OCR) can be sensitive to training hyperparameters. While you've discussed the impact of learning rates, have you also examined how the annealing stage affects OCR? A comparison between the final checkpoint and the last checkpoint before the final annealing stage could provide valuable insights.
3. Defining Early and Late Layers: In Table 4, layers 1–24 are categorized as early, and layers 25–32 as late. Could you elaborate on the criteria used to define these layers as early or late? Was this classification based on the visualizations in Figure 5 or another methodology?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your review! We are happy that you found our claims "well supported", our evaluations "well-designed and effective", and our experimental designs and analyses "both sound and compelling". We're also grateful for the writing suggestions!
> 1. Evaluation Details: In Appendix B.1, you mention using the log-probability of the first continuation token for scoring. Could you clarify whether the mean rank is calculated based on this first token alone or on the entire continuation sequence?
The mean rank is calculated based on the first token alone. Fortunately, in all of our datasets and all the tokenizers used in the models we studied (including those in the appendices), the first tokens are unique across the 20 options in each dataset. We'll clarify this in the paper.
> 2. Training Details: Out-of-context reasoning (OCR) can be sensitive to training hyperparameters. While you've discussed the impact of learning rates, have you also examined how the annealing stage affects OCR? A comparison between the final checkpoint and the last checkpoint before the final annealing stage could provide valuable insights.
This is a great question. We have indeed performed this experiment; the final checkpoint of OLMo-0424 generalizes slightly worse than the pre-anneal checkpoint on the first-hop and second-hop OCR tasks, and has similar results for the data ordering and grafting experiments. We think it is possible that annealing makes a difference, but without more systematic pretraining-scale experiments, or more details behind the other open-weight but not open-source models such as Llama and Gemini, it is hard to draw a strong conclusion. The figures are available at the following external links:
- [Evaluating first-hop and second-hop OCR](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/hop-olmo.pdf)
- [Data ordering](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/data-olmo.pdf)
- [Weight grafting](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/weight-olmo.pdf)
> 3. Defining Early and Late Layers: In Table 4, layers 1–24 are categorized as early, and layers 25–32 as late. Could you elaborate on the criteria used to define these layers as early or late? Was this classification based on the visualizations in Figure 5 or another methodology?
The classification was based on the informative scores in Fig. 5. Specifically we compared the informative scores of the first-hop MLPs and the informative scores of the second-hop MLPs. The reason why we classified based on the MLPs is that the scale of the informative scores for MLPs are much higher than those for attention heads. We'll update the paper to clarify this. | Summary: This paper studies the mechanisms of how language models perform two-hop out-of-context reasoning (OCR), where the model generalizes to implications of new facts acquired during fine-tuning that involve composing the new facts as the first or second hop with another known fact. A series of experiments is conducted, which consolidates prior findings that LMs usually perform the two hops serially in the lower/middle and upper/late layers, respectively. The authors also hypothesize that the mechanism is learned during pretraining when encountering implications of known facts, where several controlled experiments strengthen this hypothesis.
## update after rebuttal
The rebuttal helps complement the draft and addresses some of my concerns. I will maintain my original score, which leans positive overall.
Claims And Evidence: Yes. The claims are supported by clear evidence from various angles.
Methods And Evaluation Criteria: Most of the methods/evaluation criteria make sense. One metric that is somewhat unconventional to my knowledge is the rank of the probability that the model assigns to the ground-truth continuation among a set of options, introduced in Section 3, which also serves as the basis for the later metrics. I wonder why more standard metrics such as accuracy were not chosen.
Theoretical Claims: I don't think there are theoretical claims in this work.
Experimental Designs Or Analyses: The experimental designs and analyses are sound.
Supplementary Material: I briefly reviewed all the appendix sections. In particular, the list of people names and templates.
Relation To Broader Scientific Literature: The work is based on prior findings (cited in the paper) on latent multi-hop reasoning in transformer language models, especially that LMs usually perform the two hops sequentially within the forward pass. Overall, the main contributions w.r.t. the existing literature are to consolidate and extend these prior findings with a new set of techniques on comparing models before and after fine-tuning.
Essential References Not Discussed: There're no undiscussed essential references to my knowledge.
Other Strengths And Weaknesses: Even though there are many interventions on the model internals that measure the different components, etc., the overall conclusion is arguably still quite behavioral. It might be interesting to look closer at the specifics, especially the interface between the informative and extractive components and how they are implemented. This might help explain why the model generally seems to learn the right things during fine-tuning instead of just memorizing the facts in arbitrary ways.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. We're happy you find that our claims are "supported by clear evidence from various angles", and that our "experimental designs and analyses are sound". We're also grateful for your constructive feedback! We'll now discuss your concerns.
> One metric that is somewhat unconventional to my knowledge is the usage of the rank of the probability that the model assigns to the ground truth continuation among a set of options, introduced in Section 3, which also serves as the basis for the later metrics. I wonder why not choosing more standard metrics such as accuracy.
The main reason for using the mean rank instead of accuracy is that mean rank can measure partial progress more easily than accuracy. This allows us to measure improvements in LM performance throughout training. In contrast, accuracy gives a sparse, sharp signal only when the log-prob of the correct continuation exceeds every other continuation, and it has been shown to be misleading at times (Schaeffer et al., 2023).
We also believe that our use of mean rank is sound, particularly in this setting. First, note that we can reliably interpret a low mean rank as high accuracy, because a mean rank of 0 necessarily implies perfect accuracy, and vice versa.
Secondly, in our setting, the language model cannot possibly use any prior knowledge or reasoning to improve mean rank without actually learning the underlying facts. This is because our synthetic dataset is constructed by picking a label uniformly at random from a set of possible continuations. In contrast, standard multiple choice benchmarks have options that often have relationships between them, so that language models can improve their mean rank by eliminating impossible answers (e.g. if option A implies option B, then option A is impossible, since there can only be one correct answer). This particular problem has been found in the [TruthfulQA dataset](https://github.com/sylinrl/TruthfulQA). Because our continuations are randomly generated, there is no internal structure to bias the mean rank.
Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. "Are emergent abilities of large language models a mirage?." Advances in Neural Information Processing Systems 36 (2023): 55565-55581. | Summary: This paper focused on two-step implication process in LLMs, and proposes "extractive structures" to analyze which part of the LLM syblayers are the dominants of implication at different types of problems (2 types in the paper) with a method to highlight them based on the output probability. The paper also discusses how the implication behavior is obtained during fine-tuning LLMs, argued that (1) early layers encode implication in the early positions in the input, and late layers vice versa, and (2) learning implication should happen after learning facts.
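A minimal sketch (ours, with illustrative scores and indices) of the rank metric described above: the rank counts how many options the model scores strictly above the correct continuation, so 0 is best.

```python
def rank_of_correct(logprobs, correct_idx):
    """Number of options scored strictly above the correct continuation;
    rank 0 means the correct option is ranked first."""
    return sum(lp > logprobs[correct_idx] for lp in logprobs)

def mean_rank(batch):
    """batch: list of (logprobs_over_options, correct_index) pairs."""
    return sum(rank_of_correct(lps, i) for lps, i in batch) / len(batch)

batch = [([-1.2, -0.3, -2.0], 1),  # correct option ranked first -> rank 0
         ([-0.5, -0.1, -0.9], 2)]  # two options beat the correct one -> rank 2
print(mean_rank(batch))  # 1.0
```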
## update after rebuttal:
Thank you for your comments on my review! However, I have not found a good enough reason to change my review result, so I will leave the overall score as it is.
Claims And Evidence: The study focuses only on two-hop implication (fact A implies B, then B implies C); restricted to this setting, the claims in the paper are well supported. But the results are too weak to be generalized to more complex tasks (implications of three or more hops).
Methods And Evaluation Criteria: The proposed method is based on comparing output probabilities when replacing certain sets of weights/variables. This approach can highlight the effect of a specific sublayer in the LLM. In contrast, it does not examine the actual behavior of the parameter updates themselves.
Theoretical Claims: Although the paper contains much discussion, the method is basically simple: check differences in output probability/rank after tweaking some portion of the target LMs. The definitions of the extractive scores look okay, but they are somewhat arbitrary, and a different argument may call for a different formulation.
Experimental Designs Or Analyses: The experiments mainly focus on analyzing OLMo, and the results may be suitable for explaining how the OLMo model behaves. But it is somewhat questionable whether insights obtained from these experiments generalize to other models (as noted in Appendix I, which shows some counterexamples).
Supplementary Material: Appendix B involves actual calculation of extractive scores using derivation.
Appendix I shows several results on other models, showing several counterintuitive behaviors.
Relation To Broader Scientific Literature: The study reveals several process of how/where the model remembers information from training data, so it may impact against designing training strategy on continual pre-training and fine-tuning LLMs.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: Table 1: Senshoji should be Sensoji (if the authors intended it is a famous temple in Tokyo)
L.210: the the -> the
Questions For Authors: I would like to provide some comments on following questions:
(1) Can the proposed framework yield similar results on larger models (10B–100B) or smaller models (~1B)?
(2) When the implication chain is longer than two hops, what can be said based on these results?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the review! We're happy that you find that our claims are "well supported" for the two-hop setting. We're also grateful that you've pointed out areas to work on in terms of writing. We'll now address your questions and concerns.
> But it is somewhat questionable whether insights obtained from experiments can be generalized to other models
> (1) the proposed framework can yield similar results on larger models (10B ~ 100B) or smaller models ( ~1B)?
While we have shown that different models vary in terms of how well they can exhibit OCR and how sensitive they are to hyperparameters, we want to highlight that all the models we investigated __show qualitative patterns consistent with our main hypotheses__.
In addition, as suggested by the reviewer, we are running experiments on the Llama-3.2-1B and Gemma-2-27B. The results we have so far are consistent with the qualitative patterns exhibited by the 4 different 7B models we've studied. Specifically, we find that in every model we studied,
- The model exhibits OCR in the first-hop and second-hop datasets for some learning rates and training epochs ([Llama-3.2-1B](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/hop-llama_1b.pdf))
- For all learning rates and training epochs, the fact-first data order generalizes at least as well as the implications-first data order. Further, for some learning rates and training epochs, the fact-first data order generalizes significantly better. ([Llama-3.2-1B](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/data-llama_1b.pdf))
- For all learning rates and training epochs, the grafted model generalizes to counterfactual implications at least as well as the control model. Further, for some learning rates and training epochs, the grafted model generalizes significantly better than the control model. ([Llama-3.2-1B](https://pub-76146412ddd24d5d8276a5d3297e09b0.r2.dev/weight-llama_1b.pdf))
We are still in the middle of running the Gemma-2-27B experiments; we hope to provide an update in the next few days.
> (2) when the length of implication chain is increased more than 2, what can be said according to these results?
We believe that our extractive structures framework can be generalized to describe longer chains of latent reasoning, or even reasoning that requires consolidating information from many different training documents (Treutlein et al, 2024). For example, to deal with several hops we might generalize the simple [upstream] -> [informative] -> [downstream] mechanism to [upstream] -> [informative 1] -> [connector] -> [informative 2] -> [downstream], and adapt the causal metrics accordingly. We believe that extending the framework to these settings would be an exciting way of building on our present work.
Treutlein, Johannes, et al. "Connecting the dots: Llms can infer and verbalize latent structure from disparate training data." Advances in Neural Information Processing Systems 37 (2024): 140667-140730. | null | null | null | null | null | null |
Contradiction Retrieval via Contrastive Learning with Sparsity | Accept (poster) | Summary: The paper proposes a new method named SparseCL to leverage specially trained sentence embeddings designed to preserve subtle, contradictory nuances between sentences. SparseCL utilizes a combined metric of cosine similarity and a sparsity function to efficiently identify and retrieve documents that contradict a given query. Experiments are performed on the Arguana, MSMARCO, and HotpotQA datasets to demonstrate the effectiveness of the proposed method. The authors apply the proposed contradiction retrieval method to two downstream settings including corpus cleaning and natural language inference to highlight the practical benefits of the proposed approach in real-world scenarios.
Claims And Evidence: Yes. The claims are mostly well-supported by the experimental results.
Methods And Evaluation Criteria: Yes. The proposed method can improve the baseline without introducing much burden.
Theoretical Claims: Yes. Theoretical Claims are well supported.
Experimental Designs Or Analyses: Yes. The experimental designs are overall reasonably designed.
Supplementary Material: Yes. The supplementary material includes additional related work; additional ablation studies; two examples demonstrating the “non-transitivity” of Hoyer sparsity and the “transitivity” of the cosine function; an experimental comparison with the method from (Shi et al., 2023); a case study of counter-argument retrieval from the Arguana dataset; hyper-parameters for training and inference; an efficiency test of cross-encoder and vector calculation; and data generation details for the MSMARCO and HotpotQA experiments in Section 4.2.
Relation To Broader Scientific Literature: The proposed contradiction retrieval method may benefit areas like corpus cleaning and natural language inference.
Essential References Not Discussed: No. The references are well-cited and discussed.
Other Strengths And Weaknesses: Strengths:
- The paper is overall well-written and organized.
- Besides the cosine similarity, the paper proposes to use Hoyer sparsity as the metric for contradiction retrieval without complex pair-wise computation for reranking.
- The experimental validation is performed on three public datasets and demonstrates the effectiveness of the proposed method.
Weakness:
- The overall design is rather simple with a well-known sparsity metric.
- Besides the provided baselines (BGE, UAE, GTE), does the proposed method perform well on an LLM-based encoding method?
Other Comments Or Suggestions: - Figure 1 is not referenced.
- The caption of Figure 2 L196 and L199: left/right should be upper/bottom?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful comments provided for our work. Here are our responses to the proposed weaknesses and questions.
**W1: The overall design is rather simple with a well-known sparsity metric.**
As shown in Table 4, the idea of SparseCL is not limited to Hoyer sparsity but also works with many other sparsity functions. We believe that a simple idea with superior performance is actually an advantage.
**W2: Besides the provided baselines (BGE, UAE, GTE), does the proposed method perform well on an LLM-based encoding method?**
Thank you for pointing out this consideration! As of September 2024, the 'gte-large-v1.5' model, achieving an NDCG score of 72, performed best on the Arguana dataset among models with fewer than 1 billion parameters. Additionally, LLM-based embeddings were not effective on Arguana during that time; for instance, SFR-Embedding-Mistral scored 67 in NDCG, and Voyage-lite-02-instruct scored 70. We believe that the three models we tested, varying in size, backbone, and performance, sufficiently demonstrate the effectiveness of our SparseCL method.
**Typo**
Thank you for correcting the typos. We will fix them in the final version. | Summary: This paper addresses the task of contradiction retrieval—retrieving documents or passages that explicitly refute a given query. The authors introduce SPARSECL, a method that augments standard contrastive learning for sentence embeddings with a sparsity measure (specifically, the Hoyer measure) to capture subtle contradictory nuances. The proposed approach combines traditional cosine similarity with a sparsity-based score to more effectively identify contradictions. Extensive experiments are performed on the Arguana dataset as well as on synthetic contradiction datasets constructed from MSMARCO and HotpotQA, with additional applications in corpus cleaning and natural language inference.
Claims And Evidence: Key Claims:
1. SPARSECL can enhance contradiction retrieval over standard similarity-based methods.
2. Incorporating the Hoyer sparsity measure with cosine similarity yields measurable improvements (e.g., in NDCG@10) across multiple datasets.
3. The method generalizes across datasets and has practical downstream applications (e.g., corpus cleaning).
Evidence:
1. The paper presents quantitative improvements (e.g., up to 11.0% average gain in retrieval metrics) compared to baselines such as standard contrastive learning and prompt-augmented approaches.
2. Ablation studies and additional zero-shot tests are provided, reinforcing the claim that the sparsity measure contributes uniquely to capturing contradictions.
Methods And Evaluation Criteria: The authors fine-tune pre-trained sentence embedding models via a modified contrastive learning loss that rewards high Hoyer sparsity for contradiction pairs while using similar (non-contradictory) pairs as negatives. A combined scoring function—cosine similarity plus an α-weighted Hoyer score—is used at test time to rank candidate passages.
Retrieval performance is measured using NDCG@10 on Arguana, as well as on modified MSMARCO and HotpotQA datasets. Additional evaluations include ablation studies, zero-shot generalization tests, and experiments on corpus cleaning.
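The scoring described above can be made concrete with a minimal sketch, assuming the standard Hoyer (2004) sparsity measure and plain-Python vectors; the function names here are illustrative, not taken from the paper's code:

```python
import math

def hoyer(v):
    # Hoyer (2004) sparsity: 1.0 for a one-hot vector, 0.0 for a constant vector.
    n = len(v)
    l1 = sum(abs(x) for x in v)
    l2 = math.sqrt(sum(x * x for x in v))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

def score(q_emb, p_emb, alpha):
    # Combined test-time retrieval score: cosine similarity plus an
    # alpha-weighted Hoyer sparsity of the embedding difference.
    dot = sum(a * b for a, b in zip(q_emb, p_emb))
    nq = math.sqrt(sum(a * a for a in q_emb))
    np_ = math.sqrt(sum(b * b for b in p_emb))
    cos = dot / (nq * np_)
    diff = [a - b for a, b in zip(q_emb, p_emb)]
    return cos + alpha * hoyer(diff)
```

Candidate passages would then be ranked by `score(q, p, alpha)` in descending order, with α tuned on a validation set.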
Theoretical Claims: The paper argues that cosine similarity’s transitivity limits its ability to capture contradiction (since similar sentences may both contradict a third without contradicting each other).
In contrast, the Hoyer sparsity measure is shown (via propositions in the Appendix) to be non-transitive, allowing it to better represent the localized differences that signify contradiction.
The provided proofs (Propositions C.1 and C.2) appear sound under the stated assumptions, but the practical implications in high-dimensional settings might warrant further discussion.
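The transitivity intuition can be checked numerically with a small hedged illustration (not from the paper): when a is angularly close to b and b is close to c, a cannot be far from c, so a cosine-only score cannot let two near-identical passages both "contradict" b without also "contradicting" each other.

```python
import math

def cos_sim(a, b):
    # Plain cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Angles add at most: a~b and b~c highly similar forces a~c to stay high.
a, b, c = [1.0, 0.1], [1.0, 0.0], [1.0, -0.1]
```

Here `cos_sim(a, b)` and `cos_sim(b, c)` both exceed 0.99, which forces `cos_sim(a, c)` to remain high as well.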
Experimental Designs Or Analyses: Design Strengths:
Ablation studies dissect the contribution of each component (e.g., effect of adding paraphrases, alternative sparsity functions). Downstream applications (such as corpus cleaning) are tested, demonstrating the method’s utility beyond standard retrieval benchmarks.
Potential Issues:
Some experiments rely on synthetic data generated by GPT-4, which might not fully capture the complexities of naturally occurring contradictions. The evaluation on the Arguana dataset suggests that in some cases, even similarity-based methods perform reasonably well, potentially masking the unique contribution of the sparsity component.
Supplementary Material: Yes, I glanced over the supplementary material.
Relation To Broader Scientific Literature: The paper is well-situated within the literature on contrastive learning and sentence embeddings, and it draws appropriate connections to fact verification and counter-argument retrieval. Its focus on non-similarity-based retrieval differentiates it from prior work, although a deeper discussion comparing its approach with alternative methods for detecting contradictions (e.g., adversarial training methods) would further clarify its novelty.
Essential References Not Discussed: While the paper cites relevant work on counter-argument retrieval (e.g., Shi et al. (2023) on the Bipolar-encoder), it might benefit from referencing additional recent studies on robust retrieval or adversarial defenses in retrieval systems, which could provide context regarding alternative approaches to handling contradictory information.
A discussion on recent developments in fact-checking and misinformation detection could also broaden the context of its contributions.
Other Strengths And Weaknesses: Strengths:
1. Novel use of a sparsity measure (Hoyer) to capture contradiction nuances.
2. Comprehensive experimental validation across multiple datasets and applications.
3. Clear exposition of the limitations of standard cosine-based retrieval and the motivation for a new metric.
Weaknesses:
1. Dependence on synthetic data for some evaluations might limit external validity.
2. The sensitivity to the hyperparameter α and potential computational implications in high-dimensional spaces are not fully explored.
3. Some parts of the theoretical analysis could be more tightly integrated with empirical findings.
Other Comments Or Suggestions: Failure modes: Discuss potential limitations or failure cases in more detail, particularly concerning the generalization to varied contradiction types.
Questions For Authors: 1. Hyperparameter Sensitivity: How sensitive is the retrieval performance to the choice of the α parameter? Could you provide a more detailed analysis of its tuning across different datasets?
2. Data Generation Concerns: Given that some datasets rely on GPT-4–generated contradictions, how do you expect SPARSECL to perform on datasets containing naturally occurring contradictions?
3. High-Dimensional Effects: Can you elaborate on any potential issues that might arise from applying the Hoyer sparsity measure in high-dimensional embedding spaces, and whether alternative sparsity measures might offer complementary advantages?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful comments on our work. Here are our responses to the proposed weaknesses and questions.
W1 & W2 see below Q2 and Q1
**W3: Some parts of the theoretical analysis could be more tightly integrated with empirical findings.**
We are not claiming that we have theoretical analysis for characterizing the contradiction relationship. The two propositions in Appendix serve as motivating examples to show the fundamental limitation of the cosine metric in representing contradictions and how the sparsity metric can bypass that.
**Q1: hyperparameter sensitivity**
Here is how we tune the $\alpha$ parameter: we divide the range $[0, 10]$ into 10 intervals, calculate the NDCG@10 score on the validation set at each interval's midpoint, and then recurse into the best interval for a finer search, stopping when the interval width falls below 0.01. We will collect all the $\alpha$ values and investigate their sensitivity in the final version.
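The tuning procedure described above can be sketched as a recursive grid search; this is a hedged reconstruction, with `eval_ndcg` standing in for the validation NDCG@10 evaluation:

```python
def tune_alpha(eval_ndcg, lo=0.0, hi=10.0, tol=0.01, k=10):
    # Score the midpoint of each of k sub-intervals, then recurse into the
    # best sub-interval until its width falls below tol.
    while hi - lo >= tol:
        width = (hi - lo) / k
        mids = [lo + (i + 0.5) * width for i in range(k)]
        best = max(mids, key=eval_ndcg)
        lo, hi = best - width / 2, best + width / 2
    return (lo + hi) / 2
```

Each pass narrows the search range by a factor of k, so only a few rounds of validation-set evaluations are needed.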
**Q2: data generation concerns**
Please see our experimental results on Arguana, NLI datasets, where contradictions are collected from natural human-written sources.
**Q3: High-Dimensional Effects**
We believe that the current embeddings are already in a high-dimensional space (e.g. bge-base has dimensionality 768) and our experiments work well there. Please see our ablation study in Table 4 on different sparsity functions. All of them yield some improvement compared with the baseline, while our Hoyer sparsity yields the best performance. We believe that’s because simple sparsity functions have a more benign optimization landscape and thus are easier for models to optimize, which is worth future study. | Summary: This paper introduces SparseCL, a novel approach for contradiction retrieval using sparse-aware sentence embeddings. The method addresses limitations of traditional similarity search and cross-encoder methods by training sentence embeddings to preserve sparsity of differences between embeddings of contradicting sentences. SparseCL utilizes a combined metric of cosine similarity and Hoyer sparsity to score and retrieve documents that contradict a query, aiming for efficiency and improved contradiction detection.
## update after rebuttal
Responses did not really address the methodological questions I had (which is to be expected, since the issues go to the heart of the experiments presented)
It seems other reviewers didn't find the dataset *HotPotQA-synthetic-rephrase* being labelled *HotPotQA* throughout as much of a problem as I did. Clearly Figure 2 is impressive, though I feel that this also points to a separation between matching/contradicting passages that is too-good-to-be-true (and hence my skepticism about the datasets overall).
Rating remains "1: Reject"
Claims And Evidence: The embedding training separation shown in Figure 2 is very impressive - the training is clearly effective according to the metrics.
In L271, the authors state "This pattern of enhancement was consistently observed regardless of whether the embedding models were fine-tuned or not." This is a very puzzling statement, which prompted digging deeper.
Looking at the samples provided in Appendix H: It seems like many of the LLM-generated contradiction samples are 'simple contradictions' - including "no" and "not" far more than the straight paraphrases. Which in turn leads to a reconsideration of what the MSMARCO and HotpotQA dataset results actually represent. A plausible explanation for the experimental results is simply that the paraphrasing (positive or contradictory) preserves L2 similarity, while the sparse contradiction elements are just learning to recognise the signs of LLM-generated-contradiction. This seems like a fundamental experimental flaw.
The fact that SparseCL *also* shows improvements on the Arguana dataset, which is a human-curated dataset, provides some counter-evidence to this artifact-only argument. However, Arguana is also focused on counter-argument detection within debates, which might still exhibit specific textual patterns different from more general contradiction. Moreover, the Arguana results show less dramatic performance - which is what raised the artifact questions above in the first place.
Methods And Evaluation Criteria: The proposed SparseCL method, combining cosine similarity with a Hoyer sparsity measure on contrastively learned embeddings, is a sensible approach to address the limitations of purely similarity-based methods for contradiction retrieval. The use of contrastive learning with reversed positive/negative examples (contradictions as positive, paraphrases as negative) is also well-motivated for learning contradiction-aware embeddings.
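The reversed positive/negative setup might be written as an InfoNCE-style objective over Hoyer-sparsity logits; this is an assumed form for illustration, not the paper's exact loss:

```python
import math

def sparsity_contrastive_loss(hoyer_pos, hoyer_negs, tau=0.05):
    # InfoNCE-style loss: the contradiction pair's Hoyer sparsity supplies the
    # positive logit; paraphrase/random pairs supply the negative logits.
    # Minimizing this pushes contradiction differences toward high sparsity.
    logits = [hoyer_pos / tau] + [h / tau for h in hoyer_negs]
    m = max(logits)  # log-sum-exp with max subtraction for numerical stability
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - hoyer_pos / tau
```

The loss shrinks as the contradiction pair's sparsity grows relative to the paraphrase pairs', mirroring the separation shown in the paper's Figure 2.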
Theoretical Claims: The paper includes a theoretical analysis in Appendix C about the non-transitivity of Hoyer sparsity and the transitivity of cosine similarity. I have not rigorously checked the correctness of the proofs in Appendix C, but the intuition presented and the examples seem reasonable. The theoretical motivation for using Hoyer sparsity to circumvent the transitivity of cosine similarity is clearly presented and makes sense.
Experimental Designs Or Analyses: As noted in 'Claims and Evidence', the MSMARCO and HotpotQA experimental design and results are overshadowed by the synthetic data offering 'short-cuts' to detection.
Supplementary Material: I reviewed the supplementary material, specifically Appendices C, G and H.
Relation To Broader Scientific Literature: The paper is well-positioned within the broader scientific literature on information retrieval and semantic representation learning. It correctly identifies the limitations of traditional similarity search for non-similarity-based tasks like contradiction retrieval.
The novelty lies in the specific combination of contrastive learning with a sparsity-inducing objective, tailored for contradiction detection, and the use of Hoyer sparsity. The paper appropriately cites relevant works in these areas.
Essential References Not Discussed: Nothing specific.
Other Strengths And Weaknesses: ### Strengths:
- **Novel Approach:** The SparseCL method is a novel and well-motivated approach for contradiction retrieval, addressing a gap in existing methods.
- **Clear Presentation:** The paper is generally well-written and clearly presents the method, experiments, and results.
- **Efficiency:** The method offers a computationally efficient alternative to cross-encoders.
### Weaknesses:
- **Synthetic Dataset Reliance:** The heavy reliance on synthetically generated datasets (MSMARCO, HotpotQA) for evaluation is a significant weakness regarding generalizability and the potential for artifact exploitation.
- **Marginal Improvement on Arguana:** While improvements are shown on Arguana, they are less dramatic than on synthetic datasets, which could be interpreted as supporting the artifact-exploitation concern.
- **Need for Deeper Analysis:** The paper could benefit from a deeper analysis of *why* SparseCL works, particularly on the synthetic datasets, and more detailed error analysis.
Other Comments Or Suggestions: ### Typos / Fixes
* L043: Fix citation double bracketing
* L009: "while as far as we know" ... seems like the authors should be clear on this point
* L011: "non-simlarity" -> "non-similarity"
* L100: "In specific, they" -> "Specifically, they"
* L102: "This phenomenon bring our attention" -> "This phenomenon brought our attention"
* L105: "papers are different" -> "papers was different" (tense+number)
* L148: Fix ",."
* L157: "a score between [0, 1]" -> "a score in (0, 1)"
* L134: "so the authors of (Wachsmuth et al 2018) designed" -> "so Wachsmuth et al (2018) proposed"
* L154: Fix citation double bracketing
* L155: "fine-tune any pretrained" -> "fine-tune a pretrained"
* L421: Fix bracketing "selected from (Hurley...)" -> "selected from Hurley ... ()"
* L655: "(Shi et al., 2023) proposes" -> "Shi et al. (2023) proposes"
* Appendix C. "Two examples demonstrating the ..." -> "The ..." (these are proofs, not examples)
Questions For Authors: ### Synthetic Data Artifacts (Critical Question):
The MSMARCO and HotpotQA datasets are synthetically generated. One concern is that SparseCL might be exploiting artifacts from the LLM's negative paraphrasing rather than capturing generalizable contradiction. Could you please elaborate on how you have considered or mitigated this potential issue? Specifically:
1) Have you analyzed failure cases or error types to see if SparseCL is overly sensitive to specific LLM-generated patterns?
1) Could you discuss the generalizability of your findings to real-world contradiction scenarios, given the reliance on synthetic data?
1) Are there any experiments or analyses you could perform to further validate that SparseCL captures genuine contradiction beyond LLM-specific artifacts? For instance, could you compare performance on a subset of the synthetic data that is *manually* verified for contradiction quality, or explore datasets of contradictions not generated by LLMs?
*If your response clearly shows that I am mistaken about the potential issues with identifiable LLM 'smell' for the synthetic contradictions, then I would be happy to update my score*
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful comments provided for our work. Here are our responses to the proposed weaknesses and questions.
**L271 statement:**
This means that our SparseCL method can be combined with either a zero-shot embedding model or a fine-tuned embedding model, and we demonstrate that in both cases, the retrieval performance with SparseCL surpasses the performance without SparseCL. We will clarify this in our final version.
**Samples provided in Appendix H:**
Thanks for pointing out this observation. We did a simple check: among the 40 examples listed in Appendix H, 8 of the original passages contain the string “no/not/non” and 17 of the generated contradictions do. It is true that in some cases contradictions are formed by adding negation words, but it is also worth noticing that in more than half of the cases, contradictions are formed in other non-trivial ways, such as opposite words or phrase replacement. We believe this represents the essence of a well-defined contradiction rather than a flaw.
Second, it is still unclear how similarity-based retrieval methods could solve this task even if a certain style exists for generated contradictions, since we are looking for the opposite side of the query passage rather than passages sharing the same style.
**Concern about Arguana:**
We agree that textual patterns exist for contradictions from any specific area. The reason that our method shows less dramatic performance on Arguana is that there aren't enough paraphrases within the corpus to confound the similarity-based retrieval method, so the counterargument is likely still the most similar passage within the corpus. In our ablation study (agreed by Reviewer RTuC), we manually add more paraphrases to the Arguana corpus; we then observe a significant performance drop for the similarity-based retrieval method, while our method’s performance is unaffected. Please refer to Table 6.
**W1: Synthetic Dataset Reliance:**
We want to emphasize that, in addition to Arguana, our experiments on NLI tasks and data cleaning are conducted on real datasets. Please refer to Sections 4.4 and 4.5. For our data cleaning experiment, although the corrupted passages are generated by an LLM, they are similar enough to confuse current embedding models and cause a performance drop.
**W2: Marginal Improvement on Arguana:**
See above (Concern about Arguana)
**W3: Need for Deeper Analysis:**
Please refer to our ablation study, including sparsity function variants, data augmentation, and a case study in Appendix E to show its superiority against traditional cosine-based contrastive learning on the contradiction retrieval problem.
**Typo:**
We will fix all typos and citations for the camera ready version.
**Q1 Synthetic Data Artifacts (Critical Question):**
We have put in the prompt to encourage the generated content to be indistinguishable “in terms of length, language style, and format”. From the examples in Appendix H, we have manually checked and didn’t find any obvious language style difference between the paraphrases and contradictions. Given that it is hard to “prove” whether artifacts exist in generated passages, a manual check is the best we can do. We agree that certain keywords like “no/not/non” may be used more often in contradictions, but this is common practice for forming contradictions, and it also exists in human-generated contradictions such as Arguana.
**Q2: the generalizability of your findings to real-world contradiction scenarios, given the reliance on synthetic data:**
The motivation for us to generate contradiction pairs using LLM is the lack of natural contradiction pairs in real-world datasets besides Arguana and NLI datasets.
**Q3: further validate that SparseCL captures genuine contradiction beyond LLM-specific artifacts:**
We have sampled 500 passages from LLM generated contradictions and 2 people annotated the data to check the quality. Please refer to Appendix H. Therefore, we believe that the experiments run on our synthetic datasets are fair.
---
Rebuttal Comment 1.1:
Comment: So, I don't see that your rebuttal has addressed my main concern : The MSMARCO and HotPotQA datasets referred to throughout the paper should all be properly identified as *MSMARCO-synthetic-rephrase* and *HotPotQA-synthetic-rephrase* - these were your actual train/val/test sets. While "2 people annotated the data to check the quality" for these datasets, it could still easily be that an LLM could discover some kind of 'contradiction indicator' (my simple observation of 'no/not/non' was just the first one that leapt off the pages of your Appendix H). So, I remain suspicious of the results in Sections 4.2 and 4.3.
Similarly, Section 4.4 (Corpus Cleaning) could suffer from the same identification of short-cuts.
The Natural Language Inference experiments of Section 4.5 are not based on the synthetic data that seems problematic to me. However, the results in Table 5 are puzzling:
* I see no reason for the SparseCL lines to be bolded. Does having a lower score in each column have any significance?
* For an effective SparseCL conclusion, shouldn't the spread between the SparseCL scores in the Contradiction and Entailment be wide? They seem to be no wider apart than the scores for the other methods.
* I can see that there is definitely some effect taking place. On the other hand, doesn't the sparsity of the SparseCL embeddings cause the numbers in these rows to not be strictly comparable in any case?
Section 4.6 (the augmented Arguana ablation) seems to rely on the same synthetic data source. So, again, there are methodological issues here (IMHO).
My rating remains unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback.
**Response regarding the adapted MSMARCO and HotPotQA datasets:**
We will be happy to follow the reviewer's suggestion and use different names for these datasets throughout the paper. That said, given that we clearly explain how these datasets were constructed in the introduction, Section 4.2, Section 4.3, and elsewhere, we do not believe there is any confusion about how they were generated.
In addition, we welcome any constructive suggestions for other datasets on which we could evaluate our method.
**Response to more detailed issues raised:**
In our “data set construction” paragraph in section 4.2, after generating the paraphrases $\{x^+_1,x^+_2,x^+_3\}$ and contradictions $\{x^-_1,x^-_2,x^-_3\}$ for the original document $x$, we remove $x$ from the corpus. After that, only the generated content remains in the corpus for use in subsequent queries and query answers. This ensures that all documents used in our experiments share the same writing style—generated by the same language model, with the same style control instruction included in both prompts. Therefore, we believe there is no detectable “LLM indicator” that exists only within a subset of our corpus, aside from the natural stylistic differences between paraphrases and contradictions.
Regarding the inherent language style difference between a human-written text and a text written to contradict it, we conducted an experiment to verify that the word “not” appears more frequently in human-written contradictions. Please see the statistics below for the Arguana corpus.
We counted the number of times “not” appears at the beginning of each argument or counter-argument in the Arguana corpus:
Among the query arguments: “not” appears 614 times
Among the counter-arguments: “not” appears 1,274 times
We can see that even in human-written contradiction pairs, certain inherent language style differences are inevitable. Given that existing baseline methods do not perform well on Arguana even after fine-tuning, we believe that this style difference is an inherent aspect of contradiction retrieval tasks, rather than a flaw.
**Response regarding our NLI experiments:**
The goal of our method is to derive a score function that assigns the highest scores to contradiction pairs (as illustrated in Figure 2), and lower scores to other types of relations (such as paraphrases and random pairs). Based on this score function, the retrieval algorithm then searches for the maximal score within the dataset to identify contradictions for a given query.
That said, numerical comparisons across different rows are not the focus; what matters is whether the highest score within each row is assigned to the contradiction pair, and whether the gap between contradictions and the other two classes is sufficiently large.
From Table 5 and our analysis in Section 4.5, we observe that—qualitatively—only our method consistently assigns the highest scores to contradictions while maintaining a non-negligible margin over the other two classes. This is the reason we bolded the statistics for our method. In contrast, the other two methods either failed to distinguish between contradictions and paraphrases, or assigned the highest scores to paraphrases instead of contradictions. | Summary: * Introduces a contradiction retrieval method using cosine similarity and Hoyer measure of sparsity. The novel idea being using sparsity of embedding differences as a function to model contradiction.
* Discusses other approaches namely bi-encoders and cross-encoder models along with their shortcomings and addresses how SparseCL overcomes those.
* Shows robustness of the proposed fine tuning method on three pretrained sentence embedding models of different sizes across three datasets: Arguana, adapted HotpotQA and MSMARCO.
* Discusses imperfections of Arguana dataset, augments with paraphrases to make it harder and shows differential performance of methods as the number of paraphrases increase.
* Shows generalization capability of the sparse-aware embeddings by training on HotpotQA and testing on MSMARCO, and vice versa.
* Applies the method on two downstream applications of corpus cleaning and NLI.
Claims And Evidence: * Claims that transitivity of cosine function makes it inherently incapable of representing the contradiction relation, backs it up with proofs and shows an average improvement of 3.6% with sparsity aware embeddings.
* The method enhances the speed of contradiction detection, supported by an experiment showing 200 times speed up compared to cross encoder methods.
* Mentions “embeddings designed to preserve subtle, contradictory nuances between sentences” and “effectively captures the essence of contradiction”, but lacks qualitative evidence to support these claims.
Methods And Evaluation Criteria: * Given that cosine similarity is limiting, using the sparsity of the embedding difference on top of it is a good idea for retrieving contradictions, although it is not very clear whether Hoyer sparsity is able to model and distinguish between contradictory differences vs. orthogonal differences between two embeddings.
* Breadth of benchmarking datasets and data augmentations shows effectiveness of the approach.
* For the evaluation metric, most earlier work used Recall@1 while this paper uses NDCG@10; an explanation for this choice of metric would make the shift clearer.
Theoretical Claims: NA
Experimental Designs Or Analyses: Checked the soundness of Counter-argument Retrieval, Contradiction retrieval, Zero-shot Generalization Tests and Arguana retrieval results analysis.
Very thorough job explaining details, except in the contradiction retrieval task: it is not clear whether both Zero-shot (cosine) and CL (cosine) are expected to give high scores for similar vs. contradictory examples (seemingly, zero-shot/pre-trained embeddings are trained on semantic similarity, whereas CL uses contradictions as positives), and whether either scenario impacts the validity of comparing these two methods.
Supplementary Material: Reviewed “non-transitivity” of Hoyer sparsity and the “transitivity” of cosine function, Experiment comparison with method from Shi et al., A case study for counter-argument retrieval from Arguana dataset and Efficiency test of cross-encoder and vector calculation.
Relation To Broader Scientific Literature: * Introduces a method for contradiction retrieval task, using cosine similarity and Hoyer measure of sparsity, presenting sparsity of embedding differences as a function to capture contradiction, being the novel idea.
* Discusses prior similarity based methods for contradiction retrieval like bi encoders and cross encoders and their limitations.
* Runs two small experiments to show their method for contradiction retrieval can be applied to downstream applications of corpus cleaning and NLI.
* Talks about how contradiction retrieval is related to some major LLM based research efforts in recent years namely Fact verification and LLM hallucination, and augmented LLM and retrieval corpus attack.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: * A novel idea to use sparsity of differences between the sentence embeddings to model contradiction.
* Well written, provides theoretical and quantitative backing(supplement section C and G) on claims against the existing bi-encoder and cross-encoder models.
* Shows robustness of the proposed method on three embedding models of different sizes and across three datasets.
* An insightful ablation into the limitations of Arguana dataset (section 4.6), proposed augmentations and effectiveness of sparsity-based retrieval over similarity based methods(supplement section E).
* Some arguments lack evidence/clarity e.g. “embeddings designed to preserve subtle, contradictory nuances between sentences”, meaning of the scores from Zero Shot (Cosine) vs CL (Cosine) methods.
Other Comments Or Suggestions: * [Typo] Figure 2: sub figures are referred to as left and right, instead of top and bottom.
* Scoring function for contradiction retrieval: In the Problem Formulation, query passage is referred to as q and corpus passages with p1, p2 and so on. The score function should probably be calculated between q and corpus passages, pi.
Questions For Authors: NA
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful comments on our work!
Thank you for pointing out the typos.
**"Some arguments lack evidence/clarity e.g. “embeddings designed to preserve subtle, contradictory nuances between sentences”, meaning of the scores from Zero Shot (Cosine) vs CL (Cosine) methods."**
This sentence serves as a motivating hypothesis that explains why we believe embeddings for contradictions exhibit sparse differences. We acknowledge the current lack of evidence in explaining the semantic meaning behind each coordinate, which is an intriguing area for further investigation.
We will adjust the figure captions and the notations in the score function as you suggested. | Summary: The paper is about non-similarity based information retrieval which is currently under explored. In the paper they introduce SparseCL, a novel approach to address shortcomings in existing similarity search and cross-encoder models when retrieving arguments contradictory to the query from large document corpora. The proposed method utilizes a combined metric of cosine similarity and sparsity function to identify and retrieve documents that contradict a given query.
The proposed method shows an 11.0% improvement when evaluated on existing retrieval benchmarks.
In summary;
- The paper introduces a novel approach using sentence embeddings together with cosine similarity and Hoyer measure of sparsity to capture the essence of contradiction.
- The embedding and scoring proposed approach shows an improved performance as compared to existing methods.
Claims And Evidence: The authors make a compelling argument that supports the proposed approach of training a sentence embedding model to preserve sparsity of differences between the contradicted sentence embeddings. This is represented in Figure 2. Hoyer sparsity histogram.
Methods And Evaluation Criteria: The proposed method is well supported by evaluations on well known benchmarks of Arguana (Wachsmuth et al., 2018) a counterargument retrieval task and two contradiction retrieval datasets adapted from HotpotQA (Yang et al., 2018) and MSMARCO (Nguyen et al., 2016).
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental design is reasonable, the authors test the performance of the proposed method on well known benchmarks of Arguana (Wachsmuth et al., 2018) a counterargument retrieval task and two contradiction retrieval datasets adapted from HotpotQA (Yang et al., 2018) and MSMARCO (Nguyen et al., 2016).
They apply the proposed contradiction retrieval to downstream applications ie., retrieval corpus cleaning and natural language inference. In addition authors perform ablation to explain the functionality of each component of the proposed method.
Supplementary Material: Appendix H explains how gpt-4-turbo was used to generate paraphrases and contradictions for the experiment.
Relation To Broader Scientific Literature: Similarity retrieval is a well studied task where contrastive learning is used to map similar sentences together and dissimilar ones far from each other in the embedding space (Gao et al., 2021, Karpukhin et al., 2020 and Xiong et al., 2020). In the same spirit, the authors try to address a rather different problem of non-similarity based retrieval.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The study makes an in-depth analysis of non-similarity based information retrieval; the analysis suggests that a combined metric of cosine similarity and a sparsity function for identifying and retrieving documents that contradict a given query improves performance compared to existing methods.
The authors cite and explain existing studies; they also provide a good survey of related work and explain how their proposed approach addresses the similarity problem differently.
The paper is well written, with clear figures, and the experimental results support the findings.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | null | null | null | null | |
The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning | Accept (poster) | Summary: The paper proposes a simple technique that prevents deep RL agents from collecting familiar low-rewarding data, avoiding the re-sampling of such data during training as a way to improve sample efficiency. The paper integrates the algorithm into a variety of algorithms, applies it on different benchmarks including image data, and conducts an ablation.
## update after rebuttal
I am leaning toward acceptance since my questions were adequately addressed and the paper presents an interesting idea.
Claims And Evidence: Yes, in general, the paper's claims are accompanied by convincing evidence.
Few issues:
- The graph in Figure 7 does not include any information about standard deviation, variance, etc. Nor does it mention the number of trials, which I assume is 5 × the number of environments. This information is critical even though LEAST clearly has better sample efficiency.
- The discussion on entropy aware dynamic buffer size and adjusting exploration noise (Section 3) seems quite ad-hoc. The previous paragraphs are well-motivated and those are well-analyzed in the empirical section, but these other sections seem out-of-place. I see there is some evaluation in the Appendix. I think the main paper should have some discussion about this.
- Section 4.3 mentions that LEAST helps “avoid the agent from training via useless transitions too often”. While LEAST does help plasticity it is a little unclear to me whether this is because it fits higher quality data or because there are fewer data samples since LEAST filters them out. It seems like an algorithm’s plasticity could increase if it just updated less often. I presume the number of gradient steps are the same, which means it would be the former. If this is the case, this should be specified.
Methods And Evaluation Criteria: Yes, the paper integrates the proposed algorithm in different settings, baseline algorithms (TD3, SAC, DrQv2), and in visual/non-visual environments that are commonly used in the deep RL literature.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, the paper conducts a reasonable analysis except for one point mentioned above (copy-pasting here): The graph in Figure 7 does not include any information about standard deviation, variance, etc. Nor does it mention the number of trials, which I assume is 5 × the number of environments. This information is critical even though LEAST clearly has better sample efficiency.
Supplementary Material: Yes, just skimmed the whole thing. There is no code included.
Relation To Broader Scientific Literature: The paper falls under the broad category of improving the sample efficiency of RL algorithms in robotics simulation tasks. It revisits how deep RL algorithms are fundamentally designed to use the data they collect and suggests a data filtration process to use higher quality data during training.
Essential References Not Discussed: It would be useful to make some references to algorithms that sample data differently instead of filtering data, for example, prioritized experience replay (Schaul 2015) or event tables (Kompella 2023). These methods seem similar in spirit, updating the agent using better data.
Other Strengths And Weaknesses: Strengths:
- The algorithm is simple to implement and incorporate into any RL algorithm
- It is evaluated on visual and non-visual environments with different baselines.
- It conducts a thorough empirical analysis and ablation
Weaknesses:
- See above
- The last two lines in the first paragraph of Section 3.2 are very unclear and seem to just be filled with jargon. It would be better to simplify.
- The paragraphs on dynamic buffer size and exploration noise are quite confusing. The notation is also slightly abused, which makes things tricky to parse. Are the last h episodes removed? If not, which ones? What does UTD mean? It should be specified.
- The first few lines on page 6 discuss why LEAST eventually does better in HalfCheetah, but they do not discuss why it may be worse during early training. I think that is the real anomaly which should be explained.
Other Comments Or Suggestions: The paragraphs on page 6 have some typos, and the dynamic buffer size/exploration noise paragraphs are unclear.
Questions For Authors: - Given that the training loss is so critical to filter data, what do the authors think about the unreliability of the loss function in off-policy algorithms and how that may be affecting LEAST [1]?
- How do you think performance would change if instead of storing the Q values in B_Q, the Monte Carlo returns were saved?
- Isn’t the issue of unchanging policy true for even non-LEAST methods? It's a little unclear to me why LEAST has this issue. Given this seems like a universal issue, it seems like LEAST may be better since it has an improved exploration strategy.
- Nothing to really comment but something of potential interest to authors is [2], which is for the average reward setting where the agent tries to figure out when it should reset the episode, where there is a cost associated with resetting the episode.
[1] Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error. Fujimoto et al. 2022.
[2] RVI-SAC: Average Reward Off-Policy Deep Reinforcement Learning. Hisaki et al. 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the insightful suggestions and helpful feedback of our work, and for the recognition of our work's motivation, performance, and potential academic impact.
**Q1: (1) Given that the training loss is so critical to filter data, what do the authors think about the unreliability of the loss function in off-policy algorithms and (2) how that may be affecting LEAST ?**
(1) To mitigate the unreliability of loss calculations, LEAST uses the median of the loss values rather than relying on a single loss value per transition, which reduces sensitivity to noise & biases. It will also be an interesting future direction to consider more robust loss calculation.
(2) While LEAST shows strong performance in the current experimental settings, an interesting direction for future work would be to investigate its behavior in tasks requiring even longer-term learning. In such scenarios, the potential instability of loss calculations might affect the episode-stopping threshold, which could slightly reduce the method’s efficiency in certain edge cases. It is an interesting future direction for further improving the robustness of loss calculations, which can further extend LEAST's applicability and improve its stability across a wider range of tasks.
**Q2: How do you think performance would change if instead of storing the Q values in B_Q, the MC returns were saved?**
Thank you for the suggestion! MC returns could be a potential metric. However, it can introduce two main challenges for implementing this within LEAST’s framework:
- *Interaction efficiency:* MC return requires repeatedly resetting to specific states to compute estimates from multiple rollouts to reduce variance in its estimation, which can be sample-inefficient.
- *Suffer from biased estimation:* MC return heavily depends on observing complete trajectories up to their true terminal states. When trajectories are truncated prematurely (as in LEAST), the missing components of the return are treated as zero, leading to biased and inaccurate calculations.
We hope future work will explore more advanced estimations based on MC returns to effectively address the aforementioned challenges.
**Q3: Isn’t the issue of unchanging policy true for even non-LEAST methods? It's a little unclear to me why LEAST has this issue. Given this seems like a universal issue...**
While this problem is indeed common in RL, it is particularly critical for LEAST due to its unique mechanism.
The motivation for proposing a noise adjustment schedule stems from specific observations during our experiments. Although the Q- & loss-based stopping threshold provides core benefits, LEAST occasionally triggers frequent resets during early training stages, which can hinder long-term exploration (crucial for extended execution tasks like humanoid walk).
Thus, we introduced a simple yet effective noise mechanism controlled by premature reset frequency, aiming to stabilize exploration, especially during early training. When integrated with TD3, the results showed minimal improvement, suggesting that noise adjustment may be less beneficial for vanilla deep RL compared to its impact on LEAST.
Table. Performance $\uparrow$ (Average of 3 runs):
| Method | Ant | Humanoid |
| --- | --- | --- |
| **TD3 w/ noise adjustment** | $5704 \pm 145.23$ | $4591.28\pm 224.82$ |
| TD3 | $5682.13\pm 129.48$ | $4616.54\pm 273.69$ |
**Q4: Nothing to really comment but something of potential interest to authors ...**
This is very interesting! We will analyze it in the related work of the new version.
**W1: Fig 7 does not include information about std, variance etc... even though LEAST clearly has better sample efficiency.**
Thanks for your suggestion. We used 5 random seeds, and we will update the caption and add error bars in Fig 7.
**W2: The discussion on entropy aware dynamic buffer size & adjusting exploration noise seems quite ad-hoc... should have some discussion**
To ensure robust episode truncation in complex environments, we implement two key mechanisms beyond the core Q-value & loss thresholds: (1) a dynamic buffer that filters outliers to improve threshold stability & accuracy when Q-values become unreliable, and (2) to prevent exploration limitations from frequent resets, we introduce an action noise mechanism that adaptively balances exploration and stopping behavior.
We will elaborate more in the revision.
**W3: … While LEAST does help plasticity it is a little unclear whether this is because it fits higher quality data or .... this should be specified.**
We would like to clarify that LEAST improves plasticity not by reducing the number of updates; instead, it helps the agent focus on more informative samples, which accelerates adaptation and helps it explore better policies.
---
We would like to thank the reviewer once again for the time and effort in reviewing our work! We are happy to provide further clarification if you have any additional questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! The paper is interesting. I think the insight is useful and I appreciate that the first crack at this issue is simple.
---
Reply to Comment 1.1.1:
Comment: We would like to sincerely thank the reviewer (Reviewer 4RG4) once again for the valuable feedback, as well as for the time and effort dedicated to reviewing our work.
We are also grateful to Reviewer zKrJ for the additional questions and constructive suggestions (we have to combine the responses here together due to the rebuttal requirements). We greatly appreciate that you found the paper interesting and the insights useful, and we are thankful for your decision to raise the score to accept.
We will carefully incorporate all the feedback into the revised manuscript, which has greatly helped us improve the quality, clarity, and overall contribution of our work. | Summary: The paper introduces the learn to stop (LEAST) heuristic for online off-policy reinforcement learning (RL).
The core idea is the proposition of an adaptive stopping mechanism to prohibit including unhelpful-to-the-learning-task transitions in the replay buffer. The heuristic is then experimentally evaluated on a variety of aspects with four MuJoCo tasks and DMC tasks.
## update after rebuttal
Score increased. See rebuttal comment.
Claims And Evidence: The paper states its main contributions as
(1) identifying a critical limitation of current RL frameworks
(2) introduction of a stopping heuristic with general benefits
(3) experimental validation of the proposed heuristic.
Claim (1) seems to be supported by clear and convincing evidence.
Claim (3) is supported by the experiments presented in the paper.
Claim (2) is not entirely clear to me. See also Question for Authors.
Methods And Evaluation Criteria: Looks appropriate except open question in Question for Authors.
Theoretical Claims: Theoretical foundation for the presented heuristic would be nice, but is not included in the paper.
Experimental Designs Or Analyses: The experimental verification looks valid for the presented results.
Supplementary Material: I did not review supplementary material.
Relation To Broader Scientific Literature: The paper contributes to the online RL literature. Due to its nature to terminate episodes early, potentially saving computation costs, it might be relevant to computationally heavy experiments.
Essential References Not Discussed: Not relevant.
Other Strengths And Weaknesses: The presented background (often called preliminaries) has nothing to do with "Deep" RL, it describes RL basics. It seems reasonable though that the addressed problem is especially relevant for Deep RL.
Appendix B.4 is an exact repetition of a paragraph in Section 2.
Other Comments Or Suggestions: To me, the connection between the observed problem and the sunk-cost fallacy seems far-fetched. Sunk-cost fallacy depends on the emotional attachment to whatever resource is already sunk. The continuation of an episode until termination stems rather from a missing mechanism than from an attachment to the current episode.
The referencing of subsection with "§" does not fit ICML standards.
Figure 5 and 13 have surprising format sizes.
Typo: mwthod (327).
Questions For Authors: What do you mean by "In this paper, we investigate whether deep RL agents deployed on cumulative reward maximization tasks also suffer from sunk cost fallacy."?
What exactly makes the introduced heuristic special for deep RL in comparison to its importance for "non-deep" RL?
What do you mean by "In this section, we conduct comprehensive experiments to evaluate whether LEAST module is necessary for Deep RL."? Introducing a trainings heuristic can be helpful, but I do not understand what you mean by necessary. Please elaborate.
Can you summarize comprehensibly why the proposed criteria of the LEAST heuristic are a good choice in general for stopping episodes? The experimental evidence seems not entirely clear, especially why the proposed dual-criteria would be better than possible multi-criteria.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the insightful suggestions and constructive feedback on our work. Please find our responses to each of the concerns below.
**Q1: What do you mean in Background by "In this paper, we investigate whether deep RL agents deployed on cumulative reward maximization tasks also suffer from sunk cost fallacy."?**
We investigate whether RL agents exhibit behavior analogous to the sunk cost fallacy in reward maximization tasks, where agents inefficiently persist in executing trajectories even though diminishing returns will cause further losses, similar to how investors might hold onto losing investments. Through this analysis, we aim to identify and address fundamental inefficiencies in conventional RL architectures.
**Q2: What makes the introduced heuristic special for deep RL in comparison to its importance for "non-deep" RL?**
We would like to clarify that our proposed methodology is general for RL (not specific to Deep RL).
We validate our method's generality through experiments on tabular Q-learning in two MiniGrid navigation tasks [1] with default parameters. Results below measure the number of episodes required to converge to the optimal policy. LEAST significantly improves learning efficiency compared to standard Q-learning (**5%-14%** reduction in episodes required to converge), demonstrating its effectiveness in improving learning efficiency in simpler, discrete non-deep RL settings.
| Method | 6x6 with obstacles ($\downarrow$) | 8x8 with obstacles ($\downarrow$) |
| --- | --- | --- |
| **Q learning w/ LEAST** | $428.0\pm21.3$ | $495.3\pm24.7$ |
| Q learning | $494.7\pm28.9$ | $523.3\pm26.3$ |
**Why we mainly discuss deep RL:** While non-deep RL provides a valuable testbed for validating the algorithms clearly, our main focus is to make deep RL algorithms learn better and use data more efficiently, since Deep RL is critical for solving complex, high-dimensional problems and has unique challenges and practical relevance in real-world applications. We aim to validate our approach in these demanding scenarios while encouraging future theoretical analysis in simpler RL paradigms.
We hope our work can inspire exploration in non-deep RL, e.g. tabular & linear methods for theoretical analysis.
[1] Open source GitHub repo: Minigrid-RL (default setting)
**Q3: What do you mean by "In this section, we conduct comprehensive experiments to evaluate whether LEAST module is necessary for Deep RL."?**
We will revise the sentence for clarity in the revised version. To clarify, in this section, we aim to evaluate the necessity of LEAST by investigating whether it provides significant improvements in the performance & efficiency of baselines.
**Q4: ... why the proposed criteria of the LEAST are a good choice in general? ... Especially why is this better than a possible multi-criteria.**
The proposed dual-criteria based on Q-values & loss offer the following major advantages, making it a robust, versatile, and general choice for our LEAST framework:
- **Robustness**. Q-values assess trajectory quality through expected rewards, while critic loss indicates the agent's understanding (familiarity) of transitions. These complementary metrics provide comprehensive evaluation criteria for stopping decisions.
- **Versatility**. Q-values & loss are core metrics in RL algorithms, making them algorithm-agnostic & environment-independent, unlike entropy (SAC-specific) or scene-specific metrics. This enables the dual-criteria to work across different tasks and algorithms with minimal changes.
- **Simplicity**. Q-values & loss are readily computed metrics that require no extra computation (e.g., network modules), making the criteria efficient to implement.
We explored various multi-criteria but found none that consistently outperformed the dual-criteria in efficiency and task independence. Our motivation is to provide a fundamental framework that is simple, effective, and broadly applicable across diverse tasks and algorithms, and to encourage future research into more sophisticated multi-criteria approaches.
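As a rough illustration of how the dual-criteria described above (Q-value and critic-loss thresholds over recent statistics, with a median used for robustness) could be wired up, consider the sketch below. All names, window sizes, and percentile choices are hypothetical assumptions made for the sketch and are not the authors' actual implementation.

```python
from collections import deque
import statistics

class DualCriteriaStopper:
    """Hedged sketch of a dual-criteria early-stopping check.

    Stop an episode when BOTH signals suggest a low-value, already-familiar
    trajectory: the critic's Q estimate is low relative to recent experience,
    AND the critic loss is below the median (transition is well understood).
    """

    def __init__(self, window=1000, q_quantile_index=0):
        self.q_hist = deque(maxlen=window)     # recent Q-value estimates
        self.loss_hist = deque(maxlen=window)  # recent critic losses
        self.q_quantile_index = q_quantile_index

    def update(self, q_value, critic_loss):
        self.q_hist.append(q_value)
        self.loss_hist.append(critic_loss)

    def should_stop(self, q_value, critic_loss):
        if len(self.q_hist) < self.q_hist.maxlen:
            return False  # wait until enough statistics have accumulated
        # ~10th percentile of recent Q-values as the "low quality" bar
        q_low = statistics.quantiles(self.q_hist, n=10)[self.q_quantile_index]
        # median loss as a robust "familiarity" bar, per the rebuttal's Q1 answer
        loss_med = statistics.median(self.loss_hist)
        return q_value < q_low and critic_loss < loss_med
```

In use, the agent would call `update` on every stored transition and `should_stop` before deciding whether to continue the current episode; the environment-reset and noise-adjustment machinery the rebuttal mentions would sit on top of this check.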
**W1: The presented background has nothing to do with "Deep" RL, it describes RL basics. It seems reasonable...**
We revised this paragraph in the background section to better reflect the context of deep RL.
**C1: The connection between the observed problem & the sunk-cost fallacy seems far-fetched...**
Our analogy here focuses on the behavioral similarity: the implicit commitment to completing trajectories without evaluating their future benefit, although the classical sunk-cost fallacy typically involves emotional attachment. The absence of early stopping mechanisms in existing RL architectures leads to decision-making inertia, resembling the sunk-cost fallacy in its effect on efficiency.
---
Thank you again for reviewing our work. We will refine the typos and hope our responses address the concerns and would appreciate your consideration in the evaluation. We are happy to provide further clarification if needed.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive rebuttal and for addressing the points raised in my initial review.
I still think sunk-cost fallacy is not a good fit to the introduced method. Nonetheless early stopping is a valuable mechanism in the presented context.
Having read the rebuttal as well as the other reviews, I consider raising my score from 2 to 3. And thus, leaning more towards the acceptance of the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the follow-up comments and for reconsidering your initial assessment of our work. We appreciate your thoughtful engagement and recognition of early stopping as a valuable mechanism in the presented context.
Regarding the analogy to the sunk cost fallacy, our intention was not to equate the two concepts directly (as the classic definition involves emotional attachment -- which is absent in RL as agents typically do not have emotions), but rather to highlight the **behavioral similarity**: the implicit commitment to completing trajectories without dynamically evaluating their future benefit. This **decision-making inertia**, stemming from the absence of early stopping mechanisms, produces inefficiencies similar to those observed in the sunk cost fallacy. We aim to demonstrate how this analogy highlights the inefficiencies in existing RL frameworks and the relevance of our proposed method as a solution.
*We will revise the manuscript to better clarify this point and refine the explanation in the introduction to better illustrate this connection within the context of this paper:*
> Existing RL architectures often lack a mechanism to dynamically evaluate the potential utility of continuing a trajectory. This results in agents persistently executing full trajectories, even when it becomes clear that further interactions are unlikely to yield meaningful outcomes and new insights.
>This decision-making inertia reflects a behavioral inefficiency akin to the sunk cost fallacy, where individuals persist in a course of action simply because resources have already been invested, without reassessing whether continuing these actions is worthwhile. However, unlike humans, who can reassess and decide to stop when recognizing a clear loss, RL agents are constrained by existing frameworks that focus exclusively on policy optimization without mechanisms to evaluate and interrupt unproductive trajectories.
>While the sunk cost fallacy in humans is traditionally linked to emotional attachment, this analogy is drawn in this context to highlight the structural inefficiency in RL caused by the absence of dynamic stopping mechanisms. Specifically, RL agents do not have the ability to evaluate the declining utility of continuing a trajectory, leading to wasted computational resources and lower learning efficiency. LEAST addresses this critical issue by introducing a dynamic stopping mechanism that enables agents to identify and terminate low-quality and redundant trajectories early, thereby reallocating resources to more promising ones. This not only mitigates inefficiencies reminiscent of the sunk cost fallacy but also improves sample efficiency and overall learning performance.
In addition, to further contextualize this analogy, we will expand the background section to include real-world examples, such as continued investment in failing stocks. These examples demonstrate behavioral parallels between human decision-making related to sunk cost fallacy and the interacting behavior of RL agents, where the lack of dynamic stopping mechanisms leads to persistent yet inefficient trajectory execution. By providing this refined explanation and including real-world examples, we aim to better illustrate the behavioral parallels and highlight the practical relevance of our proposed method.
----
*Lastly, as you kindly mentioned the possibility of raising your score, we would be grateful if this could be reflected in your final evaluation (edit the review). We also welcome any further suggestions you may have and will definitely incorporate them into the revision.*
Your thoughtful reconsideration and feedback have already contributed significantly to improving the quality and clarity of our work, and we deeply value your support in the review process. | Summary: The paper proposes a novel technique for improving RL training that allows an agent to terminate an episode early if the expected return drops below a heuristic threshold. The paper aims to show that this can speed up training as it avoids filling the replay buffer with uninformative trajectories. The authors provide evidence for their claim across a variety of benchmarks and baseline algorithms.
### Update after rebuttal period
With additional context, I support the acceptance of this paper. All major questions were adequately addressed.
Claims And Evidence: The paper is empirical in nature and provides a variety of evidence for its claims. The evidence seems sufficient for the claims.
Methods And Evaluation Criteria: The benchmarking covers several different RL problems and algorithms. A small (rather nitpicky) caveat is that the baseline RL algorithms used are relatively old and have known weaknesses such as problems with learning stability. All considered algorithms are 4 years old at this point. To address this, I would encourage the authors to verify their findings with more up-to-date architectures and algorithms such as SR-SAC [1], BRO [2], Simba [3], CrossQ [4], TD-MPC2 [5], or MAD-TD [6]. Obviously I am not asking the authors to provide evidence on all of these, but for example repeating the experiments on Mujoco tasks with a model-free algorithm shouldn't be too much of an ask. Model-based algorithms such as TD-MPC2, Dreamer or MAD-TD might be another interesting direction, as the gathered data influences both model and value function learning. In the image based domains, why wasn't LEAST evaluated with TACO or A-LIX as well? The improvements should be orthogonal?
One additional question for clarification: in the reported scores, were the algorithms evaluated with or without early resets? I think it would be important to verify that the additional performance is not due to the algorithms being able to avoid "difficult" parts of the state space they might naturally have to end up in. Clarifying this is a requirement for me supporting the acceptance of the paper!
[1] Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, D'Oro et al, ICLR 2023 https://openreview.net/forum?id=OpC-9aBBVJe
[2] Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control, Nauman et al, NeurIPS 2024, https://openreview.net/forum?id=fu0xdh4aEJ&noteId=It2lExTd92
[3] SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning, Lee et al, ICLR 2025, https://openreview.net/forum?id=jXLiDKsuDo
[4] CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity, Bhatt et al, ICLR 2024, https://openreview.net/forum?id=PczQtTsTIX
[5] TD-MPC2: Scalable, Robust World Models for Continuous Control, Hansen et al, ICLR 2024, https://openreview.net/forum?id=Oxh5CstDJU
[6] MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL, Voelcker et al., ICLR 2025, https://openreview.net/forum?id=6RtRsg8ZV1
Theoretical Claims: No theoretical claims were made.
Experimental Designs Or Analyses: Aside from the concerns raised above, the experimental design is well done.
Supplementary Material: I reviewed the full appendix.
Relation To Broader Scientific Literature: They seem to fit the literature.
Essential References Not Discussed: Relevant prior work from the field of robotics seems to not be discussed, such as [1] and follow up work. As robotics is a field where limiting long episodes has strong practical relevance, this seems important.
[1] Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning, Eysenbach et al, ICLR 2018 https://openreview.net/forum?id=S1vuO-bCW
Other Strengths And Weaknesses: I have two problems with the presentation of the paper, but these are relatively minor. First of all, the algorithm is presented as the agent "learning" to avoid the sunk cost fallacy. However, LEAST is a fixed heuristic, so no learning is happening as far as I can tell. In addition, the sunk cost fallacy seems to be only somewhat applicable here, as agents do not normally have a choice but to continue in a trajectory until the end.
The other problem is that the authors mostly consider environments which already include early resets, such as the mujoco tasks. In these, avoiding "bad" areas of the state space is indeed optimal, as the agent is most likely facing an early reset anyways. However, in other tasks with less well shaped rewards and early reset structure, the method might limit exploration to a problematic extent. I would ask the authors to comment on this more clearly. The authors already comment on some exploration related problems in 3.2, but I believe that the chosen test tasks might lead to optimistic conclusions on the method's performance.
Other Comments Or Suggestions: none
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions and useful feedback of our work. Please find our responses to each of the concerns below.
## Question
**Q1.1: Please verify the findings with more up-to-date algorithms such as SR-SAC, CrossQ,...,e.g., repeating the experiments on Mujoco tasks with a model-free algorithm.**
Thanks for your suggestion. We have incorporated CrossQ as an additional backbone algorithm and conducted experiments on Hopper & HalfCheetah (with full results to be included in the revision due to time constraints).
The results show that CrossQ-LEAST consistently outperforms vanilla CrossQ in both performance & learning efficiency. This improvement may stem from CrossQ's more stable Q-value predictions compared to SAC, which helps calculate more reliable stop thresholds.
Table: Score$\uparrow$ (Convergence steps (M)$\downarrow$ ) (Average of 3 runs):
| Method | Hopper| HalfCheetah |
| --- | --- | --- |
| **CrossQ-LEAST** | $2853\pm162.37$ ($3.25$) | $13607.57\pm 115.48$ ($2.26$) |
| CrossQ | $1839.36\pm297.24$ ($4.71$) | $13035.81\pm 137.16$ ($2.67$) |
**Q1.2: Why wasn't LEAST evaluated with TACO or A-LIX as well? The improvements should be orthogonal?**
Since both methods are based on DrQv2, we used it as our primary backbone to evaluate LEAST's general improvements. We also tested LEAST with A-LIX and found consistent performance gains across image-based domains (results to be included in revision). Besides, MBRL indeed remains an interesting direction for future work.
| Method | Finger Turn Hard | Quadruped Run |
| --- | --- | --- |
| **A-LIX w/ LEAST** | $537\pm19.24$ | $858.73\pm 16.81$ |
| A-LIX | $451\pm32.61$ | $772.53\pm 25.97$ |
**Q2: Relevant prior work from the field of robotics seems not to be discussed.**
We appreciate this suggestion and will enhance the related work section in the revision to thoroughly discuss relevant research from the robotics field.
## Weakness
**W1.1: The algorithm is presented as the agent "learning" to avoid the sunk cost fallacy. But no learning is happening as far as I can tell.**
We appreciate your suggestion and will revise the phrasing to use "decide" instead of "learn," as it more accurately reflects how LEAST functions. We originally used the learning analogy to illustrate the behavioral patterns that LEAST promotes.
**W1.2: The sunk cost fallacy seems to be only somewhat applicable here...**
Our goal is to make it more intuitive for readers to understand our core contribution: existing RL architectures often lack the capability to dynamically evaluate the potential utility of continuing a trajectory, leading agents to "stubbornly" execute the full trajectory regardless of its relevance or productivity within the given interaction budget. This inefficiency bears an analogy to the sunk cost fallacy, where agents implicitly persist in a trajectory simply because it has already been initiated, without reassessing whether it remains promising or worthwhile. While the sunk cost fallacy is traditionally associated with human decision-making, we draw this analogy to highlight how LEAST's dynamic stopping mechanism addresses a similar inefficiency in RL.
**W2: In other tasks with less well-shaped rewards & early reset structure, the method might limit exploration to a problematic extent?**
LEAST's adaptive stopping mechanism functions independently of environment-level reinitialization, as it can detect suboptimal states before the environment triggers a reset. In locomotion tasks, for example, LEAST identifies patterns such as agent instability that indicate imminent failure, proactively stopping the episode before environment-level reinitialization occurs, particularly during early training stages. This proactive stopping enables LEAST to leverage its dynamic stopping mechanism to improve learning efficiency, even in environments with early reinitialization (as demonstrated in Figs. 6, 7, 9).
To address the exploration limitations, we introduce a simple yet effective method, dynamic exploration noise adjustment, and find that it effectively helps agents escape suboptimal trajectories to explore more diverse behaviors. Thus, in environments with less well-shaped rewards & early reinitialization structure (e.g., PointMaze), LEAST still demonstrates robust performance, further confirming the method’s general applicability.
*We hope our work can inspire future research to build upon this foundation for more challenging scenarios. Exploring further improvements to LEAST's exploration capabilities remains an exciting direction for future work.*
---
We would like to thank the reviewer once again for the time and effort in reviewing our work! We are happy to provide further clarification if you have any additional questions. We hope that our responses adequately address your concerns, and we would greatly appreciate it if you could kindly consider reflecting on this in the evaluation.
---
Rebuttal Comment 1.1:
Comment: Hi, I read the rebuttal, thanks for testing the additional algorithmic setups!
I briefly wanted to point your attention at this question:
"One additional question for clarification: in the reported scores, were the algorithms evaluated with or without early resets? I think it would be important to verify that the additional performance is not due to the algorithms being able to avoid "difficult" parts of the state space they might naturally have to end up in. Clarifying this is a requirement for me supporting the acceptance of the paper!"
As I said, this is a clarification, but I think it is vital that this is correctly handled.
---
Reply to Comment 1.1.1:
Comment: > **Question:** Clarifying this is a requirement for me supporting the acceptance of the paper: in the reported scores, were the algorithms evaluated with or without early resets?
We sincerely apologize for mistakenly combining this question with a subsequent one in our initial rebuttal due to space limitations, which caused us to overlook addressing it fully. We greatly appreciate the reviewer raising this important question again, their patience, and the opportunity to address it thoroughly.
To clarify, all algorithms in our experiments were **evaluated** **without** **early resets**, ensuring a fair and rigorous comparison. This experimental setup guarantees that the reported performance improvements are not the result of artificially "avoiding difficult parts of the state space," but rather reflect the genuine gains in learning stability, efficiency, and adaptability achieved by our method. To ensure complete transparency, we will explicitly highlight this point in the revised manuscript. Furthermore, we will include this clarification in the README file accompanying our codebase, which will be open-sourced upon publication, to ensure that this aspect is fully documented.
We would like to thank the reviewer for acknowledging our additional experiments and explanations, and for careful consideration of our work and the emphasis on this critical aspect. Please let us know if you have any further questions or if additional clarifications are needed, and we are more than happy to provide further details. | Summary: This paper introduces "Learn to Stop" (LEAST) to address the "sunk cost fallacy" in deep reinforcement learning (RL). The sunk cost fallacy refers to how RL agents must complete episodes even if the trajectory collected thus far is already poor, which ultimately provides low-quality data to the agent. LEAST allows the agent to terminate episodes early by dynamically evaluating the quality and learning potential of current trajectories based on Q-values and critic gradient information. Empirically, LEAST consistently improves sample efficiency and final performance compared to baseline methods.
Claims And Evidence: **Claim 1: The traditional RL framework suffers from the sunk cost fallacy, leading to inefficient sampling and suboptimal policy learning.**
* **Unsupported.** The core experiments do not show that LEAST is truncating suboptimal trajectories but instead focus on data efficiency improvements. The Maze examples in Figure 3 shows that LEAST agents have a higher fraction of high-return trajectories, but again does not show that LEAST is truncating suboptimal trajectories.
**Claim 2: LEAST improves sample efficiency and final performance across different algorithms**
* **Supported.** Figures 6 and 9 show that algorithms enhanced with LEAST require fewer steps to reach the same performance level as their vanilla counterparts.
Methods And Evaluation Criteria: 1. The evaluation criteria and benchmarks (MuJoCo and DMC) are standard and appropriate for the problem. The authors also make a good case for why these environments are suitable for testing their method, particularly in environments with potential dead-ends or suboptimal trajectories.
1. What do the edges of the box in the box plots represent?
Theoretical Claims: None.
Experimental Designs Or Analyses: 1. **Figure 3 is difficult to interpret.** Could you clarify what "Loss" refers to specifically? Is it the critic's loss, TD error, or the policy's return? Additionally, what criteria determine whether a sample is classified as "High/Low Loss" versus "High/Low Q"?
1. Why does Figure 5 only present results for Ant and HalfCheetah? Are the results for other environments less compelling? If so, what might explain this discrepancy?
1. **The paper lacks empirical evidence demonstrating that LEAST actually terminates low-return trajectories early.** This verification is essential since early termination of unpromising trajectories is the fundamental motivation behind the algorithm.
1. **The plasticity experiments seem disconnected from the main contribution.** The paper's shift to studying network plasticity appears abrupt and inadequately motivated, and these experiments feel preliminary. While Figure 11 suggests LEAST reduces plasticity loss rates, this observation may be misleading. One of LEAST's primary motivations is that completing obviously low-quality episodes provides poor training data, potentially leading to premature convergence to suboptimal policies. Would this premature convergence potentially lead to faster loss of plasticity?
Supplementary Material: I review the figures in the appendices, and my review references a few of them.
Relation To Broader Scientific Literature: The paper appropriately positions its contribution within the RL literature, particularly in relation to sample efficiency methods and existing approaches to early stopping. The authors discuss related work in Appendix A, covering both sample efficiency techniques in Deep RL and early stopping mechanisms.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Additional Strengths:**
1. LEAST is intuitive and easy to understand.
2. The empirical validation is comprehensive.
**Additional Weaknesses:**
1. While the basic Q-based threshold version of LEAST presents a sound approach, the additional mechanisms appear to function more as heuristic patches rather than integrated components of the algorithm's core design.
1. The paper would benefit from exploring potential tradeoffs of early stopping, particularly how it might affect the discovery of valuable long-term strategies that only become apparent after extended training periods.
1. The paper contains several typos and sentence fragments that occasionally impact readability (specific examples are provided in the next section).
**Author comments on these weaknesses would be appreciated, though I emphasize that these weaknesses are not critical to my evaluation.**
Other Comments Or Suggestions: 1. "which proves that it is trapped in sub-optimality and can’t escape." While I would agree that the Vanilla training curve for the Medium maze hints at suboptimal convergence, it is unclear if the Vanilla training curve for the Large maze has converged yet. "Prove" seems like too strong of a word here. Plotting the entropy of the policy throughout training could make the argument for suboptimal convergence stronger.
1. "Figure 4 shows this dual-criteria threshold (ωi × εi ) significantly outperforms purely Q-based threshold" What does "significantly" mean here, precisely? Figure 4 shows that the 25th percentile dual criteria threshold agents outperform the 75th percentile Q-based threshold agents, but does this constitute a significance test? (What do the edges of the box in each box plot represent? I assumed 25/75th percentile). This statement should be rephrased to explicitly state what metrics we are comparing.
1. "Makes it difficult to measure central tendency." is a sentence fragment.
1. "Specifically, for TD3 and SAC (Figure 6(a,b))." is a sentence fragment
1. Figure 7 should have error bars
2. Line 327: "mwthod" -> method
3. Line 328: "thebox" -> the box
4. "thus avoiding the agent from training via" -> thus preventing the agent from training on
5. Line 385: "sequential of skills" it is unclear what this means
6. Line 385: "Compare to TD3" -> compared
7. "Appendix D.2 detailed introduces" -> extra word
1. "We analyze the impact of the startup time of LEAST in Appendix D.4, it is a good choice to start LEAST from 10% − 20% of the total training time for MuJoCo." is a run-on sentence
1. "for the image input task" -> tasks
1. Line 643: broken reference to DDPG
1. Line 796: "Deep RL" -> Deep RL
1. Line 932: "noisy daily"
1. "Divide quadrants uniformly using the mean of Q and Loss of advanced agent buffer samples." is a sentence fragment
1. Line 797: Broken reference for REDQ.
Questions For Authors: **I currently lean to reject. While experiments demonstrate LEAST's improved data efficiency, the paper lacks convincing evidence that this improvement stems from truncating low-quality trajectories as claimed.**
1. Can the authors provide additional experiments showing that LEAST is indeed truncating low-quality trajectories? For instance, one could plot the distribution of trajectory lengths throughout training for Vanilla and LEAST agents (in some fair manner).
2. Could the authors contextualize the plasticity loss experiments more? My impression is that the plasticity loss experiments would be more appropriately placed in an appendix, as their relevance to the core contributions seems limited. This would free up space for more informative ablation studies that could better illuminate the algorithm's effectiveness.
**If the authors address both of these points, I will consider raising my score.**
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions and useful feedback on our work. Please find our responses to each of the concerns below.
**Q1: Can you demonstrate experimentally that LEAST truncates low-quality trajectories?**
We conduct additional experiments on Ant to analyze trajectory length and quality, and will include it in the revision. Experiments below confirm that LEAST effectively truncates low-quality trajectories.
**LEAST effectively truncates trajectories:** We compare the distribution of trajectory lengths stored in the replay buffer between LEAST and vanilla agents (built upon TD3) during different stages of training. Table 1a shows that LEAST stores significantly more short trajectories than the vanilla agent, especially during the early training stages.
Table 1a. Percentage distribution of lengths in the buffer at different training stages (25%, 50%, 75%, and 100%). Trajectory length (Short (S): $l \leq 333$, Medium (M): $333<l \leq 666$, Long (L): $l > 666$):
| Method | 25% Progress | 50% Progress | 75% Progress | 100% Progress |
| --- | --- | --- | --- | --- |
| **LEAST-TD3** | S: 27.25, M: 46.14, L: 24.61 | S: 45.85, M: 38.62, L: 15.53 | S: 23.75, M: 39.17, L: 37.11 | S: 4.42, M: 6.35, L: 89.23 |
| Vanilla-TD3 | S: 21.17, M: 52.38, L: 26.45 | S: 8.69, M: 15.03, L: 76.28 | S: 4.47, M: 8.24, L: 87.29 | S: 1.18, M: 7.51, L: 91.31 |
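The short/medium/long bucketing used in Table 1a can be reproduced with a small helper. A minimal sketch (hypothetical code, not from the paper; it assumes a maximum episode length of 1000, as the 333/666 thresholds in the caption suggest):

```python
def length_distribution(lengths, max_len=1000):
    """Return percentages of Short (l <= max_len/3), Medium
    (max_len/3 < l <= 2*max_len/3), and Long (l > 2*max_len/3)
    trajectories in a buffer, rounded to two decimals."""
    short = sum(1 for l in lengths if l <= max_len // 3)
    medium = sum(1 for l in lengths if max_len // 3 < l <= 2 * max_len // 3)
    long_ = len(lengths) - short - medium
    total = len(lengths)
    return tuple(round(100 * c / total, 2) for c in (short, medium, long_))

# Example: one short, one medium, two long trajectories
print(length_distribution([100, 400, 900, 950]))  # -> (25.0, 25.0, 50.0)
```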
**Truncated trajectories are low-quality.** We design a comparison experiment that evaluates the quality of trajectories identified for truncation by LEAST. We periodically (every 10k steps) stop the truncation to allow these "would-be truncated" trajectories to execute fully until their natural termination. The final cumulative rewards of these trajectories are then compared to the average rewards of trajectories under normal LEAST training. Results below show that "would-be truncated" trajectories achieve lower rewards across all training phases, confirming that LEAST effectively truncates suboptimal trajectories. This is further supported by suboptimal path truncation visualizations in MAZE ([ANONYMOUS LINK](https://anonymous.4open.science/r/lzvk-EF05/R_1.png)).
Table 1b. Effectiveness of LEAST in truncating low-quality trajs:
| Method | 25% Progress | 50% Progress | 75% Progress | 100% Progress |
| --- | --- | --- | --- | --- |
| **would-be-truncated** | $1619.26\pm1033.58$ | $3361.37\pm653.27$ | $4783.07\pm468.63$ | $4973.48\pm502.86$ |
| **Baseline** | $3975.26\pm507.91$ | $4873\pm282.35$ | $5523\pm361.24$ | $6376\pm417.78$ |
**Q2: The plasticity loss experiments would be more appropriately placed in an appendix...**
We will move this to the appendix and use the space to expand the analyses in appendix.
The plasticity loss experiments were included to investigate a secondary benefit of LEAST in mitigating plasticity loss. Premature convergence caused by low-quality data (primacy bias [1]) is correlated with plasticity loss [2]. Since LEAST improves buffer data quality by truncating suboptimal trajectories (Fig. 3), we hypothesized it could mitigate plasticity loss, as analyzed in Fig. 11 to provide preliminary evidence for future work.
[1] The primacy bias in deep reinforcement learning. ICML'22.
[2] Loss of Plasticity in Continual Deep Learning. Nature'24.
**W1: The additional mechanisms appear to function more as heuristic patches...**
These auxiliary mechanisms ensure LEAST's robustness in complex environments.
The basic Q-based module works well on simple tasks like Hopper but becomes less reliable in more complex environments, e.g., Ant. We propose two complementary modules: a dynamic buffer to improve threshold calculations by filtering outliers, and noise adjustments to maintain exploration despite early stopping.
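To make the Q-based module concrete, a rough sketch of such a stopping rule might look as follows (all names are hypothetical illustrations; the paper's actual rule additionally uses critic-gradient information and the dynamic outlier-filtering buffer discussed here):

```python
from collections import deque

class EarlyStopRule:
    """Hypothetical sketch: stop an episode when the current Q-value
    falls below a low quantile of recently observed Q-values, kept in
    a bounded buffer so the threshold adapts over training."""

    def __init__(self, buffer_size=1000, quantile=0.25):
        self.q_buffer = deque(maxlen=buffer_size)
        self.quantile = quantile

    def should_stop(self, q_value):
        self.q_buffer.append(q_value)
        if len(self.q_buffer) < 10:  # not enough history to set a threshold
            return False
        threshold = sorted(self.q_buffer)[int(self.quantile * len(self.q_buffer))]
        return q_value < threshold
```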
**W2: The paper would benefit from exploring potential tradeoffs of early stopping and exploration...**
To achieve that tradeoff, we introduce a simple yet effective noise adjustment module (which reduces the frequency of early stopping), helping to maintain exploration diversity and mitigate the risk of converging to overly short-term strategies. We hope our work can inspire research for further improvement.
**W3: Clarify what "Loss" refers to specifically in Fig 3?**
Loss refers to critic loss. We classify samples as High/Low Q based on their Q-values compared to the median Q-value in the Q buffer (and similarly for High/Low loss).
**W4: Fig 5 only presents results for Ant and HC.**
We chose these tasks as they represent different complexity levels - Ant: complex; HC: simpler - allowing us to evaluate LEAST across varying challenges. We will include full results in the revision.
**W5: Typos, fragments & box plot.**
We have addressed these in the revision. The edges of the box plots indicate the upper and lower bounds.
---
Thanks again for reviewing our work! We hope our responses address the concerns and would appreciate your consideration in the evaluation. We are happy to provide further clarification if needed.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detail response! In particular, I appreciate the additional results supported that LEAST truncated low-quality trajectories. These results fill an important gap currently in the paper, and I do urge the authors to include them in revisions (either in a camera-ready or resubmission). Assuming the plasticity experiments will be moved to the appendix (and the appendix makes it clear that these are fairly preliminary results) and the typos/fragments will be addressed, I am raising my score.
Overall, it's a cool paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer jmK5 for the thoughtful feedback and the time and effort in evaluating our work, and we deeply appreciate your recognition of the contributions of our work.
We will carefully address all your suggestions in the revised manuscript (including incorporating additional quantitative and qualitative results on truncating low-quality trajectories into the main text, relocating the plasticity experiment to the appendix, and correcting typos and fragments), and these suggestions have been invaluable in refining the quality, clarity, and overall impact of our work. | null | null | null | null | null | null |
Batch List-Decodable Linear Regression via Higher Moments | Accept (poster) | Summary: In modern machine learning, collecting large datasets from a single source is often impractical. Instead, data is typically gathered in batches from multiple sources. This paper investigates the $\textbf{batch list-decodable linear regression}$ problem: given pairs $(X, y) \in \mathbb{R}^{d+1}$ drawn from the distribution $D_{\beta^*}$ such that $y = {\beta^*}^\top X + \xi$, where $\xi \sim \mathcal{N}(0, \sigma^2)$, we are provided with $m$ batches, each containing $n$ samples. With probability $\alpha$, a batch consists entirely of i.i.d. samples from $D_{\beta^*}$, while with probability $1-\alpha$, the batch is drawn from an arbitrary distribution. The goal is to compute a list $L$ of vectors $\widehat{\beta}$ such that at least one $\widehat{\beta}$ satisfies that $\\|\widehat{\beta} - \beta^*\\|_2$ is small.
Compared to the prior work [DJKS'23], which reduced the batch size dependency from exponential in $1/\alpha$ to linear, this paper further improves the batch size and achieves lower estimation error by leveraging the higher-order moment assumptions through the Sum-of-Squares (SoS) framework.
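For concreteness, the generative model described above can be simulated in a few lines of Python (a hypothetical illustration, not code from the paper): each batch is clean with probability $\alpha$ and otherwise filled with arbitrary responses, here drawn at random to stand in for the adversary.

```python
import random

random.seed(0)
d, m, n = 5, 200, 50            # dimension, number of batches, batch size
alpha, sigma = 0.2, 0.1         # fraction of clean batches, noise std
beta_star = [random.gauss(0, 1) for _ in range(d)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

batches = []
for _ in range(m):
    X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    if random.random() < alpha:   # clean batch: i.i.d. samples from D_{beta*}
        y = [dot(x, beta_star) + random.gauss(0, sigma) for x in X]
    else:                         # corrupted batch: arbitrary (here, random) labels
        y = [random.gauss(0, 1) for _ in range(n)]
    batches.append((X, y))
```

The goal of the estimator is then to output a list of $O(1/\alpha)$ candidate vectors, one of which is close to `beta_star`.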
Claims And Evidence: Yes, the claims in this paper are clearly stated and rigorously supported by detailed proofs.
Methods And Evaluation Criteria: The main result of this paper is purely theoretical and is thoroughly validated through rigorous theoretical analysis.
Theoretical Claims: The proofs appear correct, though I did not read all of them carefully.
Experimental Designs Or Analyses: No experiments in this paper.
Supplementary Material: Yes, I reviewed all the parts of the supplementary material.
Relation To Broader Scientific Literature: (1) The use of higher moments and SoS-based techniques could be applied to other robust regression settings, such as sparse linear regression or generalized linear models, where outliers or heavy-tailed noise are common.
(2) The batch setting considered in this paper shares similarities with mixed linear regression, where each batch could correspond to a different component of the mixture. Extending the SoS-based approach to estimate multiple regression components simultaneously would be an interesting direction.
(3) The SoS framework might be leveraged to design robust algorithms for list-decodable PCA under the batch setting, especially in the presence of corruptions or outliers.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: $\textbf{Strengths:}$ (1) The proposed algorithm improves the performance of previous works by requiring smaller batch sizes and achieving better error guarantees, and provides a flexible tradeoff between batch size and computational complexity.
(2) This paper leverages higher-order moment information under the SoS framework to improve the robustness and accuracy of the estimation.
(3) This paper provides the first SoS proof of the Marcinkiewicz-Zygmund Inequality, which could be useful in other robust estimation tasks.
$\textbf{Weaknesses:}$ (1) The proposed method relies on SoS-certifiably bounded moments, which might limit its applicability to more general distributions.
(2) The algorithm is polynomial in the dimension and batch size but can become quasi-polynomial for very small batch sizes.
Other Comments Or Suggestions: It would clarify the generality of the proposed method if this paper could discuss the tightness of the SoS assumptions.
Questions For Authors: This paper is closely related to the work of Das et al. [DKS'23]. Could you compare the running time of the proposed algorithm with that of [DKS'23]?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and positive assessment of our work. We reply to the points mentioned individually below:
(**Applicability of the SoS-certifiability Assumption**) SoS certifiability of moments is by now a well-studied condition that is known to hold for broad families of interest such as sub-Gaussian distributions and all distributions satisfying the Poincare inequality. Please see our response to Reviewer ez6k on this.
(**Comparison of runtime with Das et al. 2023**) As discussed in lines 91-107, the runtime of Das et al. 2023 is $\mathrm{poly}(d,n,m,1/\alpha)$. Their theorem statement does not specify what is the absolute constant in the exponent of that runtime. In comparison, our algorithm is also polynomial time but the parameter $k$ appears in the exponent (order of the higher moment information used) instead of just an absolute constant.
(**Quasi-polynomial runtime for very small batch sizes**) It is indeed true that the algorithm is no longer polynomial-time when the batch size becomes very small (e.g., when $k$ is on the order of $\log^2(1/\alpha)$). As discussed after the main theorem statement (lines 115–131), there is evidence (based on previously known Statistical Query lower bounds from Diakonikolas et al. 2021) that this phenomenon is inherent. We view this as an interesting conceptual conclusion: The tradeoff between complexity and batch size appears to be smooth. Specifically, for $n = 1/\alpha$, the complexity of our algorithm matches the previously known algorithm of Das et al., 2023, while for $n = \log^2(1/\alpha)$, it matches known SQ hardness results. In the intermediate regime, our results suggest a continuous interpolation rather than a sharp phase transition at a specific point. | Summary: First, in the interest of full disclosure, this review is very lightly modified from a review I wrote for this paper for NeurIPS 2024.
This paper studies a problem in algorithmic robust statistics: batch list-decodable linear regression. The setting is as follows. We get $m$ batches of $n$ samples $(X_i,Y_i)$ each, from a linear model $Y_i = X_i^\top \beta + \epsilon_i$. An $\alpha > 0$-fraction of the batches are "good", ie distributed like above. The remaining $(1-\alpha)m$ batches are actually chosen by a malicious adversary. The goal is to output a list of possible $\beta$s, actually $O(1/\alpha)$ of them (which is the best you can hope for), one of which is close to the ground truth.
The motivation for this problem comes from:
- list-decodable regression *without* the batch assumption -- ie just an $\alpha$ fraction of samples are "good" -- seems to require exponential in $1/\alpha$ time, per SQ lower bounds
- in the real world, datasets are often collected in batches, but the batches may be small.
The paper's main contribution is a new family of polynomial-time algorithms with stronger quantitative guarantees than prior work for this problem. The key improvement is to the size of batches needed. Prior works obtained provable guarantees in polynomial time only when the batch size $n \gg 1/\alpha$. By contrast, the present work tolerates really small batches, of arbitrarily small polynomial size -- $n \gg \alpha^{-\delta}$ for any small constant $\delta > 0$. The tradeoff is that the resulting algorithm requires:
- strong assumptions on $X$: the random variable $X$ for the "good" samples must be "SoS-certifiably bounded" for moments up to $O(1/\delta)$. This assumption is satisfied by strongly log-concave distributions and by subgaussian distributions; it is largely unknown what other distributions may satisfy it. (The paper should probably mention the fact that subgaussian distributions satisfy their assumption.)
- a lot of samples and time: $(nd)^{O(1/\delta)}$ batches and time are needed.
The algorithm and its analysis borrow a lot of technical ingredients from the now-enormous algorithmic robust statistics literature. It is a little unclear to me to what extent there is a fundamental new algorithmic/analysis idea here, although certainly combining several existing techniques from algorithmic robust stats is on its own a nontrivial matter. The authors suggest that their combination of an iterative list-pruning technique with SoS is novel; I am not sufficiently expert in the specifics of list-decodable estimation to assess the novelty independently.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: no
Experimental Designs Or Analyses: n/a
Supplementary Material: no
Relation To Broader Scientific Literature: thoroughly discussed in the introduction to the paper; I will not repeat the discussion here.
Essential References Not Discussed: none
Other Strengths And Weaknesses: ## Strengths
Pushes the frontier of polynomial-time guarantees for a well-motivated problem in robust statistics. Reasonably well written.
## Weaknesses
While batch list-decodable linear regression itself seems like a reasonably well-motivated problem, I see only moderate motivation for the goal of obtaining algorithms tolerating smaller batch size if the resulting algorithms have impractical (albeit still polynomial) running time. Given that algorithmic robust statistics has been under intense investigation for almost a decade now, I think a strong new paper should ideally either:
1. introduce a fundamental new algorithmic or lower-bound technique, overcome a serious technical barrier, etc., and/or
2. make headway in pushing the many new theoretical ideas from the last 10 years towards the realm of practical algorithms/impact beyond theory.
I think this paper is making some contribution on both types of goals, but not groundbreaking contribution on either. For (1), the paper is indeed pushing the frontier of polynomial-time algorithms, but it is hard to tell if there's a really new technique here, versus clever combination of existing ideas. For (2), although pushing for smaller batch sizes is a well-motivated practical goal, the paper is hamstrung by its reliance on higher-moment techniques and SoS which give very large polynomial running time.
Still, I think the problem is appealing enough and the results will be appreciated by the ICML audience.
Other Comments Or Suggestions: none
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the positive evaluation of our paper. We respond to the points raised individually below:
(**Assumptions on $X$**) The SoS certifiability of moments is a well-established condition, known to hold for various important distribution families, including sub-Gaussian distributions and those satisfying the Poincaré inequality. Please see our response to Reviewer ez6k for more details.
(**Sample and Time Complexity**) There is a tradeoff between the batch size and the complexity of the algorithm: Before this work, we only knew that for batch size $n=1$ the problem exhibits exponential complexity hardness results and for $n=1/\alpha$ it becomes feasible in polynomial time, but we did not know anything in between: Is the complexity interpolating in a smooth way or is there some phase transition phenomenon? One of the main goals of this paper is to quantify this tradeoff and show that it is indeed smooth, which we view as an interesting theoretical insight. As explained in lines 114-131 (first column), although this tradeoff means that for extremely small $\delta$ the algorithm becomes inefficient, this matches the previously known SQ lower bounds; thus, in this sense, this behavior is unavoidable. Moreover, there is a wide range of regimes where the algorithm is efficient. For example, setting $\delta$ to a constant like $1/4$ already gives a polynomial-time algorithm that works under distributions with only bounded fourth moments and has a better error guarantee than prior work (any algorithm before our work would either have larger error or run in exponential time).
(**Technical contributions**) One of our main technical contributions is a novel pruning procedure to reduce the candidate list size to $O(1/\alpha)$. This procedure is optimal and uses techniques that differ significantly from prior work. The procedure is critical for the success of the iterative algorithm, as the number of candidate lists would grow exponentially with the number of iterations without pruning. Moreover, although robust mean estimation using SoS is by now a standard tool, our work requires a novel iterative framework where each iteration improves the accuracy of the estimators. In order to apply the result from Kothari & Steinhardt (2017), we need to show that the fact that the covariate has certifiably bounded moments implies that the regressor estimator we use also has SoS-certifiably bounded moments, which is technically non-trivial. Finally, in order to apply SoS robust mean estimators to the variable $yX$, we need to ensure certifiability of moments, for which purpose we prove an SoS version of the famous Marcinkiewicz–Zygmund inequality, which might be useful independently of the results of this paper.
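For reference, the classical (scalar, non-SoS) Marcinkiewicz–Zygmund inequality mentioned here states that for independent mean-zero random variables $X_1,\dots,X_n$ and any $p \ge 1$, there exist constants $A_p, B_p$ depending only on $p$ such that

```latex
A_p \, \mathbb{E}\left[\Big(\sum_{i=1}^n X_i^2\Big)^{p/2}\right]
\;\le\; \mathbb{E}\left[\Big|\sum_{i=1}^n X_i\Big|^{p}\right]
\;\le\; B_p \, \mathbb{E}\left[\Big(\sum_{i=1}^n X_i^2\Big)^{p/2}\right].
```

The paper's contribution, as described above, is to certify (a version of) such a moment bound within the SoS proof system.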
(**Motivation of Studying Batch-size in Robust Statistics Tasks**) Once again, we would like to emphasize the importance of studying the role of batch size in robust statistics. While the common assumption in robust statistics is that an arbitrary subset of data points (with a certain size bound) is corrupted, in the crowdsourcing setting, adversarial corruption naturally happens at a user level, leading to corrupted batches of data instead of corrupted data points. As shown in Diakonikolas et al. (2021) and Das et al. (2023), such batch structure in corruption may lead to drastic changes in the computational landscape of the statistical estimation task, and we believe it is a natural as well as important research direction to explore the quantitative relationship between batch size and other algorithmic resources needed in robust estimation tasks. Our work is the first revealing a smooth quantitative tradeoff between batch sizes and the computational/statistical resources needed for the concrete application of list-decodable linear regression.
On the practical side, we believe that optimizing the batch size is crucial to any crowdsourcing setting. In particular, if only 1% of users were reliable, prior work would roughly require one of the following: either (i) each user provides one data point, but the learning algorithm has exponential complexity (Karmalkar et al. (2019)), or (ii) each user has to contribute at least $100$ data points (Das et al. (2023)), which may well be unrealistic. On the other hand, we give an efficient algorithm that succeeds with far fewer data points per user; prior to our work, it was not clear whether this is even possible in polynomial time. In particular, even using only moment information up to order $k=4$, our algorithm already improves upon the prior work with substantially smaller batch sizes and achieves better error. We thus view our work as an important first step towards building practical algorithms for smaller batch sizes. | Summary: This paper proposes an efficient polynomial-time algorithm for batch list-decodable linear regression, using higher-order moment information within a Sum-of-Squares (SoS) framework. Compared to previous methods, this approach notably reduces both the required minimum batch size and the final regression error. These improvements rely on assuming that the covariates have higher-order moments that are certifiably bounded within the SoS framework. The authors' main innovation is an iterative list-pruning algorithm that leverages these higher moments, enabling stronger performance guarantees than existing techniques.
Claims And Evidence: The claims are generally well-supported by rigorous theoretical arguments.
Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria are sound and particularly suitable.
Theoretical Claims: The key theoretical contributions—particularly Lemma 3.3 (SoS Marcinkiewicz–Zygmund inequality)—are clearly and rigorously presented. Additionally, I checked Theorem 1.3 and Proposition 3.1, whose proof techniques are based on reference [1]. The provided proofs appear correct, though very technical.
[1]. Das, A., Jain, A., Kong, W., and Sen, R. Efficient list decodable regression using batches. In International Conference on Machine Learning, pp. 7025–7065. PMLR, 2023.
Experimental Designs Or Analyses: This paper focuses exclusively on theoretical contributions without empirical validation.
Supplementary Material: The submission did not include supplementary material.
Relation To Broader Scientific Literature: The paper places itself clearly within the literature, comparing explicitly and thoroughly with prior works (Das et al. 2023, Diakonikolas et al., 2019, Diakonikolas & Kane, 2023, Kothari & Steinhardt, 2017). It effectively leverages and builds upon known techniques from robust and list-decodable regression, robust mean estimation, and the SoS methodology. It makes a clear advancement in terms of algorithmic complexity relative to Das et al. (2023).
Essential References Not Discussed: This paper studies batch list-decodable linear regression via higher moments; the (non-batch) list-decodable linear regression problem was introduced in [2].
[2] Sushrut Karmalkar, Adam R. Klivans, Pravesh Kothari: List-decodable Linear Regression. NeurIPS 2019: 7423-7432
Other Strengths And Weaknesses: Strengths:
1. Strong theoretical contribution: notably improves upon previous algorithms by leveraging higher moments.
2. Introduction of novel theoretical techniques (SoS Marcinkiewicz-Zygmund inequality) valuable to the broader theoretical ML community.
Weaknesses:
1. The assumptions required for significant improvements (e.g., hypercontractivity, bounded SoS-certifiable moments) might limit broader applicability.
2. No experimental validation, even preliminary, to provide intuition about practical relevance.
Other Comments Or Suggestions: It would be better if the authors added numerical experiments to validate their theory.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and positive assessment of our work. We respond to their points below:
(**Distributional assumptions**) The condition of having SoS certifiably bounded moments is standard (in the algorithmic robust statistics literature) for leveraging higher-order moment information in the algorithm. This condition holds for all distributions satisfying the Poincare inequality (Kothari & Steinhardt (2017)), which results in a large family of distributions: it includes Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. There are also other non-Poincare distributions with SoS certifiably bounded moments, e.g., distributions over discrete points sampled from Gaussians. Moreover, via recent results (Diakonikolas et al. (2024) cited at the end of this rebuttal), it is known that the class of all sub-Gaussian distributions also satisfies the SoS certifiably bounded moments condition. Finally, regarding practicality, if having bounded $k$-th moments for very large $k$ is viewed as unrealistic in practice, we point out that even using only moment information up to order $k=4$, our algorithm already improves upon the prior work with substantially smaller batch sizes and achieves better error. We thus view our work as an important step towards building practical algorithms for smaller batch sizes.
(**Experimental evaluation**) While we acknowledge the reviewer's interest in experiments and the importance of developing practical algorithms, we would like to emphasize that our primary contribution is to characterize the computational-statistical landscape for this fundamental learning task—in terms of error guarantee, batch size complexity, and batch number, and the tradeoff between them. That being said, there are practical regimes (like $k=4$) where we already get improved error compared to Das et al. (2023) in poly-time, whereas any previously known alternative achieving the same error would require exponential runtime.
References:
Diakonikolas, Ilias, Samuel B. Hopkins, Ankit Pensia, and Stefan Tiegel. "SoS certifiability of subgaussian distributions and its algorithmic applications." arXiv preprint arXiv:2410.21194 (2024). | Summary: This paper studies the list-decodable linear regression, under the setting that the algorithm can collect batches of samples. This paper can be seen as a follow-up of Das et al. 2023, which studies the same problem under same batch setting, however uses a batch size $n \geq \tilde{\Omega}(\alpha^{-1})$, number of batches $m=\text{poly}(d,n,\alpha^{-1})$, and outputs a list of $O(\alpha^{-2})$ vectors at least one of which is $\tilde{O}(\alpha^{-1/2}/\sqrt{n})$ close to the target regressor. This paper, proposing a new algorithm that uses degree-$1/\delta$ moment information, enjoys a batch size $n\geq\Omega_{\delta}(\alpha^{-\delta})$ with $m = \text{poly}((dn)^{1/\delta},\alpha^{-1})$ and outputs a list of $O(\alpha^{-1})$ vectors at least one of which is $O(\alpha^{-\delta/2}/\sqrt{n})$ close to the target. The algorithm implements a refining idea for regressor using mean estimation from Diakonikolas et al. (2019) and apply a SoS list-decodable mean estimation approach to handle the linear regression in list-decodable setting.
Claims And Evidence: Given that the prior work on batch list-decodable linear regression did not use higher-moment information, the improvement makes sense.
However, the algorithm is somewhat unclear, perhaps due to the complexity of this problem and of SoS-based algorithms. As stated in the paper, the regressor is estimated using the mean of $yX$, and list-decodable learning is made possible via SoS-based algorithms for mean estimation. The paper should therefore clearly specify which mean estimation framework the algorithm is based on, as the literature on list-decodable mean estimation is well developed. On the other hand, it is not true that there exists no multi-filter framework using higher-moment information. Actually, the very first multi-filter framework, proposed by Diakonikolas et al. 2018, already considered higher moments. Hence, I don't see what the challenging part is here.
Methods And Evaluation Criteria: The method makes sense, by applying robust linear regression technique together with a list-decodable mean estimator.
Theoretical Claims: The theoretical claims make sense. It is hard for me to check all of the proofs.
Experimental Designs Or Analyses: No experiments required.
Supplementary Material: I read part of the supplementary material to check if the algorithm for Proposition 3.1 is there. But I didn't find it.
Relation To Broader Scientific Literature: This paper aims to improve upon previous work on batch list-decodable linear regression. The findings on establishing a SoS proof for the Marcinkiewicz Zygmund Inequality, might be of broader interest.
Essential References Not Discussed: It seems all are cited.
Other Strengths And Weaknesses: There is a key weakness in this paper. The key contribution is the improvement in batch size; however, this comes at the cost of an increased number of batches, which is not stated in the paper. Nor does the paper discuss the total sample complexity and how it is affected by different choices of the parameter $k$ in its algorithm. To check this closely: although the batch size is reduced to $n=\Omega_{\delta}(\alpha^{-\delta})$, the number of batches is $m=\text{poly}((nd)^{1/\delta},\alpha^{-1})$. Hence, to avoid a higher number of batches, if $\delta=1$, the result is essentially the same as Das et al. 2023. Since there are many hidden terms, it is hard to verify whether this algorithm is truly efficient.
Other Comments Or Suggestions: The tradeoff between batch size and number of batches must be clearly specified.
Questions For Authors: See above.
Ethical Review Concerns: No concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reading our paper. We respond to the points raised individually below:
(**Tradeoff between batch size and algorithm complexity is not discussed**) There indeed is a tradeoff between batch size and algorithm complexity. However, this is not a hidden aspect of the result. Rather, it is explicitly stated, and it is part of the conceptual take-away point of the paper. We reiterate the main points below for completeness:
The tradeoff is clearly presented in Theorem 1.3, where all parameters (batch size, number of batches, and runtime) are formally stated. Specifically, the batch size $n = \Omega_{\delta}(\alpha^{-\delta})$ and the number of batches $m = \text{poly}((nd)^{1/\delta}, \alpha^{-1})$, referenced by the reviewer, are explicitly given there.
The tradeoff is discussed in detail directly after the theorem (lines 93-131). Specifically the discussion paragraph states “Our result essentially shows that there is a smooth tradeoff between the batch size provided and the computational resources required”. The subsequent text points out that if the parameter $\delta$ is superpolynomial then the algorithm is no longer efficient and that there is evidence in the form of known Statistical Query lower bounds, (discussed further in Appendix F) suggesting that this tradeoff is unavoidable.
Quantifying this tradeoff is part of the main conceptual contribution of the work: On the one hand, SQ lower bounds from Diakonikolas et al., 2021 suggest that the problem is hard for batch-size equal to one; while Das et al. (2023) gives an efficient algorithm when the batch-size is $\approx 1/\alpha$ (see lines 69-81). It is natural to ask what happens in between (lines 91-97). Is there a sharp phase transition (in terms of the existence of a poly-time algorithm) when the batch-size goes from constant to linear in $1/\alpha$ or does the tradeoff happen smoothly? Our result suggests the latter: when the batch-size is $1/\alpha^c$, for $c>0$, there exists an algorithm using $O(1/c)$-degree SoS to achieve a small error for the task (lines 112-131).
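To make the quantitative side of this tradeoff concrete, the parameter scaling of Theorem 1.3, as quoted in the review summary above, can be written out as:

$$
n \;\ge\; \Omega_{\delta}\!\left(\alpha^{-\delta}\right), \qquad
m \;=\; \mathrm{poly}\!\left((nd)^{1/\delta},\, \alpha^{-1}\right), \qquad
\text{error} \;=\; O\!\left(\alpha^{-\delta/2}/\sqrt{n}\right),
$$

so for any constant $\delta > 0$ the algorithm runs in polynomial time with a polynomially smaller batch size than the $n \ge \tilde{\Omega}(\alpha^{-1})$ of Das et al. (2023), while letting $\delta \to 0$ shrinks the batch size further at a super-polynomial cost in the number of batches and runtime.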
If the reviewer’s concern stems from our use of the term “efficient”, such as in lines 104–108 (“Is there a computationally efficient algorithm […] with improved batch size and/or error guarantees?”), we clarify that “efficient” there refers to poly-time algorithms in general, which is the standard convention in the learning theory literature. This does not imply the absence of trade-offs within the space of poly-time algorithms. If this was the source of confusion, we will revise the phrasing to ensure this distinction is explicit.
(**Clarifications on the algorithmic approach**) We respond to the specific questions below:
* (*Which mean estimation framework is being used?*) As stated in line 166 of our paper, either Theorem 5.5 from Kothari & Steinhardt (2017), or Theorem 6.17 from Diakonikolas & Kane (2023) suffices for our purpose of list-decodable mean estimation. In general, any list-decodable mean estimation algorithm satisfying the guarantee in Lemma 3.5 suffices for our purpose, and the exact implementation is not important for our algorithm.
* (*“It is not true that there exists no multi-filter framework using high-moment information. Actually, the very beginning of the multi-filter framework proposed by Diakonikolas et al. 2018 already considered higher moments”*). The proof strategy in Diakonikolas et al. 2018 requires exact Gaussianity of the data (also listed in their theorem statement). The subsequent work of Diakonikolas et al. 2020 was able to adapt the proof strategy to the case of just bounded covariance data, but it did not extend to higher-moment boundedness. We refer to Chapter 6 of the book Diakonikolas & Kane (2023), where the difficulties of using higher-moment information are described. To the best of our knowledge, SoS-based algorithms are the only ones that work with higher moments beyond Gaussian distributions. Thus, we apply an SoS list-decodable mean estimation to the regressor estimator $\frac{1}{|B|}\sum_{(X,y) \sim B} Xy$. Even if $X$ were Gaussian, this estimator would not be, making Diakonikolas et al. (2018) inapplicable. To address this, we show SoS certifiability of the moment bounds by deriving a novel SoS proof of the Marcinkiewicz-Zygmund inequality that may be of independent interest. Finally, we highlight that the biggest difficulty in our paper is not finding the right algorithm to apply as a black-box but the subsequent issue that applying this algorithm iteratively may result in a number of hypotheses that blows up (see the paragraphs after line 194). Making that iterative framework work is where our novel pruning procedure is used. This procedure is optimal, and uses techniques that differ significantly from prior work of Diakonikolas et al. 2020.
I. Diakonikolas, D. M. Kane, and A. Stewart. List-decodable robust mean estimation and learning mixtures of spherical gaussians. STOC 2018.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response!
The tradeoff:
Indeed, my concern is that when $\delta$ is no longer a constant, the claim of ``computationally-efficient'' no longer holds, as there is a $1/\delta$ factor in the exponent of the number of batches. I think this must be clarified when claiming computational efficiency. The main question addressed in Lines 105-107 is the existence of a computationally-efficient algorithm with a 'significantly improved' batch size; it is unclear whether this 'significant improvement' is qualitative or quantitative. To claim efficiency, $\delta$ must be an absolute constant, meaning that the batch size is still $\text{poly}(1/\alpha)$.
On the other hand, I do appreciate the contribution of this paper in studying the regime $1\leq n \leq d$. In my understanding, the best batch size in this paper is $O(\log^2(1/\alpha))$, achieved when $k=\log(1/\alpha)$ (rather than $k$ being $\log^2(1/\alpha)$, Line 119?), and this is not yet a constant. Hence, the lower bound $n=1$ of the regime is not reached. I think it is very important to state this up front when claiming the contribution of the paper, to avoid any misunderstanding.
Algorithmic approach:
The clarification helps a lot. It will be helpful to include it in the revision.
---
Reply to Comment 1.1.1:
Comment: Thanks for the detailed response. We will clarify the two concerns raised by the reviewer below.
**Batch Size Improvement within Polynomial Sample Complexity and Runtime**
Regarding the phrasing used in the question in Lines 105–107 (“Is there a computationally efficient algorithm for list-decodable linear regression in the batch setting with significantly improved batch size and/or error guarantees?”), the answer is yes—there is indeed an algorithm that offers both quantitative and qualitative improvements in batch size. Specifically, let $\mathcal{A}$ be the algorithm from Theorem 1.3 with $\delta = 1/12$ hardcoded. Then $\mathcal{A}$ runs in polynomial time and requires a batch size of $O(\sqrt{1/\alpha})$, which is a quadratic improvement over the $O(1/\alpha)$ batch size required by previous work. By instantiating the algorithm with a different constant value of $\delta$, we can achieve any desired polynomial improvement in batch size while still maintaining polynomial running time.
For sub-constant values of $\delta$, the algorithm runs in quasi-polynomial time and achieves even smaller batch size guarantees. In our view, this is orthogonal to the main question in Lines 105–107, which asks whether any algorithm exists that is both efficient and achieves better batch size guarantees than prior work. As such, it does not contradict the answer provided above. Nonetheless, since this point caused confusion for the reviewer, we greatly appreciate their feedback and will clarify it in the final version.
**Best-possible Batch Size**
The reviewer is right that the smallest batch size for our algorithm to work is $\Theta( \log^2(1/\alpha) )$, which corresponds to the parameter $k = \log(1/\alpha)$ (thanks for pointing out the typo!).
Even though this might suggest that the algorithm cannot work with constant batch size, and that the bound $n = \Theta ( \log^2(1/\alpha) )$ does not immediately match the existing SQ lower bounds from the literature, the claimed hardness is still argued to hold using a reduction from the batch to the non-batch setting that we mention in line 131 and describe in Appendix F. In particular, the lower bound together with our reduction gives strong evidence that the runtime of any algorithm must **necessarily** be exponential in $1/\alpha$ if the batch size is only $o(\log(1/\alpha))$. This includes the case where the batch size is any constant and the number of batches drawn is polynomial. In summary, an important theoretical contribution of our paper is that we essentially characterize the transition point regarding the computational-statistical efficiency of this problem in terms of the batch size parameter, up to a single logarithmic factor.
We will clarify that our algorithm does not apply for the entire regime of $1 \leq n \leq d$ in the revision. | null | null | null | null | null | null |
Gravity-Bench-v1: A Benchmark on Gravitational Physics Discovery for Agents | Accept (poster) | Summary: This paper introduces Gravity-Bench-v1, a novel benchmark for evaluating LLM agents on physics discovery tasks. The authors design an environment that simulates gravitational dynamics with high precision, where agents must strategically plan observations, analyze data, and reason to solve various tasks. The benchmark includes standard Newtonian physics as well as out-of-distribution scenarios (e.g., modified gravity laws) to test generalization capabilities beyond memorized knowledge in LLM agents. The work provides a rigorous framework for assessing scientific discovery capabilities.
Claims And Evidence: - The paper's key research motivation, that current benchmarks inadequately evaluate scientific discovery capabilities in LLM agents, is reasonable. I suggest the authors also provide compelling evidence of the limitations in existing benchmarks, demonstrating the memorization-vs-discovery gap in previous works highlighted in Sec. 2.
- The claim that current agents struggle with observation planning in a budget-constrained setting is convincingly demonstrated through the significant performance gap between full-observation and budget-constrained scenarios (Table 1).
- I think the claim regarding PhD-level solutions as "human reference" requires more clarification. The paper lacks detailed information about how these solutions were developed, how many human experts were involved, and what their backgrounds were. This affects the validity of using these solutions as a reference point for evaluation.
Methods And Evaluation Criteria: The metric of setting task-specific error thresholds based on the performance gap between full-observation and partial-observation PhD-level solutions is reasonable and well-justified.
However, the paper should provide more details about how agents are made aware of these budget constraints and what prompting strategies were used to encourage strategic observation planning. Also, it's worth noting the benchmark is still limited to two-body problems.
Theoretical Claims: There are no formal theoretical proofs in the paper to verify.
Experimental Designs Or Analyses: The experimental design is generally sound. However, some aspects of the experimental analysis can be improved:
- The paper doesn't clearly explain why agents consistently use fewer observations than their budget allows (Figure 3). Is this a function of the prompting or an inherent limitation of the agents? How did the authors explore this?
- For error analysis, it would be valuable to also see how errors and number of observations selected evolve over successive steps/iterations in agents.
- The findings about GPT-4o mini solving OOD tasks that GPT-4o fails on are intriguing but presented anecdotally rather than systematically analyzed. A more comprehensive analysis focused on these problems could be helpful to better understand the memorization vs discovery capabilities in different models.
Supplementary Material: I reviewed appendix and some parts of code in the supplementary material.
Relation To Broader Scientific Literature: The paper effectively positions itself within the landscape of benchmarks for scientific tasks and data-driven discovery.
The authors appropriately distinguish their approach and difference from existing work by highlighting the dynamic, partially observable environment and the inclusion of OOD cases to test generalization rather than memorization.
Essential References Not Discussed: The issue of LLM memorization and the limitations of current benchmarks for general data-driven reasoning, as well as for scientific discovery tasks, is also discussed in other works such as [1-3]. I would suggest the authors also consider citing these works in their related work section.
[1] Shojaee et al., LLM-SR: Scientific Equation Discovery via Programming with Large Language Models, 2024
[2] Cai et al., SciAssess: Benchmarking LLM Proficiency in Scientific Literature Analysis, 2024
[3] Xie et al., On Memorization of Large Language Models in Logical Reasoning, 2024
Other Strengths And Weaknesses: **Strengths:**
- The benchmark addresses a gap in evaluating scientific discovery capabilities beyond memorization.
- The use of high-fidelity physics simulations provides a strong environment for testing science-focused agents.
- The inclusion of OOD cases with modified laws and simulators is particularly valuable for assessing generalization in science.
**Weaknesses:**
- The scope is currently limited to two-body gravitational problems in physics, which represents a narrow slice of scientific discovery scenarios.
- Insufficient details regarding the human reference evaluation settings, including the number of human experts involved and their evaluation protocol.
- Lack of details regarding the prompts used for different experimental cases, particularly how agents make decisions about budget allocation.
- The temporal dimension of budget utilization and error rate progression is missing from the analysis (improvement over steps, iterations in ReAct style agentic setting). Does increasing the number of iterations change the underutilization pattern observed?
- There's limited analysis of how different models approach OOD scenarios compared to other problems in current reported results.
Other Comments Or Suggestions: Minor Comment: I suggest adding annotation (a), (b), etc. for subplots in figures (e.g., in Figure 2 or 3) to ease referencing.
Questions For Authors: - In Table 1, why do the same models show lower computation time on full observations compared to partial observations?
- Regarding the OOD results where GPT-4o-mini solved a modified gravity task while GPT-4o did not: Have you verified this finding across multiple runs to ensure it's not anomalous? A more comprehensive analysis could provide insights about the relation between model size and discovery.
- In Section 4.6, you note that agents tend to take shortcuts rather than pursue systematic derivations. Have you analyzed whether the occurrence of these shortcuts differs between OOD cases and standard problems? Are shortcuts more or less common in novel problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **(Response to 1\)** See our response to “points 2,4,5” of Reviewer f7pg.
**(Response to 2\)** See our response to “point 1” of Reviewer f7pg.
**(Response to 3)** Regarding the prompting process and the manner in which the agents were informed of the budget, we adopted a minimalistic approach: the prompt merely described the observation tool, specified the budget constraints (a maximum of 100 observations, with up to 10 per request), and deliberately avoided suggesting any specific strategy. In the absence of explicit strategic guidance, it becomes increasingly evident that these AI models demonstrate an inherent tendency to prematurely converge on solutions while failing to make full use of the allocated observational budget. This behavior shows that the models often fail to approach the problem with the level of care expected in scientific problem-solving.
This approach was necessary for gathering baseline results, from which we can investigate if this behavior can be mitigated via prompting or other means. The full prompt is below and will be included in the Appendix of our camera-ready version.
**(Response to 4)** We thank the reviewer for raising the point about iteratively re-calling the agent and how that would improve budget use and performance. In our current setup, agents decide when to stop and almost always do so before exhausting their observation budget. We intentionally did not force agents to use their full budget, as our goal was to assess whether they can autonomously reason about when additional data is required. We acknowledge the value of experimenting with forcing continued exploration, i.e., increasing the number of iterations.
**(Response to 5)** We thank the reviewer for pointing this out; we agree that a deeper analysis of how different models approach OOD scenarios is important. Due to the large size of the outputs (over 1.2 million tokens per run), we were unable to perform detailed manual inspection across all cases. However, we are currently developing automated tools to analyze these responses systematically.
---
Prompt (Response to 3):
You are tasked with solving the following physics problem related to a binary star system. You are provided observations of each star's position over time, (t,x,y,z), in units of seconds and meters.
### Problem Description
Determine the total energy (K + U) for the system in joules.
You must provide your answer in units of J.
### Additional Instructions
To complete this task, you have access to the following tools and data:
1. An observational tool called `Observe` that allows you to observe the system at specific times of your choosing.
2. A code interpreter that can execute Python code.
When using `Observe`:
1. The `times_requested` parameter should be a list that can contain any values in the time window [0.0, 7.21e+09] seconds. You cannot request negative times. The upper limit for the time window was chosen to guarantee that the problem is solvable with an appropriate sampling of observations using the total observational budget.
2. You can observe the system at any time within the time window, even if it is in the past compared to the last observation.
3. You can observe the system up to a total of 100 times, and you can observe up to 10 times per observational request, which is the maximum length of the `times_requested` list.
4. After each observation, the dataframe `row_wise_results.df` will be updated. It contains columns: time, star1_x, star1_y, star1_z, star2_x, star2_y, star2_z. You can access it using the code interpreter tool. For example, to access the first five rows, print(row_wise_results.df.head(n=5))
---
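For intuition about the constraints this prompt imposes, here is a minimal, hypothetical Python sketch of a budget-limited `Observe`-style environment (100 total observations, at most 10 per request, times restricted to the window). `ObserveTool` and `toy_ephemeris` are illustrative names and a toy orbit, not the benchmark's actual implementation.

```python
import math

class ObserveTool:
    """Hypothetical sketch of an observation tool with a hard budget."""

    def __init__(self, ephemeris, t_max, total_budget=100, per_request=10):
        self.ephemeris = ephemeris      # callable t -> (x1, y1, z1, x2, y2, z2)
        self.t_max = t_max
        self.remaining = total_budget   # observations left in the budget
        self.per_request = per_request
        self.rows = []                  # accumulated (t, x1, y1, z1, x2, y2, z2) rows

    def observe(self, times_requested):
        # Enforce the per-request cap, the total budget, and the time window.
        if len(times_requested) > self.per_request:
            raise ValueError("at most %d times per request" % self.per_request)
        if len(times_requested) > self.remaining:
            raise ValueError("observation budget exhausted")
        if any(t < 0.0 or t > self.t_max for t in times_requested):
            raise ValueError("requested time outside the observation window")
        for t in times_requested:
            self.rows.append((t, *self.ephemeris(t)))
        self.remaining -= len(times_requested)
        return self.rows[-len(times_requested):]

def toy_ephemeris(t):
    # Purely illustrative circular orbit; not the benchmark's physics.
    return (math.cos(t), math.sin(t), 0.0, -math.cos(t), -math.sin(t), 0.0)

tool = ObserveTool(toy_ephemeris, t_max=7.21e9)
tool.observe([0.0, 1.0, 2.0])   # three observations leave 97 in the budget
```

An agent interacting with such a tool has to decide both where in the time window to sample and when it has gathered enough data to stop, which is exactly the planning behavior the benchmark probes.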
Answers to direct questions:
Q1) Partial observations (budget-obs-100) require multiple sequential tool calls, as agents iteratively request and analyze observations, compared to full-obs where all observations are available immediately in a single step.
Q2) We tested each model three times. However, we acknowledge additional runs would be valuable.
Q3) We have not yet specifically analyzed whether the frequency of shortcuts differs between OOD cases and standard problems. | Summary: This paper proposes a new benchmark, “Gravity-Bench-v1”, for evaluating the scientific discovery capabilities of AI agents. This benchmark is based on gravitational physics (in particular, the two-body problem), and measures the ability of agents to discover hidden physical laws in a dynamic environment. The distinctive feature of this benchmark is that the agents are given a limited observation budget (up to 100 observations), and they must plan their observations efficiently within this constraint, collect and analyze data, and solve the problem. The benchmark includes not only scenarios that follow the laws of physics in the real world, but also out-of-distribution cases (such as modified gravitational laws), to evaluate the true scientific generalization capabilities of AI. The evaluation experiments show that the latest AI models (such as o1, Claude 3.5 Sonnet, and GPT-4o) show moderate performance (up to 64%) when they have full access to data, but when their observation budgets are limited, their performance drops significantly (up to 21.5%). This suggests that current AI models have limitations in long-term planning and information gathering strategies.
## Response After Rebuttal
After carefully considering the authors' rebuttal, I maintain my evaluation as "Weak reject (2)". Here's why:
The authors provided detailed explanations regarding the PhD-level solutions, clarifying that they were verified by astrophysics experts. This point is appreciated.
However, significant limitations still remain with this benchmark. Most notably, the current version is restricted only to two-body gravitational problems. For a benchmark claiming to evaluate scientific discovery across broader domains, this scope is extremely limited. While the authors mention plans for future extensions (adding noise, 3D extension, application to other fields), the narrowness of the current benchmark restricts the contribution of this paper.
Additionally, there remains a lack of detailed analysis regarding the observation budget usage patterns. The authors state that systematic exploration is difficult due to the approximately 1.3 million tokens generated per run, but at least a detailed analysis of representative cases would have provided insights into the agents' decision-making processes.
Regarding the OOD (out-of-distribution) claims, while I understand the authors' explanation, demonstrating generalizability to broader scientific discovery tasks would require evidence beyond a single physics domain.
Indeed, the fact that even the latest AI models achieve only 40.2% under budget constraints is interesting, but while this demonstrates the difficulty of the tasks, it doesn't fully validate the value of the benchmark design itself.
Overall, while the fundamental idea behind GravityBench has merit and shows promise, the current version is too narrow in scope to recommend for acceptance at ICML. A future version implementing the extensions mentioned by the authors would likely result in a more comprehensive and valuable benchmark.
Claims And Evidence: The paper's claims are generally supported by clear and convincing evidence. The main claims are the proposal of a new benchmark for evaluating AI agents' scientific discovery capabilities and the challenges that current AI models face with this benchmark. These claims are supported by comprehensive experimental results using multiple models (e.g., o1, Claude 3.5 Sonnet, GPT-4o).
In particular, the data showing the difference in performance with and without the observation budget constraint, the analysis of the agents' observation-planning strategies, and the detailed examination of failure modes are convincing. In addition, the evaluation on out-of-distribution tasks also supports the claim that the benchmark measures AI's true scientific generalization ability.
However, there is a lack of detailed explanation of the “PhD-level solution method”, and there is limited transparency regarding how this was actually implemented and how its accuracy was verified. In addition, the fact that the evaluation of o1 was partially limited due to budget constraints and that a comparison was not conducted under completely identical conditions for all models somewhat weakens the quality of the evidence.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are generally appropriate for the problem of scientific discovery. The use of Rebound, a scientific-grade physics simulation tool, provides a precise and reliable physical environment. In addition, by setting constraints on the observation budget, the decision-making process is modeled to be similar to that of actual scientific research.
The fact that the evaluation criteria include task-specific tolerance thresholds is appropriate. In particular, the fact that the thresholds are adjusted according to the difficulty of each task (e.g. 5% for easy tasks, 70% for difficult tasks) and are determined based on the performance of PhD-level solutions is reasonable.
One area for improvement is the potential to introduce noise and error into the observations, which would mimic a more realistic scientific environment. Also, the current system only handles trajectories restricted to the (x,y) plane, but it would be possible to increase the complexity and realism of the benchmark by extending it to more general 3D trajectories.
Theoretical Claims: This paper makes mainly experimental contributions, and contains few assertions accompanied by rigorous theoretical proofs. However, the physical foundations, such as the description of physical laws (e.g., the modified gravitational law FG ∝ r^(-2+α)) and the virial theorem (2K + U = 0), are accurately stated.
The theoretical aspects mentioned in the paper are based on established principles and laws of physics, and there is no question about their accuracy. However, the paper itself does not provide new proof of these theories, but rather applies existing physical theories to create a benchmark environment.
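For context, the virial theorem cited above is standard gravitational physics rather than a contribution of the paper; for a bound two-body system, the time-averaged energies satisfy:

```latex
% Virial theorem for a gravitationally bound system (time averages):
2\langle K \rangle + \langle U \rangle = 0,
\qquad
K = \tfrac{1}{2} m_1 v_1^2 + \tfrac{1}{2} m_2 v_2^2,
\qquad
U = -\frac{G m_1 m_2}{r},
% so the total energy satisfies
E = \langle K \rangle + \langle U \rangle = -\langle K \rangle = \tfrac{1}{2}\langle U \rangle < 0 .
```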
Experimental Designs Or Analyses: The experimental design is robust and well-constructed with a clear purpose. A total of 206 task-simulation pairs were created, enabling comprehensive evaluation. The fact that each model was evaluated multiple times (3 times) under the same conditions also increases the reliability of the results.
However, there are some areas of concern. The limited number of evaluations of o1 (not evaluated under budget-obs-100) due to budget constraints makes a full comparison of all models difficult. Also, in the analysis of the observation planning capability, there is a lack of detailed analysis of why the agents do not use a large portion of the observation budget available to them. This could be useful information for a deeper understanding of the agents' behavior.
The analysis of failure modes, in particular the analysis of the correlation between the shortcut that assumes the mass of the star and incorrect answers, is very valuable. However, further verification of whether this correlation is a causal relationship would make the analysis more convincing.
Supplementary Material: I read through the sections after the references section of the paper “Gravity-Bench-v1: A Benchmark on Gravitational Physics Discovery for Agents”, explaining each section in one phrase.
First, in Appendix A, “Rebound simulations details”, there was an explanation of the detailed implementation method of the simulation and its accuracy. Next, in Appendix B, “Description of the benchmark problems”, there was a detailed explanation of the definition of each problem included in the benchmark and the required solution. In Appendix C, “Choosing task-specific thresholds for budget-obs”, there was an explanation of the method of setting evaluation criteria under observation budget constraints and the rationale behind it. Appendix D, “Another case study on planning”, specifically discusses the importance of strategic observation planning using the task of finding the periastron of an elliptical orbit as an example. Finally, Appendix E, “Mass assumptions”, analyzes the correlation between the shortcut behavior of AI models assuming mass values and incorrect answers. These appendices further reinforce the arguments of the paper and clarify the detailed aspects of the proposed benchmark.
Relation To Broader Scientific Literature: This research is appropriately positioned in the context of existing efforts to utilize AI in scientific research. In particular, it discusses the relationship with various approaches, such as LLM (Galactica, etc.) that specializes in literature analysis, data-driven discovery, automated statistical modeling, and workflow automation frameworks.
The Gravity-Bench is differentiated from existing approaches in that it views discovery as a dynamic and iterative process, and it emphasizes the interaction between exploration and inference in partially observable environments. In addition, while existing benchmarks focus on rediscovering known phenomena and textbook-style problem solving, the Gravity-Bench is differentiated by including a variety of dynamic scenarios that reflect the unpredictability of real-world discovery processes.
What is particularly valuable is that this research recognizes the lack of AI benchmarks for scientific discovery and presents specific efforts to fill that gap.
Essential References Not Discussed: The paper covers a wide range of related research, but there is a lack of reference to some related literature.
In particular, a more detailed comparison of specific previous research on AI systems that focus on the discovery of physical phenomena may be beneficial. For example, there is a lack of comparison with AI that performs symbolic regression of physical laws (e.g. AI Feynman, BMS) and autonomous scientific discovery systems that combine experimental design and hypothesis generation.
There are also references to the literature on partially observable Markov decision processes (POMDPs), but there is limited specific comparison with previous research on the application of POMDPs in the context of scientific discovery. A detailed discussion of the relevance of previous research in this field would make the novelty and positioning of the proposed benchmark clearer.
Other Strengths And Weaknesses: The main strength of this paper is that it provides a concrete and rigorous benchmark for evaluating the complex process of scientific discovery. In particular, the design, which requires dynamic problem solving that combines observation planning and analysis, enables a realistic evaluation of the scientific capabilities of AI. In addition, the inclusion of out-of-distribution scenarios is also commendable, as it measures the true scientific reasoning capabilities of AI rather than an approach based on memorization.
One weakness is that the current version is limited to the two-body gravity problem, and it is unclear whether it can be generalized to a wider range of scientific discovery scenarios. In addition, the idealized observation conditions (lack of observation error and noise) do not fully reflect the complexity of real scientific observations.
In terms of originality, the combination of environmental benchmarking and scientific discovery evaluation is creative. In particular, the integration of scientific simulation and the concept of partial observability to create a new evaluation framework for scientific exploration is innovative.
Other Comments Or Suggestions: Below are some suggestions for improving the paper:
1. It would be good to add a detailed explanation of the “PhD-level solution” and increase the transparency of the implementation method and verification process.
2. Introducing noise and measurement errors into the observations would allow for a more realistic imitation of a scientific discovery scenario.
3. A detailed analysis of why the agents are not fully utilizing their observation budgets would be beneficial. This could lead to a better understanding of the agent's decision-making process and indicate directions for improvement.
4. Currently, only trajectories limited to the (x, y) plane are handled, but in the future, it is hoped that the system will be extended to more general 3D trajectories.
5. By providing more specific prospects for the possibility of extending the benchmark to a wider variety of scientific fields and phenomena, the future direction of the research will become clearer.
Questions For Authors: I would expect an answer to the concerns raised above, but have no additional questions otherwise.
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review and valuable suggestions. Below, we address the main points:
**(Response to point 1)** We appreciate the feedback with respect to adding additional details regarding the human solutions. They were developed by one 2nd year PhD student, one professor, and one research scientist (PhD+10), all in astrophysics. Each one of them double checked and verified every solution. We also confirmed that the human-ref results matched Rebound’s built-in measures of the orbital parameters when available.
These solutions were not developed to directly compare human and AI scores, but rather were made to serve as an empirical solution baseline to understand the difficulty of each question, which was necessary to set the threshold values an AI must exceed in order for it to get marked as “correct”. We will explicitly detail this in the revised manuscript.
(**Response to 2,4,5**) We entirely agree with the recommendations for future improvements with respect to adding noise, moving to 3D, and branching to new fields. Publishing GravityBench is intentionally our first demonstration of a simulation-based scientific reasoning benchmark. We are already working on extensions, including other areas of physics, like electromagnetism, and realistic observational errors. Developing these environments allows us to evaluate, and even train, AI agents on research-like tasks. Even without these complexities, current models struggle at GravityBench, which demonstrates the benchmark's immediate value.
(**Response to 3**) We agree that a deeper analysis of why agents underutilize their observation budget would be valuable. Although we began a manual analysis of agent traces, each run generated around 1.3 million tokens, making systematic exploration impractical. Therefore, we expect further analysis will require careful automation, which will shed light on failure modes. The manual inspection we present suggests that agents prematurely rush toward solutions rather than strategically and iteratively using their observation budget. We emphasize that GravityBench is self-contained in the sense that the reasons for agent failures reflect limitations in reasoning and planning abilities rather than issues with the benchmark itself.
**(Response to other comments)** Although we initially could not run o1 under all settings, we have now evaluated o3-mini-high, the current best-performing model, which still only achieves 40.2% under budget-obs-100. This performance demonstrates that our benchmark remains substantially challenging, as o3-mini-high still struggles with most questions, including those involving modified gravity or drag forces. These new o3-mini-high results will be provided in our camera-ready version.
We appreciate the comments on systematically verifying whether the shortcut of just assuming stellar masses leads causally to incorrect answers. However, we already know that this is indeed the case. Any solution which assumes stellar masses of 1 kilogram or equal masses is physically incorrect, which directly causes these solutions to fail. This is a direct causal relationship between the assumption and the solution rather than a correlation.
We acknowledge the point regarding limited references to literature on symbolic regression and AI-driven autonomous scientific discovery systems, and will expand our discussion to contextualize GravityBench with respect to these approaches. Regarding the connection to the POMDP literature, we are presently not entirely certain how "partial observability" in our benchmark relates to the standard interpretation used in idealized POMDP frameworks, as many of our problems involve regression rather than decision problems. We will clarify this relationship in the future, particularly as we introduce observational noise and uncertainty into our benchmark. | Summary: The paper introduces a new benchmark called Gravity-Bench-v1 to test the discovery potential of LLMs. In the benchmark, different two-body star systems are simulated. These simulations include out-of-distribution cases in which the gravitational force has a different dependence on r, obtained by adding an alpha to give r^{2 + $\alpha$}. The simulation outputs star positions over time, and the agents access them through an observation tool and a Python-based shell script.
Claims And Evidence: The claim of the paper is to provide a benchmark to test the scientific reasoning potential of LLMs. By including out-of-distribution physics tasks, it is meant to test that the model is not just memorizing solutions. However, is this really the case? And can one show this? Consider an irrotational vector field in three-dimensional space. In general, the inverse-square law corresponds to the property that the divergence is zero outside the source. A general form of this law in n-dimensional Euclidean space would therefore be that the intensity "I" of the vector field falls off with the distance "r" following the inverse (n − 1) power. There is already a lot of literature on this. Furthermore, there are non-Euclidean formulations even for Newtonian cosmology (https://arxiv.org/pdf/2002.10155). These are just some examples showing that, besides the simple 3-d inverse-square law, many other laws exist where the physics is different. It is also easy to imagine that r^{2 + $\alpha$} could be given as an exam question in an undergraduate physics course. There are also tons of alternative formulations to general relativity, as well as Modified Newtonian Dynamics (MOND). Beyond this, there is another aspect: the models are closed source and are trained on data we don't know. So how can the authors be sure that their approach is not in the data? It is impossible to show this, so the statement that r^{2 + $\alpha$} is out of distribution is impossible to prove, and then all evidence in terms of out-of-distribution is meaningless.
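The generalized inverse law the reviewer alludes to follows from flux conservation: for a divergence-free radial field sourced at a point in n-dimensional Euclidean space, the flux through any sphere around the source is independent of its radius. A brief sketch (standard material, not taken from the paper under review):

```latex
% Flux through a sphere of radius r around a point source is constant:
\oint_{S^{n-1}_r} \vec{I} \cdot d\vec{A}
  \;=\; I(r)\, \Omega_{n-1}\, r^{\,n-1} \;=\; \text{const},
% where \Omega_{n-1} is the surface area of the unit (n-1)-sphere.
% Hence the field intensity falls off as
I(r) \;\propto\; r^{-(n-1)},
% which reduces to the familiar inverse-square law for n = 3.
```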
Methods And Evaluation Criteria: The authors evaluated closed-source LLMs on out-of-distribution physics tasks. Since we don't know the data the LLMs were trained on, the methodology does not make sense.
Theoretical Claims: None.
Experimental Designs Or Analyses: See above.
Supplementary Material: The supplementary material consisted of some further information about the simulations, benchmark problems, task-specific thresholds, etc.
Relation To Broader Scientific Literature: There is no real search for alternatives to Newtonian cosmology.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: --
Other Comments Or Suggestions: --
Questions For Authors: --
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed consideration. They raised a concern about whether our simulations overall, and the modified law of gravity in particular, are truly out-of-distribution (OOD) tests for the AI models with solutions that cannot be found online (thus incorporated into the pretraining corpus). We argue that they are indeed OOD, on the following basis:
1) A major OOD aspect to our benchmark is the methodology itself (that mimics the scientific method) the AI models need to follow to answer our problems. They need to:
a) Understand both the given question and data.
b) Plan their “observations” within the constraints of the budget. This is a physical reasoning task on its own as for example an eccentric star will spend very little time near its periastron.
c) Reason through a usually multi-step roadmap to the solution.
d) Execute the roadmap using the available tools.
2) The universally poor performance across the AI models (including o1) we test clearly indicates that these models are genuinely struggling with these reasoning challenges. The scores obtained (under 20% for budget-obs-100) show that our benchmark is far from saturation.
3) Unlike other benchmarks that rely on or mimic questions largely obtained from sources available on the internet, we specifically designed our own simulations entirely from scratch. It is highly unlikely that similar problems are available in online or offline sources, ensuring memorization from any source remains very unlikely.
These points above emphasize that our modified gravity law problem is OOD. Not only is an identical question unlikely to be found in any other source; the methodology required to solve it (all the steps of point 1) also makes it truly unique.
To conclude, even aside from the scenarios which we label 'out-of-distribution' (those with modified gravity and drag forces), our benchmark substantially contributes by evaluating dynamic planning and iterative reasoning under limited observational budgets, critical capabilities for scientific discovery yet rarely tested by existing benchmarks. This value is independent of whether one fully accepts our label of 'out-of-distribution' for the two scenarios where we go beyond the standard gravitational force. | null | null | null | null | null | null | null | null |
On Path to Multimodal Generalist: General-Level and General-Bench | Accept (oral) | Summary: This paper presents a comprehensive benchmark that includes over 700 existing tasks, providing a foundation for evaluating multimodal large language models (MLLMs). It introduces a five-level classification framework designed to systematically categorize MLLMs based on their capabilities and functionalities. Furthermore, the paper conducts an extensive evaluation of hundreds of models using the proposed benchmarks, offering valuable insights into their performance across various dimensions. By establishing a standardized assessment methodology, this work aims to advance research in the field and facilitate the development of more effective and versatile MLLMs.
## update after rebuttal
I would like to thank the authors for the rebuttal. As the authors didn't update Table 1 of the draft as they mentioned in the rebuttal, I am not able to judge whether they will do so properly. I keep my original rating.
Claims And Evidence: Strength:
- This paper serves as a benchmark study, featuring an impressive number of tasks, diverse data domains, and a substantial number of evaluated models. I greatly appreciate the authors' extensive efforts in constructing this comprehensive evaluation benchmark.
Weakness:
- The five-level classification method lacks clarity, as it does not provide well-defined criteria for each level. In particular, the concept of "synergy" between different modalities is not clearly articulated, making the classification ambiguous. For instance, Unified-IO-2 and Next-GPT, which are any-to-any modality generative models capable of producing visual and audio outputs from multimodal inputs, are categorized as Level-2, suggesting they lack synergy in comprehension and generation. Meanwhile, DeepSeek-VL and LLaVA-One-Vision, which do not even possess visual generation capabilities, are classified as Level-3, implying they exhibit synergy across tasks. This inconsistency raises concerns about the validity of the classification framework, as it does not appear to align with the actual capabilities of these models. A more precise and well-justified definition of "synergy" is necessary to ensure the classification accurately reflects the models’ multimodal abilities.
Methods And Evaluation Criteria: - The evaluation criteria are well-founded; however, as previously mentioned, I disagree with the classification method applied to different MLLMs.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The experimental design is thorough, encompassing a wide range of domains, tasks, data, and models.
However, the analysis in Section 5.2 feels somewhat superficial, especially considering the large-scale experiments conducted in this study. The observations are fairly straightforward and lack deeper insights, making them less compelling.
Supplementary Material: I roughly go through the supplementary material given that it has almost 300 pages.
Relation To Broader Scientific Literature: It is related to many works in multi-modal large language models.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No, the primary concern lies in the reasonableness of the classification method and the depth of the analysis.
Other Comments Or Suggestions: It would be much more engaging to see more unique insights drawn from the vast amount of experiments conducted.
Questions For Authors: Please address my questions above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your time, meaningful questions, and constructive suggestions. Also, your recognition of our paper means a lot to us, which is the source of power to push us forward and enhance this work/project for a greater meaning to the community. We address each concern in detail below and hope that our clarifications help improve your evaluation of our work.
---
**Q1. The five-level classification method lacks clarity, as it does not provide well-defined criteria for each level. In particular, the concept of "synergy" between different modalities is not clearly articulated, making the classification ambiguous. For instance, Unified-IO-2 and Next-GPT, which are any-to-any modality generative models capable of producing visual and audio outputs from multimodal inputs, are categorized as Level-2, suggesting they lack synergy in comprehension and generation. Meanwhile, DeepSeek-VL and LLaVA-One-Vision, which do not even possess visual generation capabilities, are classified as Level-3, implying they exhibit synergy across tasks. This inconsistency raises concerns about the validity of the classification framework, as it does not appear to align with the actual capabilities of these models. A more precise and well-justified definition of "synergy" is necessary to ensure the classification accurately reflects the models’ multimodal abilities.**
**A:** Thank you for the detailed and thoughtful feedback. Due to space constraints, we provided the full definitions and criteria for each of the five levels in Appendix C. We kindly refer the reviewer to this section for a comprehensive explanation.
`First`, it is important to note that the levels in our framework are not mutually exclusive, but rather hierarchical. That is, a model classified at Level-3 will also have valid scores at Level-2, and so on. In Table 1, Unified-IO-2 and Next-GPT are shown as example models under Level-2 to illustrate that they satisfy the baseline criteria for this level—not to imply they are limited to Level-2. In fact, as shown in Table 19 (Appendix), both Unified-IO-2 and Next-GPT also receive valid Level-3 scores, indicating that they exhibit synergy across comprehension or generation tasks. However, they do not appear at Level-4, which requires a synergy in comprehension and generation.
`Secondly`, as for Level-3, the definition explicitly includes models with synergy in comprehension `and/or` generation. Although DeepSeek-VL and LLaVA-One-Vision do not support generative outputs, their performance on comprehension tasks exceeds that of single-modality specialists, thereby qualifying them for Level-3 based on comprehension synergy alone.
We appreciate the reviewer pointing out this potential source of confusion, and we will revise the paper to clarify the hierarchical nature of the levels, emphasize that the examples in Table 1 are illustrative rather than exclusive, and provide clearer articulation of "synergy" as it applies to comprehension and generation tasks.
---
**Q2. However, the analysis in Section 5.2 feels somewhat superficial, especially considering the large-scale experiments conducted in this study. The observations are fairly straightforward and lack deeper insights, making them less compelling. It would be much more engaging to see more unique insights drawn from the vast amount of experiments conducted.**
**A:** Thank you for the valuable feedback. Due to space limitations in the main paper, we have provided more in-depth analyses and insights in Appendices B.5, B.6, and B.7, including detailed discussions on synergy across skills, modalities, and comprehension/generation dimensions.
In addition, we include fine-grained performance breakdowns for each model across individual skills, which allow for more precise diagnosis of model weaknesses and emerging capability trends. These detailed results offer a solid foundation for identifying underexplored areas and informing future research directions.
In the revision, we will work to highlight additional non-obvious patterns and findings in the main text to better reflect the depth of our analysis.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal. As the authors didn't update Table 1 of the draft as they mentioned in the rebuttal, I am not able to judge whether they will do so properly. I keep my original rating. | Summary: This paper pioneers the idea of a General-Level framework to evaluate MLLMs, allowing for an accurate assessment of MLLM capabilities.
The authors provide significant observations and principles on benchmark design and construct a sophisticated, multi-level evaluation metric to maintain the rationality of the benchmark. Based on the proposed framework, they build General-Bench, a massive benchmark for evaluating the comprehension and generation capabilities of MLLMs across modalities such as image, video, audio, 3D, and language. Comprehensive evaluation results with extensive analysis and discussion are provided in this paper.
Claims And Evidence: The claims are clear. This paper provides enough evidence. The authors claim that existing benchmarks rely too heavily on single-task performance metrics; they present evidence by evaluating 100+ MLLMs and showing that most fail higher-level synergy requirements. They further claim that current benchmarks lack coverage of diverse formats and advanced capabilities.
Methods And Evaluation Criteria: This paper proposes General-Bench to evaluate multimodal synergy across comprehension, generation, and cross-modal synergy. It introduces a five-level classification system that measures how well models preserve synergy at increasingly complex levels. Experimental results show that even the best-performing MLLMs struggle with higher-level synergy tasks, suggesting current models are far from achieving full multimodal generalization. This paper can offer a systematic and accurate measure of progress toward AGI.
Theoretical Claims: This paper provides detailed definition of General-Level. I think the proof is rational.
Experimental Designs Or Analyses: 1. The task selection in the benchmark is comprehensive, containing more than 700 tasks and covering all the main tasks across various modalities, fully reflecting the capability of each MLLM.
2. The selected models also mainly cover the primary open-source and closed-source models, reflecting a high coverage of MLLMs.
3. The observations and analyses for the experiments provide in-depth insights, such as current MLLMs focusing more on content comprehension than supporting generation, and that multimodality does NOT enhance language.
Supplementary Material: I have reviewed the supplementary material. It provides more details than what I expect on experimental settings (e.g., SoTA specialist selection), multimodal generalist (MLLM) selections, level definitions, and benchmark datasets. The supplementary material is very comprehensive and provides enough support for the main claims and background of the proposed benchmark.
Relation To Broader Scientific Literature: The paper provides a novel MLLM benchmark, inheriting from previous MLLM benchmarks, such as MME, MMMU, and MMT-Bench. However, it also introduces innovations, such as expanding the simple ranking into a five-level ranking, providing a more comprehensive and reasonable evaluation for MLLMs.
Essential References Not Discussed: This paper includes enough references, and the authors have adequately discussed the relevant works.
Other Strengths And Weaknesses: Strengths:
1. This paper derives a stunning and impressive idea: introducing the five-level capability grading mechanism from the autonomous driving industry into MLLM evaluation. This category-based ranking can comprehensively assess the MLLM’s synergy capabilities across comprehension, generation, and multimodal interactions.
2. The definition of General-Level is reasonable and conforms to the requirements of the two assumptions in Section 3.2. In addition, Appendix C.1 provides further explanation, and the general-level is convincing as an evaluation criterion for MLLM capability ranking.
3. This paper proposes a panoramic evaluation across various multimodal tasks. In both modality coverage and task count, it surpasses previous MLLM benchmarks. The task selection shows variety and covers many domains, making it very suitable to serve as a standard MLLM benchmark.
4. The paper conducts a thorough evaluation on the benchmark, selecting more than 100 MLLMs. This is a substantial workload and clearly demonstrates the current MLLMs’ performance on the new evaluation criteria. Also, the observations gained from the results, such as "multimodality does NOT enhance language", provide valuable insights for MLLM research.
Other Comments Or Suggestions: There are some minor issues:
- Line 2328: all task -> all tasks
- Caption in Figure 1: hinge -> hinges
- Line 60: Yhe -> The
Questions For Authors: 1. The idea of utilizing General-Level to expand upon classical MLLM benchmarks is novel. However, General-Bench focuses on capability while overlooking critical risks (e.g., hallucination, bias amplification). Could high-scoring models in your framework inadvertently reward unsafe behaviors? Should 'safety synergy' be introduced as a separate dimension in the newly proposed framework?
2. Since the benchmark contains more than 700 tasks, which is tremendous, evaluating models on General-Bench likely requires massive compute. Does this contradict the push for sustainable AI? Have you quantified the carbon footprint, and if not, should this be a mandatory disclosure for future benchmarks?
3. In the current setting, is each modality equally important on the path to AGI? The authors mention that they would like to incorporate more modalities into this benchmark. The current modalities seem to be the primary modalities for intelligence, while the newly added modalities do not seem as important as the current ones. If each modality is equally important, the current configuration may not fairly reflect the model's capabilities.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We are more than excited to receive your very strong recognition of our work, which means a lot! Also thanks for your detailed review and valuable suggestions. We have done our best to address all concerns and are happy to engage in further discussion to improve the clarity and quality of our work.
---
**Q1. General-Bench focuses on capability while overlooking critical risks (e.g., hallucination, bias amplification). Could high-scoring models in your framework inadvertently reward unsafe behaviors? Should 'safety synergy' be introduced as a separate dimension in the newly proposed framework?**
**A:** Thank you for the reviewer’s thoughtful and forward-looking comment. When designing the scope definition and task list curation for General-Bench, we paid special attention to mitigating potential bias by ensuring diversity in data sources, annotation methods, and task formats. Moreover, we prioritized using well-established datasets with high-quality annotations, and in some cases, relied on manual annotations to further minimize the risk of hallucinated content.
Regarding the safety perspective, we explicitly focused on evaluating positive and safe skill sets when defining the skill scope of our benchmark. During data collection and task construction, we carefully reviewed whether the expected model outputs could involve safety-sensitive content. If any safety risks were detected, we excluded those instances from the final benchmark.
We sincerely appreciate the reviewer’s suggestion to consider `"safety synergy"` as a separate evaluation dimension. We agree that safety is a critical component of trustworthy multimodal AGI systems and should be an integral part of generalist benchmarking. In future versions of General-Bench, we plan to incorporate safety-focused tasks and metrics to better evaluate models not just on capability, but also on responsible and safe behavior.
---
**Q2. Since the benchmark contains more than 700 tasks, which is tremendous, evaluating models on General-Bench likely requires massive compute. Does this contradict the push for sustainable AI? Have you quantified the carbon footprint, and if not, should this be a mandatory disclosure for future benchmarks?**
**A:** Thank you for raising this important concern. We agree that sustainability is a crucial consideration in designing and deploying large-scale AI benchmarks.
Although General-Bench includes over 700 tasks, it is intentionally designed to be modular and flexible. Model developers are not required to run evaluations on the entire task suite. In practice, given prior knowledge of a model’s capability boundaries, developers can select a relevant subset of tasks for evaluation to obtain a fair and rigorous comparison across other models.
We also acknowledge the importance of quantifying carbon footprint in building more responsible and sustainable AI systems. While we have not yet included CO₂ impact reporting in the current version, we plan to integrate optional carbon estimation tools in future releases, following emerging best practices in large-scale evaluation.
---
**Q3. In the current setting, is each modality equally important on the path to AGI? The authors mention that they would like to incorporate more modalities into this benchmark. The current modalities seem to be the primary modalities for intelligence, while the newly added modalities do not seem as important as the current ones. If each modality is equally important, the current configuration may not fairly reflect the model's capabilities.**
**A:** Thank you for this thoughtful question. We would like to clarify that General-Bench does not assume all modalities are equally important on the path toward AGI, nor do we assign equal weight to each modality in our evaluation. On the contrary, we recognize that different modalities play distinct and complementary roles in intelligent systems, and their importance can vary depending on the task, environment, or application context.
The currently included modalities—language, image, video, audio, and 3D—have been extensively studied and are more readily available at scale, which naturally draws more research attention. However, other modalities such as heatmaps, sensor data, charts, or even tactile signals also encode rich, structured information about the world. These may be particularly crucial in specialized domains such as medical diagnosis, physical reasoning, or embodied intelligence.
Our motivation for incorporating more modalities is not to suggest that each one is equally critical for AGI, but rather to build a broader and more flexible evaluation framework that can assess how well models generalize across diverse input types.
We appreciate the reviewer’s suggestion and will revise the paper to make our position on modality importance and evaluation scope more explicit and transparent.
---
**Q4. Typos: Line 2328: all task -> all tasks ...**
**A:** Thanks! We will correct this.
Claims And Evidence: see below.
Methods And Evaluation Criteria: see below.
Theoretical Claims: No
Experimental Designs Or Analyses: see below.
Supplementary Material: N/A.
Relation To Broader Scientific Literature: Authors need to discuss more their work and prior holistic evaluation benchmarks like HELM, VHELM, etc.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**:
1. General-Bench covers a vast range of modalities, making it one of the most extensive multimodal evaluation suites to date.
2. The proposed General-Level system offers a structured way to assess MLLM capabilities beyond simple task performance.
3. The paper correctly highlights the lack of cross-task and cross-modal synergy in existing MLLMs.
4. The authors claim they will maintain an open benchmark and leaderboard, potentially aiding long-term MLLM progress.
**Weaknesses**:
1. Benchmark Construction Relies on Existing Datasets and Is Computationally Expensive:
The main contribution of the paper, General-Bench, is largely a repurposing of existing benchmarks rather than a fundamentally new dataset. The biggest issue is its sheer size, making model evaluation prohibitively compute- and time-intensive. While the authors claim they will maintain the benchmark and leaderboard, this does not address the core problem: practical usability during model development. Large-scale evaluations are not feasible for fast iteration, which is why even in the current era, development sets remain essential.
2. Overly Strong and Questionable Assumptions:
The paper assumes that "a model’s synergy capability enables it to outperform SoTA specialists in specific tasks by leveraging knowledge across tasks or modalities." This is highly unrealistic. There is no current evidence that a general-purpose LLM can outperform a domain-specific model in its specialized task. For instance, no LLM has surpassed a fine-tuned BERT model in NER. Furthermore, pursuing an all-encompassing LLM for every task is inefficient, both computationally and economically. The cost of using a large LLM for a task that a smaller, specialized model can perform better is unjustified.
3. Misleading Claim About Multimodal Synergy Not Enhancing Language Performance:
The paper concludes that multimodality does not enhance language abilities, but this is not always correct. Empirical results from training Vicuna, Qwen2, and LLaMA show that models trained with both image-text and text-only data consistently outperform those trained with text-only data on language benchmarks like MMLU. This trend is reproducible across multiple models, contradicting the paper’s claim. The authors fail to acknowledge variations in training setups that could impact this conclusion.
Other Comments Or Suggestions: Line 60: Yhe -> The
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate you carefully reviewing our paper and raising meaningful questions and constructive suggestions. Below we address your concerns one by one and are open to further discussion. We hope you will consider raising your evaluation.
---
**Q1. Discussion of related evaluation benchmarks (HELM, VHELM, etc.)**
**A:** Thank you for highlighting this. We acknowledge the contributions of holistic benchmarks like HELM, VHELM, and MMBench, which provide open, standardized infrastructures for evaluating language and vision foundation models. Our *Generalist Benchmark* aligns with this direction and is designed to serve as an open platform enabling broad community participation and transparent comparison of MLLMs. We will include a more detailed comparison with these existing efforts in the revision.
---
**Q2. The benchmark’s large scale makes evaluation costly and impractical for rapid iteration...**
**A:** We believe the benchmark's scale is a strength, reflecting diversity across tasks, modalities, and formats. For any single task, we limit evaluation to ~500 instances—computationally reasonable for most models.
To enhance usability, we will further propose a **multi-scope evaluation structure** with three leaderboard types:
- **Scope-1**: Full-spectrum leaderboard (current General-Bench), for general-purpose MLLMs.
- **Scope-2**: Modality-specific leaderboards, for modality-specialized models.
- **Scope-3**: Fine-grained task-cluster leaderboards under each modality.
All rankings are computed via our *General-Level* framework, allowing users to select appropriate evaluation scope based on their model’s capacity and resource constraints. This makes the benchmark scalable and adaptable: lightweight scopes allow fast iteration; broader scopes offer more visibility at higher computational cost—it's up to the user.
We do not plan to release dev sets for each task, as this benchmark is strictly designed for **zero-shot evaluation**. Model development and training are left to the developers.
---
**Q3. The assumption that synergy enables generalists to outperform SoTA specialists is unrealistic.**
**A:** We understand the concern regarding synergy and its comparison to SoTA specialists.
As explained in Appendix C.3 (“Rationality of Scoring Relaxation”), our design follows two steps:
(1) Define **synergy** as the core metric for capability levels;
(2) Propose a **practical scoring method**, given real-world model training constraints.
Ideally, **synergy** could be defined as a model performing better on a joint distribution $P_\theta(y|A,B)$ than on $P_\theta(y|A)$ or $P_\theta(y|B)$ separately. However, isolating such distributions is infeasible, as large generalist models are already jointly trained on many tasks. Retraining to cleanly separate task spaces is impractical.
Thus, we relax the evaluation: we treat cases where generalists **match or exceed SoTA** specialist performance in a task (without in-domain fine-tuning) as indirect but valid signals of synergy—i.e., effective cross-task/modality generalization.
This is supported by multiple studies showing generalists can outperform fine-tuned specialists:
- [1] shows GPT-4, via prompt engineering alone, achieves SoTA in medical QA benchmarks without fine-tuning.
- [2] shows OpenMedLM, via prompting techniques, surpasses prior fine-tuned open-source models on multiple medical benchmarks.
- Flamingo [3] outperforms fine-tuned specialist models on six vision-language tasks.
These findings support our core assumption: **the stronger a model’s synergy capability, the more likely it is to surpass SoTA specialists when synergy is effectively activated.** This avoids costly pairwise modeling and enables practical, scalable synergy evaluation.
[1] *Can Generalist Foundation Models Outcompete Special-Purpose Tuning?*
[2] *OpenMedLM: Prompt Engineering Can Outperform Fine-Tuning in Medical QA*
[3] *Flamingo: A Visual Language Model for Few-Shot Learning*
[4] *Perceiver IO: A General Architecture for Structured Inputs & Outputs*
[5] *Segment Anything*
---
**Q4. Multimodal synergy *does* improve language—models like Vicuna, Qwen2, and LLaMA benefit from image-text data.**
**A:** We appreciate this feedback and apologize for the unclear statement. We do **not** deny that multimodal data can improve language understanding. Our point is more specific: **such improvement has not yet enabled models to outperform SoTA NLP specialists on core language tasks.**
There is a clear distinction between *enhancing language performance* and *exceeding SoTA NLP models*. While models like Vicuna, Qwen2, and LLaMA show better results with image-text pretraining, our large-scale evaluation shows they still fall short of outperforming fine-tuned language specialists. Therefore, our statement does not contradict existing evidence. But for sure we will refine the statement for clarity in the revision.
---
**Q5. Typos: Line 60 “Yhe” -> “The”**
**A**: Thanks! We will correct this.
Claims And Evidence: - The authors claim they have developed a new evaluation framework and have indeed proposed a completely new theoretical basis for it.
- They assert that they have introduced the most comprehensive and largest-scale benchmark dataset to date; subsequent comparisons with other datasets clearly demonstrate that its scale and scope exceed those of existing benchmarks.
- The authors contend that current MLLMs still face numerous issues that existing benchmarks fail to evaluate, and they have verified these problems in their experiments.
Methods And Evaluation Criteria: This is a benchmark paper. The authors propose a new evaluation approach for multimodal generalists (MLLMs/agents), focusing on the models' synergy effects across comprehension, generation, and cross-modal interactions. They have also introduced an entirely new benchmark dataset to evaluate over 100 LLMs from different perspectives and methodologies. Further, the authors provide extensive and detailed information in the appendix—nearly 300 pages—to substantiate the reliability of both the evaluation framework and the dataset.
Theoretical Claims: The authors propose a five-tier General-Level evaluation framework that incorporates innovative theoretical contributions. Their core claim is that current benchmarks for multimodal generalists or MLLMs merely compare performance across individual tasks, which fails to fully assess the true capabilities of these models. Consequently, they introduce a new evaluation approach based on the synergy effects of MLLMs in comprehension, generation, and cross-modal interactions. Further, to validate the soundness of their evaluation framework, the authors provide extensive mathematical proofs in the appendix, which I have reviewed and found to be both mathematically correct and robust.
Experimental Designs Or Analyses: The authors conducted extensive evaluations on over 700 multimodal tasks for more than 100 MLLMs. The assessments include individual task results, meta-task outcomes, the number of supported tasks, comparisons where models surpass state-of-the-art specialists, and the final ranking of models across different levels. Also extensive visualization analyses reveal the performance preferences of various models, all of which are both interesting and insightful. I found the experimental scope to be so vast, with validation approaches and perspectives that are both reasonable and comprehensive.
Supplementary Material: The supplementary material is detailed and spans nearly 300 pages, providing extensive information for a comprehensive understanding of the work, including:
- All the evaluated multimodal large models
- The complete experimental results across 700 tasks
- The full ranking of the large models at each level
- Various visualization analyses
- Extensive theoretical proofs
- A detailed extended introduction to the benchmark data
Relation To Broader Scientific Literature: I believe there are two core contributions:
1) The authors introduce an entirely new perspective for evaluating the rapidly increasing number of MLLMs. Instead of simply comparing performance across various tasks, they propose a capability grading system akin to that in the autonomous driving industry, based on the core idea of synergy. This approach is poised to revolutionize the field of MLLM evaluation and guide the development of the MLLM community.
2) I think the authors have developed an ultra large-scale evaluation benchmark for MLLMs that encompasses 145 skills across more than 700 tasks with over 325K samples, involving five common modalities and covering 29 domains. This unprecedented comprehensiveness and high quality ensure that the evaluation results should be extremely reliable, which, to my knowledge, might largely position this benchmark as the future standard for performance assessment in the field.
Essential References Not Discussed: The references provided in the paper are adequate.
Other Strengths And Weaknesses: I appreciate this work because it makes a clear contribution to the community. I believe the significance and value of this paper will be revolutionary in the field. Currently, research on multimodal generalists—whether MLLMs or agents—is gaining increasing traction and is progressively oriented toward developing more powerful models, as the authors claim. An important question for the community is how these multimodal generalists should evolve: should they focus on achieving higher performance or on supporting a broader range of capabilities? Simply assuming that higher scores on various multimodal tasks equate to a more capable generalist is too simplistic. The authors reject this notion and propose an entirely new evaluation approach. They apply a five-level classification system, borrowed from the autonomous driving industry, to rate multimodal generalists, where each level represents a specific range of capabilities, with further differentiation possible within each level. This idea is truly eye-opening, as it is not only theoretically rigorous and correct—thanks to a carefully designed scoring algorithm ensuring key attributes—but also highly feasible. The authors have introduced a novel evaluation perspective and methodology that is set to revolutionize the field.
For the second point, I was struck by the enormous effort reflected in this work. For instance, the authors have contributed a new dataset—possibly the largest benchmark I have seen—which includes 145 skills across more than 700 tasks with over 325K samples, involving five common modalities and covering 29 domains. Its high quality ensures extremely reliable evaluation outcomes, and this benchmark is likely to become the future standard for performance assessment in the field. Also the team evaluated 100+ current state-of-the-art MLLMs. Lastly, the paper, including an appendix spanning nearly 300 pages, provides exceptionally detailed information, which I find impressive.
Other strengths include the meticulous craftsmanship of the paper; both the writing and the visual presentation (such as the organization and visualizations) are of high quality. The paper is exceedingly detailed, providing necessary detail, and the experimental findings and conclusions are both fascinating and highly instructive for the community.
As for potential weaknesses, the only concern I can identify is that the authors might need to further strengthen their guidance for the community by providing clearer directions for future research. Although they include a "Limitations and Future Investigation" part in Section 7, I feel it could be even more detailed, as noted in the following comments.
Other Comments Or Suggestions: I believe the authors could offer further guidance from multiple perspectives to the future research community on how to steer the development of MLLMs to achieve higher performance within the General-Level evaluation framework.
Questions For Authors: I have a few minor questions:
- Do the skills in Tables 3–7 refer to meta-tasks, corresponding to the specific tasks shown in Appendix Table 103? Given the enormous scale and hierarchical structure of the dataset, I couldn’t fully grasp this aspect from the paper.
- In Table 9, I noticed that some pure language LLMs are included. Since the evaluation targets MLLMs, why evaluate the performance of LLMs?
- Although explanations may have been provided in the experiments, I remain curious: why did the GPT series models fail to achieve leading rankings across all levels, and why do only three models have scores at Level 4? Does this result seem reasonable, as it appears somewhat counterintuitive to me?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s recognition of our work and the thoughtful, constructive feedback. Below, we provide point-by-point responses to each comment. Hope you can reevaluate our work if you feel the response is effective and useful.
---
**Q1. The “Limitations and Future Investigation” section is helpful, but future research directions could be elaborated further.**
**A:** Thank you for this helpful suggestion. In addition to the *Limitations and Future Investigation* at section 7, we have also included a more actionable guide in **Appendix C.5: “Path to Advancing Higher in General-Level”**. This section outlines concrete guidelines for how future MLLMs can progress through the levels in our framework, offering clear directions for advancing synergy, modality coverage, and generalization—key factors in moving toward Level-5 multimodal generalists.
---
**Q2. Do the “skills” in Tables 3–7 refer to meta-tasks? Are they directly mapped to the specific tasks in Appendix Table 103?**
**A:** Partly. The “skills” listed in Tables 3–7 do refer to **meta-tasks**, but they do not directly correspond one-to-one with the specific tasks in Appendix Table 103. Instead, each skill (i.e., meta-task) includes **multiple specific tasks**. For example, the skill *Crack Detection* includes the specific tasks *Tire Crack Detection* and *Road Crack Detection*. This hierarchical organization allows us to abstract task capabilities at a higher level, making the benchmark both manageable and scalable.
---
**Q3. Why are pure language LLMs included in Table 9, given that the focus is on MLLMs?**
**A:** We intentionally included language-only LLMs in the comparison to provide a **reference point**. This helps readers assess the performance gap between unimodal LLMs and MLLMs on NLP tasks. It also highlights a key finding of our benchmark: **multimodality has not yet enabled MLLMs to outperform SoTA LLMs on core NLP tasks**, which is critical for understanding the current limitations of multimodal synergy and what it would take to reach Level-5 performance.
---
**Q4. Why do GPT-series models not achieve top rankings across all levels? And why do only three models reach Level 4? These results feel somewhat counterintuitive.**
**A:** This is an excellent question. Although GPT models demonstrate strong performance in several areas, our analysis shows that they often excel in **specific task types**, such as vision comprehension, where they behave like specialists. However, they **lack broad modality and task support**, and in many cases, do not even support certain modalities or task formats.
Our *General-Level* scoring framework is based on two core principles:
1) The model should be a **true generalist**, i.e., capable across a **wide range of modalities and task types**.
2) **Synergy** is the key metric—performance gains must come from meaningful cross-modal, cross-task integration.
GPT models, while powerful, do not consistently meet these criteria across all levels, which explains their limited presence at Level 4 and absence from the top in other levels. Thus, the result is not only reasonable but aligns with our framework’s goals: to reward generality and synergy, not just isolated task excellence.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot to the authors for the detailed explanations in response to my questions. I re-read the paper and would like to confirm that my overall evaluation remains unchanged—I continue to be very supportive of this work and am willing to raise my score.
That said, I was a bit surprised that GPT-4o did not achieve a higher ranking in the current version of the paper. For example, GPT-o1/o3 (DeepSeek as well) have recently demonstrated quite strong long-chained reasoning capabilities, and I would have expected better performance. Also, the very recent updates to GPT-4o have shown impressive advancements in image generation (I bet you tried it).
Given this, I would suggest that the authors consider **timely updates** to the leaderboard, so that the rankings can more accurately reflect the evolving capabilities of the latest multimodal foundation models. If the goal of this work is to make a long-term contribution to the field, maintaining and updating the leaderboard over time would be essential.
---
Reply to Comment 1.1.1:
Comment: We extend our heartfelt thanks to the reviewer. Your recognition and support are the greatest encouragement for us to continue refining our work. We will keep improving this paper, and more importantly, we are committed to maintaining this evaluation platform and turning it into the most beneficial resource for the multimodal large foundation model community.
Regarding the powerful capabilities recently demonstrated by models such as GPT-o1/o3, GPT-4o, and DeepSeek, we will definitely include these models in future evaluations. However, objectively speaking, we still do not expect them to make a significant improvement on our overall leaderboard. The fundamental reason lies in our General-Level scoring principle: our leaderboard (as its name highlights: towards multimodal generalist) prioritizes broader coverage across modalities and tasks, rather than only rewarding models that achieve expert-level performance in isolated capabilities or specific domains.
Fortunately, as mentioned in our rebuttal to Reviewer `7pDZ`, we will further propose a multi-scope evaluation structure with corresponding leaderboards based on different scopes of capability:
- **Scope-1**: A full-spectrum leaderboard (the current version of General-Bench) covering all modalities and task types, intended for highly capable, general-purpose multimodal models.
- **Scope-2**: Modality-specific leaderboards that focus on a single modality, accommodating models that specialize in one area (i.e., modality-specific generalists).
- **Scope-3**: Finer-grained, task-cluster-specific leaderboards under each modality, designed for meta-task generalists with partial or specialized abilities.
All sub-leaderboard rankings are derived using our General-Level framework, allowing users to select their evaluation scope based on model capability, resource constraints, and intended impact. Those prioritizing rapid iteration and low cost can opt for smaller-scope evaluations, while more powerful models seeking higher visibility may choose to participate in full-scope rankings.
We believe that the latest models you mentioned above like GPT-o1/o3, GPT-4o, and DeepSeek are likely to achieve leading rankings in Scope-3 or Scope-2 evaluations.
Anyway, thank you again for your great support! | Summary: The authors introduce General-Level, a comprehensive evaluation framework for multimodal generalist models that emphasizes synergy across tasks and modalities. It further presents General-Bench, an extensive benchmark covering over 700 tasks spanning various modalities to assess both comprehension and generation capabilities. The framework categorizes model performance into five levels, reflecting the progression from task-specific skills to cross-modal generalization critical for achieving AGI. Extensive experiments reveal that while current models show progress, significant gaps remain in true synergy and broad task support.
Claims And Evidence: This paper is quite extensive in both length and content, and thus presents a lot of scientific claims. Many claims are backed by extensive experimental results and comprehensive benchmark data, particularly regarding the framework’s ability to differentiate models based on their task and modality support. Other claims, such as those concerning the detailed characteristics of the datasets, are supported by sufficient details. Also, the findings and conclusions in the experimental section are validated and supported by corresponding experimental data. In particular, regarding the general-level properties, the authors provide ample mathematical proofs in the appendix. But I think the normalization and metric mapping methods are mentioned without thorough empirical validation, leaving their effectiveness in accurately comparing heterogeneous tasks not 100% substantiated.
Methods And Evaluation Criteria: The authors propose a completely new evaluation approach, called the General-Level framework. The proposed methods and evaluation criteria, in my opinion, are innovative and largely appropriate for assessing multimodal generalist capabilities. The multi-level General-Level framework and the expansive General-Bench dataset provide a comprehensive way to capture both comprehension and generation across various modalities. Overall, I have not found any obvious issues with the validation methods or processes.
Theoretical Claims: The paper’s theoretical claims, especially those regarding the “synergy effect” that underpins the proposed 5-layer evaluation framework, are more conceptual than rigorously proven. Their core argument is that current benchmarks for multimodal generalists or MLLMs only compare performance across different tasks, failing to fully assess these models’ true capabilities. The authors provide some mathematical proofs in the appendix concerning certain properties at the General level. I examined the rationale for defining synergy levels and the assumptions underlying the ability of multimodal generalists to outperform specialized models.
Experimental Designs Or Analyses: The experimental design is very extensive, evaluating over 700 tasks with diverse modalities, which provides a broad view of multimodal capabilities. The analysis comparing generalists against SoTA specialists is methodically structured. Also the paper includes very appealing visualization-based analyses. But there might be several points to be further improved. 1) The synergy effect design would benefit from more ablation studies to isolate the contribution of individual components. 2) While the large-scale benchmark is impressive, some analyses appear to emphasize breadth over in-depth evaluation of failure cases, which limits understanding of the underlying challenges.
Supplementary Material: Yes, I review the supplementary material, which looks remarkably comprehensive, extending to nearly 300 pages and offering an abundance of details that give a thorough understanding of the work. It includes information on every evaluated multimodal large model, complete experimental results for 700 tasks, a detailed ranking of these models at each level, an array of visualization analyses, rigorous theoretical proofs, and an extensive introduction to the benchmark data.
Relation To Broader Scientific Literature: The paper’s contributions are deeply rooted in the evolving landscape of MLLMs or multimodal generalist models. It provides a novel perspective on understanding and evaluating the capabilities of multimodal generalists. The notion of “synergy”, a central theme in the paper, probably builds on earlier ideas of cross-modal joint learning or transfer learning, where knowledge transfer between modalities should be explored in various studies. But to my knowledge, there isn’t any prior related work that evaluates MLLMs in this way. Furthermore, the comprehensive benchmark (General-Bench) and the tiered evaluation framework resonate with existing efforts in creating standardized tests for model performance, such as LVLM-eHub, MME, and others. By synthesizing these ideas, I think the paper provides a structured method to compare and improve multimodal generalists, advancing the broader conversation on achieving AGI.
Essential References Not Discussed: There are no essential related works missing from the citations.
Other Strengths And Weaknesses: In the above fields, I have thoroughly emphasized the value and strengths of this work. Overall, I believe this work will bring a significant revolutionary impact to the MLLMs community, potentially leading and even changing the current development direction of large multimodal foundation models.
However, I do have a few minor concerns (which might be potential weaknesses) that I hope the authors can address or clarify during the rebuttal phase:
- The experimental study may lack sufficient discussion of failure cases with detailed analyses to fully summarize the common errors made by existing MLLMs. Such insights could help guide future research directions.
- The paper assumes effective cross-modal synergy without adequately addressing integration and robustness issues; I think future work should focus on these aspects for improved model interoperability.
- While the paper posits that non-language modalities can enhance language intelligence, the experimental evidence for this reverse synergy might be largely absent.
Other Comments Or Suggestions: See Weakness.
Questions For Authors: See Weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewers' recognition of our work and the valuable feedback provided. Below, we provide detailed responses.
-----
**Q1. But I think the normalization and metric mapping methods are mentioned without thorough empirical validation, leaving their effectiveness in accurately comparing heterogeneous tasks not 100% substantiated.**
**A:** We would like to clarify that the proposed normalization and metric mapping techniques are primarily designed to enable fair and consistent comparisons across heterogeneous tasks. This is necessary because certain evaluation metrics, such as FID, are not naturally bounded within the [0, 1] range, and lower values indicate better performance. While it is reasonable to directly compare models using FID within a single task, averaging performance across multiple tasks becomes problematic when combining metrics with different scales and monotonicities—such as FID (lower is better) and ACC (higher is better).
To address this, we carefully designed a normalization and metric mapping strategy that brings different evaluation metrics into a unified range. We also took special care in choosing appropriate scaling factors to ensure that the transformed scores remain faithful representations of the original quality measurements. For example, an FID of 25 and an FVD of 100 are mapped in a way that preserves their relative performance levels. This normalization process allows us to compute an overall average performance across tasks without introducing unintended biases.
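As an illustration of this idea only (the rebuttal does not spell out the exact mapping, so the transform and scale factors below are purely hypothetical), a monotone bounded transform can bring lower-is-better, unbounded metrics such as FID and FVD onto the same [0, 1] scale as accuracy-style metrics before averaging:

```python
# Illustrative only: the rebuttal does not publish its exact mapping, so the
# transform and the scale factors below are hypothetical. The point is that a
# monotone, bounded transform lets lower-is-better, unbounded metrics (FID,
# FVD) be averaged with higher-is-better, bounded metrics (ACC).

def normalize_lower_is_better(value, scale):
    """Map an unbounded lower-is-better metric into (0, 1]; 0 maps to 1.0."""
    return 1.0 / (1.0 + value / scale)

def normalize_higher_is_better(value):
    """Accuracy-style metrics already in [0, 1] pass through unchanged."""
    return value

# Hypothetical scales chosen so that comparable quality levels (e.g., FID 25
# vs. FVD 100, as in the example above) land at comparable normalized scores.
scores = [
    normalize_lower_is_better(25.0, scale=25.0),    # FID
    normalize_lower_is_better(100.0, scale=100.0),  # FVD
    normalize_higher_is_better(0.5),                # ACC
]
average = sum(scores) / len(scores)
```

Because each transform is monotone, per-task rankings are preserved, while the shared [0, 1] range makes the cross-task average meaningful.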
---
**Q2. The synergy effect design would benefit from more ablation studies to isolate the contribution of individual components.**
**A:** Thank you for the suggestion. We would like to point out that a detailed analysis and discussion of the synergy effect in our multimodal generalist framework—across skills, comprehension and generation capabilities, and different modalities—is provided in Appendix B.7. We kindly refer the reviewer to that section for a comprehensive breakdown.
---
**Q3. While the benchmark provides a broad and impressive evaluation across diverse tasks, it currently lacks in-depth analyses of failure cases, which could limit insights into the specific challenges faced by MLLMs.**
**A:** Thank you for raising this important point. The core motivation behind General-Bench is to provide a comprehensive and systematic evaluation of MLLMs that goes far beyond conventional VQA-style assessments. To this end, we deliberately designed the benchmark to cover a wide range of task formats, modalities, and skills. This enables us to quantitatively and qualitatively measure MLLM performance from multiple perspectives, offering valuable insights into their generalization capabilities and limitations.
That said, we fully agree with the reviewers that in-depth failure case analysis is crucial for understanding model weaknesses and guiding future improvements. While our current focus has been on establishing broad coverage and performance patterns across dimensions, we acknowledge the value of qualitative case studies. In future work, we plan to include more detailed failure analyses and case-based evaluations to better illuminate the specific challenges MLLMs face and foster more targeted research efforts in this direction.
---
**Q4. The paper assumes effective cross-modal synergy without adequately addressing integration and robustness issues; I think future work should focus on these aspects for improved model interoperability.**
**A:** We thank the reviewer for the constructive suggestion. In Appendix B.7, we provide a preliminary analysis of cross-modal synergy, particularly in terms of whether different models have learned synergy effects across modalities. We agree, and in the revision we will conduct a deeper exploration of integration mechanisms and of robustness to modality-specific noise or failure, with the aim of improving model interoperability.
---
**Q5. While the paper posits that non-language modalities can enhance language intelligence, the experimental evidence for this reverse synergy might be largely absent.**
**A:** We appreciate the reviewer’s comment and would like to clarify a possible misunderstanding regarding our claim. Our primary argument is that language serves as a strong prior to enhance other modalities, rather than the reverse. While we do not deny that non-language modalities can, to some extent, aid language understanding, our empirical findings suggest that current multimodal signals are not yet capable of boosting language performance beyond that of SoTA NLP models.
In other words, we distinguish between helping language understanding and surpassing NLP SoTA performance through multimodal input—two very different thresholds. Our intention was to highlight this gap and motivate future work toward developing truly synergistic models where multimodal information can meaningfully and reliably elevate language intelligence beyond what is achievable through text alone.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors’ response.
Once again, I find the ideas presented in this paper both meaningful and thought-provoking—especially the discussion around *cross-modal synergy*. I generally agree with the perspective that most current MLLMs achieve a form of *pseudo-intelligence* by leveraging the emergent capabilities of language models, rather than realizing true multimodal intelligence.
In my own team, we’re also exploring ways to enable *native cross-modal emergent intelligence*—a truly foundational and native form of multimodal intelligence. Within this framework, one of the key goals is to observe *symmetric cross-modal synergy*, where multimodal inputs not only benefit from language but also actively enhance language intelligence itself sufficiently.
In this regard, I'm curious, have the authors considered how the concept of native multimodal foundation models might be integrated into your general-level evaluation framework?
By the way, I’d be happy to champion this paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer again for your recognition and support, which is the most crucial driving force behind our continued efforts to advance and maintain this grand benchmark. We will keep investing resources to ensure the long-term maintenance of this open evaluation platform.
Regarding the idea you raised about **“how to achieve a truly native multimodal foundation that enables native bidirectional cross-modal synergy (e.g., multimodality synergizing language intelligence)”**, we believe it's a very thought-provoking and trending question.
We actually touched upon some preliminary discussions related to this topic in the paper. Overall, we firmly confirm that achieving **Level-5 multimodal generalist intelligence** must involve this kind of **bidirectional or symmetric cross-modal synergy**, where different modalities and tasks can assist and enhance each other.
From a technical perspective, given the current SoTA research landscape of the MLLM community, we believe two key aspects need attention:
1) **Model Architecture**: It is essential that an MLLM treats all modalities equally, including adopting a **universal approach to task modeling**. We believe that using an **autoregressive framework**, combined with a **unified tokenization** method across modalities, is one of the most promising approaches for unifying both understanding and generation across modalities. The AR solution has also sparked a lot of debate recently.
2) **Training Paradigm**: The training process must involve super large-scale data from **all modalities**, not just language. Moreover, the training should **explicitly model cross-modal reasoning**. For example, a type of **modality-interleaved reinforcement learning mechanism** (e.g., used in long-chained LLMs) could be employed to facilitate mutual enhancement and learning among different modalities.
Once again, thank you so much for your continued support! | null | null | null | null |
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective | Accept (spotlight poster) | Summary: The work identifies a problem with PEFT for SAM - the breaking down of domain-invariant relations encoded during pre-training. It proposes InfoSAM, a model that minimizes a lower bound on the mutual information between the encoder and the decoder during PEFT. It does it in a Rényi entropy sense, without having to run similarity determination on the two distributions. The work is evaluated via a number of standard experiments.
Claims And Evidence: The idea that there exist domain-invariant relations, e.g. edge information, between the pre-training data (the teacher model) and the downstream task (the student model) is rather important. I could not find literature in the work that substantiates it, nor a similar mention at the beginning of the benchmark works. I may be willing to put it down to my ignorance, but any science paper should make an effort to make its central premise traceable.
Methods And Evaluation Criteria: Both do. The entropy hack is sublime, and appropriate going by the literature. Evaluation is very out-of-the-box.
Theoretical Claims: Proofs are left to backing literature.
Experimental Designs Or Analyses: No surprises.
Supplementary Material: The supplementary material is at the end. It assumes a limited role.
Relation To Broader Scientific Literature: Read bundled with the following question.
Essential References Not Discussed: The claim that a loss of information happens needs to be buttressed.
Other Strengths And Weaknesses: \
Other Comments Or Suggestions: \
Questions For Authors: \
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback. In the following sections, we will provide a detailed response to each of your comments.
---
**Q1:Literature reviews for domain-invariant information.**
**The concept of domain-invariant information was first introduced in prior works on domain adaptive segmentation (DAS), which explored cross-domain invariant features such as edge and structural information** (Hoffman et al., 2018). DAS aims to learn domain-invariant representations across multiple domains and follows two main approaches: (1) **extraction and refinement of domain-invariant features,** where methods like feature disentanglement (Chang et al., 2019) or analysis (Xu et al., 2022) decompose images into domain-invariant (e.g., shapes, edges) and domain-specific (e.g., textures, colors) components, aiming to enhance the former while suppressing the latter; (2) **GAN-based domain-invariant feature generation**, which employs adversarial training to align domains at different levels: image (Li et al., 2022), feature (Ma et al., 2024), and output (Huang et al., 2022). For example, GLGAN (Ma et al., 2024) integrates multi-scale global and local features to improve cross-domain transferability in remote sensing.
With the introduction of SAM, this domain-invariant concept has gained further attention. **SAM's large-scale segmentation pretraining on 11 million images inherently encodes cross-domain commonalities,** enabling strong zero-shot generalization. Recent works leverage these universal visual patterns for downstream tasks (Li et al., 2024; Peng et al., 2024). **However, these methods rely on complex designs or external data to learn representations. In contrast, we focus on preserving the domain-invariant information in pre-trained SAM for fine-tuning.**
Thanks again for the thoughtful review; we will add this discussion to the related work section in the revised paper.
---
**Q2:Loss and preservation of information**
Many recent studies leverage SAM's pretrained capabilities for downstream tasks by fine-tuning. However, when the fine-tuning data distribution is narrow, the model tends to overfit task-specific local features (Wang et al., 2024). **We argue that this is mainly because task-specific optimizations will cover or suppress domain-invariant features learned during pre-training.**
To substantiate this assumption, **we have conducted experiments in Sec. 5.4 to illustrate that the extracted relation works (see Tab. 5) and is domain-invariant (see Tab. 6)** .
- Tab. 5: Extracted relations boost other distillation methods (e.g., TinySAM) by 1.7%–5.2% IoU, indicating the preserved information's effectiveness.
- Tab. 6: Applying RM trained on one domain to a completely different domain preserves its effectiveness, suggesting that these transferable relations are domain-invariant and beneficial for fine-tuning.
We further **explore the nature of domain-invariant information**. We employ relations to represent domain-invariant information, which serves as an implicit yet generalizable characterization that may inherently encode various domain-agnostic properties. Here, we **showcase and evaluate structural edge information using the Boundary F1 Score** (BFS) (Peng et al., 2023). As shown in Fig.2 (https://anonymous.4open.science/r/InfoSAM-7D61/README.md), InfoSAM with the relation module outperforms other fine-tuning baselines in boundary preservation, demonstrating that this implicit relational encoding effectively extracts richer structural edge features.
*Boundary F1 Score comparisons on leaf dataset (threshold=3):*
| Method | BFS (↑) |
| ------ | ------- |
| SAM | 39.0 ± 0.16 |
| HQSAM | 63.7 ± 0.65 |
| SU-SAM | 75.1 ± 0.69 |
| ConvLoRA-SAM | 71.5 ± 0.56 |
| InfoSAM (Ours) | **76.4 ± 0.29** |
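For readers unfamiliar with the metric, a minimal pure-Python sketch of a Boundary F1 Score in the spirit of Peng et al. (2023) follows; the 4-connectivity boundary extraction and Manhattan-distance matching are simplifying assumptions for illustration, not the evaluation code behind the table above:

```python
# Minimal Boundary F1 sketch: precision/recall over boundary pixels matched
# within a distance threshold. Boundary extraction (4-connectivity) and the
# Manhattan-distance convention are assumptions, not the paper's code.

def boundary_pixels(mask):
    """Foreground pixels with at least one 4-neighbor outside the foreground."""
    h, w = len(mask), len(mask[0])
    out = set()
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    out.add((i, j))
                    break
    return out

def boundary_f1(pred, gt, threshold=3):
    """F1 over boundary pixels matched within a Manhattan-distance threshold."""
    bp, bg = boundary_pixels(pred), boundary_pixels(gt)

    def match_ratio(src, dst):
        if not src:
            return 0.0
        hits = sum(
            1 for p in src
            if any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= threshold for q in dst)
        )
        return hits / len(src)

    precision, recall = match_ratio(bp, bg), match_ratio(bg, bp)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A 3x3 square mask and the same square shifted down by one row.
square = [[1 if 1 <= i <= 3 and 1 <= j <= 3 else 0 for j in range(6)]
          for i in range(6)]
shifted = [[1 if 2 <= i <= 4 and 1 <= j <= 3 else 0 for j in range(6)]
           for i in range(6)]
```

With the threshold of 3 used in the table, a one-pixel shift is still matched perfectly, while a threshold of 0 penalizes it, which is why the threshold value matters when comparing methods.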
-------------------------------------
References:
1. Hoffman et al. Cycada: Cycle-consistent adversarial domain adaptation. ICML, 2018.
2. Chang et al. All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation. CVPR, 2019.
3. Xu et al. DIRL: Domain-Invariant Representation Learning for Generalizable Semantic Segmentation. AAAI, 2022.
4. Li et al. A stepwise domain adaptive segmentation network with covariate shift alleviation for remote sensing imagery. TGRS, 2022.
5. Ma et al. Decomposition-based Unsupervised Domain Adaptation for Remote Sensing Image Semantic Segmentation. TGRS, 2024.
6. Huang et al. MLAN: Multi-level adver sarial network for domain adaptive semantic segmentation. PR, 2022.
7. Li et al. Domain-invariant Representation Learning via Segment Anything Model for Blood Cell Classification. Arxiv, 2024.
8. Peng et al. Learning to Adapt SAM for Segmenting Cross-domain Point Clouds. ECCV, 2024.
9. Zhang et al. Learning Shape-Invariant Representation for Generalizable Semantic Segmentation. TIP, 2023
10. Wang et al. SAMCL: Empowering SAM to Continually Learn from Dynamic Domains. Arxiv, 2024. | Summary: In this paper, the authors focus on parameter-efficient fine-tuning for segment anything (SAM) network from information theory aspect, and propose InfoSAM. Specifically, InfoSAM aims to mine the domain-invariant relations encoded in the pretrained model, and design a new knowledge distillation framework with two new training objectives, i.e., intra-SAM relation loss and inter-SAM relation loss. by preserving domain-invariant relations in the pretrained model and maximizing mutual information between teacher and student models, InfoSAM achieves better segmentation abilities on various downstream segmentation tasks.
Claims And Evidence: The formulation regarding intra-SAM and inter-SAM relations is correct. The theoretical analysis is sufficient and convincing.
Methods And Evaluation Criteria: The proposed method is intuitive and effective. The evaluation criteria are widely used in segmentation tasks.
Theoretical Claims: The formulation regarding intra-SAM and inter-SAM relations is correct.
Experimental Designs Or Analyses: The experimental design and analysis are sufficiently valid. The experimental results are promising.
Supplementary Material: The reviewer has read the supplementary material. The useful parts include: pseudo code of InfoSAM, derivation of information-theoretic losses, and additional experimental results. All these parts improve the quality of the manuscript.
Relation To Broader Scientific Literature: The proposed method may insight future knowledge distillation works and other improved parameter-efficient fine-tuning works for better visual models.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Weakness: I noticed that both relation modules in the teacher and student SAM are optimized; how do the authors ensure that the proposed relation modules will not fall into a trivial solution?
Other Comments Or Suggestions: Eqn 14 may include a typo; I think it should be lambda_1 * L_r + lambda_2 * L_d.
Questions For Authors: Please provide an ablation study of the parameter count of the relation module, e.g., the number of attention layers and different architectures. A thorough analysis could support the claim from the information-theoretic perspective.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review of our paper and the valuable insights you provided. In the following sections, we will provide a detailed response to each of your comments.
--------
**Q1:Risk of trivial solutions in relation module (RM).**
We clarify that the **teacher's RM and student's RM share identical parameters,** as described in the Problem Formulation in Sec. 4.2 (lines 185–187).
Moreover, our proposed loss function ($L_{info}$) includes **several regularization terms** ($\log_2 \|G^{T}_{imr}\|_F^2$, $\log_2 \|G^{T}_{r}\|_F^2$, $\log_2 \|G^{S}_{r}\|_F^2$), which are elaborated in Sec. 4.2 after Eq. (11) and Eq. (13). These terms explicitly **promote diversity in the feature distribution** and **prevent it from converging to trivial solutions.**
To further verify the effectiveness of these regularization terms, we conducted an **ablation study to assess their impact both qualitatively (through the visualization of relation maps) and quantitatively (through performance on downstream tasks).** **Both results indicate that the proposed loss with regularization terms effectively extracts domain-invariant features, rather than domain-specific noise, thereby enhancing downstream performance and alleviating the problem of trivial solutions.**
- **Visualization:** We visualize the relation maps and their corresponding statistical distributions evolving from early to late epochs. As shown in Fig. 1 (https://anonymous.4open.science/r/InfoSAM-7D61/README.md), without the regularization terms, the distribution of relation maps becomes increasingly narrow during training, and the domain-invariant information captured by the relation maps becomes less distinct. In contrast, the RM trained with regularization terms maintains a broad relation distribution and a more representative relation map.
- **Performance:** The regularization terms benefit our method by improving performance, as demonstrated by a 1.0% and 1.8% increase in IoU on the Leaf and Road datasets, respectively.
| **Method** | Agriculture: IoU (Leaf) | Remote Sensing: IoU (Road) |
| ---------- | ----------------------- | -------------------------- |
| w/o RT | 74.6 ± 0.12 | 59.6 ± 0.69 |
| w RT | **75.6 ± 0.27** | **61.4 ± 0.30** |
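A small sketch of the regularization idea discussed above: for a trace-normalized Gram matrix $G$ built from a batch of features, the term $\log_2 \|G\|_F^2$ reaches its maximum of 0 when all features collapse to a single direction, and its negative is the matrix-based Rényi-2 entropy, which grows with feature diversity. The linear kernel below is an assumption for illustration; InfoSAM's actual kernel choice may differ.

```python
# Sketch of the log2 ||G||_F^2 regularization term on a trace-normalized
# Gram matrix G. A linear kernel is assumed purely for illustration; the
# negative of this term is the matrix-based Renyi-2 entropy: 0 bits when all
# features collapse to one direction, larger for diverse features.
import math

def gram_matrix(features):
    """Trace-normalized linear-kernel Gram matrix, so that tr(G) = 1."""
    n = len(features)
    k = [[sum(a * b for a, b in zip(features[i], features[j]))
          for j in range(n)] for i in range(n)]
    trace = sum(k[i][i] for i in range(n))
    return [[k[i][j] / trace for j in range(n)] for i in range(n)]

def log2_frobenius_sq(g):
    """log2 ||G||_F^2 = log2 sum_ij G_ij^2."""
    return math.log2(sum(v * v for row in g for v in row))

collapsed = gram_matrix([[1.0, 0.0], [1.0, 0.0]])  # identical features
diverse = gram_matrix([[1.0, 0.0], [0.0, 1.0]])    # orthogonal features
```

In this toy case the collapsed batch gives $\log_2 \|G\|_F^2 = 0$ while the orthogonal batch gives $-1$, which is why penalizing the term pushes the feature distribution away from the trivial, collapsed solution.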
---
**Q2:Typographical error of $L_{info}$.**
Thank you for your careful review. Eq. (14) should indeed be **$L_{info}=\lambda_1 L_r+\lambda_2 L_d$.** We will correct this typographical error in the revised paper.
---
**Q3:Ablation study of various relation modules.**
We conducted an analysis to compare different model architectures and explore the number of attention layers for the relation module (RM). We compare **a direct dot product, a linear layer, multiple attention layers, and our proposed RM** across multiple experiments on two distinct domains.
The experimental results show that: (1) **the attention-based RM outperforms the other architecture designs.** This indicates that the attention mechanism effectively assesses the correlations between the input features (i.e., image and mask features), thereby adaptively filtering and enhancing useful information (e.g., edge details) while reducing redundancy. (2) Stacking an appropriate number of attention layers (e.g., three layers) in the RM can be beneficial for capturing key information. However, **stacking too many (e.g., five layers) increases training difficulty and risks overfitting.** In a nutshell, **the current RM design is a trade-off between performance and computational overhead**, and it effectively captures the relationships between image and mask features.
*Ablation study of various RM:*
| **Method** | Agriculture: IoU (Leaf) | Remote Sensing: IoU (Road) |
|---------------|-------------------------|----------------------------|
| Dot Product | 75.2 ± 0.35 | 61.0 ± 0.04 |
| Linear | 74.9 ± 0.51 | 59.3 ± 0.58 |
| Attn-5 | 75.4 ± 0.22 | 61.4 ± 0.12 |
| Attn-3 | 75.4 ± 0.40 | **61.7 ± 0.06** |
| Attn-1 (ours) | **75.6 ± 0.27** | 61.4 ± 0.30 |
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. The additional results have addressed my concerns. I believe the quality of the manuscript will be improved after revision. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! We are delighted that our response has addressed your concerns and appreciate your acknowledgment of the additional results. All the discussions and experiments will be added to the revised paper. | Summary: This paper proposes InfoSAM, a new SAM fine-tuning framework that (1) compresses the domain pseudo-invariant information and (2) maximizes mutual information between a pre-trained teacher and a fine-tuned student model. Experiments across diverse datasets demonstrate that InfoSAM significantly enhances segmentation performance compared to traditional parameter-efficient fine-tuning and distillation methods.
Claims And Evidence: Yes. The paper proposes two key mutual information losses: the relation compression loss Lr and the distillation loss Ld. The effectiveness of combining the two is demonstrated by the performance of SAM and SAM2 on downstream tasks and on tasks from different domains. The effectiveness of each is also verified by an ablation study.
Methods And Evaluation Criteria: The proposed methods make sense for the task, the evaluation also is thorough.
Theoretical Claims: Yes. I have the following questions for the authors:
1. In lines 248–250, Equation (14) is L_info=lambda1*Lce+lambda2*Linfo. However, Lce has not been explicitly defined earlier, and Linfo appears on both sides of the equation. I think it should be L_r and L_d?
2. In lines 258-260, the choice of α = 2 is justified because it simplifies the computation (using the Frobenius norm); are there other theoretical or practical reasons for selecting this value?
Experimental Designs Or Analyses: Yes, the experimentals design is sound.
Supplementary Material: Yes, the authors provide the code in supplementary Material.
Relation To Broader Scientific Literature: The paper’s contributions are twofold.
1. Different from other parameter-efficient fine-tuning (PEFT) methods (SAMAdapter (Chen et al., 2023), Conv-LoRA (Zhong et al., 2024)), it introduces a novel information-theoretic framework that leverages mutual information to preserve critical domain-invariant features.
2. Different from traditional knowledge distillation techniques (TinySAM (Shu et al., 2023), MobileSAM (Zhang et al., 2023)), this paper considers the inter-module relationships within the teacher SAM, thereby enabling a more effective transfer of structural knowledge.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
- The paper creatively combines parameter-efficient fine-tuning with an information-theoretic framework, introducing mutual information-based losses that focus on preserving domain-invariant inter-module relationships within SAM. This represents an innovative twist on traditional knowledge distillation approaches.
Weaknesses:
- Insufficient Justification for alpha=2
- Unclear Definition of L_info
Other Comments Or Suggestions: None
Questions For Authors: 1. In lines 248–250, Equation (14) is L_info=lambda1*Lce+lambda2*Linfo. However, Lce has not been explicitly defined earlier, and Linfo appears on both sides of the equation. I think it should be L_r and L_d?
2. In lines 258-260, the choice of α = 2 is justified because it simplifies the computation (using the Frobenius norm); are there other theoretical or practical reasons for selecting this value?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your detailed and valuable review on our paper. We will address each of your comments thoroughly in the following sections.
---
**Q1:Insufficient justification for $\alpha=2$.**
The core reasons for choosing $\alpha=2$ in matrix-based Rényi's $\alpha$-entropy are as follows:
(1) The primary **practical motivations are computational efficiency and alignment with prior works**. By setting $\alpha=2$, we enable direct computation of matrix-based Rényi entropy through Frobenius norm operations (Eq. 11), eliminating the necessity for eigenvalue decomposition. This optimization **reduces time complexity from $O(n^3)$ to $O(n^2)$**, where $n$ is the number of samples (Dong et al., 2023), substantially reducing computational costs while maintaining theoretical rigor, which is particularly advantageous for high-dimensional data analysis (Yu et al., 2019). Additionally, **prior research (Miles et al., 2023) has successfully applied Rényi entropy with $\alpha=2$ in segmentation tasks**; to align with the established practices in this field, we adopt $\alpha=2$.
(2) For theoretical reasons, if the application requires emphasis on the tails of the distribution (rare events) or on multiple modes (distributions with multiple peaks), $\alpha$ should be less than 2 and possibly approach 1 from above. If the goal is to highlight the dominant mode (the most probable region), $\alpha$ should be greater than 2 to emphasize central tendencies. **$\alpha=2$ provides neutral weighting** (Yu et al., 2019). Moreover, the **Frobenius norm's differentiable and strongly convex properties guarantee rapid convergence in gradient-based optimization algorithms** (Boyd, 2004).
(3) Furthermore, we conducted an analysis to **evaluate the performance of different $\alpha$ values ($\alpha=1.01, 2, 3$)**. Following with prior work (Yu et al., 2019), we set $\alpha=1.01$ to asymptotically approach Shannon entropy. The results indicate that **$\alpha=2$ achieves the highest verification accuracy while reducing computational overhead by an order of magnitude**. This computational gain stems from its exclusive reliance on Frobenius norm operations (Eq. 11), whereas $\alpha=1.01$ or $3$ require eigenvalue decompositions, which are computationally more expensive.
*Experiments of different $\alpha$ values in $L_{info}$ :*
| **Method** | Agriculture: IoU (Leaf) | Remote Sensing: IoU (Road) | Computation Time (ms) |
| ------------- | ----------------------- | -------------------------- | --------------------- |
| $\alpha=1.01$ | 75.3 ± 0.31 | 60.6 ± 0.12 | 32.1 ± 30.7 |
| $\alpha=2$ | **75.6 ± 0.27** | **61.4 ± 0.30** | **1.2 ± 0.3** |
| $\alpha=3$ | 75.2 ± 0.30 | 61.2 ± 0.06 | 35.4 ± 31.2 |
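To make the computational claim concrete, here is a small sketch (not the paper's implementation) contrasting the general eigenvalue-based formula $H_\alpha(G) = \frac{1}{1-\alpha}\log_2\sum_i \lambda_i^\alpha$ with the $\alpha=2$ shortcut $H_2(G) = -\log_2 \|G\|_F^2$. A diagonal Gram matrix is used so its eigenvalues are available without a decomposition routine:

```python
# Sketch contrasting the general matrix-based Renyi entropy, which needs the
# eigenvalues of the normalized Gram matrix, with the alpha = 2 shortcut that
# needs only entrywise sums. A diagonal Gram matrix is used so its
# eigenvalues are simply the diagonal entries.
import math

def renyi_from_eigs(eigs, alpha):
    """H_alpha = log2(sum_i lambda_i^alpha) / (1 - alpha), in bits."""
    return math.log2(sum(e ** alpha for e in eigs)) / (1.0 - alpha)

def renyi2_frobenius(g):
    """Alpha = 2 shortcut: H_2 = -log2 ||G||_F^2, no eigendecomposition."""
    return -math.log2(sum(v * v for row in g for v in row))

eigs = [0.5, 0.25, 0.25]  # a valid spectrum: nonnegative, sums to 1
g = [[eigs[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

h2_eig = renyi_from_eigs(eigs, 2.0)   # eigenvalue route
h2_frob = renyi2_frobenius(g)         # Frobenius-norm shortcut, same value
```

The two routes agree at $\alpha=2$, and as $\alpha \to 1$ the eigenvalue formula approaches the Shannon entropy, which is the $\alpha=1.01$ setting used for comparison in the table above.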
-------
**Q2:Unclear definition of $L_{info}$**
Thank you for your careful review. Equation (14) should indeed be **$L_{info}=\lambda_1 L_r+\lambda_2 L_d$**. We will correct this typographical error in the revised paper.
-------------------------------------
References:
1. Dong et al. Optimal Randomized Approximations for Matrix-based Renyi’s Entropy. TIT, 2023.
2. Yu et al. Multivariate Extension of Matrix-Based Rényi's α-Order Entropy Functional. TPAMI, 2019.
3. Miles et al. MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation. CVPR, 2023.
4. Boyd & Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. The additional results have addressed my concern. I will keep the 'accept' recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! We are delighted that our response has addressed your concerns and appreciate your acknowledgment of the additional results. All the discussions and experiments will be added to the revised paper. | null | null | null | null | null | null | null | null |
AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism | Accept (poster) | Summary: This paper introduces AdaDecode, a self-speculative decoding method with an early exiting mechanism. Based on empirical findings that many simple and predictable tokens can be accurately generated at intermediate transformer layers, the authors propose three key contributions. First, they introduce a lightweight intermediate-layer LM head training approach that enables high-confidence early predictions without modifying the original model parameters. This lightweight head achieves performance comparable to fully parameterized LM heads from prior works, and is tuned to minimize KL divergence with final layer outputs. Second, they develop adaptive layer parallelism that concurrently processes multiple early-predicted tokens generated by the lightweight LM heads from different layers, significantly improving hardware utilization and decoding speed. Third, by incorporating early exiting with self-speculative decoding, AdaDecode enables KV-cache sharing between draft and verification stages, reducing computational resources while maintaining output consistency.
Claims And Evidence: - The paper's core claims about the effectiveness of lightweight intermediate-layer LM heads and adaptive layer parallelism are generally well-supported by the presented evidence:
- The proposed method of using KL divergence between final layer and intermediate layer output distributions is well-motivated and effectively implemented. This approach successfully calibrates log probability between draft and verification stages, as implicitly demonstrated in the experiments about relation between **γ** and rejection rate (Figure 5(c))
- The claim of achieving comparable performance with lightweight head is supported by comprehensive experimental results across multiple model architectures (e.g. Llama3.1-8B, CodeLlama-13B/34B-inst) and datasets (e.g. XSum, HumanEval, GSM8K)
- However, there are concerns about the robustness and justification for the early exiting layer selection policy:
- The authors select three intermediate layers for early exiting (8th, 16th, 24th) across different model architectures without providing sufficient analysis justifying this specific configuration. While their experiments with LLaMA3.1-8B, CodeLLama-13B, and 34B show the approach works well, the paper lacks ablation studies or theoretical analysis explaining why selecting three layers works well.
- Previous work in layer skipping or early exiting typically incorporates additional fine-tuning or specialized architectures to determine optimal exiting points. The absence of a more sophisticated approach to layer selection raises questions about whether the reported speedups represent the maximum potential of the method.
- A more thorough analysis of different layer configurations would strengthen the paper's claims and potentially provide insights for adapting AdaDecode to other model architectures or domains in future work.
Methods And Evaluation Criteria: - The paper's proposed methods are technically sound and well-aligned with the goal of improving inference efficiency through speculative decoding:
- The lightweight intermediate-layer LM head approach is a practical solution that avoids modifying original model parameters while enabling efficient early predictions.
- The adaptive layer parallelism technique represents a meaningful advancement in hardware utilization for speculative decoding systems.
- Regarding evaluation, the authors employ standard metrics for the field:
- The paper uses conventional evaluation metrics for speculative decoding: throughput (Tokens/s) and relative speedup compared to vanilla decoding. These metrics do allow for basic comparisons with prior work in the field.
- The evaluation across different model architectures (LLaMA3.1-8B, CodeLLama-13B, 34B) and diverse tasks provides good coverage of practical applications.
- However, there is a notable limitation in the reproducibility of evaluation results:
- While throughput and speedup are standard metrics, they are hardware-dependent, which can make cross-study comparisons challenging.
- The paper would benefit from more comprehensive evaluation details such as: detailed hardware specifications for reproducibility, acceptance rates for speculative tokens across different scenarios, memory utilization statistics, and perhaps theoretical computation reduction metrics. These additional measures would provide a more complete picture of the method's efficiency beyond raw throughput numbers, especially for researchers working with different hardware configurations.
- Also, the comparison methodology with previous state-of-the-art speculative decoding methods presents a significant limitation in the paper's evaluation:
- According to Supplement section C, the authors appear to have compared AdaDecode with previous methods like Self-SpecDecode using configurations that differ from those reported in the original papers. This approach undermines fair comparison, which is a cornerstone of reproducible machine learning research
- The specific example of comparing with Self-SpecDecode on XSum using LLaMA-3.1-8B is particularly problematic since this configuration wasn't reported in the original Self-SpecDecode paper. This is especially concerning because Self-SpecDecode utilizes Bayesian optimization for hyperparameter tuning, which is highly sensitive to initial conditions and configuration settings.
- Hyperparameter tuning significantly impacts model performance and reproducibility. When comparing methods that rely on different tuning approaches, ensuring consistent evaluation conditions becomes critical.
- For improved reproducibility, the authors should:
- Provide the exact scripts used for Bayesian optimization in their comparisons, including random seeds to ensure deterministic results.
- Alternatively, conduct experiments using the exact model architectures and datasets reported in the original papers they compare against.
Theoretical Claims: The paper provides theoretical justification for its lightweight LM head approach in Supplement A. Upon examination, the proof focuses on establishing the mathematical existence of a transformation matrix T that enables parameter-efficient implementation of intermediate layer heads. The mathematical derivations appear correct and thoroughly support the claims made in the main paper.
Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses in the paper, with particular focus on their methodological soundness:
The ablation studies examining the robustness of AdaDecode to the hyperparameter **γ** (used as a threshold for the drafting stage) are well-designed and provide valuable insights into the method's stability. The authors appropriately vary this key parameter and demonstrate consistent performance across a reasonable range of values, which strengthens confidence in the practical applicability of the method.
The overall experimental framework is methodologically sound, with the authors:
- Testing on multiple model architectures (LLaMA3.1-8B, CodeLLama-13B, 34B) to demonstrate generalizability
- Evaluating on diverse tasks including summarization, mathematical reasoning, and code generation
- Providing appropriate baselines for comparison including vanilla decoding and alternative speculative methods
- Measuring both throughput improvement and output consistency to balance speed and quality
However we do have some suggestion that could provide more insight for the future readers of this paper.
1. Context Length Impact on Rejection Rates
While the paper demonstrates effectiveness on standard benchmarks, it lacks analysis of how rejection rates scale with context length. This is critical because:
- Draft quality and acceptance rates often degrade with longer contexts due to increased model strain
- Smaller models like CodeLlama-7B may struggle to maintain draft quality for long-context inputs (as mentioned in paper)
Suggested Additions:
- Experiments measuring acceptance rates across varied context lengths (e.g., 4K → 32K → 128K tokens)
- Comparison of rejection ratios between AdaDecode and baselines under long-context scenarios
- Analysis of whether lightweight LM heads mitigate long-context challenges better than conventional drafting approaches
2. Maximum New Token Length Analysis
The current experiments only test up to 512 new tokens, which may not reveal efficiency patterns for different generation demands:
Suggested Additions:
Comparative tests with max_token ∈ {256, 512, 1024} to evaluate:
- Throughput consistency across generation lengths
- Rejection rate trajectories over extended sequences
- Memory utilization patterns during prolonged generation
These additions would better demonstrate AdaDecode's robustness across practical deployment scenarios while addressing inherent challenges in speculative decoding systems.
Supplementary Material: We reviewed all parts of the supplementary material, from A (proofs) to E (limitations and future work). We have some concerns about Section C, but these are already mentioned in Methods And Evaluation Criteria. We also briefly reviewed the features and structure of the code provided in the supplementary materials to check for reproducibility issues.
Relation To Broader Scientific Literature: AdaDecode's lightweight LM head approach builds upon previous early exit methods, but distinguishes itself by offering a more parameter-efficient implementation. While prior works like LayerSkip[1] also explore early exiting with shared LM heads, AdaDecode's specific parameter efficient LM head appears to be orthogonal contribution. As author mentioned future works for using LoRA instead of lightweight LM head is also expected approach.
The paper's approach of combining multiple early exiting layers with self-speculative decoding connects to concurrent works. AdaDecode's adaptive layer parallelism for concurrent processing of multiple early-predicted tokens represents a potentially orthogonal optimization approach.
The reuse of the KV-cache between drafting and verification stages appears in several recent works, including LayerSkip[1] and EESD[2].
[1] Elhoushi, Mostafa, et al. "LayerSkip: Enabling early exit inference and self-speculative decoding." arXiv preprint arXiv:2404.16710 (2024).
[2] Liu, Jiahao, et al. "Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism." arXiv preprint arXiv:2406.03853 (2024).
Essential References Not Discussed: The paper is missing a critical citation related to its core contributions. EESD (Early-exiting Speculative Decoding), published in ACL 2024 Findings presents remarkably similar techniques and should be acknowledged.
[1] Liu, Jiahao, et al. "Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism." arXiv preprint arXiv:2406.03853 (2024).
Other Strengths And Weaknesses: - Although the paper's contributions are well aligned with its motivation, there is concern that these contributions may overlap significantly with concurrent works. For instance, EESD[1] proposes a self-speculative decoding framework with early exiting layers that bears notable similarities to AdaDecode:
- Both methods train early exiting layers while keeping the original model's parameters fixed
- EESD[1] uses self-distillation which performs similarly to the KL divergence approach in AdaDecode
- Both methods highlight that KV cache created at the draft stage is reusable during the verification stage
- Given these substantial overlaps in core technical approaches, the authors should more clearly articulate what contributions of AdaDecode are orthogonal to or extend beyond those of EESD[1]. This clarification would help position the paper's contribution.
[1] Liu, Jiahao, et al. "Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism." arXiv preprint arXiv:2406.03853 (2024).
Other Comments Or Suggestions: Algorithm 1 provides a helpful overview of how AdaDecode works, which aids reviewer understanding. However, the verification stage is not adequately presented in the algorithm.
Specifically, the algorithm should explicitly express how KV cache management is handled between draft and verification stages. There should be clear notation or steps indicating that the KV cache values generated for accepted draft tokens are preserved and reused during the verification stage, while KV cache values for rejected drafts are discarded. This KV cache reuse is a critical aspect of the method's efficiency gains, as it prevents redundant computation for tokens that pass verification.
This clarification would improve the algorithm's completeness and better highlight one of the key advantages of the AdaDecode approach - the ability to share computation between draft and verification stages through KV cache reuse. Without this explicit indication, readers might miss this important implementation detail that contributes significantly to the method's performance benefits.
Questions For Authors: 1. The paper presents experiments with pre-selected early exiting layers across different model architectures. Could you provide preliminary experimental results that justified this specific selection? Alternatively, could you present additional experiments demonstrating that AdaDecode's performance is robust to different choices of early exiting layers? This would strengthen the generalizability claim of your approach.
2. In the comparison with baseline methods like Self-SpecDecode, did you conduct experiments using the exact configurations reported in the original papers? If modifications were necessary, could you provide more details about how you ensured fair comparison? This would address concerns about reproducibility and the validity of reported performance improvements.
3. While the benchmark datasets used are reasonably diverse, they may not fully represent challenging scenarios where early exiting might underperform. Specifically, tasks requiring processing of long contexts or generating open-ended responses (e.g. ELI5, NarrativeQA) might present different dynamics for speculative decoding. Have you evaluated AdaDecode on such tasks, and if so, could you share those results?
4. Could you elaborate on how AdaDecode is orthogonal to recently published Early-exiting Speculative Decoding (EESD) in ACL 2024 Findings? EESD similarly uses early-exiting structures and self-distillation (comparable to your KL divergence approach), while employing a Thompson Sampling Control Mechanism. What specific technical innovations in AdaDecode differentiate it from EESD's approach, and how might these differences contribute to performance improvements?
ELI5 : https://facebookresearch.github.io/ELI5/
Narrative QA : https://huggingface.co/datasets/deepmind/narrativeqa_manual
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's insightful feedback.
**[Q1]: Could you present additional experiments demonstrating that AdaDecode's performance is robust to different choices of early exiting layers?**
**[A1]**: Please refer to Table 1 in this [PDF](https://anonymous.4open.science/r/AdaDecode-ICML2025-132E/rebuttal/To_Reviewer_5F7j.pdf). The results show that AdaDecode achieves consistent speedups across different layer configurations.
**[Q2]: The paper is missing a critical citation Early-exiting Speculative Decoding (EESD). Could you elaborate on what specific technical innovations in AdaDecode differentiate it from EESD's approach?**
**[A2]**: We have cited EESD in our original submission (Line 565). Below is a detailed comparison.
- **Efficiency**: To enable reliable early exiting, EESD requires an extra decoder layer and a full LM head (vocab size × hidden size), adding ~0.7B trainable parameters. In contrast, AdaDecode introduces only three lightweight LM heads (hidden size × hidden size), totaling just 48M additional parameters, making it significantly more efficient.
- **Flexibility**: While both methods use early exiting for drafting, AdaDecode supports dynamic-depth early exiting that allows adaptive layer parallelism, whereas EESD is restricted to fixed-depth early exiting.
- **Training and stopping strategy**: EESD trains its additional modules using cross-entropy loss and employs Thompson Sampling for stopping decisions. AdaDecode instead optimizes lightweight heads via KL divergence and uses probability-thresholding for termination.
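As a rough sanity check on the efficiency comparison above (our own back-of-envelope calculation; the LLaMA-3.1-8B-style dimensions, hidden size 4096 and vocabulary 128,256, are our assumption and are not stated in the rebuttal):

```python
hidden, vocab = 4096, 128_256  # assumed LLaMA-3.1-8B-style dimensions

lightweight_heads = 3 * hidden * hidden  # three hidden->hidden projections (AdaDecode)
full_lm_head = vocab * hidden            # one full LM head (part of EESD's extra modules)

print(f"lightweight heads: {lightweight_heads / 1e6:.0f}M params")  # ~50M
print(f"full LM head:      {full_lm_head / 1e6:.0f}M params")       # ~525M
```

Under these assumed dimensions the lightweight heads come to ~50M parameters, in the ballpark of the reported 48M (the exact figure depends on the true hidden sizes of the models used); the remainder of EESD's ~0.7B would come from its extra decoder layer on top of the full LM head.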
**[Q3]: The paper would benefit from more comprehensive evaluation details such as: (1) detailed hardware specifications for reproducibility, (2) acceptance rates for speculative tokens across different scenarios, (3) memory utilization statistics, and perhaps (4) theoretical computation reduction metrics.**
**[A3]**: Please refer to Appendix D and Figure 5 for a detailed discussion on (1) and (2). For (3), we maximize the memory utilization during training (~80G per GPU), and the memory utilization during inference varies depending on context length and batch size, starting from ~18G with FP16 precision.
For (4), theoretical computation reduction depends on the number of total generated tokens and number of accurate early predictions. Let $\alpha$ be the fraction of layers needed for early exiting. Given $N$ tokens and per-token latency $T$, vanilla decoding takes $NT$, while AdaDecode requires $T (1+ \alpha (N-1))$. As $N \to \infty$, the theoretical speedup is $1/\alpha$. In practice, the value of $\alpha$ can vary based on model size. For instance, in a 48-layer model (e.g., CodeLlama-34B), exiting at layer 12 ideally gives $\alpha \sim 0.25$, leading to a potential $4\times$ speedup.
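The latency model above can be sketched numerically (a minimal illustration of the formula, with an arbitrary per-token latency $T$):

```python
def vanilla_latency(N, T=1.0):
    # standard autoregressive decoding: every token runs all layers
    return N * T

def adadecode_latency(N, alpha, T=1.0):
    # first token runs the full model; each subsequent token ideally
    # exits after a fraction alpha of the layers
    return T * (1 + alpha * (N - 1))

alpha = 0.25  # e.g. exiting at layer 12 of a 48-layer model
speedups = {N: vanilla_latency(N) / adadecode_latency(N, alpha)
            for N in (10, 100, 10_000)}
print(speedups)  # approaches 1/alpha = 4x as N grows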
**[Q4]: In the comparison with baseline methods like Self-SpecDecode, did you conduct experiments using the exact configurations reported in the original papers? If modifications were necessary, could you provide more details about how you ensured fair comparison?**
**[A4]**: Yes, we use **exactly the same configuration** as Self-SpecDecode for CodeLlama-13B, as it was also adopted in the original paper. However, since they did not report results for LLaMA-3.1-8B, we used their Bayesian optimization script to search extensively for the best hyperparameters (Appendix C), ensuring a fair comparison.
**[Q5]: Provide the exact scripts used for Bayesian optimization in their comparisons, including random seeds to ensure deterministic results.**
**[A5]**: We used the same Bayesian optimization script from the official repo of Self-SpecDecode: https://github.com/dilab-zju/self-speculative-decoding/blob/main/search.ipynb.
**[Q6]: Context Length Impact on Rejection Rates.**
**[A6]**: Our AdaDecode exhibits consistent speedups across different scenarios with varying context length. Please refer to Table 2 in this [PDF](https://anonymous.4open.science/r/AdaDecode-ICML2025-132E/rebuttal/To_Reviewer_5F7j.pdf).
**[Q7]: Maximum New Token Length Analysis.**
**[A7]**: Please refer to Table 3 in the same PDF, which confirms our method’s effectiveness with different max new token lengths.
**[Q8]: Algorithm 1 should explicitly express how KV cache management is handled between draft and verification stages.**
**[A8]**: Indeed, KV cache management is handled internally in Line 238. If any rejection occurs, the KV cache of discarded tokens will be cleared, and the entire KV cache length is truncated to the last verified token. This ensures that generation resumes from the correct token, as described in Section 2.2. We will update the annotations in Algorithm 1 to make this clearer.
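A minimal sketch of the KV-cache truncation described above (our own illustration, with NumPy arrays standing in for the real cache tensors; `truncate_kv_cache` and the shapes are assumptions, not the authors' implementation):

```python
import numpy as np

def truncate_kv_cache(kv_cache, num_verified):
    """Keep KV entries only for the verified prefix; drop rejected draft tokens."""
    return [(K[..., :num_verified, :], V[..., :num_verified, :])
            for K, V in kv_cache]

# toy cache: 2 layers, each (K, V) of shape [batch, heads, seq_len, head_dim]
kv = [(np.zeros((1, 4, 10, 8)), np.zeros((1, 4, 10, 8))) for _ in range(2)]
kv = truncate_kv_cache(kv, num_verified=7)  # 3 rejected draft tokens discarded
print(kv[0][0].shape)  # (1, 4, 7, 8)
```

Generation then resumes from position 7, the last verified token, so no recomputation is needed for the accepted prefix.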
If you find our response satisfactory, we would be grateful if you could consider raising your score. Thanks again for your time! | Summary: The authors present AdaDecode, a methodology to accelerate decoding without auxiliary models or modification of the model. The proposed approach adaptively predicts tokens from an intermediate layer based on confidence, using a set of additional lightweight LM heads whose predictions are verified using a rejection sampling scheme.
AdaDecode retains output consistency and achieves a speed-up of 1.73x over vanilla decoding, outperforming four baselines: speculative decoding, self-speculative decoding, LookAhead, and SWIFT.
Moreover, the authors also include ablation studies and hyperparameter sensitivity experiments to validate design decisions and better understand the impact of the algorithm's different components.
Claims And Evidence: The authors' claims are factual and revolve around algorithm speed-up and output consistency. Both claims are clearly backed up by the presented experiments.
Methods And Evaluation Criteria: The methodology used to evaluate the approach makes sense. My only comment is about the limited selection of the baselines.
The authors in the related work reasonably exclude methods that do not preserve output consistency but mention approaches like SpecInfer (Miao et al., 2023b), Medusa (Cai et al., 2024), and its extension HYDRA (Ankner et al., 2024). While orthogonal to AdaDecode, it could be interesting to discuss further or report their results to contextualize the contribution better.
Theoretical Claims: Checked the proof for Lemma A.1.
The proof is clear, but I invite the authors to expand on why P* high-rank implies E* is full rank, as this seems speculative. It would be interesting to show an empirical study on the models considered for the experiments in the same appendix.
Experimental Designs Or Analyses: The paper and appendix clearly describe experimental designs and analyses.
I tried to run the code, but at the current stage, some needed artifacts are missing (e.g., adadecode/llama_3.1_8b_instruct).
Supplementary Material: I checked the appendix and the provided code. See the section above for issues with the code base.
Relation To Broader Scientific Literature: The paper's findings are extremely relevant, given the strong need to improve latency in LLM without increasing the already high hardware requirements. The work is timely and builds on existing concepts addressing key limitations.
Essential References Not Discussed: I'm not aware of any essential related work not referenced.
Other Strengths And Weaknesses: Already highlighted in previous sections.
Other Comments Or Suggestions: Regarding the passage on deviation from theoretical guarantees, the authors hypothesise a possible cause originating from FP16 precision used at inference time.
It would be interesting to see some experiments performing inference tweaking the precision.
The results showing that generalist heads are significantly slower might suggest limited applicability of the approach in a "production" setting.
It would be interesting to expand on this in the discussion and suggest potential strategies to mitigate this effect.
Questions For Authors: No important further questions besides the comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments and appreciation of our work.
**[Q1]: While approaches like SpecInfer (Miao et al., 2023b), Medusa (Cai et al., 2024), and its extension HYDRA (Ankner et al., 2024) are orthogonal to AdaDecode, it could be interesting to discuss further to contextualize the contribution better.**
**[A1]**: We agree that these tree-based speculative decoding methods are orthogonal to AdaDecode, but we believe it is possible and interesting to explore how our method could potentially be combined with tree-based decoding techniques. For instance, one potential approach could be generating multiple tokens at intermediate layers using our lightweight LM heads, rather than introducing additional full LM heads as in Medusa and always drafting tokens at the last layer. This strategy could allow for a more efficient draft token generation while maintaining the benefits of tree-based verification, and we believe this will be an interesting direction to explore in future work.
**[Q2]: It would be interesting to show an empirical study on the models to show E\* is full rank.**
**[A2]**: Thanks for the great suggestion. Please refer to Table 1 in this [PDF](https://anonymous.4open.science/r/AdaDecode-ICML2025-132E/rebuttal/To_Reviewer_2V7R_and_KXLJ.pdf).
**[Q3]: The paper and appendix clearly describe experimental designs and analyses. I tried to run the code, but at the current stage, some needed artifacts are missing (e.g., adadecode/llama_3.1_8b_instruct)**
**[A3]**: Please find the datasets and model checkpoints at this anonymous repo: https://huggingface.co/AnonyResearcher
**[Q4]: Regarding the passage on deviation from theoretical guarantees, the authors hypothesise a possible cause originating from FP16 precision used at inference time. It would be interesting to see some experiments performing inference tweaking the precision.**
**[A4]**: Following the reviewer’s suggestion, we re-ran the experiments in Figure 4 with FP32. While FP32 offers slightly higher numerical consistency than FP16, it still does not reach 100% consistency.
|Method|SpecDec|Self-SpecDec|LookAhead|SWIFT|AdaDecode|
|-|-|-|-|-|-|
|FP16|0.97|0.98|0.98|0.98|0.99|
|FP32|0.98|0.98|0.99|0.98|0.99|
This aligns with [the finding in the open-source community](https://github.com/huggingface/transformers/issues/30413) that the output of speculative decoding can differ slightly from standard decoding due to numerical precision inaccuracies and minor variations in token probabilities during computation (please refer to this [discussion](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535) for a detailed study on the impact of different precisions). We would like to note that, during our experiments, we also noticed that different hardware specifications and library versions contribute to variances, demonstrating that half-precision is not the only factor affecting consistency.
**[Q5]: The results that show that generalist heads are significantly slower might suggest a limited applicability of the approach in a "production" setting.**
**[A5]**: While we acknowledge the advantage of using a generalist head that can support a mix of domain requests, we would like to highlight the growing trend of developing specialized models for domain-specific use cases.
In many real-world production settings, instead of training a single monolithic model for all domains, specialized models offer greater efficiency and improved performance in their respective areas. For example, Cursor and GitHub Copilot are optimized for programming assistance, while models like Qwen-2.5-Math and DeepSeek-Prover are designed for mathematical reasoning and theorem proving. In such applications, only a specific task type is considered, making a domain-specific LM head more preferable.
That said, we believe our method can also produce good generalist LM heads. We hypothesize that the lower performance observed in the ablation using a generalist head is primarily due to the relatively small size of our mixed-domain dataset (<20K examples). With more extensive mixed-domain training data, we expect the performance of the generalist head to improve significantly. In this work, we focus on demonstrating that lightweight training (<2 GPU hours) is sufficient to produce high-quality intermediate-layer LM heads and achieve substantial speedups with our method. Exploring more comprehensive training for generalist heads will be interesting future work.
We hope the reviewer finds our response helpful. We are also happy to incorporate additional suggestions you might have! Thanks again for your time!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing all my comments and I reflected this in my updated score.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response addressed the reviewer’s concerns. Thank you again for the support and constructive comments, and we will include them in our revision accordingly. | Summary: The authors propose to use early exiting to accelerate autoregressive decoding in LLMs while leveraging adaptive layer parallelism for efficient hardware deployment. The early exiting framework uses a lightweight head at intermediate layers to enable high-confidence early token predictions. An additional verification step is also added to ensure early-predicted tokens match the results of standard autoregressive decoding. AdaDecode achieves upto 1.73X speed up in token generation on several token generation tasks such as summarization, codegen, and mathematical reasoning.
Claims And Evidence: The authors provide multiple experiments to support the claims in the paper.
Methods And Evaluation Criteria: The benchmarks and speedup comparisons between the proposed method and the baselines are meaningful.
Theoretical Claims: The authors provide a proof in Lemma A.1 in the appendix which appears valid. The assumption that $P^*$ is high rank makes sense. Additionally, as indicated, since $E^*$ is full rank, matrix $E^{(i)}$ can be expressed as a linear transformation of $E^*$.
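As a toy numerical check of this argument (our own construction, not the paper's proof, and the dimension convention $E^{(i)} = E^* T$ is our assumption): when $E^*$ has full column rank, the transformation $T$ can be recovered exactly by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16                      # n samples, d-dimensional features (toy sizes)
E_star = rng.normal(size=(n, d))    # full column rank with probability 1
T_true = rng.normal(size=(d, d))
E_i = E_star @ T_true               # intermediate features as a linear map of E_star

# least-squares recovery; exact because E_star has full column rank
T_hat, *_ = np.linalg.lstsq(E_star, E_i, rcond=None)
assert np.allclose(T_hat, T_true)
```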
Experimental Designs Or Analyses: Please refer to the weaknesses section. While the experimental results are valid, they lack comparison to the many prior Early-Exiting works.
Supplementary Material: The supplementary provides proof in Lemma A.1, provides info on the benchmarks, hyper-parameters, and training details.
Relation To Broader Scientific Literature: The paper does not provide any new contributions compared to prior methods on early exiting for autoregressive decoding, and it fails to mention or compare against them. In its current state, the paper provides no novelty or new contribution to the field.
Essential References Not Discussed: The paper seems to miss many of the existing related works in the literature with identical/significant overlap:
[1] Bae, Sangmin, et al. "Fast and robust early-exiting framework for autoregressive language models with synchronized parallel decoding." arXiv preprint arXiv:2310.05424 (2023).
[2] Elhoushi, Mostafa, et al. "LayerSkip: Enabling early exit inference and self-speculative decoding." arXiv preprint arXiv:2404.16710 (2024).
[3] Liu, Jiahao, et al. "Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism." arXiv preprint arXiv:2406.03853 (2024).
[4] Cai, Tianle, et al. "Medusa: Simple llm inference acceleration framework with multiple decoding heads." arXiv preprint arXiv:2401.10774 (2024).
[5] Varshney, Neeraj, et al. "Accelerating llama inference by enabling intermediate layer decoding via instruction tuning with lite." arXiv preprint arXiv:2310.18581 (2023).
Other Strengths And Weaknesses: While the authors have done a good job in comparing their method to approaches other than early exiting, such as SpecDecode and Swift, they have not performed the obvious comparison to the numerous publications on early exiting in the literature.
- The method offers no novelty compared to prior work and it is unclear what the authors are contributing. Plugging in an intermediate head for early exiting inside a language model is not a new contribution.
- Combining verification / speculative decoding with early exiting is not a new idea and has been published in the references provided above.
- The related work section on early exiting just mentions a few works and refers the reader to a survey. The authors should clarify how their method is differentiated from prior work.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments.
**[Q1]: The method offers no novelty compared to prior work and it is unclear what the authors are contributing.**
**[A1]**: We would like to highlight that our work provides an efficient solution to tackle the key limitations of speculative decoding and early exiting through our technical innovations.
Speculative decoding and early exiting have notable limitations: (1) speculative decoding typically incurs substantial training and memory costs due to the need for an additional draft model, and (2) early exiting is unable to leverage off-the-shelf intermediate layer representations without requiring further training or modifications, which can cause output deviations from standard autoregressive decoding.
To address these challenges, we propose the following key innovations:
- **Leveraging off-the-shelf intermediate layer representations**: For the first time, we demonstrate that, despite the challenges of using off-the-shelf intermediate layers for next-token prediction (Fig. 3a), learning a simple linear projection on these features yields surprisingly good next-token prediction (Fig. 3b). This finding has significant practical implications: The full fine-tuning of the entire model (e.g., LayerSkip [3]) or the introduction of a full additional layer and full LM head (e.g., EESD [4]) may not be necessary. Learning a simple linear projection based on frozen intermediate-layer features significantly reduces not only training complexity but also facilitates our design objective of guaranteeing output parity to vanilla decoding.
- **Efficient lightweight LM heads without loss of expressiveness**: Through theoretical analysis in Lemma A (which is unknown in prior works), we prove that our lightweight intermediate LM heads (i.e., the transformation matrices) are lossless proxies for full LM heads, which reduce memory costs by 30x. This innovation allows us to serve multiple lightweight LM heads at various intermediate layers (otherwise introducing multiple full LM heads will cause expensive training and memory costs), enabling dynamic early exiting and delivering significant speedups (as shown in the “w/ fixed-layer early prediction” ablation of Table 2).
To better position our contributions, we present a comparison table that highlights how our contributions differ from recent works. Please refer to Table 1 in this [PDF](https://anonymous.4open.science/r/AdaDecode-ICML2025-132E/rebuttal/To_Reviewer_u7As.pdf).
**[Q2]: While the authors have done a good job in comparing their method to approaches other than early exiting, they have not performed the obvious comparison to the numerous publications on early exiting in the literature.**
**[A2]**: To address the reviewer’s concern, we would like to make the following clarifications.
- **Why not compare with output-inconsistent methods (e.g., FREE [1], LITE [2], LayerSkip [3])**: Our work is explicitly designed under the constraint of output consistency—producing the same outputs as standard autoregressive decoding. Relaxing this constraint leads to fundamentally different problem settings and design objectives, making direct comparisons with output-inconsistent methods inappropriate.
- **Why not compare with tree-based decoding methods (e.g., SpecInfer [8], Medusa [5])**: Tree-based decoding methods are orthogonal to early exiting methods: The former generates multiple tokens simultaneously, accelerating **horizontally (across time steps)**, while the latter reduces per-token computation by parallelizing deep layers after early exit, accelerating **vertically (within each time step's forward pass)**. A direct comparison between these two orthogonal approaches would conflate their respective benefits. As precedent, SWIFT [7], LayerSkip [3], and Self-SpecDecode [6] also did not compare with tree-based decoding methods for the same reason.
If you find our response satisfactory, we would be grateful if you could consider raising your score. Thanks again for your time!
**References**
[1] Bae et al. Fast and robust early-exiting framework for autoregressive language models with synchronized parallel decoding. 2023
[2] Varshney et al. Accelerating llama inference by enabling intermediate layer decoding via instruction tuning with lite. 2023
[3] Elhoushi et al. LayerSkip: Enabling early exit inference and self-speculative decoding. 2024.
[4] Liu et al. Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism. 2024
[5] Cai et al. Medusa: Simple llm inference acceleration framework with multiple decoding heads. 2024
[6] Zhang et al. Draft & verify: Lossless large language model acceleration via self-speculative decoding. 2023
[7] Xia et al. SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration. 2024
[8] Miao et al. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. 2024
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response; however, my concerns regarding lack of novelty and lack of comparisons to a plethora of efficient decoding methods remain. | Summary: The paper proposes to improve the autoregressive decoding process in LLMs by decoding from intermediate layer outputs when the confidence is high. A lightweight LM head is trained to decode the next token from such intermediate layer output, enabling this method to be applied on pretrained models without requiring retraining from scratch. Remaining layer computations are executed in parallel with subsequent tokens as needed. The approach shows speedup over baselines in experiments.
## update after rebuttal
The authors have clarified my concerns around some of the technical details of the work and therefore I am increasing my score to 3. I still feel that the novelty is a bit limited and my concerns around the rank of the transformation matrix $E^*$ and the cost of domain specific heads have not been fully alleviated, but I would not be opposed to accepting this paper as it does add to the discussion in the literature on adaptive inference.
Claims And Evidence: The authors claim that $E^{(i)}$, the lightweight LM head at layer $i\ \forall i$, used to check the exit condition can be represented as a linear transformation of the last layer LM head $E^*$. However, the proof presented in Appendix A requires $E^*$ to be full rank but does not conclusively demonstrate that that is the case. The authors only show that the rank of $E^*$ is lower bounded by the rank of a matrix $P$ that is likely to be high rank. However, this is neither a deterministic nor a probabilistic guarantee on $E^*$ being full rank.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I checked the proof of Lemma A.1 in Appendix A and believe that it isn't entirely correct as described above.
Experimental Designs Or Analyses: I checked the experiments in the main paper and do not have any issues with them.
Supplementary Material: I reviewed the proof of Lemma A.1 in Appendix A.
Relation To Broader Scientific Literature: The paper builds upon prior work in speculative decoding and early exits. Unlike early exit which skips the computation of KV cache in post exit layers, this work performs those computations in parallel with subsequent tokens. Unlike speculative decoding which requires a separate drafter model for generating outputs this work uses the same model for both generation (intermediate layers) and verification (last layer).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The observation that the existing LM head does not work well with intermediate layer outputs and that training an LM head for intermediate layers will work enables this process to work with pretrained models which is a very useful feature in the era of foundation models and a clear improvement over prior works that require training from scratch.
2. The efficiency modifications for minimizing the complexity of the new LM head and using adaptive layer parallelism to compute KV cache on the fly can help reduce inference cost.
Weaknesses:
1. The work appears to be a synthesis of ideas from the early exit and speculative decoding areas and so the overall novelty is a bit limited.
2. The proof of Lemma A.1 does not appear to be entirely correct (as mentioned above) and this weakens the argument that representing the intermediate LM heads as a linear projection of the final LM head will never lead to a loss of expressivity.
3. Certain technical details appear to be inconsistent (see questions below)
4. It is mentioned in Section 4.3 that domain-specific heads for intermediate layers lead to significantly faster generation than heads trained on a mix of domains. However, this will significantly increase the memory/cost requirements for this approach in real LLM-based services where requests come from a mix of domains/users.
Other Comments Or Suggestions: I would suggest saying that "$E^*$ is likely to be full rank", as opposed to "$E^*$ must be full rank" in Appendix A and modifying the corresponding claim in Section 2.1 accordingly.
Questions For Authors: 1. How can layer 2 of t2 and t3 run in parallel (Fig 2) when the 2nd layer of t3 is dependent on the output (KV cache) from the second layer of t2?
2. How will adding $t_i$ (token generated by the intermediate layer) to the parallel processing list of deeper layers in line 235 help in parallel processing? Don't we need to add the layer output instead to compute the outputs of subsequent layers?
3. Shouldn't there be a condition after line 238 for resuming processing from any rejected token $t$, as described after equation (2), after the final layer is reached at any position?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful feedback.
**[Q1]: I would suggest saying that "$E^\star$ is likely to be full rank", as opposed to "must be full rank" in Appendix A and modifying the corresponding claim in Section 2.1 accordingly.**
**[A1]**: We provide an empirical validation confirming that $E^*$ is indeed full rank in this [PDF](https://anonymous.4open.science/r/AdaDecode-ICML2025-132E/rebuttal/To_Reviewer_2V7R_and_KXLJ.pdf).
Moreover, as indicated by Lemma 1 of (Yang, 2018), the language modeling problem can be completely solved (i.e., achieve a 0 loss) if $\text{rank}(P) < d$. However, this is practically infeasible due to the inherent complexity of natural language, suggesting that $\text{rank}(P) > d$.
We’ll clarify our theoretical claim in the revision for improved precision.
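As a minimal illustration of the kind of empirical rank check described above (with hypothetical dimensions and random weights standing in for the trained LM head $E^*$, not the actual validation in the linked PDF), the rank can be verified numerically from the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 2000, 256                          # hypothetical vocab size / hidden dim
E = rng.normal(size=(V, d)) / np.sqrt(d)  # stand-in for a trained LM head matrix

# numpy's matrix_rank counts singular values above a numerical tolerance;
# a generic real-valued matrix like this one is full rank almost surely.
print(np.linalg.matrix_rank(E) == min(V, d))  # True
```

In practice one would run the same check on the model's actual LM head weights rather than a random stand-in.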
**[Q2]: The work appears to be a synthesis of ideas from the early exit and speculative decoding areas and so the overall novelty is a bit limited.**
**[A2]**: To better position our contributions, we provide a comparison table (Table 2) in the same PDF. We also clarify our novelty in Q1 to Reviewer u7As.
**[Q3]: Domain-specific heads for intermediate layers will significantly increase the memory/cost requirements for this approach in real LLM-based services where requests come from a mix of domains/users.**
**[A3]**: While we acknowledge the advantage of a generalist head, we would like to highlight the growing trend of developing specialized models for domain-specific use cases such as programming (e.g., Cursor/Github Copilot) and mathematical reasoning (e.g., Qwen-2.5-Math). In such applications, only a specific task type is considered, making a domain-specific LM head more preferable.
That said, we believe our method can also produce good generalist LM heads. We hypothesize that the lower performance observed in the ablation using a generalist head is primarily due to the relatively small size of our mixed-domain dataset (<20K examples). With more extensive mixed-domain training data, we expect the performance of the generalist head to improve significantly. In this work, we focus on demonstrating that lightweight training (<2 GPU hours) is sufficient to produce high-quality intermediate-layer LM heads and achieve substantial speedups with our method. Exploring more comprehensive training for generalist heads will be interesting future work.
**[Q4]: How can layer 2 of t2 and t3 run in parallel (Fig 2) when the 2nd layer of t3 is dependent on the output (KV cache) from the second layer of t2?**
**[A4]**: While it is true that, in standard autoregressive decoding, the 2nd layer of token $t_3$ depends on the KV cache from the 2nd layer of token $t_2$, our method eliminates this dependency by generating the next token earlier. As illustrated in Fig 2, $t_3$ is produced using only the 1st layer output of $t_2$, enabling the 2nd-layer KV caches of $t_2$ and $t_3$ to be computed in parallel. This is analogous to prefilling, where knowing multiple tokens upfront allows for simultaneous KV cache calculation across those tokens. This parallel computation is the core mechanism behind our method's efficiency.
**[Q5]: How will adding $t_i$ (token generated by the intermediate layer) to the parallel processing list of deeper layers in line 235 help in parallel processing? Don't we need to add the layer output instead to compute the outputs of subsequent layers?**
**[A5]**: Line 235 updates $\mathcal{P}$, the list of tokens that need to be processed at each layer. For instance, if $t_1$ is an early prediction at layer 8, its KV cache at layers 9-32 remains empty. Then for the next token $t_2$, we have the following adaptive parallelism strategy to calculate their KV caches:
- Phase 1: The first 8 layers of $t_2$ are processed individually
- Phase 2: The remaining layers 9-32 of $t_2$ are processed in parallel with $t_1$
Specifically, at the beginning of Phase 2, we concatenate the hidden representation of $t_1$ at layer 8 with that of $t_2$ at layer 8 to enable parallel processing. These intermediate layer outputs (hidden representations) are managed accordingly when updating $\mathcal{P}$. Since the KV cache computation automatically requires the layer outputs, such operations on layer output are omitted in the algorithm for simplicity.
**[Q6]: Shouldn't there be a condition after line 238 for resuming processing from any rejected token $t$, as described after equation (2), after the final layer is reached at any position?**
**[A6]**: Indeed, this "resume processing" operation is internally handled in Line 240 – if any rejection happens, Line 240 will remove discarded tokens from $y$ and add the replacement token to $y$ so that the generation resumes from the correct token, as described in Section 2.2. We will update the annotations in Algorithm 1 to clarify this.
If you find our response satisfactory, we would be grateful if you could consider raising your score. Thanks again for your time! | null | null | null | null | null | null |
Counterfactual Contrastive Learning with Normalizing Flows for Robust Treatment Effect Estimation | Accept (poster) | Summary: The paper points out that the prediction of the individual treatment effect (ITE) is crucial for personalized therapy planning and proposes a contrastive learning approach (along the lines of SIMCLR) to estimate it. The accuracy of an estimator is measured in terms of the expected squared error of the ITE estimates with respect to the ground-truth. However, ground-truth ITEs are not accessible in practice (since the counter-factual outcome is unknown), and existing sample alignment methods are not good enough for applications with high-dimensional covariates and considerable individual heterogeneity. To circumvent this difficulty, the authors derive a tractable contrastive upper bound for the squared loss that justifies their choice of learning method. Experimental comparisons with several baselines demonstrate the superior behavior of the new method.
Crucially, the authors explain that contrastive training examples cannot be generated by standard data augmentations or adversarial training in data space, because these samples are not sufficiently realistic -- they are too far from the manifold of typical covariate samples. Instead, they propose to first train a normalizing flow to represent the data distribution around the manifold, and then perform a gradient search for contrastive samples in the latent space, where the gradient is calculated from the Jacobian of the decoder. The search for contrastive examples thus stays near the manifold, and the generated counter-factual training data are much more realistic.
EDIT after rebuttal: The authors appropriately addressed my questions and concerns, so I've increased my score.
Claims And Evidence: The claims are largely clear and well supported.
However, I am surprised that debiased ("double") machine learning (e.g. [1]) is neither mentioned as related work nor included in the comparison. In my understanding, debiased ML is a relatively simple method that can estimate ITEs in the absence of randomized controlled experiments, where selection bias of treatment decisions would otherwise contaminate the predictions. I'm curious how the authors asses this possibility.
[1] https://matheusfacure.github.io/python-causality-handbook/22-Debiased-Orthogonal-Machine-Learning.html#non-parametric-double-debiased-ml
Methods And Evaluation Criteria: Evaluation follows standard practices in the field and standard benchmarks. Results appear trustworthy.
Theoretical Claims: The claims in Lemma 4.1 and Theorem 4.2 are plausible, but I did not check the proofs.
Experimental Designs Or Analyses: Evaluation follows standard practices in the field and standard benchmarks. Results appear trustworthy.
It would have been helpful to include an experiment demonstrating that standard data augmentation and adversarial training fail and how the flow-based method fixes this. For example, one could use the Maximum Mean Discrepancy (MMD) to quantify the difference between the distributions of real and synthetic (counter-factual) covariantes. Then, the MMD should be considerably smaller for the proposed method.
The latter experiment could be augmented by an ablation study, showing how ITE results deteriorate when flow-based counter-factual generation is not used.
Supplementary Material: 10 pages of comprehensive supplement (4 pages of proofs, 1 page of additional explanations, 5 pages of additional experiments)
Relation To Broader Scientific Literature: The literature is reviewed well and compared under standard protocols with the proposed method, except for the possible omission of debiased ML mentioned above.
Essential References Not Discussed: see above.
Other Strengths And Weaknesses: The main weakness of the paper are some minor presentation glitches.
* Definition 3.6 (and repeatedly further down the paper, e.g. equations (6) - (8)): The text refers to "the classification decision k(x)", but k(x) is undefined.
* Lemma 4.1, Theorem 4.2: disc(., .) (the distance in representation space) is never concretely defined, nor are the pros and cons of possible choices discussed. How is it related to the similarity sim(.,.) in equation (9)?
* Second line of section 5.1: possible typo [rho 1 p 1 p'] => [rho 1 p 1^T p' ] ?
Other Comments Or Suggestions: none
Questions For Authors: Please answer the following questions (I'm willing to increase my score):
* relation to debiased ML
* how to fix presentation shortcomings
* experiments demonstrating the superiority of flow-based counter-factual generation
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your high recognition of our work, and are eager to share our thoughts with you.
$\textbf{Q1:}$ Relation to debiased ML.
$\textbf{R1:}$ Thanks for your insightful idea. We would like to clarify that our work primarily focuses on the challenges of deep learning-based causal effect estimation. As debiased machine learning (DML) is a traditional method, we did not explicitly discuss it in the literature review. We will include this discussion in the revision.
Then, we discuss the differences and connections between DML and our FCCL.
First, we explain the different mechanisms that DML and our proposed FCCL use to address confounding bias. DML handles confounding bias through orthogonal residual regression, i.e., $Y_{i}-M_{y}(X_{i}) =\tau(X_{i})(T_{i}-M_{t}(X_{i}))+\epsilon_{i}$, where $M_{y}(X_{i})$ eliminates the influence of the confounder $X$ on $Y$ and $M_{t}(X_{i})$ eliminates the influence of the confounder $X$ on $T$ [1]. This creates statistical independence between treatment and confounder to simulate randomization conditions. Instead, our method captures the characteristics of potential outcomes under different treatments, mitigating distribution shifts through sample-level alignment to emulate RCTs, which better captures individual-level heterogeneity.
Second, DML's approach to addressing bias is highly instructive, particularly its unbiasedness. Therefore, we see great potential in incorporating DML into our future work, as R-learners draw inspiration from DML's approach to handling confounding bias, $\tau(\cdot)=argmin_{\tau}\frac{1}{n} \sum_{i=1}^{n} ((Y_{i} -M_{y}(X_{i}))-\tau(X_{i})(T_{i} -M_{t}(X_{i})) )^{2}$. Thanks again.
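For concreteness, here is a minimal numpy sketch of DML's orthogonal residual regression, using linear nuisance models, no cross-fitting, and purely illustrative synthetic data (all names and numbers below are assumptions for the sketch, not the rebuttal's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative confounded data: X drives both T and Y; true effect tau = 2.
n, p = 2000, 5
X = rng.normal(size=(n, p))
T = X @ np.full(p, 0.5) + rng.normal(size=n)            # treatment depends on X
Y = 2.0 * T + X @ np.full(p, 1.0) + rng.normal(size=n)  # outcome depends on T and X

def residualize(Z, target):
    """Residuals of a least-squares fit of `target` on `Z` (stand-in for M_y / M_t)."""
    Zb = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Zb, target, rcond=None)
    return target - Zb @ beta

# Orthogonal residual regression: Y - M_y(X) = tau * (T - M_t(X)) + eps
ry = residualize(X, Y)
rt = residualize(X, T)
tau_hat = (rt @ ry) / (rt @ rt)
print(round(tau_hat, 2))  # close to the true effect 2.0
```

In full DML the nuisance functions would be flexible ML models fit with cross-fitting; the linear versions here only illustrate the residual-on-residual mechanism discussed above.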
Besides, we add comparison experiments with DML (Table1) and the results show that FCCL achieves lower estimation errors.
Table1 ITE estimation errors (std) comparison with DML on IHDP.
| Method | $\epsilon_{PEHE}^{within}$ | $\epsilon_{ATE}^{within}$ | $\epsilon_{PEHE}^{out-of}$ | $\epsilon_{ATE}^{out-of}$ |
|--------|----------------------------|----------------------------|-----------------------------|----------------------------|
| DML | 2.87(0.09) | 0.30(0.05) | 2.95(0.14) | 0.35(0.03) |
| FCCL | 0.53(0.04) | 0.09(0.01) | 0.64(0.07) | 0.12(0.02) |
$\textbf{Q2:}$ How to fix presentation shortcomings?
$\textbf{R2:}$ Thank you for pointing out the typos, we will correct them in next version.
( 1 ) We will add a formal definition of $k(x)$ in $\textbf{Definition 3.6}$.
( 2 ) We thank the reviewer for the suggestion. Indeed, $disc(\cdot ,\cdot )$ denotes a distance in the representation space built from inner products of the representations, i.e.,
$disc(\Phi^{t}(x),\Phi^{t}(\tilde{x}))=u \sum_{i} b_{i} \sqrt{2-2\Phi^{t=1}(x_{i})^{T}\Phi^{t=1}(\tilde{x}_{i})}+(1-u)\sum_{i} a_{i}\sqrt{2-2\Phi^{t=0}(x_{i})^{T}\Phi^{t=0}(\tilde{x}_{i})}$. Detailed proofs are provided in Appendix A. $\textbf{Theorem 4.2}$ shows that the estimation error $\epsilon_{PEHE}(h,\Phi)$ is upper bounded by these distance constraints in the representation space. Therefore, we implement the optimization via a contrastive loss, specifically by leveraging the cosine similarity $sim(\cdot ,\cdot )$.
( 3 ) We apologize and will thoroughly go over this for the revision, specifically changing $[\rho \mathbf{1}{p} \mathbf{1}{p}^{\prime}+(1-\rho) \mathbf{I}{p}]$ to $[\rho \mathbf{1}_{p} \mathbf{1}_{p}^{\prime}+(1-\rho) \mathbf{I}_{p}]$, where $\mathbf{1}_{p}$ denotes the $p$-dimensional all-ones vector and $\mathbf{I}_{p}$ denotes the identity matrix of size $p$.
$\textbf{Q3:}$ Experiments demonstrating the superiority of flow-based counter-factual generation. For example, one could use the MMD to quantify the difference between the distributions of real and synthetic (counter-factual) covariantes.
$\textbf{R3:}$ Following your valuable suggestion, we use the MMD to quantify the difference between the distributions of factual and counterfactual covariates (Table 2) and the result shows that the MMD is considerably smaller for our FCCL. Besides, we compare the effect of alternative counterfactual generation strategies on ITE estimation error in Table 3 in the main text. These results show the superiority of flow-based counterfactual generation.
Table2 MMD mean (std) comparison on IHDP.
| Method | grad asc in $\mathcal{X}$ | GAN | FCCL |
|--------|---------------------------|-------|-------|
| MMD | 0.13(0.16) | 0.52(0.55) | 0.09(0.14) |
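As a hedged illustration of the MMD check reported above (toy Gaussian samples standing in for factual and counterfactual covariates; not the actual IHDP experiment), the squared MMD under an RBF kernel can be computed as:

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
factual = rng.normal(0.0, 1.0, size=(200, 5))       # stand-in for real covariates
on_manifold = rng.normal(0.0, 1.0, size=(200, 5))   # realistic counterfactuals
off_manifold = rng.normal(1.5, 1.0, size=(200, 5))  # distribution-shifted counterfactuals
print(mmd_rbf(factual, on_manifold) < mmd_rbf(factual, off_manifold))  # True
```

A smaller MMD against the factual distribution, as in the table above, indicates that the generated counterfactuals stay closer to the data manifold.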
[1] https://matheusfacure.github.io/python-causality-handbook/22-Debiased-Orthogonal-Machine-Learning.html#non-parametric-double-debiased-ml
---
Rebuttal Comment 1.1:
Comment: > R1: As debiased machine learning (DML) is a traditional method, we did not explicitly discuss it in the literature review. We will include this discussion in the revision.
The regression components of DML can just as well be implemented by neural networks, so it is not restricted to traditional methods :-)
Otherwise, your answers appropriately address my concern. Please make sure to revise the paper accordingly.
---
Reply to Comment 1.1.1:
Comment: We agree with your comment. The regression components of DML can indeed be implemented using any prediction model, including neural networks. We implemented DML with neural networks for the regression components, and the experimental results are presented in the table below. As shown, our FCCL still demonstrates superior performance compared to DML.
Table1 ITE estimation errors (std) comparison with DML on IHDP.
| Method | $\epsilon _{PEHE}^{within}$ | $\epsilon _{ATE}^{within}$ | $\epsilon _{PEHE}^{out-of}$ | $\epsilon _{ATE}^{out-of}$ |
|-----------|-----------------------------|----------------------------|-----------------------------|----------------------------|
| DML (RF-based) | 2.87(0.09) | 0.30(0.05) | 2.95(0.14) | 0.35(0.03) |
| DML (NN-based) | 2.45(0.12) | 0.20(0.05) | 2.60(0.14) | 0.33 (0.05) |
| FCCL | 0.53(0.04) | 0.09(0.01) | 0.64(0.07) | 0.12(0.02) |
We would like to clarify that our categorization of DML as a "traditional method" was based on how it handles confounding bias through orthogonal residual regression, where DML operates directly on the confounder X. Instead, deep learning methods (also known as representation learning) learn a representation space $\Phi(\cdot)$ to obtain $\phi(X)$, and handle the confounding bias in the representation space.
We sincerely hope our responses can resolve your concern. In light of these clarifications, we respectfully invite you to consider raising the score. | Summary: This paper presents FCCL, an ITE estimation method. FCCL integrates diffeomorphic counterfactual generation and contrastive learning to align treatment and control groups at a fine-grained, sample level, mitigating distribution shifts and approximating RCT randomization. By ensuring realistic counterfactuals and enforcing semantic consistency, FCCL lowers ITE estimation error. Experiments indicate superior performance in heterogeneous and data-scarce settings.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I looked at Lemma 4.1 and Theorem 4.2. They seem to be correct.
Experimental Designs Or Analyses: - The experiments are valid, although it would have been nice to include more complicated datasets such as ACIC. This is important to evaluate the robustness and generalization of the proposed method in large-scale settings.
- The evaluation metric “dis” in Fig. 3 is not well-motivated (see section "Questions For Authors” for a specific question).
Supplementary Material: Yes, I briefly looked at the appendix.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- The paper provides strong theoretical grounding.
- Innovative combination of normalizing flows (to maintain semantic meaning in counterfactual samples) and contrastive learning (to ensure robust alignment), addressing core challenges in causal inference.
Weaknesses:
- The paper could improve in terms of clarity and flow. Specifically, theory is presented before providing a clear intuition or practical context, which makes it hard to follow the core logic. As a result, readers might struggle with intuitive understanding without first seeing practical motivation.
- The evaluation could benefit from inclusion of more challenging datasets (e.g., ACIC).
- The metric "dis" used in evaluations lacks clear motivation.
Other Comments Or Suggestions: NA
Questions For Authors: 1. It is stated that the proposed method is specifically useful for “handling sparse boundary samples.” However, neither a proper definition is provided, nor a discussion of why handling them is challenging.
2. Is optimization end-to-end? or are diffeomorphic counterfactuals found first and then the representation and prediction modules are trained?
3. The evaluation metric “dis” (average distance between boundary samples and corresponding class centers) in Fig. 3 is stated to be reflecting sample heterogeneity. However, it’s not mentioned why a smaller dis is desirable.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and for recognizing the importance of heterogeneous treatment effect estimation. Please see below answers to the questions.
$\textbf{Q1:}$ Provide a proper definition of boundary samples and discuss why handling them is challenging.
$\textbf{R1:}$ We thank the reviewer for consideration. We provide a formal definition of boundary samples in Appendix D.1, and will include this definition in the main text in the revision.
Furthermore, we analyze the challenges of handling boundary samples from two perspectives:
(1) Many scenarios demand flexible investigation of effect heterogeneity across individuals. However, boundary samples, due to their distinctive characteristics, reside in low-probability density regions distant from class centroids, which poses challenges for accurate ITE estimation.
(2) Existing methods, such as MMD, minimize the distribution discrepancy between treated and control groups by aligning their mean representations $\frac{1}{m_1}\textstyle\sum_{i=1}^{m_1}\phi(x_{i}^{t})$ and $\frac{1}{m_2}\textstyle\sum_{j=1}^{m_2}\phi(x_{j}^{c})$. However, such methods overlook samples near the boundaries of the treatment and control distributions.
Our method achieves robust performance via sample-level alignment, especially in scenarios where individual differences are significant. Besides, we also provide boundary sample analysis by listing the estimation errors of the first five samples from Table 4 in Appendix D, as shown in Table 1. More detailed results are presented in Appendix D.
Table1 The ITE estimation error $|\epsilon_{ITE}|$ of boundary samples on IHDP.
| Sample | CFR-MMD | FCCL |
|--------|---------|--------|
| 1 | 4.2264 | 0.2144 |
| 2 | 3.0977 | 0.0594 |
| 3 | 0.0337 | 0.0805 |
| 4 | 6.1246 | 0.6418 |
| 5 | 2.1014 | 0.2263 |
$\textbf{Q2:}$ Is optimization end-to-end? or are diffeomorphic counterfactuals found first and then the representation and prediction modules are trained?
$\textbf{R2:}$ Our approach falls into the latter: we first generate the diffeomorphic counterfactuals, and then train the representation and prediction modules.
$\textbf{Q3:}$ Why is a smaller value of the evaluation metric “dis” desirable?
$\textbf{R3:}$ A smaller "dis" value indicates that dispersion of samples in the latent representation space is reduced, which further reflects a smaller distance between boundary samples and their corresponding class centers. This shows that there are fewer boundary samples, enabling better model fitting and exhibiting lower ITE estimation bias, as demonstrated by our results. We will make this point clearer in the next version.
$\textbf{Q4 (Weaknesses2):}$ The evaluation could benefit from inclusion of more challenging datasets (e.g., ACIC).
$\textbf{R4:}$ Thank you for your valuable suggestions. For validation, we additionally evaluate our method against several representative baselines on ACIC (Table 2). We observe that our method still outperforms the baselines.
Table 2: Within-sample and out-of-sample mean (std) for the metrics on ACIC.
| Method | $\epsilon_{PEHE}^{within}$ | $\epsilon_{ATE}^{within}$ | $\epsilon_{PEHE}^{out-of}$ | $\epsilon_{ATE}^{out-of}$ |
|---------|----------------------------|----------------------------|-----------------------------|----------------------------|
| CFR-MMD | 1.70(0.38) | 0.29(0.12) | 2.36(0.59) | 0.30(0.12) |
| SITE | 1.71(0.39) | 0.38(0.14) | 2.33(0.58) | 0.39(0.15) |
| ABCEI | 1.93(0.46) | 0.17(0.07) | 2.49(0.64) | 0.18(0.07) |
| CBRE | 1.69(0.38) | 0.12(0.05) | 2.31(0.56) | 0.14(0.06) |
| DIGNet | 1.66(0.35) | 0.23(0.07) | 2.30(0.54) | 0.24(0.07) |
| FCCL | 1.51(0.40) | 0.16(0.05) | 2.16(0.56) | 0.17(0.05) |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my comments.
Please make sure to clarify R2 in the paper.
BTW, would it be possible to design the algorithm to be end-to-end?
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We aim to generate semantically meaningful counterfactuals, but an end-to-end design would likely introduce non-negligible interference gradients into training and thus impair the quality of counterfactual generation. Nevertheless, we will explore better approaches to design the algorithm to be end-to-end.
---
Summary: The paper introduces Flow-based Counterfactual Contrastive Learning (FCCL), a novel approach for Individual Treatment Effect (ITE) estimation that integrates normalizing flows for realistic counterfactual generation and contrastive learning for fine-grained sample alignment. It derives a theoretical generalization-error bound linking ITE estimation error to factual prediction error and representation distances. Empirical evaluations on synthetic, semi-synthetic (IHDP), and real-world (Jobs) datasets demonstrate FCCL's superior performance.
Claims And Evidence: Claim 1: FCCL generates realistic counterfactuals that adhere to the data manifold.
Supported. Through the use of normalizing flows, which enforce structure on counterfactual transformations. The authors present visualization results showing that FCCL maintains sample-level semantic consistency better than baseline methods.
Claim 2: FCCL reduces ITE estimation error via sample-level alignment.
Partially supported. While contrastive learning improves alignment, the empirical evidence (particularly in Tables 1 and 2) primarily shows improvement in error metrics rather than a direct measure of sample alignment. Additional ablation studies isolating the contrastive loss would be beneficial.
Claim 3: The proposed theoretical generalization-error bound justifies FCCL’s effectiveness.
Supported. The bound is mathematically derived (Theorem 4.2) and aligns with the contrastive loss objective. However, empirical validation of whether this bound holds in practice is not explicitly tested.
Claim 4: FCCL significantly outperforms state-of-the-art baselines.
Mostly supported. The results show FCCL achieving the best ϵ_PEHE and ϵ_ATE scores across datasets. However, the magnitude of improvement varies, and for some cases (e.g., IHDP out-of-sample ATE error), the advantage is marginal.
Methods And Evaluation Criteria: The use of benchmark datasets (IHDP, Jobs, synthetic data) is appropriate for treatment effect estimation.
Baselines (OLS, CFR, GANITE, ABCEI, etc.) are well-chosen, covering both traditional and deep learning-based ITE estimation methods.
Evaluation metrics (ϵ_PEHE, ϵ_ATE, ATT) are standard, but additional fairness metrics (e.g., subgroup fairness or bias analysis) could strengthen the evaluation.
The use of latent space visualizations (Figure 3) is insightful, but further quantitative measures of alignment (e.g., KL divergence or propensity score matching quality) would reinforce the claims.
Theoretical Claims: Correctness of Proofs: The theoretical derivations appear correct and align with existing literature on treatment effect bounds (e.g., Shalit et al., 2017).
Missing Considerations: The assumptions regarding the invertibility of representations (Φ) and the geodesic distance formulation could be further justified. Additionally, the bound does not account for sample sparsity effects, which are critical in real-world datasets.
Experimental Designs Or Analyses: The experiment design is generally well-structured, with appropriate train-test splits and multiple trials.
Missing Analyses: The impact of hyperparameters (especially temperature τ in contrastive loss) is not explored. Additionally, no robustness checks (e.g., sensitivity to noise or dataset shifts) are provided.
Supplementary Material: The supplementary material includes code, which seems technically sound.
Relation To Broader Scientific Literature: FCCL extends prior work on representation learning for ITE estimation (e.g., CFR, SITE, CITE) to contrastive learning for causal inference, aligning with recent trends in self-supervised learning for structured data.
The approach could be adapted to multi-treatment and continuous intervention settings, similar to recent developments in continuous treatment estimation (e.g., Kazemi & Ester, 2024).
Essential References Not Discussed: The paper extensively cites prior work but does not discuss alternatives to normalizing flows for counterfactual generation, such as energy-based models (Du et al., 2021) or diffusion-based generative approaches (Song et al., 2021).
Other Strengths And Weaknesses: no
Other Comments Or Suggestions: no
Questions For Authors: 1) How does FCCL compare to diffusion-based counterfactual generators? Diffusion models are gaining traction in structured data. Would FCCL’s theoretical framework extend to them?
2) How stable is FCCL under different hyperparameter choices? Particularly, how does the contrastive loss temperature τ affect performance?
3) Can FCCL generalize to continuous treatments? The paper focuses on binary treatments, but many real-world applications require handling continuous interventions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions. We have addressed the comments related to the counterfactual generation and model robustness evaluation. Please see our responses below.
$\textbf{Q1:}$ How does FCCL compare to diffusion-based counterfactual generators?
$\textbf{R1:}$ We thank the reviewer for the question. We add experiments on diffusion-based counterfactual generation (Table 1), which shows performance comparable to FCCL but a slightly inferior result on $\epsilon _{PEHE}^{out-of}$. This may be due to the noise-driven mechanism of the diffusion model [1], which causes counterfactuals to deviate from the sample semantic space, especially in the out-of-sample cases. In contrast, the flow-based model ensures that counterfactuals reside on the same manifold as the original instances, which generates both meaningful and reliable counterfactuals, making FCCL more reliable for individual-level treatment effect predictions. We will include this discussion in the revision.
Table 1: ITE estimation errors (std) with different generation methods on IHDP.
| Method | $\epsilon _{PEHE}^{within}$ | $\epsilon _{ATE}^{within}$ | $\epsilon _{PEHE}^{out-of}$ | $\epsilon _{ATE}^{out-of}$ |
|-----------|-----------------------------|----------------------------|-----------------------------|----------------------------|
| diffusion-based | 0.54(0.05) | 0.09(0.01) | 0.72(0.10) | 0.12(0.03) |
| FCCL | 0.53(0.04) | 0.09(0.01) | 0.64(0.07) | 0.12(0.02) |
$\textbf{Q2:}$ How does the contrastive loss temperature $\tau $ affect performance?
$\textbf{R2:}$ We perform sensitivity analysis focusing on the temperature coefficient $\tau$ (see Table 2). We observe minimal variation in performance with respect to $\tau $, which demonstrates our model's general robustness.
Table 2: ITE estimation errors (std) with different values of $\tau$ on IHDP.
| $\tau$ | $\epsilon_{PEHE}^{within}$ | $\epsilon_{ATE}^{within}$ | $\epsilon_{PEHE}^{out-of}$ | $\epsilon_{ATE}^{out-of}$ |
|--------|----------------------------|----------------------------|-----------------------------|----------------------------|
| 0.1 | 0.54(0.04) | 0.11(0.02) | 0.64(0.06) | 0.13(0.02) |
| 0.3 | 0.56(0.04) | 0.10(0.02) | 0.70(0.08) | 0.14(0.02) |
| 0.5 | 0.58(0.06) | 0.10(0.02) | 0.76(0.16) | 0.11(0.02) |
| 0.7 | 0.61(0.06) | 0.10(0.02) | 0.79(0.15) | 0.11(0.02) |
| 0.9 | 0.56(0.04) | 0.10(0.01) | 0.69(0.07) | 0.12(0.02) |
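For context on where the temperature enters, a generic InfoNCE-style contrastive loss can be sketched as below; this is our illustration under assumed conventions, and FCCL's exact loss may differ.

```python
import numpy as np

def info_nce(z_anchor, z_pos, tau=0.5):
    """Generic InfoNCE-style loss: row i of z_pos is the positive for row i of
    z_anchor; the other rows in the batch act as negatives. The temperature tau
    rescales the cosine-similarity logits before the softmax."""
    a = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    p = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = (a @ p.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # -log softmax of the positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
matched = z + 0.05 * rng.normal(size=(8, 16))   # positives close to anchors

loss_aligned = info_nce(z, matched, tau=0.1)                  # small
loss_random = info_nce(z, rng.normal(size=(8, 16)), tau=0.1)  # clearly larger
```

A smaller tau sharpens the softmax over negatives; the mild sensitivity reported in Table 2 is consistent with the loss ordering being preserved across a range of tau values.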
$\textbf{Q3:}$ Can FCCL generalize to continuous treatments?
$\textbf{R3:}$ This is a valuable point. FCCL currently focuses on binary treatments; we plan to extend it to continuous treatments in future work. The most direct approach would involve discretizing the continuous treatment variable [2]. Specifically, we can split treatments into $E$ heads, each assigned a dose level from the range $[a, b]$, which is divided into $E$ equal intervals of width $(b-a)/E$. We will explore better approaches to handle continuous treatments directly.
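The discretization described in R3 can be sketched as follows; the dose range and the number of heads are illustrative choices of ours.

```python
def dose_to_head(dose, a=0.0, b=1.0, E=5):
    """Map a continuous dose in [a, b] to one of E equal-width intervals
    of width (b - a) / E, as in the discretization strategy of [2]."""
    if not a <= dose <= b:
        raise ValueError("dose outside the treatment range")
    width = (b - a) / E
    # the right endpoint b is folded into the last head
    return min(int((dose - a) / width), E - 1)

# With a=0, b=1, E=5 the intervals are [0, 0.2), [0.2, 0.4), ..., [0.8, 1.0]
assert dose_to_head(0.0) == 0
assert dose_to_head(0.35) == 1
assert dose_to_head(1.0) == 4
```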
$\textbf{Q4 (Claim2):}$ Additional ablation studies isolating the contrastive loss would be beneficial.
$\textbf{R4:}$ Thank you for the suggestion. We have already examined the impact of the contrastive loss in the ablation study; please see Figure 4 in the main text.
$\textbf{Q5 (Claim3):}$ Empirical validation of whether this ITE bound holds in practice is not explicitly tested.
$\textbf{R5:}$ Our empirical analysis demonstrates that the proposed ITE bound effectively guides ITE model training and becomes tighter than the bound proposed in CFR as iterations increase under identical conditions [3] (Table 3).
Table 3: Generalization-error bound comparison on IHDP.
| Iterations | 400 | 800 | 1200 | 1600 | 2000 |
|------------|---------|---------|---------|---------|---------|
| CFR | 13.24 | 10.84 | 10.06 | 9.64 | 9.91 |
| FCCL | 8.54 | 3.74 | 2.93 | 2.77 | 2.67 |
$\textbf{Q6 (Methods And Evaluation Criteria):}$ Further quantitative measures of alignment would reinforce the claims (e.g., KL divergence).
$\textbf{R6:}$ Thank you for your suggestion. We add KL divergence comparisons between treatment and control distributions for two typical methods (see Table 4), which demonstrates that our FCCL can better address distribution shifts. We will include this metric in Figure 3 in the next version.
Table 4: KL divergence mean (std) comparison on IHDP.
| Method | CFR | ABCEI | FCCL |
|--------|-----------|-----------|-----------|
| KL | 0.14(0.08) | 0.20(0.30) | 0.09(0.05) |
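As an illustration of how such a KL number can be obtained, one simple option is to fit Gaussians to the treated and control representations and use the closed-form KL divergence; whether this matches the estimator used above is our assumption, not something stated in the rebuttal.

```python
import numpy as np

def kl_gaussian(mu1, var1, mu2, var2):
    """KL( N(mu1, var1) || N(mu2, var2) ) for 1-D Gaussians (closed form)."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, 5000)   # stand-in for treated representations
control = rng.normal(0.5, 1.2, 5000)   # a mildly shifted control group

kl = kl_gaussian(treated.mean(), treated.var(), control.mean(), control.var())
# identical distributions give zero KL
kl_self = kl_gaussian(treated.mean(), treated.var(), treated.mean(), treated.var())
```

A smaller KL between the fitted treated and control distributions indicates a better-aligned representation space, matching the interpretation of Table 4.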
[1] Kotelnikov Akim, et al. Tabddpm: Modelling tabular data with diffusion models. In International Conference on Machine Learning, pp. 17564–17579. PMLR, 2023.
[2] Schwab et al. Learning counterfactual representations for estimating individual dose-response curves. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 5612-5619, 2020.
[3] Shalit, U., et al. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076–3085. PMLR, 2017.
---
Summary: The paper proposes the FCCL framework for ITE estimation. The proposed method can generate realistic counterfactuals by leveraging normalizing flows to ensure adherence to the data manifold, preserving semantic similarity to factual samples. The authors also derive a new generalization bound connecting ITE estimation error to factual prediction errors and representation distances between factual-counterfactual pairs, providing theoretical grounding for their proposed sample-level alignment method.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The experimental design and comparisons are proper.
Supplementary Material: Yes. Theoretical parts.
Relation To Broader Scientific Literature: ITE model development.
Essential References Not Discussed: Some papers also discuss the new upper bound of PEHE. Can include them in the literature review.
Other Strengths And Weaknesses: Strengths:
The motivation, presentation, and experimental comparisons are good.
Weaknesses:
I only have one concern about the theoretical parts. One of the key contributions is that the authors claim they propose a new ITE error bound for their sample alignment method. However, anyone can propose an ITE error bound and minimize it to learn the ITE model. The point is: how can you make sure the proposed ITE bound is tight? If the bound is too loose, such a theoretical bound may give bad guidance for training the ITE model.
I thus suggest authors give some theoretical or empirical evidence to show that the bound is tight (at least tighter than the bound proposed in CFRWASS)
Other Comments Or Suggestions: None.
Questions For Authors: Please see Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We appreciate your time and your thoughtful and encouraging comments. We hope our responses can resolve your concern.
$\textbf{Q1:}$ I suggest authors give some theoretical or empirical evidence to show that the bound is tight (at least tighter than the bound proposed in CFR).
$\textbf{R1:}$ Following your helpful suggestion, we conduct an experimental analysis to compare the generalization error bounds of our proposed FCCL (Equation (14) in Appendix A) with the CFR bound (Equation (8) in Appendix A.2, [1]) under the same conditions.
As shown in Table 1, we track the generalization error bounds at different iterations. The results demonstrate that as the number of iterations increases, FCCL achieves a significantly tighter bound than that of CFR.
Table 1: Generalization-error bound comparison on IHDP.
| Iterations | 400 | 800 | 1200 | 1600 | 2000 |
|------------|----------|----------|----------|----------|----------|
| CFR | 13.24 | 10.84 | 10.06 | 9.64 | 9.91 |
| FCCL | 8.54 | 3.74 | 2.93 | 2.77 | 2.67 |
In addition, we would like to clarify that our FCCL provides a different theoretical perspective from CFR. Our generalization error bound links ITE estimation error to the generalization error of factual predictions and representation distances, motivating our focus on minimizing these distances via sample-level alignment. Instead, the bound of CFR shows ITE estimation error can be reduced by the difference between the treated and control distributions.
[1] Shalit, U., et al. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076–3085. PMLR, 2017.
---
Discrete Spatial Diffusion: Intensity-Preserving Diffusion Modeling | Reject
---
Summary: The authors proposed a discrete diffusion model that respects mass conservation. They define the forward process as a random walk of particles and train an NCSN to reorganize these particles during the reverse process. The total mass of the particles is preserved, since these operations do not create or destroy particles. While the proposed DSD model is designed for applications in material microstructure, the authors also demonstrate its effectiveness on image datasets.
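The mass-conserving forward process described in this summary can be illustrated with a toy sketch (our own illustration under assumed dynamics, not the authors' code): discrete intensity units on a periodic 1-D lattice hop to neighboring sites at random, so the total intensity is invariant by construction.

```python
import numpy as np

def forward_step(counts, p_move=0.3, rng=None):
    """One toy noising step: each intensity unit independently hops to a
    random neighboring site with probability p_move (periodic boundaries).
    Units are only moved, never created or destroyed."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(counts)
    new = np.zeros_like(counts)
    for site, c in enumerate(counts):
        moves = rng.random(c) < p_move        # which of the c units hop
        dirs = rng.choice([-1, 1], size=c)    # hop direction per unit
        dest = (site + moves * dirs) % n      # periodic wrap-around
        np.add.at(new, dest, 1)               # deposit each unit at its destination
    return new

rng = np.random.default_rng(0)
image = np.array([9, 0, 3, 0, 5, 1])          # discrete intensities on a 1-D lattice
noised = image.copy()
for _ in range(50):
    noised = forward_step(noised, rng=rng)

assert noised.sum() == image.sum()            # total intensity exactly preserved
```

After many steps the counts approach a uniform-noise configuration, yet the sum never changes, which is the conservation property the review highlights.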
Claims And Evidence: The claims are clear.
Methods And Evaluation Criteria: The method and evaluation make sense to me.
Theoretical Claims: The derivation of DSD hinges on (Santos et al., 2023). The authors did not provide other theoretical results.
Experimental Designs Or Analyses: I checked the experimental results and they are clear to me. Perhaps the authors can consider comparing the FID between DSD and other Gaussian-noise diffusion models. I understand that generating high-quality images is not the main goal but this helps understand the behavior of DSD.
Supplementary Material: I checked the detailed algorithms and some additional results.
Relation To Broader Scientific Literature: The proposed method is valuable for specific physics applications.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: Strengths: The proposed method is novel and valuable for scientific use in specific domains. The mass-preserving property is well-demonstrated by the experiments.
Weaknesses: While the paper is mostly easy to follow, at places I do feel some paragraphs are lengthy and overwhelming. For example, breaking down the relationships between DSD, FPE, and heat equation can help the reader better understand the model.
Other Comments Or Suggestions: Fig. 1(c) should be fixed. See my other comments above.
Questions For Authors: 1. In Fig. 4, the differences between the generated representations are not obvious. Does the total mass of these datasets have small variances?
2. How will existing discrete diffusion models perform on the tasks if the mass is provided as a condition? Adding such comparisons can better demonstrate the difficulty of the tasks.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their careful assessment of our results.
**4.1** [*Theoretical results*] We agree that the core derivation of DSD builds on the framework of Santos et al. (2023). That said, we submit that the paper includes theoretical contributions, albeit specific to DSD. These include the derivation of first-reaction rates (Section 3.3 and Appendix A), as well as the Green’s function (propagator) for the DSD process under periodic boundary conditions (Appendix C). These mathematical results are critical for enabling efficient and exact computation within the DSD framework.
**4.2** [*FID comparisons*] We agree that this helps comparability. We have found the following preliminary FIDs:
| Dataset | FID | # | Comments |
|---------------------|------|-----|------------------------------------|
| MNIST | 5.73 | 8K | |
| CIFAR-10 | 53.7 | 14K | |
| CIFAR-10 | 40.9 | 14K | Mean-preserving Gaussian filter[*] |
| CelebA-64 | 60.1 | 8k | |
| CelebA-64 | 41.9 | 8k | Mean-preserving Gaussian filter[*] |
| Rock microstructures| 0.9 | 300 | |
| Batteries | 1.3 | 300 | |
[*] While FID is not the primary goal of DSD, its pixel-by-pixel process leads to less smooth outputs and higher FIDs; applying a mean-preserving Gaussian filter improves FID by aligning better with image-based metrics.
We have also performed additional quantitative metrics for our models, as well as comparisons to existing techniques, please see points **1.2** and **2.2**.
**4.3** [*Lengthy paragraphs*] We appreciate the feedback. If accepted we will clearly expand the explanation of how DSD links to Fokker–Planck and heat equations using some of the additional space, and we will reorganize lengthy paragraphs for better clarity. We are also willing to include other detailed derivations and/or explanations that the reviewer requests in the appendices.
**4.4** [*Fixing Fig. 1(c)*] We are unsure about the specific concern the reviewer is raising regarding Fig. 1c, and would appreciate clarification to better address this comment.
**4.5** [*Differences in generations of Fig. 4*] Thank you for this helpful question. In response, we have added a new visualization that shows the distribution of the training data for the rock microstructure dataset. Specifically, we plot the sampling distribution of $\mu$, along with $\mu \pm 2\sigma$ and $\mu \pm 3\sigma$, to better illustrate the spread of the generated samples. This updated figure better highlights our model's ability to generate samples across the full range of the dataset’s variability, maintaining consistent structure even at the distributional tails. The updated figure can be found [here](https://rb.gy/mdmzam).
We also contrast our approach with [recent work](https://rb.gy/8x23e3), led by [Professor Martin Blunt](https://rb.gy/ru98qg). Their model lacks explicit control over porosity, and so it produces out-of-distribution generated samples, with longer tails in porosity distribution (see their Figure 3E). Our DSD offers a clear advantage when precise field-measurement tolerances of multiple significant digits have to be met. We will revise the manuscript to briefly include this very recent work and clarify how our model addresses these limitations.
**4.6** [*Comparison with mass conditioning approaches*] In order to respond to this point, we have evaluated a baseline discrete diffusion model conditioned on the target total intensity (or mass). The model performs reasonably well near the mean of the dataset, where the target intensity falls clearly within the training distribution. However, it tends to reach this point by producing small negative values in the final image—despite being trained on images with pixel values in the [0-1] range. When we threshold these negative values to zero and discretize the image to the [0, 255] range, the intensity error increases. This degradation becomes especially pronounced in the tails of the intensity distribution. In these regions, the Gaussian diffusion model often fails to honor the conditioning and produces non-compliant samples. In contrast, our method (DSD) guarantees intensity compliance across the full range, including the tails of the distribution. This difference is demonstrated in this [Figure](https://rb.gy/dyuymu). Please also see response **1.4** where we compared to another approach.
**4.7** [Summary] We have addressed the reviewer’s clarification to the best of our ability, added further quantitative comparisons, and better explanation of the significance for Fig. 4. We believe that addressing these comments substantially improves the manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses. The right-most grid in Fig. 1(c) is not plotted properly, but my other concerns have been addressed. I have updated my score to 4 accordingly. | Summary: This paper introduces Discrete Spatial Diffusion (DSD), a new diffusion-based generative modeling approach that operates on discrete intensity units and enforces strict global conservation of these intensities. Traditional image-based diffusion models typically treat pixel intensities as continuous quantities, and independently diffuse intensities at each pixel. In contrast, DSD places each intensity “unit” on a spatial lattice and performs a continuous-time random walk of these units, which ensures that the total intensity per color channel does not change throughout both the forward and the reverse processes.
Claims And Evidence: The paper claims exact preservation of total intensity. The construction as a jump process enforces the claim since the entire chain simply redistributes discrete “particles.”
The paper claims its ability to generate coherent images, which can be seen by example images on MNIST, CelebA, and microstructure data. Some samples look qualitatively plausible, which partially supports this claim.
The authors present domain-specific examples in section 4 (lithium-ion electrode phases, rock porosity) where they replicate morphological features. This is evidence for the method’s applicability, though the paper’s evaluations remain primarily qualitative or rely on morphological measures.
The experimental comparisons are relatively narrow, especially for standard image benchmarks, there is no evidence how well the approach outperforms simpler baselines on domain tasks.
Methods And Evaluation Criteria: The proposed method, DSD, is formulated as continuous-time, discrete-state Markov jumps on a spatial lattice. It uses a network architecture adapted from NCSN++ to predict reverse transition rates. For evaluation, the authors rely primarily on qualitative assessment (sample visual quality) plus some morphological or porosity-based statistics in scientific domains, and they check inpainting capability, i.e., whether partial images yield coherent reconstructions.
The paper does not provide in-depth baseline comparisons and widely used generative metrics (FID). Thus, it is unclear how DSD’s sample quality or training efficiency compares to continuous diffusion methods.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experiments use domain-specific datasets (rock microstructures, lithium-ion electrode scans). For MNIST and CelebA, they demonstrate unconditional generation, class conditioning, and inpainting, but they do not report common generative metrics like FID or IS. Instead, they rely on visual inspection and qualitative comparisons for these datasets.
For microstructure data, they measure morphological properties relevant to the domain (for example, porosity, phase-volume fractions). Although they do not perform large-scale physical simulations, they argue these morphological analyses are indicative of fidelity in scientific contexts.
Potential Issues:
The evaluation on standard benchmarks is more limited than typical generative model papers (less emphasis on FID).
Scalability to larger images and verifying performance with thorough domain simulations remain open challenges. While the paper acknowledges these limitations, it does not fully address them.
Supplementary Material: There is no supplementary material for this paper.
Relation To Broader Scientific Literature: For Generative Diffusion Models, the paper places itself as discrete diffusion approaches (Hoogeboom et al., Austin et al., Campbell et al., etc.) and with heat-based or spatially correlated noise approaches such as Inverse Heat Dissipation or Blurring Diffusion.
For scientific microstructure modeling, they cite domain-specific references dealing with microstructures. They state that the proposed DSD can preserve global constraints exactly while the references cannot.
Essential References Not Discussed: The paper cites enough references.
Other Strengths And Weaknesses: Strengths:
This paper demonstrates general utility across standard datasets (MNIST, CelebA) and domain-specific tasks.
This paper gives a clear derivation of the proposed method, DSD.
Weakness:
The proposed method may encounter computational complexity for large, high-resolution images. Since each intensity unit is an explicit “particle,” the forward corruption and backward sampling can become expensive for typical 8-bit or 16-bit images with large spatial resolutions. Although the paper acknowledges this constraint, it offers no clear strategy for extending the approach to high-res images.
In experiments, the authors only give limited benchmark comparisons. For example, performance metrics like FID are less explored here in CelebA dataset. The paper focuses on domain-specific usage, but it reduces direct comparability.
This paper does not claim better coverage or generative fidelity than standard continuous diffusion, and it makes no exact SOTA claims or comparisons on typical image tasks, which is less convincing.
Other Comments Or Suggestions: A table comparing training time, sampling speed, and sample quality (FID-like metric) versus standard diffusion would clarify the practical trade-offs.
Questions For Authors: How to handle large-scale images with higher bit-depth? Could partial binning or approximate methods still preserve approximate mass constraints?
Could you provide standard generative metrics (FID) for comparability? Even though your focus is on constraints, I hope to see standard benchmarks.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments; we feel that the reviewer has understood the purpose of the work and offers concrete avenues for improvement, all of which we are able to address in our revision.
**3.1** [*computational complexity*] First, let us remark on the computational requirements, per iteration, involved in DSD. Noising the images involves sampling using a random walk propagator that is precomputed–this is cheap enough to be amortized by dataloader workers on CPU, so training DSD is not any more expensive than training NCSN++. At test time with our current non-parallel code, about half the time is spent moving the particles around and half the time at network evaluation. The overall time for training is similar to other diffusion models. Sampling time depends on the noise schedule and CFL tolerance parameter. Good samples can presently be accomplished in <4 minutes per celebA sample per GPU without any special software engineering and A5000 hardware (this includes batching of images). In terms of cost breakdown, on celebA (the most challenging for DSD in this work), roughly equal amount of time is required for moving particles as performing NCSN++ evaluation; the extra computational cost for DSD is approximately a factor of 2, and in our inference code this (CPU-based) cost is presently hidden by running multiple inferences in parallel processes. We will revise the manuscript to describe this information.
**3.2** [*high resolution*] Regarding scaling, our method scales effectively to larger resolutions, and as a demonstration, we have trained it on [1000×1000 Leopard sandstone images](https://rb.gy/3wsudt): realizations are shown in the attached repository as well as included in the revision. Leopard sandstones feature larger critical percolation diameters and more dispersed clay regions. Our model captures these morphological nuances, demonstrating both fine-scale and macro-scale structural details.
**3.3** [*scalability*] The reviewer raises the point that DSD may be difficult to scale to images that are both high-resolution and high-bit-depth. As mentioned (**3.1**), sampling noised images is not expensive, so this is a concern only for inference throughput. We do address 24-bit CelebA images and, separately, 1000x1000 rock images (**3.2**). We feel that DSD is applicable to a variety of datasets, and future innovations may make it possible to scale the approach further. It may only be a question of implementation, solvable with a GPU kernel for particle-motion sampling.
**3.4** [*quantifying performance*] Regarding quantifying performance, we have primarily focused on domain-specific measures (see also **2.2**) because these binary microstructure images differ from typical computer vision benchmarks. Nonetheless, the reviewer’s point about comparability is well-taken, and so we now include FID metrics for MNIST and CelebA. It is important to take these in context: although the performance of these models with respect to FID is not state-of-the-art, our intent is not to replace the state of the art, but to address the niche where conservation of intensity/particles is necessary. To that end, we also offer to demonstrate the ability of a conditional standard Gaussian diffusion model to produce given overall intensities on MNIST (**1.4**). The conditioning works well within the typical range of intensities for the dataset, but falters very significantly as one approaches the tails of intensity. Even still, the conditioned model sometimes produces *negative* intensities in order to meet its objective for total intensity; this failure mode is not possible with DSD. All of this will also be described in our revision.
**3.5** [*standard benchmarks*] To further address the reviewer’s comment that they would like to see more standard benchmarks: We would like to argue that the non-standard tasks we introduce (intensity constrained generation and intensity constrained inpainting) provide value to the field as well; while one goal for ML research is to advance the state of the art for identified important tasks, another important goal is to advance the scope of (well-motivated) tasks for which ML techniques are available.
**3.6** [*summary*] Through these revisions and clarifications, we are confident that our revised version will address the reviewer’s concerns about quantitative comparability, computational costs, and scalability. While our models do not advance state-of-the-art FID scores, DSD is reasonably affordable, solves a well-motivated novel scientific task (intensity-constrained generative modeling) far better than simple extensions of existing work (input-based conditioned models; cf. **1.4** regarding another approach), and produces very high-quality domain metrics (**2.2**, **4.2**, **4.6**).
---
Summary: This paper presents Discrete Spatial Diffusion (DSD), a framework that ensures intensity preservation in diffusion models by using a continuous-time, discrete-state jump stochastic process. Unlike standard diffusion models that operate in continuous intensity spaces, DSD naturally incorporates stochasticity while maintaining conservation laws, making it well-suited for scientific applications.
Claims And Evidence: - Problematic Claim:
1. The model’s capability to control phase volume fractions in scientific applications (e.g., lithium-ion electrodes, porous rock microstructures) is supported by qualitative results.
2. The authors demonstrate that such a model is powerful enough for conventional image synthesis tasks.
- Reasons:
1. While qualitative examples are provided, quantitative evaluations (e.g., comparison with existing vanilla diffusion models w.r.t. FID score) are limited.
2. The paper does not clearly demonstrate whether the model generalizes well on complex datasets as the unconditional generation outputs of CelebA are not good.
Methods And Evaluation Criteria: The method effectively preserves intensity and structure in discrete-state diffusion models, making it suitable for simple image synthesis and scientific applications such as porous materials and lithium-ion battery electrodes.
However, the evaluation primarily relies on qualitative results (e.g., visualizations in Figures 3 and 4). To enhance the analysis, the authors should incorporate quantitative metrics (e.g., FID, MMD, or SSIM) to compare generated microstructures with real samples. Although Figure 6 presents a time schedule design inspired by SSIM, the paper lacks a specific quantitative comparison with other benchmarks.
Theoretical Claims: 1. The paper claims that its discrete-state diffusion process strictly preserves total intensity throughout the forward and reverse processes via particle transition at rate $r$. This theoretical formulation appears sound. However, in Eq. (5) or (7), when measuring the difference between the predicted and true rates ($\bar{r}^{NN}$ and $\bar{r}$), why does the paper use $\bar{r}^{NN} - \bar{r}\log\bar{r}^{NN}$ instead of the more straightforward $\bar{r}^{NN} - \bar{r}$? Could the authors clarify the motivation behind this formulation and its impact on the optimization process?
2. The use of SSIM for designing time schedules is an interesting idea, but the paper does not formally prove why SSIM is an appropriate guiding metric for diffusion noise scheduling. From Figure 6, the polynomial schedules ($x^5$ or $x^6$) also look good. A deeper mathematical justification, beyond empirical observation, would help support this claim.
Experimental Designs Or Analyses: The experimental design is reasonable and aligns well with the paper’s objectives. The evaluation effectively demonstrates the model’s ability to preserve intensity in discrete-state diffusion and explores its applications in scientific contexts (Figures 4 and 5) and image synthesis tasks (Figure 3).
MNIST, as a structured image dataset, is well-suited for assessing the model’s ability to preserve spatial features. Similarly, rock subsurface and lithium-ion electrodes are materials with complex microstructures, where intensity preservation is essential for accurately modeling pore structures and phase distributions. These dataset choices effectively showcase the applicability of the proposed method across both structured image data and scientific materials.
Supplementary Material: There are no supplementary materials.
Relation To Broader Scientific Literature: This paper builds on prior work in discrete-state diffusion models, intensity-preserving generative processes, and scientific applications:
1. Discrete-State Diffusion Models
Extends traditional diffusion models (Ho et al., 2020; Song et al., 2021) by introducing a spatially correlated discrete-state process, addressing limitations of prior Markov chain-based models (Hoogeboom et al., 2021; Austin et al., 2021).
2. Intensity-Preserving Generative Processes
Unlike GAN- and VAE-based approaches (Duquesnoy et al., 2023), this method strictly preserves intensity without requiring post-hoc projections or approximations (Chung et al., 2022; Finzi et al., 2023).
3. Scientific Applications
Advances generative modeling for rock subsurface (Blunt et al., 2013) and lithium-ion electrodes (Usseglio-Viretta et al., 2018), enabling precise control over porosity and phase distributions.
Essential References Not Discussed: The paper has referenced all the important related works.
Other Strengths And Weaknesses: This paper presents a highly novel approach with a clear structure and well-organized presentation, making it easy to follow. It provides a thorough review and comparison of prior work while offering a comprehensive summary of existing methods. Notably, it is the first to introduce intensity preservation in diffusion models using a continuous-time, discrete-state jump stochastic process, with significant implications for medical imaging, astronomical data synthesis, super-resolution reconstruction, and advancements in film effects and rendering.
Other Comments Or Suggestions: I have no additional comments or suggestions.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for providing a clear review that shows understanding of the work, constructive criticism for improvement, and good questions for clarification.
**2.1** [*incorporate quantitative metrics such as FID*] We emphasize that DSD's primary advantage is for scientific applications, whereas the FID metric implicitly focuses on human-centric datasets (the Frechet distance is computed on the latent embeddings of the Inception V3 model, which was pre-trained on the ImageNet dataset). That said, we have performed the computation of FID as requested, achieving results in rebuttal section **4.2**. We do not have any rationale or theoretical foundation that the FID metric would be meaningful when applied to scientific datasets; however, for completeness' sake we will now provide FID for the scientific images as rendered in RGB in the paper. In addition to FID, please see also **1.4** regarding tests comparing Gaussian models to ours with regard to total intensity generation targets.
**2.2** [*scientific applications… supported by qualitative results*] The reviewer contends that our quantitative results are limited. In addition to the FID metrics included in revision **2.1**, we do already provide several quantitative measurements highly relevant for the scientific data. For one, porosity (phase volume fraction) is a key variable because permeability of porous media typically scales by the inverse cube of the porosity, and so quantifying our ability to match the total intensity is a relevant metric for scientific applications. Additionally, we present several quantitative scientific evaluations in appendices (due to space constraints). These come from earth science, materials science, and energy storage literature, and include two-point correlation functions and pore size distributions (Appendix F.1), as well as interface length, triple-phase boundary, and relative diffusivity (Appendix G.2). For context on the relevance of these metrics, please see the paper [Gayon-Lombardo et al 2020](https://rb.gy/f90gtc). Thus we argue that the quantitative evaluation of our models has not been as limited as the reviewer has claimed. Perhaps this was not emphasized enough in the main text, and we will revise it to emphasize the domain science metrics.
**2.3** [*motivation for loss function*] Your question about the loss functions is a good one. We propose to add the following explanation in revision: `We remark that Eq. (5) is a heuristic approach to match the predicted rate and the ground-truth one, which is a popular approach for building loss functions in machine learning (similar approaches include flow-matching and score-matching); the motivation for using this loss is simplicity and analogy to existing work. Eq. (7) is a more principled statistical approach, derived in Santos 2023 by designing a maximum likelihood loss using the analytical reverse transition rates. While less intuitive, it is easy to verify by taking ordinary derivatives that the loss is an absolute minimum when the predicted and true reverse rates are equal, the process of which also reveals that this Eq. (7) is essentially the integral of the mean absolute percentage error. We have tested both approaches, and did not observe a significant difference, which demonstrates the robustness of the DSD framework.` We hope this clarifies the nature of the logarithmic loss function.
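The claim above that Eq. (7) is minimized exactly when the predicted and true rates coincide can be checked with a small numerical sketch (our own illustration, not the authors' code; the rate value `r_true` is arbitrary):

```python
import numpy as np

# Per-rate term of the log-form loss: f(r_hat) = r_hat - r * log(r_hat).
# Its derivative 1 - r/r_hat vanishes only at r_hat = r, and the second
# derivative r/r_hat**2 is positive there, so r_hat = r is the global minimum.
def loss_term(r_hat, r):
    return r_hat - r * np.log(r_hat)

r_true = 2.5  # arbitrary ground-truth rate, chosen for illustration
r_hats = np.linspace(0.1, 10.0, 10_000)
losses = loss_term(r_hats, r_true)

best = r_hats[np.argmin(losses)]
print(best)  # numerically close to r_true = 2.5
```

The same check works for any positive true rate, which is what makes the logarithmic form a valid matching objective despite being less intuitive than a plain difference.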
**2.4** [*SSIM-based scheduler*] Indeed, our SSIM-based approach is a heuristic way to construct the time scheduler, which we have mentioned in the manuscript (lines 216-217). We share the sentiment that it would be nice to have a theoretically sound way to construct the scheduler for DSD, but we are not aware of an existing approach which fulfills this criteria, and it is not an innovation that we are able to offer at this time. In revision we will remark on the pursuit of a deeper mathematical support for the time sampling schedule as a good candidate for future work. Ultimately, the design of schedules (noise schedule, time schedule, learning rate schedule) is a grand challenge for which there is no existing framework with a solid and complete mathematical justification (as far as we know).
**2.5** [*summary*] We appreciate the reviewer's comments, and will be able to address them in revision; we will improve the quantitative comparison using FID score, we will clarify the loss function, and remark further on the heuristic nature of our scheduling and prospects for future work. We also gently argue that our present results are not only qualitative, as we have applied several significant domain-based quantitative metrics. Although we concede that the [CelebA faces](https://rb.gy/p37qwg) fall short of the state of the art (and now quantifiably so), we believe that by addressing the rest of the reviewer’s concerns we have substantially improved the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the clarifications. I agree that FID may not be suitable in all scenarios. However, I strongly recommend including some form of quantitative analysis to demonstrate the effectiveness of the generated data—I'm glad to see this addressed in Figure 11. Regarding the loss function, I believe Section 3.1 of [Lou et al 2024](https://arxiv.org/pdf/2310.16834) provides a helpful illustration. Additionally, including a copy of the supplementary code demo could further strengthen the paper’s persuasiveness. Overall, I believe my initial score was positive, and I will keep it unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our responses and for the reference to Lou et al. (2024); we will incorporate it into the revised version. The analyses suggested by you and the other reviewers have significantly strengthened the manuscript, and we’re grateful for the constructive feedback throughout.
We also wanted to briefly follow up on your earlier concern regarding CelebA generation quality. During the rebuttal period, we managed to train a larger model, despite limited computational resources (just a single GPU), and obtained substantially improved generations which can be viewed [here](https://anonymous.4open.science/r/DSD-rebuttal-E860/figs/celebA_realizations_100.png). We will continue training and report updated FID scores in the final version; we believe the scores can improve significantly with the allowed time.
Additionally, we’re preparing a clean and complete code release with runnable examples. We hope this will not only support adoption in the scientific community but also inspire new applications in machine learning—such as budget-constrained inpainting and coherent colorization—beyond our original scope.
We truly appreciate your positive review. Unfortunately, we didn’t receive input on our rebuttal from the other reviewers, but we’ve aimed to thoroughly address all raised points in our rebuttal and revision. We hope these efforts are taken into consideration in the final assessment. | Summary: Discrete Spatial Diffusion (DSD) is a novel generative diffusion modeling framework specifically designed for discrete spatial domains, explicitly preserving mass throughout the diffusion processes. Traditional diffusion models typically assume continuous pixel intensities, thereby limiting their applicability to scientific datasets involving discrete, conserved physical quantities. DSD demonstrates its capability effectively in generating data for scientific tasks.
Claims And Evidence: Quantitative performance comparisons (e.g., FID, Inception Score) against state-of-the-art diffusion or discrete generative models are missing.
Methods And Evaluation Criteria: In detailed comments
Theoretical Claims: NA
Experimental Designs Or Analyses: Quantitative performance comparisons (e.g., FID, Inception Score) against state-of-the-art diffusion or discrete generative models are missing.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: It may related to : Planning with Diffusion for Flexible Behavior Synthesis
Other Strengths And Weaknesses: Strength:
The idea to constrain particle numbers and only allow transfers to neighboring locations is interesting.
Weakness:
It’s difficult for me to understand the paper. Many terms are either not defined clearly or change names throughout the text.
There is no comparison with other models that do not strictly constrain particle numbers.
Other Comments Or Suggestions: Background:
“The Markov process...” – I do not understand Figure 1. What are the meanings of variables such as x, y, I, and P(I_t>S | I_S)? They should be clearly defined. Also, clarify the context when mentioning "Gaussian" or "Discrete." Does this refer to the values of pixels?
“An alternative approach leverages Bayes... samples are exactly conditioned” – It appears similar to using a guidance function at inference time. The authors could refer to this paper [1].
Methods:
“where r is the transition rate of the particle jumping to one of their nearest neighbors” – I do not understand clearly what the transition rate means. Does it indicate how many particles jump to neighbors, or something else?
Figure 2(a): What is the timestep or step size for each image? Figure 2(b): What exactly is meant by "mass"?
Experiment:
Quantitative performance comparisons (e.g., FID, Inception Score) with other state-of-the-art diffusion or discrete generative models are missing.
[1] Planning with Diffusion for Flexible Behavior Synthesis
Questions For Authors: Involving a guidance function during inference might achieve the same goal as DSD. Therefore, I think it would be beneficial to include experiments comparing the guidance function approach and DSD on scientific tasks.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and questions.
**1.1** [*Difficult to understand*] We have tried our best to make it accessible and attempted to follow the conventions of [the theoretical paper](https://rb.gy/d7vqid). We will be happy to provide clarifications. It would be helpful if the reviewer can be more specific regarding confusing terms, missing definitions or explanations.
**1.2** [*No comparison*] The main point of our paper is to impose the hard constraint, strictly constraining the particle numbers. As such, the most important objective is to generate configurations which are exactly conditioned on the total intensity. In this sense, the other models fail this objective. The commonly adopted metrics, such as the FID score, are difficult to apply under exact conditioning for the approximate methods: only a small set of images in the test set has exactly a given total intensity, whereas FID between two given sets requires a large sample number.
**1.3** [*Markov process*] We used a consistent notation as defined in the main text (line 156 of the text): “*We treat a digital image with discretized intensities $I_{x,y,c}$ at pixel $(x,y)$ in color channel $c$*”. $p(I_t \vert I_{t-1})$ are conditional distributions and a common notation used in the diffusion literature (e.g. Fig. 2, equation (1) in https://rb.gy/3v28z7). We have also provided a clear description of the continuous-state Gaussian generative diffusion models (line 41-48, left column), as well as state-of-the-art discrete-state generative models based on discrete-state Markov chains (line 72-98, left column), highlighting the difference to the Gaussian generative diffusion models in line 83-88, left column. We specified what the discreteness is referring to in line 147, right column: “*...Discrete Spatial Diffusion (DSD), noting the 'discreteness' refers to both the discretized intensity units and the discreteness of the spatial lattice... where the particles are allowed to reside.*"
**1.4** [*Alternative approach*] Thank you. The guidance function in the provided paper makes sense for reinforcement-learning tasks - which the paper is about - where a cost function $J$ is naturally defined. For our generative tasks, it is not well-defined---in fact, it has to be learned either additionally as stated in our cited reference about *a posteriori* sampling, or it has to be built into the diffusion model and learned during training. This is because the "guidance function" is exact only when it is $\partial_{I_t} \log p(C\vert I_t)$ where $C$ is the condition. Here, $\log p(C\vert I_t)$ is not known. While one can argue that we could try to impose heuristic guidance functions, our additional numerical experiments - which impose an MSE between the generated and target total intensity - suggest that this is not a valid approach. We provide the Jupyter Notebooks (`/codes/Guided*.ipynb` in https://rb.gy/x1fjvy) generating the statistics of the total intensity of the generated samples with various guidance functions (under `/figs/`). It is difficult to explain the experimental details with the 5K-character limit of the rebuttal, but we hope the notebooks are clear and we look forward to future discussion with the reviewer. In addition, we have implemented a conditional diffusion model taking the target total intensity as an additional input of the NN during training (`/codes/mass*.ipynb` in https://rb.gy/x1fjvy). The [results](https://rb.gy/ndpjo6) (see also **4.6**) suggest that (1) the total intensity can be *statistically* conditioned within the intensities in the data distributions, but not *exactly*, (2) after rounding and clipping the samples generated by Gaussian diffusion to `uint8`, a systematic additional bias is introduced, indicating that the conditioner is introducing physically impossible *negative intensities* to enforce the constraints, and (3) outside the data distribution, the conditioner completely fails.
In comparison, DSD *always* has the required total intensity.
**1.5** [*$r$*] The transition rate is a standard term referring to the transition probability per time. Cf. the cited textbooks, Van Kampen and Gardiner, or the published discrete-state generative model such as https://rb.gy/d7vqid and https://rb.gy/z7vfjv).
**1.6** [*Timestep*] The diffusion model generating the schematic diagram has 1K steps and the snapshots were taken every 200 steps.
**1.7** [*Mass*] We apologize. The mass stands for total particle numbers. We will revise.
**1.8** [*Quant. performance*] Please refer to **1.2**, **1.4**, and **4.2**.
**1.9** [*Additional exp.*] Please refer to **1.4**.
**1.10** [*Summary*] We hope to have adequately addressed the points of confusion in the manuscript. We have also performed additional experiments following the suggestion to improve the work. These experiments help to show the benefits of our approach, and we thank the reviewer for the suggestions.
---
Rebuttal Comment 1.1:
Comment: I think the rebuttal is good and the Jupyter Notebooks are good. My previous rating was probably mostly based on the writing quality, but I may not have time to recheck the paper. If other reviewers continue to support accepting this paper, I would also agree with that decision.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for taking the time to evaluate our manuscript and rebuttal. We are glad to hear that the reviewer found the new analysis satisfactory, and we are confident that the updated presentation now better supports a clearer and more accessible explanation of our contributions. These improvements will be further refined in the camera-ready version, should the paper be accepted. At this stage, the other two reviewers who have actively engaged in post-rebuttal discussions have expressed their strong support. We are encouraged by their recognition of the paper's novelty, analytical rigor, quantitative validations in the scientific datasets, and potential impact. If reviewer @EVW9 feels it is appropriate to support the manuscript, we kindly ask them to consider reflecting this in their score: currently it is still "weak rejection (but can be accepted)". This will help ensure that their support is visible during the AC's review and decision-making process, particularly in case this discussion does not receive close attention during the broader decision process. Once again, we truly appreciate the reviewer's time and thoughtful feedback. | null | null | null | null | null | null |
Learning Dynamics in Continual Pre-Training for Large Language Models | Accept (oral) | Summary: This paper studies the training dynamics during continual pre-training. Specifically, they study the scaling law of the loss curve during continual pre-training (CPT), which considers the scaling law due to learning rate annealing and the scaling law due to the distribution shift. Based on this formulation, they consider several factors in CPT, including loss potential, distribution shift between pre-training and CPT dataset, peak learning rate, and CPT steps. They also discuss how the proposed scaling law can be accurately fitted for open-source models.
Claims And Evidence: I do not find all the claims to be supported by the evidence. However, I believe this is somewhat related to the presentation issue of the paper, making it hard for me to know what the paper is saying. For example, on page 3, the paper writes, " As shown in Fig. 2, these distribution shift terms tend to overlap at each transfer starting point." I am not sure where the overlap referred to here is. In this case, it would be hard to evaluate how well-supported this claim is.
Methods And Evaluation Criteria: This paper uses FineWeb for pre-training and Knowledge-Pile for CPT. Using FineWeb as a pre-training dataset is reasonable. However, only using Knowledge-Pile as the CPT data is a weakness of this paper. CPT can be conducted on coding or mathematical datasets to improve the specific ability. It is unclear whether the observations and conclusions in this paper hold for CPT datasets in other domains or different datasets. Using Knowledge-Pile may also be questionable since it is not a dataset whose quality the research community has reached a consensus on, at least judging by the reviews from ICLR 2025.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The experiments use multiple models of the same architecture with different sizes. The issue with the evaluation is mostly related to the choice of the CPT dataset, which I elaborated previously.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper proposes a scaling law for CPT, which has not been discussed in prior literature. They also discuss how the proposed scaling law can help understand the contribution of different factors in CPT and how to choose the best hyperparameters. These contributions are unique and not seen in prior works
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
==
I found this paper to be quite interesting, and it provided many useful and intriguing observations. The findings highlighted in this paper should be of interest for researchers in the community
Weakness
==
The key weakness of this paper, and the reason I am currently leaning toward rejecting it, is the presentation issue, which makes the paper difficult to read.
- Acronyms are not properly explained. For example, the term "LRS" is never explained. The term "WSD" is not clearly explained when it first appeared in the paper.
- The term "transfer curve" is not explained. This is not a formal term in math, so I am not sure what the author is referring to when using this term.
- The explanation of Figure 1 is not sufficient when it is presented. In Line 107 and the "Observation" in Section 2.1, the paper refers to Figure 1 without explaining many details, including the LRS, WSD, hidden pre-training, and transfer point. It is not easy for the reader to understand this figure.
- The font size in many figures is too small to read. The legends in Figures 1, 2, and 3 are completely unreadable when printed on an A4 paper.
- The term "loss potential" appears in the abstract and introduction but is explained in Section 3.3
Other Comments Or Suggestions: - Is the caption in Figure 5 (b) (c) (e) (f) wrong?
Questions For Authors: - In Figure 1, is the hidden pre-training the second epoch of the same pre-training data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful suggestions and valuable feedback.
**Response to "Claims and Evidence about sentences that are hard to understand":**
We apologize for the misunderstanding caused by our use of the term "overlap." Our intended meaning was that the curves from different transfer starting points "coincide." In the subgraph of Figure 2, we show the distribution shift curves of different transfer starting points, where the horizontal axis represents the number of steps in CPT. These distribution shift curves almost coincide and can be fitted with a power-law form.
**Response to "Choosing Knowledge-Pile":**
First, we do not exclusively utilize Knowledge-Pile as the CPT dataset. We also employ Pile-of-Law [1] as a domain-specific CPT dataset and demonstrate the fitting and predictive capabilities of our scaling law in Appendix D. Second, the data in Knowledge-Pile [2] is derived and filtered from existing data sources, such as Wikipedia, arXiv, and Semantic Scholar, without any atypical components. Based on its components [2], Knowledge-Pile is a good combination with higher STEM knowledge density, which can be used to enhance domain-specific abilities. Third, our law is independent of the CPT dataset: when the CPT dataset distribution or quality changes (for better or worse), the form of our law remains applicable while the coefficients differ.
**Response to "Weakness":**
**W1: "LRS" and "WSD"**:
We apologize for the reduced readability caused by the abbreviation of certain concepts in Figure 1. LRS refers to learning rate schedules, and WSD [3] denotes the Warmup-Stable-Decay schedule. We will implement these clarifications in the final version.
**W2: Explain "transfer curve"**:
The term "transfer curve" is actually the CPT curve. As stated in line 96, "the CPT loss curve is a transitional curve on both D_pt and D_cpt validation set." The CPT curves in Figure 1 (b), (c), (e), and (f) illustrate that the curves deviate from the blue dashed line, and finally align with the orange dashed line.
**W3: Explain "transfer point" and "Hidden Pre-training"**:
The "transfer starting point" represents the starting step point of CPT. We formally define the "Hidden Pre-training" curve in Section 2.2 and represent these trajectories with dashed lines in Figure 1.
We acknowledge that the definitions of some terms were not placed appropriately, and we commit to refining these definitions in the final version.
**W4: Small font size**:
We apologize for the small font size in the figures and we will increase the font size in the final version.
**W5: Explain "loss potential"**:
In the final version of the manuscript, we will incorporate a comprehensive description of the "loss potential" concept within the introduction section to establish a clearer theoretical foundation from the outset.
**Response to "Comments and suggestions: The wrong caption in Figue 5 (b)(c)(e)(f)?":**
We sincerely apologize for unclear captions of Figure 5 due to our intention to save space during the writing process.
The complete caption in the Figure 5 is:
(b) D_cpt true loss v.s. CPT step of different loss potentials (w/o re-warmup setting)
(c) D_cpt predicted loss v.s. loss potentials of different CPT steps (w/o re-warmup setting)
(e) D_cpt true loss v.s. CPT step of different loss potentials (w/ re-warmup setting)
(f) D_cpt predicted loss v.s. loss potentials of different CPT steps (w/ re-warmup setting)
**Response to "Questions: is the hidden pre-training the second epoch of the same pre-training data?":**
No. The hidden pre-training is not from the second epoch. It is based on different data from the same distribution. For example, in our experiment, we continually pre-train the model on the remaining portion of the PT dataset, FineWeb.
[1] Henderson, Peter, et al. "Pile of law: Learning responsible data filtering from the law and a 256gb open-source legal dataset." Advances in Neural Information Processing Systems 35 (2022): 29217-29234.
[2] Fei, Zhaoye, et al. "Query of cc: unearthing large scale domain-specific knowledge from public corpora." arXiv preprint arXiv:2401.14624 (2024).
[3] Hu, Shengding, et al. "Minicpm: Unveiling the potential of small language models with scalable training strategies." arXiv preprint arXiv:2404.06395 (2024).
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanation. I trust that the author will resolve the clarity issue in future revisions. I believe this can be easily resolved, so I raise my score. | Summary: This paper explores the learning dynamics of continual pre-training and proposes scaling laws for the same. Based on the proposed scaling laws, the authors discuss several critical factors in continual pre-training. Overall, this is an educational paper to understand how continual pre-training works.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no proofs in the paper. There are several claims about the scaling laws which are derived from other works. I did not check the correctness of the scaling laws since I am not an expert in scaling laws. I will let other reviewers judge this aspect.
Experimental Designs Or Analyses: Yes. The experimental designs in this paper are well thought out. There is a lot of thorough analysis.
Supplementary Material: No.
Relation To Broader Scientific Literature: The authors have done a good job of relating the work to the existing works.
Essential References Not Discussed: Nothing I am aware of.
Other Strengths And Weaknesses: This paper explores the effect of several hyperparameters including learning rate, replay ratio during continual pre-training. Overall, this is a great educational paper to learn more about the learning dynamics of CPT.
Other Comments Or Suggestions: 1. "loss potential" and "turning length" are used in the introduction without proper definitions, so it is hard to understand the last paragraph of section 1.
2. In page 7, for Fig 13, it is worth mentioning that the figure is in the appendix.
Questions For Authors: 1. In page 5, you mention "models with higher loss potential achieve lower final losses". Why is that the case?
2. Finding-3 is not clear. What do you mean by releasing the high loss potential version?
3. Finding-4 is also not clear to me. Can you please explain it?
4. How do you set lambda_1 and lambda_2 in equation 5? You mention that it is based on practical requirements. But it is not clear what these requirements could be.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful suggestions and valuable feedback.
**Response to "Comments Or Suggestions 1 about loss potential and turning length":**
We apologize for the confusion caused by how these terms are organized in our paper and will address these concerns in the final version.
For loss potential, we define in line 187 that it "captures the potential of future loss drop via LR annealing", which corresponds to the height in Figure 4(c). For example, if the pre-training implements substantial annealing and ends at the valley bottom in Figure 4(c), the remaining annealing room, or potential, for CPT is diminished. Quantitatively, we define loss potential as the ratio of the final learning rate of the pre-training annealing phase to the initial or maximum learning rate in the pre-training phase: $ loss\ potential = \frac{Final\ LR\ of\ PT}{Maximum\ LR\ of\ PT} $, which is the same as the legend in Figure 5(b)(e) and the x-axis of Figure 5(c)(f). In Figure 5(a)(d), if the pre-training model anneals from 2e-4 to 1e-4 using a linear schedule, the loss potential of this model is 50%.
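As a minimal illustration of this quantitative definition (a sketch of the ratio described in the rebuttal; the function name is ours, not from the paper):

```python
def loss_potential(max_lr: float, final_lr: float) -> float:
    """Loss potential as defined in the rebuttal: the ratio of the final
    learning rate of the pre-training annealing phase to the peak
    (maximum) pre-training learning rate."""
    if max_lr <= 0:
        raise ValueError("max_lr must be positive")
    return final_lr / max_lr

# Example from the rebuttal: linear annealing from 2e-4 to 1e-4
print(loss_potential(2e-4, 1e-4))  # → 0.5, i.e. 50% loss potential
```

A model annealed all the way to a near-zero learning rate would accordingly have a loss potential near 0%, leaving little annealing room for CPT.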
For turning length, we define it in line 315 as "The minimum training steps required to achieve a lower loss are designated as the turning length." We also illustrate the meaning of this term in Figure 7(c). Qualitatively, once the CPT steps reach the turning length, the validation loss on the pre-training dataset reverts to the same loss as at the very beginning of CPT, as shown in Figure 7(c).
**Response to "Comments Or Suggestions 2 about Figure 13":**
We will implement appropriate revisions for Figure 13 in the final version of the paper.
**Response to "Q1: Why is models with higher loss potential achieve lower final losses":**
The statement "models with higher loss potential achieve lower final losses" indicates that pre-training models with higher loss potential can attain lower CPT validation losses and demonstrate superior adaptation to the CPT dataset.
The intuitive understanding is that models with higher loss potential can allocate more annealing to the CPT phase, which enables them to achieve a lower D_cpt validation loss. The more specific reasoning behind this finding is explained in lines 255-265 of our paper. This finding is substantiated by the actual training loss trajectories of models with different loss potentials, as illustrated in Figures 5(b) and 5(e), as well as by the predicted loss values derived from our CPT law, as depicted in Figures 5(c) and 5(f).
**Response to "Q2: What do you mean by releasing the high loss potential version":**
Based on the above explanation, models with higher loss potential can better adapt to the CPT dataset. However, current open-source models, such as Qwen or Llama, typically employ extensive learning rate annealing that reduces the learning rate to near-zero or minimal values, resulting in low loss potential. If high loss potential variants of these models (i.e., those trained without learning rate annealing) were released, researchers and practitioners could better adapt them for CPT or downstream tasks.
**Response to "Q3: Explain Finding 4":**
The turning length is affected by: (1) the distribution distance between the PT and CPT data; and (2) the sufficiency of pre-training. If the distribution distance is large or the pre-training is sufficient, the turning length becomes larger, potentially even infinite. The detailed explanation is in lines 308-315 and Figure 7(c). If you have further questions about Finding 4, feel free to comment further.
**Response to "Q4: How do you set lambda_1 and lambda_2 in equation 5?":**
It can be considered in two scenarios.
In some situations, we can allocate a percentage ratio between the validation losses on the PT and CPT datasets based on our prior knowledge of the relative importance of general and downstream performance.
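As a rough sketch of this first scenario (under the assumption, suggested by the rebuttal's mention of "linear coefficients", that Equation 5 combines the two validation losses linearly; the function name and weight values here are hypothetical):

```python
def combined_objective(loss_pt: float, loss_cpt: float,
                       lam1: float = 0.3, lam2: float = 0.7) -> float:
    """Hypothetical weighted objective in the spirit of Equation 5:
    lam1 and lam2 encode the relative importance of general (PT)
    vs. downstream (CPT) validation performance."""
    return lam1 * loss_pt + lam2 * loss_cpt

# A downstream-focused setting weights the CPT validation loss more heavily.
print(combined_objective(2.8, 2.1))  # ≈ 0.3*2.8 + 0.7*2.1 = 2.31
```

In the second scenario the rebuttal describes, the coefficients would instead be precomputed from dataset similarity rather than chosen by hand.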
Otherwise, for a specific test set that we focus and optimize, we could always precompute the linear coefficients based on dataset similarity, as demonstrated in Section 5.2. | Summary: This paper performs an empirical study of loss curves during continual pre-training. The paper seeks to characterize how the training loss on a new dataset will evolve when a pretrained model is subjected to new data in a CPT setup. To that end, the authors present a series of experimental setups showing how loss behaves in different scenarios. CPT loss curve is shown to be decomposable into previously characterized scaling law with learning rate annealing and a novel power-law term for capturing distribution shift between pretraining and continued pre-training data domains. The authors show in many setups that the laws proposed fit nicely the empirical data. The authors discuss the formulation and fitting of the laws.
Claims And Evidence: The claims of the authors are limited to the well-fittedness of their law-formulation to their empirical data. The data presented seem clear enough. The authors make scant claims of the effectiveness of their method compared to any baseline, and the claims they do make are not substantiated with experimentation, but rather exist as almost side-comments.
Methods And Evaluation Criteria: The paper does not really present baselines as it does not really present experiments. There are no figures or tables showing comparison to any other method. Rather, the paper presents empirical findings showing the proposed law fits nicely the data drawn from the various experimental setups.
Theoretical Claims: No theoretical claims present. The authors note this in limitations.
Experimental Designs Or Analyses: This is a peculiar paper in that it does not really have experiments per se. The authors present empirical results on the setups they designed and show that their 'law' formulation is a good fit for the empirical data. But they do not apply this 'law' towards solving any extant articulated problem; there is no 'baseline' against which a novelty is compared. A stronger version of the paper might say something like "this allows us to more efficiently find CPT hyperparams which lead to good performance" and then show experiments against a baseline. Another potential experiment would be fitting the laws on some setup and extrapolating the fit laws to other setups, showing generalization across datasets or models.
Supplementary Material: No.
Relation To Broader Scientific Literature: CPT is a currently topical method for analysis, as it is widely in use with LLMs. The 'laws' proposed in this work follow a general trend of identifying power laws in various machine learning settings to help make predictions about model performance, oftentimes with the goal of restricting the hyperparameter space of training and thus increasing the efficiency of producing useful models. This work evaluates laws which seem to fit nicely the data presented, though the contribution of those laws towards improving efficiency is left unstated and unexplored.
Essential References Not Discussed: Not to this reviewers limited knowledge.
Other Strengths And Weaknesses: Clarity leaves much to be desired. The first of two "Research Questions" in the intro reads "can we find such an accurate law containing as many variables that affect the final CPT performance as possible?". This can be greatly improved. Notions 'loss potential' and 'turning length' are used before being explained in the introduction. Throughout, concepts are poorly articulated, making unclear in sections what the authors are attempting to communicate.
Significance is somewhat unclear. It may be interesting that the "laws" described fit the data nicely, but it is left unstated as to why this may be *useful*. There is not a clear statement of what problem may be solved by the description of the proposed "laws" and accordingly there is no experiment which demonstrates that problem as baseline and some methodological solution. Some of the "Findings" are of questionable value. #5 seems to be a series of uncontroversial statements which are so general as to be obvious.
Many works already exist fitting power laws to various aspects of machine learning, so originality of the meta-methodology is limited. The presented empirical analysis seems reasonable, if limited. The utility of the method is unclear.
Other Comments Or Suggestions: The fundamental issue with this work is that it is a empirical study with neither theoretical analysis or demonstrated pragmatic value. The authors propose a law for continual pre-training loss evolution and show that this law fits nicely, but do not explicate or demonstrate why anyone might want to know about it, or how anyone might make use of it. The paper reads almost like an extended analysis section, missing experiments comparing to any baseline or a framing of the method which answers the question 'so what?'. For what the paper is, the analysis seems sound and the law seems to fit the data well. Whether or not this alone is sufficient for publication is dubious. It seems addressing the above concern could feasibly produce a much stronger presentation of this work.
Questions For Authors: In the authors' view, what is a practitioner who reads this work and is now familiar with the proposed 'laws' to do differently? What problem is being solved, and what evidence is there that it has been?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your suggestions and would also like to acknowledge the thorough and constructive feedback, which helps strengthen our work. We summarize and respond to it as follows.
**Response to "Our work is empirical":**
Our work is indeed empirical. However, most scaling laws in LLMs are empirical, such as the OpenAI [1] scaling law and the Chinchilla [2] scaling law. We believe that it is common practice in developing scaling laws of LLMs.
Moreover, our scaling law can be applied to many practical scenarios.
We have conducted extensive experiments to demonstrate the validity of our empirical law, including different learning rate schedules in Figure 3 and Figure 10, various model sizes in Appendix E, and different CPT datasets or replay ratios in Appendix D.
**Response to "No baselines":**
We are the first to propose the CPT law that traces the learning dynamics of LLMs throughout the CPT process and we model the full loss curve during CPT. Our CPT law considers numerous variables that affect CPT performance, which was not addressed in previous works. Consequently, we cannot compare the accuracy of fitting or prediction with other baselines due to the absence of comparable approaches. Nevertheless, to validate our CPT law, we have compared it with alternative law formulations as baselines, as detailed in Appendix I.
**Response to "Contribution of laws", "Why this may be useful", "What problem may be solved", "No extant articulated problem" and "Questions: what is a practitioner who reads this work and is now familiar with the proposed 'laws' to do differently?":**
Although our scaling law is empirical, it can help researchers understand the dynamics of continual pre-training, as noted by several reviewers. Reviewer nT6s states: "This work sets a stage for subsequent research for scaling to larger model scales during continual pre-training." Similarly, Reviewer 5BNQ observes: "Overall, this is an educational paper to understand how continual pre-training works," while Reviewer oHxM acknowledges: "These contributions are unique and not seen in prior works." Our scaling law especially assists researchers in understanding the impact of various hyperparameters on the complete learning dynamics. More specifically, our scaling law can be fitted with very few CPT data points and accurately predicts the loss across diverse hyperparameters (e.g., the LR schedule), as shown in Figure 10. This enables efficient optimization of the training hyperparameters (Figure 8) without extensive search procedures, thereby conserving computational resources.
**Response to "Findings seems to be a series of uncontroversial statements which are so general as to be obvious":**
We believe that some findings in our paper may not be so obvious to certain researchers. For the seemingly obvious ones, our scaling law validates them, which further demonstrates the correctness of our scaling law and the reliability of these findings. Most importantly, our scaling law not only provides a qualitative analysis of these findings but also enables quantitative analysis of each one. For example, it can predict the optimal loss potential, peak LR, and other hyperparameters, as shown in Figure 8.
**Response to "Loss Potential" and "Turning Length":**
We apologize for the confusion caused by how these terms are organized in our paper and will address these concerns in the final version.
For loss potential, we define in line 187 that it "captures the potential of future loss drop via LR annealing", which corresponds to the height in Figure 4(c). For example, if the pre-training implements substantial annealing and ends at the valley bottom in Figure 4(c), the remaining annealing room, or potential, for CPT is diminished. Quantitatively, we define loss potential as the ratio of the final learning rate of the pre-training annealing phase to the initial or maximum learning rate in the pre-training phase: $ loss\ potential = \frac{Final\ LR\ of\ PT}{Maximum\ LR\ of\ PT} $, which is the same as the legend in Figure 5(b)(e) and the x-axis of Figure 5(c)(f). In Figure 5(a)(d), if the pre-training model anneals from 2e-4 to 1e-4 using a linear schedule, the loss potential of this model is 50%.
For turning length, we define it in line 315 as "The minimum training steps required to achieve a lower loss are designated as the turning length." We also illustrate the meaning of this term in Figure 7(c). Qualitatively, once the CPT steps reach the turning length, the validation loss on the pre-training dataset reverts to the same loss as at the very beginning of CPT, as shown in Figure 7(c).
[1] Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020).
[2] Hoffmann, Jordan, et al. "Training compute-optimal large language models." arXiv preprint arXiv:2203.15556 (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough response. In light of the points made, I adjust my score upwards. | Summary: The paper explores learning dynamics in Continual Pre-Training (CPT) for LLMs, focusing on how general and downstream domain performance evolves at each training step, using validation losses. The paper observes that the CPT loss curve represents a transition between hidden loss curves, influenced by distribution shift and learning rate annealing. The paper empirically derives a CPT scaling law combining these factors, enabling loss prediction at any training step and across learning rate schedules in CPT. This scaling law sheds light on key CPT factors, including loss potential, learning rate, training steps, and replay ratio. The proposed approach enables customizing training hyper-parameters for different CPT goals, such as balancing general and domain-specific performance. Lastly, the paper extends the scaling laws to more complicated scenarios such as out-of-domain datasets and models with unknown pre-training details.
## update after rebuttal
Most of my concerns were addressed in the rebuttal response. I hope remaining clarity issues are fixed in the revision. Considering all feedback, I stand by my score and recommend acceptance.
Claims And Evidence: The paper’s claims are well-supported by the provided evidence.
1. A CPT scaling law (Equation 4) is derived and validated by decomposing the CPT loss curve (Figures 1, 2). Experiments (Figure 3) demonstrate effectiveness for small models (0.1-1B parameters) across various training settings, though generalizability to larger models is a concern.
2. The paper analyzes key CPT factors (loss potential, distribution distance, learning rate, CPT steps) and their impact (Section 4, Figures 5-7).
3. Figure 8 demonstrates the approach’s adaptability for customizing hyperparameters to specific CPT goals, with more detailed discussions in Section 5.
4. The scaling law is shown to extend to OOD datasets (Subsection 5.2, Figure 9) and open-source models with unknown training details (Appendix G).
In summary, the paper provides substantial empirical support for its claims, including the empirical scaling laws. The analysis is detailed, and limitations regarding model scales are acknowledged.
Methods And Evaluation Criteria: Based on the paper, the proposed methods and evaluation criteria make sense for the problem and application at hand. The reason being:
1. The paper analyzes and models learning dynamics in CPT of large language models, focusing on performance evolution across general and downstream domains during CPT, analyzing catastrophic forgetting.
2. The paper introduces a CPT scaling law combining distribution shift and learning rate annealing to predict loss at any training step and across learning rate schedules, modeling key CPT factors like loss potential, learning rate, training steps, and replay ratio.
3. The paper uses validation loss of corresponding domains to trace performance changes, a standard practice in language modeling and continual learning, and validates the scaling law using different learning rate schedules, datasets, and model sizes.
4. The paper uses standard datasets like FineWeb and Knowledge-Pile, enabling comparisons and ensuring relevance.
Theoretical Claims: The paper derives scaling laws inspired by existing works and validates them by fitting all (continual) pre-training loss curves with different learning-rate schedules. So it is mainly an empirical paper.
Experimental Designs Or Analyses: The paper generally employs sound experimental designs. The paper considers multiple setups with variations in training settings—learning rate schedules, model size, continual pre-training datasets, and hyperparameters—to test the robustness of the CPT scaling law. The paper uses established datasets (Fineweb, Knowledge-Pile) and standard evaluation metrics (validation loss). The analysis of the CPT loss curve and the validation of the scaling law through fitting empirical loss curves are appropriate (with details about the fitting procedure in Appendix C).
However, a key limitation is the lack of rigorous theoretical analysis and proof for the CPT scaling law, as acknowledged in the paper. The conclusions regarding model size are also based on assumptions, as the experiments did not reach the scale of current large language models and are limited to 0.1-1B parameter models.
Supplementary Material: Primarily reviewed Appendix G in detail as it concerns employing the derived CPT scaling laws to more realistic open-source pre-trained models with unknown training details.
Relation To Broader Scientific Literature: The paper analyzes and models the learning dynamics in Continual Pre-Training (CPT) of large language models, especially understanding how the performance of these models evolves across general and downstream domains during the domain-specific adaptation process. This is a critical issue in continual learning, as models often suffer from catastrophic forgetting, where they lose previously learned information when learning new tasks. So, identifying key operating points like optimal replay ratios, starting loss potentials, and learning rate schedules is very important. Although some of these details are well-known and intuitive, the paper takes a stab at defining them, deriving scaling laws for the same, and employing them even in more realistic scenarios like open-source pre-training language models with unknown training details. This work sets a stage for subsequent research for scaling to larger model scales during continual pre-training.
Essential References Not Discussed: The paper is organized well, citing all relevant prior work and detailing how the proposed work draws inspiration from and differs from existing research. To my knowledge, the paper does not omit any important related works or works that would hinder an understanding of the current paper.
Other Strengths And Weaknesses: The paper addresses the important and timely problem of understanding continual pre-training, a prevalent technique where a generic pre-trained model is further pre-trained with a domain-specific corpus for particular use cases. The paper is well-written, clearly positions itself within existing work, is easy to follow, and includes extensive experimentation to support the claims.
As discussed previously, two main weaknesses are the paper’s focus on smaller model scales (0.1-1B parameters), raising questions about the generality of findings to larger models, and the limited theoretical foundation for the derived scaling laws, as they are primarily empirical. However, the latter does not constitute a reason to reject the paper.
Other Comments Or Suggestions: 1. It would help to formally define “loss potential” for better readability and understanding of this introduced concept.
2. In Figure 5, there are few legends that are 0% loss potential while some are 10% loss potential. Is there a typo somewhere?
Questions For Authors: 1. What do 0-100% loss potential values mean? What is the baseline used to compute this percentage?
2. In Finding 3, the paper notes that “PT models with higher loss potential always achieve lower $D_{cpt}$ validation losses…”. However, "loss potential" is relative to $D_{pt}$ validation loss or $D_{cpt}$ validation loss. It is unclear from the context which loss is being referred to in this finding.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of our work.
**Response to "Limited theoretical foundation for the derived scaling laws":**
Our work is indeed empirical. However, most scaling laws in LLMs are empirical, such as the OpenAI [1] scaling law and the Chinchilla [2] scaling law. We believe that this is common practice in developing scaling laws for LLMs.
[1] Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020).
[2] Hoffmann, Jordan, et al. "Training compute-optimal large language models." arXiv preprint arXiv:2203.15556 (2022).
**Response to "Smaller model scales (0.1-1B parameters)":**
The largest model size in our experiments is 1.7B parameters (excluding embeddings), and we also conduct experiments with LLaMA-3.2-1B in Appendix G. We believe this model size is reasonable in the current context of large language models. Due to resource constraints, we are unable to train larger models. Our scaling law has demonstrated consistent patterns across these model sizes (0.1-1.7B), so we believe it is also applicable to larger models.
**Response to "Comments Or Suggestions 1" and "Q1" about "Loss Potential":**
We apologize for the confusion caused by how these terms are organized in our paper and will address these concerns in the final version.
For loss potential, we define in line 187 that it "captures the potential of future loss drop via LR annealing", which corresponds to the height in Figure 4(c). For example, if the pre-training implements substantial annealing and ends at the valley bottom in Figure 4(c), the remaining annealing room, or potential, for CPT is diminished. Quantitatively, we define loss potential as the ratio of the final learning rate of the pre-training annealing phase to the initial or maximum learning rate in the pre-training phase: $ loss\ potential = \frac{Final\ LR\ of\ PT}{Maximum\ LR\ of\ PT} $, which is the same as the legend in Figure 5(b)(e) and the x-axis of Figure 5(c)(f). For example, in Figure 5(a)(d), if the pre-training model anneals from 2e-4 to 1e-4 using a linear schedule, the loss potential of this model is 50%.
**Response to "Comments Or Suggestions 2: typo in figure 5":**
We apologize for the typo in the legend of Figure 5(a); the correct value is 10% loss potential, not 0%. We will revise it in the final version.
**Response to "Q2: loss potential is relative to D_pt validation loss or D_cpt validation loss":**
Based on the definition of $loss\ potential$ above, the loss potential is related neither to the D_PT validation loss nor to the D_CPT validation loss; rather, it represents the annealing potential of the model. Our findings indicate that pre-trained models with higher loss potential can achieve lower D_CPT validation losses, suggesting that these models can better adapt to downstream datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanation. I hope the authors will resolve the clarity issues in future revisions. Having considered all the reviews and the authors' comments, I will maintain the current score. | null | null | null | null | null | null |
PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting for Novel View Synthesis | Accept (poster) | Summary: This paper introduces PF3Plat, a novel two-stage framework for novel view synthesis from sparse and unposed images. In the first stage, it leverages pre-trained depth estimation and visual correspondence models to achieve a coarse alignment of 3D Gaussians. In the second stage, it refines depth and pose estimation using geometry-aware modules.
## update after rebuttal
I remain positive due to the impressive experimental results.
Claims And Evidence: The claims are well-supported by extensive experimental results.
Methods And Evaluation Criteria: The paper presents a highly practical solution for novel view synthesis from unposed images.
Theoretical Claims: To the best of my knowledge, the theoretical claims appear to be correct.
Experimental Designs Or Analyses: In Table 4, the baseline model is MVSplat with pose estimation from Stage 1. Why not use MVSplat with MASt3R, given that this method falls behind MASt3R in camera pose estimation?
Supplementary Material: I've reviewed the supplementary material.
Relation To Broader Scientific Literature: This paper is related to novel view synthesis, pose estimation, and 3D reconstruction.
Essential References Not Discussed: To the best of my knowledge, none are missing.
Other Strengths And Weaknesses: This paper assumes known camera intrinsics, whereas DUSt3R does not have this requirement.
For camera pose estimation, this method underperforms compared to MASt3R, which does not require additional training on those datasets.
The improvement over InstantSplat is limited.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Why not use MVSplat with MASt3R, given that this method falls behind MASt3R in camera pose estimation?
We opted not to use MVSplat with MASt3R because MASt3R's runtime of around 10 seconds conflicts with our goal of achieving a fast, feed-forward process. Moreover, our experiments include a variant, MASt3R*, which employs PnP+RANSAC for pose estimation similar to our approach, yet MASt3R* performs worse than both our baseline and final model, as shown in Tab. 2. We believe these reasons justify our design choice for the baseline.
Additionally, we wish to highlight that we have shown comparisons that outperform MASt3R. We kindly refer the reviewer to ***Tab. 6***, where RoMa outperforms MASt3R, and our variant that uses RoMa enables further improvements in both image and pose estimation quality. Through this experiment, we highlight the flexibility of model choice for coarse alignment stage and a potential for performance improvements.
> This paper assumes known camera intrinsics, whereas DUSt3r does not have this requirement.
We thank the reviewer for highlighting this point. As discussed in Sec. A.6, we acknowledge that assuming known camera intrinsics is a limitation common to many NVS methods. Nevertheless, we conducted an additional experiment using UniDepth's intrinsic predictions instead of ground truth, and the results indicate that our method still performs well without GT intrinsics.
| Method | PSNR | SSIM | LPIPS |
|------------------------|---------|--------|-------|
| CoPoNeRF | 19.536 | 0.638 | 0.398 |
| Ours w/o GT intrinsics | 22.688 | 0.733 | 0.201 |
| Ours | 23.589 | 0.782 | 0.181 |
From the results, we highlight that even without GT intrinsics, our method outperforms CoPoNeRF and performs on par with our model using GT intrinsics. This suggests a feasible approach to alleviating the limitation.
Moreover, our primary focus is on generalized novel view synthesis from unposed images—where pose, depth, and correspondence estimation serve as intermediate tasks to achieve high-quality novel view rendering. In contrast to SfM-based methods such as DUSt3R, VGGSfM, and SP+SG, our approach differs in terms of data, objectives, and training setups. Notably, methods like DUSt3R and MASt3R require additional steps (e.g., training a radiance field via NeRF or 3D Gaussian Splatting) and incur long optimization times per scene.
> The improvement over InstantSplat is limited.
We thank the reviewer for the comment. As shown in Tab. 5, our method outperforms InstantSplat by a large margin, and our inference speed is 100x faster, demonstrating a significant practical advantage. Moreover, when a more advanced correspondence network is adopted for coarse alignment, or when test-time optimization (as in InstantSplat) is performed at inference, our method further widens the gap, as shown below:
| Method | PSNR | SSIM | LPIPS | Rot Avg | Rot Med | Trans Avg | Trans Med | Time |
|--------------|--------|-------|-------|---------|---------|-----------|-----------|-------|
| InstantSplat | 23.079 | 0.777 | 0.182 | 2.693 | 0.882 | 11.866 | 3.094 | 53 |
| Ours | 23.589 | 0.782 | 0.181 | 1.756 | 0.897 | 9.474 | 4.628 | 0.390 |
| Ours+TTO | 24.689 | 0.798 | 0.167 | 1.662 | 0.871 | 8.998 | 4.311 | 24 |
| Ours + RoMa | 24.412 | 0.799 | 0.167 | 2.152 | 0.501 | 7.544 | 3.233 | 0.523 |
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions. Based on the its efficiency and performance on novel view synthesis, i am leaning toward accepting this paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our rebuttal and for the appreciation of our work. If we have successfully addressed all of your concerns, we would be highly grateful if the reviewer could consider increasing the rating, as it would significantly support our submission. | Summary: This paper proposes a novel 3D Gaussian Splatting prediction model based on sparse views. Starting from a coarse initialization using off-the-shelf depth and correspondence models, the proposed fine alignment module predicts the correct scale for the depth map and camera poses, and finally, Gaussian heads predict the 3DGS parameters. The results demonstrate improved performance over baseline methods.
Claims And Evidence: There are several concerns regarding the claim:
- I do not think the proposed method qualifies as a “feed-forward” approach, because the coarse initialization stage employs a test-time iterative optimization for initial pose estimation. A feed-forward model typically involves directly regressing scene parameters in an end-to-end manner, but the proposed method is more like an engineered system that combines several off-the-shelf pre-trained models and iterative optimization modules, raising questions about its elegance and novelty.
- The authors assume that camera intrinsics are generally available, which is not always the case for generic video analysis tasks.
Methods And Evaluation Criteria: The benchmark datasets are reasonable; however, the study overlooks some state-of-the-art baselines, such as NoPoSplat and MASt3R-SfM, both of which had publicly available code well before the ICML submission deadline.
Theoretical Claims: Checked.
Experimental Designs Or Analyses: Checked.
Supplementary Material: Checked.
Relation To Broader Scientific Literature: As noted in the "Methods and Evaluation Criteria" section, some important baselines are missing.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: While the method demonstrates a practical system design and solid performance on benchmark datasets, it appears to be a composite system that lacks significant scientific insight.
Other Comments Or Suggestions: No
Questions For Authors: Already commented questions and concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: > Questions about “feed-forward” approach, because there is a module for a test-time iterative optimization for initial pose.
We wish to refer to reviewer k81v’s appreciation that ***“feed-forward” in our context emphasizes efficiency and speed*** (see the comparison to InstantSplat). Although our coarse initialization stage employs iterative optimization, specifically PnP+RANSAC, which is recognized as fast and efficient and takes only milliseconds (as also noted in NoPoSplat), the subsequent fine alignment module is fully feed-forward and represents a novel contribution. This fine alignment refines the coarse pose to correct 3D Gaussian misalignments, leading to significant performance gains (see Tab. 4). Leveraging off-the-shelf models for coarse alignment allowed us to focus on this key challenge (our novelty in this aspect of coarse alignment is also recognized by reviewers TAPU and EyLp), and we believe that our framework is novel and addresses a very important task in computer vision, as appreciated by reviewers K81V and EyLp. We hope that this minor terminology point will not impact the overall evaluation of our work.
> The authors assume that camera intrinsics are generally available, which is not always the case for generic video analysis tasks.
While we acknowledge that there are some corner cases where intrinsics might not be available, ***this limitation is common to many pose-free feed-forward NVS methods, as discussed in Sec. A.6.*** Nonetheless, when using intrinsic estimates from UniDepth at inference, our method still outperforms CoPoNeRF as shown below. We wish to stress that in practical applications, intrinsics are generally available, and even in cases where they are not, our approach maintains competitive performance relative to other methods facing the same limitation.
| Method | PSNR | SSIM | LPIPS |
|------------------------|---------|--------|-------|
| CoPoNeRF | 19.536 | 0.638 | 0.398 |
| Ours w/o GT intrinsics | 22.688 | 0.733 | 0.201 |
| Ours | 23.589 | 0.782 | 0.181 |
From the results, we highlight that even without GT intrinsics, our method outperforms CoPoNeRF and performs on par with the variant using GT intrinsics. This suggests a feasible approach to alleviating the limitation.
> The study overlooks some state-of-the-art baselines, such as NoPoSplat and MASt3R-SfM, both of which had publicly available code well before the ICML submission deadline.
We thank the reviewer for highlighting NoPoSplat and MASt3R-SfM. As noted by reviewer EyLp, and as the reviewer guidelines explicitly state, ***NoPoSplat is a concurrent work—released on arXiv less than three months before our ICML submission and accepted by ICLR three days prior—so we are not obliged to compare against it***, though it is acknowledged in the related works (nonetheless, we provide an additional comparison below, as the reviewer kindly suggests). Moreover, MASt3R-SfM is an arXiv paper and, while it shares similar objectives in 3D reconstruction, it addresses a different task than our pose-free feed-forward NVS (its sparse-view variant (MASt3R) is already compared with our method in Tab. 2).
Below includes the comparison:
| Methods | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| CoPoNeRF | $\checkmark$ | 19.536 | 0.398 | 0.638 |
| NoPoSplat | $\checkmark$ | 26.820 | 0.126 | 0.879 |
| DBARF | $\times$ | 14.789 | 0.490 | 0.570 |
| FlowCam | $\times$ | 18.242 | 0.597 | 0.455 |
| Ours | $\times$ | 23.589 | 0.181 | 0.782 |
While NoPoSplat performs well, as indicated in the table, we stress that our method clearly differentiates itself in that 3D geometry data, such as the GT camera pose, is not utilized during training. Moreover, we show in the following that our method can be naturally extended to the N-view setting, where NoPoSplat struggles.
| Methods (6 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 18.007 | 0.384 | 0.584 |
| Ours | $\times$ | 27.028 | 0.116 | 0.879 |
| Methods (12 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 17.625 | 0.399 | 0.583 |
| Ours | $\times$ | 28.133 | 0.099 | 0.993 |
To summarize, our approach offers key advantages compared to NoPoSplat: it is a conceptual contribution proposing a general pipeline for pose-free view synthesis that addresses 3D Gaussian misalignment using off-the-shelf depth and correspondence networks and fine alignment modules; it naturally extends to N-view inference without retraining; and it is designed for scenarios without 3D geometry data (GT camera pose) during training. We will additionally include this discussion in the related works.
## update after rebuttal
The contribution of the work mainly relies on the refinement module and new losses, which I agree is incremental rather than substantial. I also encourage the authors to include more analysis/qualitative experiments in the ablation studies to better demonstrate their proposed method. Given the good empirical performance, I decided to maintain my score.
Claims And Evidence: Claims are supported by qualitative and quantative results of novel view synthesis and pose estimation on RealEstate10K, ACID, and DL3DV datasets.
Methods And Evaluation Criteria: Utilizing pretrained models and refining them to predict 3D Gaussians makes sense. Evaluation includes both indoor and outdoor real-world datasets, following pixelSplat [1], MVSplat [2], and NoPoSplat [3].
[1].Charatan, D., Li, S., Tagliasacchi, A., and Sitzmann, V. pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. arXiv:2312.12337, 2023.arXiv preprint
[2].Chen, Y., Xu, H., Zheng, C., Zhuang, B., Pollefeys, M., Geiger, A., Cham, T.-J., and Cai, J. Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. arXiv preprint arXiv:2403.14627, 2024.
[3].Ye, B., Liu, S., Xu, H., Li, X., Pollefeys, M., Yang, M.-H., and Peng, S. No pose, no problem: Surprisingly simple 3d gaussian splats from sparse unposed images. arXiv preprint arXiv:2410.24207, 2024.
Theoretical Claims: In the last equation of Eq. (2), t is missing the subscript i. The architecture of T_agg is not specified.
Experimental Designs Or Analyses: 1. The experimental design on unposed triplets, with the test set divided into small, middle, and large subsets based on the overlap between input views, follows prior work.
2. Separating the pose-free and pose-required methods in the results is reasonable.
3. Experiments and analyses in comparison to scene-optimization approach, speed, and cross-domain results are valid.
Supplementary Material: I reviewed the appendix and the videos in supplementary materials.
Relation To Broader Scientific Literature: The key contributions of the paper relate to how to leverage pretrained models and refine their estimations to predict 3D structures. Certain design choices, e.g. network architecture, cost volumes, training losses contribute the performance. The pose-free setting can be applied to in-the-wild scenarios and general 3D reconstruction. Feed-forward manner relates to the efficiency and speed of such methods.
Essential References Not Discussed: Another pose-free feed-forward Gaussian Splatting paper FreeSplatter[1] not discussed in the paper.
[1] Xu, Jiale, Shenghua Gao, and Ying Shan. "FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction." arXiv preprint arXiv:2412.09573 (2024).
Other Strengths And Weaknesses: Strengths
1. The task is relevant and important in the field, and the paper is overall well-written and easy to follow.
2. The proposed coarse and refinement pipeline together with other design choices mentioned above is effective and outperform other pose-free methods.
Weaknesses
1. The contribution mainly relies on the refinement modules. While they are effective, much of the work builds on existing works or ideas, such as UniDepth [1], LightGlue [2], and cost volumes [3].
2. NoPoSplat [4] is acknowledged but not evaluated and compared.
3. More qualitative analysis is expected for the ablation study.
[1]. Piccinelli, L., Yang, Y.-H., Sakaridis, C., Segu, M., Li, S., Van Gool, L., and Yu, F. Unidepth: Universal monoc- ular metric depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10106–10116, 2024
[2]. Lindenberger, P., Sarlin, P.-E., and Pollefeys, M. Lightglue: Local feature matching at light speed. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 17627–17638, 2023.
[3]. Chen, Y., Xu, H., Zheng, C., Zhuang, B., Pollefeys, M., Geiger, A., Cham, T.-J., and Cai, J. Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. arXiv preprint arXiv:2403.14627, 2024.
[4]. Ye, B., Liu, S., Xu, H., Li, X., Pollefeys, M., Yang, M.-H., and Peng, S. No pose, no problem: Surprisingly simple 3d gaussian splats from sparse unposed images. arXiv preprint arXiv:2410.24207, 2024.
Other Comments Or Suggestions: 1. In Table 4, I assume (I-III) and (I-IV) are the 2D-3D and 3D-3D consistency losses. The naming is somewhat confusing. These losses are essential to achieving good quality based on the results. In the current writing, however, they are not introduced in the earlier part of the paper.
2. In Table 4, Please explain (I-II) Scale/Shift Tuning Depth Network in the paper.
3. There is a typo "sprase" in line 154.
Questions For Authors: 1. Around lines 211 and 212, do the inputs to the 3D Gaussian parameter prediction also include the refined pose? I assume that gradients can also flow from the refined depth input. Is this the case?
2. Following question 1, could you explain more about the intuition behind the confidence estimation and what exactly it contributes to the reconstruction overall?
3. How did you determine the lambda for 3D-3D loss?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: > FreeSplatter not discussed in the paper.
We thank the reviewer for highlighting FreeSplatter[1]. FreeSplatter addresses the same task using an LRM-based architecture that directly maps images to 3D Gaussians. To tackle coarse alignment, it employs staged training with early supervision from 3D geometry data (ground-truth pointmaps constructed using pose and depth), which is essential for convergence. This supports our claim that misaligned 3D Gaussians hinder learning, and contrasts with our approach, which incorporates a dedicated coarse alignment module. We will include this discussion in the related works section.
> The contribution mainly relies on the refinement modules. While it is effective, much of the work builds on existing works or ideas.
While our approach leverages existing methods for coarse alignment (an approach recognized as creative by reviewer TAPU), our main technical contributions lie in the novel fine-alignment modules. In addition to our findings identifying a unique challenge in pose-free feed-forward 3DGS, we address the challenge by proposing new losses, fine alignment modules, and a confidence estimation module. ***As demonstrated in Table 4, each component significantly contributes to performance improvements***, underscoring the technical innovation of our work.
> NoPoSplat [4] is acknowledged but not evaluated and compared.
As mentioned by the reviewer EyLp, ***NoPoSplat is a concurrent work released on arXiv less than three months ago and accepted only three days before our submission.*** Moreover, as shown in Table 2, fair comparison is challenging since other works (CoPoNeRF and NoPoSplat) use additional supervisory data (GT camera pose) during training. Nonetheless, we agree that including NoPoSplat as a baseline would further enhance our work. Below includes the comparison:
| Methods | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| CoPoNeRF | $\checkmark$ | 19.536 | 0.398 | 0.638 |
| NoPoSplat | $\checkmark$ | 26.820 | 0.126 | 0.879 |
| DBARF | $\times$ | 14.789 | 0.490 | 0.570 |
| FlowCam | $\times$ | 18.242 | 0.597 | 0.455 |
| Ours | $\times$ | 23.589 | 0.181 | 0.782 |
While NoPoSplat performs well, as indicated in the table, we stress that our method clearly differentiates itself, as NoPoSplat uses the GT camera pose during training. Moreover, the results below show that our method can be naturally extended to the N-view setting, where NoPoSplat struggles.
| Methods (6 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 18.007 | 0.384 | 0.584 |
| Ours | $\times$ | 27.028 | 0.116 | 0.879 |
| Methods (12 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 17.625 | 0.399 | 0.583 |
| Ours | $\times$ | 28.133 | 0.099 | 0.993 |
> More qualitative analysis for the ablation study.
We thank the reviewer for the valuable suggestion. Although it has been challenging to identify samples that clearly distinguish all the variants in Table 4 during rebuttal period, we agree that including such examples would further strengthen our work. We will incorporate additional qualitative results in the updated version of the paper.
> In Table 4, Please explain (I-II) Scale/Shift Tuning Depth Network in the paper.
Variant (I-II) in Table 4 refers to the approach where we fine-tune the scale and shift parameters used to convert relative depth estimates into UniDepth predictions. We observed that directly fine-tuning the UniDepth parameters resulted in unstable training, indicating that a more sophisticated training strategy may be required.
> Around line 211 and 212, does the inputs to 3D Gaussian parameter prediction also include the refined pose? I assume that gradient can also be flowed from the refined depth input.
Yes, that's correct. The refined pose is used to construct the standard MVS cost volume, which is then aggregated with the guidance cost volume. Consequently, gradients do flow from the refined depth input as well, contributing to the overall optimization process.
> Intuition of confidence estimation and what exactly does it help with the reconstruction overall?
This confidence score is used to modulate the regression of 3D Gaussian parameters, such as opacities and covariances, ensuring that more reliable matches have a greater influence on the reconstruction. Empirically, this approach improves both rendering quality and pose estimation accuracy, leading to an overall improved reconstruction, as shown in the Tab. 4.
> How did you determine the lambda for 3D-3D loss?
We set the lambda for the 3D-3D loss to 0.05 by balancing its scale with the other losses in our framework. This value was chosen empirically, and a more extensive hyperparameter search might yield performance improvements.
---
Summary: The paper presents a novel feed-forward method for 3D reconstruction and view synthesis from sparse, unposed images, eliminating the need for ground-truth depth or pose at both training and inference. The method builds on pixel-aligned 3D Gaussian Splatting (3DGS) but addresses the challenge of Gaussian center misalignment, which can destabilize training.
To tackle this, the method first leverages off-the-shelf monocular depth estimation and image correspondence models to infer coarse depth and camera pose estimates. It then refines these estimates through a multi-view refinement process using learned modules, improving reconstruction quality and stability. Finally, the method computes geometry-aware confidence scores to assess the reliability of Gaussian centers, which condition the prediction of opacity, covariance, and color.
Using large-scale real-world indoor and outdoor datasets, the paper demonstrates that this method outperforms existing approaches in both rendered image quality and inferred camera pose accuracy, setting a new state-of-the-art in pose-free generalizable novel view synthesis.
## update after rebuttal
After reading the rebuttal, I think the paper makes a valuable contribution to pose-free novel view synthesis. The comparison to NoPoSplat is reasonable—it’s a concurrent paper, and the authors added results in the rebuttal showing that their method is competitive, faster, and works in more general settings. They also compared to MASt3R + MVSplat and showed better results and faster runtime.
I still recommend weak accept, and I believe the paper is above the bar.
Claims And Evidence: The authors’ claim of state-of-the-art pose-free generalizable novel view synthesis is supported by extensive quantitative and qualitative analysis. See more in “Evaluation Criteria.”
Their claim that improved depth and camera pose estimates enhance pixel-aligned 3D Gaussian Splatting is backed by their ablation analysis, which demonstrates that refining scene estimates contributes to better view synthesis quality.
Additionally, their experimental results suggest that proper initialization is critical for stable training, as training without it leads to convergence issues. However, a deeper theoretical discussion on why inaccuracies in 3D Gaussian center localization cause noisy and sparse gradients—and whether alternative initialization strategies could mitigate this—would further strengthen their argument (see "Theoretical Claims").
Methods And Evaluation Criteria: The authors evaluate their method on two tasks: novel-view synthesis and camera pose estimation. Their model is trained and tested on three large-scale datasets: RealEstate10K (Zhou et al., 2018), ACID (Liu et al., 2021), and DL3DV (Lang et al., 2024).
For novel-view synthesis, they use standard image quality metrics, including PSNR, SSIM, LPIPS, and MSE. Camera pose estimation is assessed using the geodesic rotation error and angular difference in translation. This evaluation protocol follows a pose-free sparse view reconstruction method (Hong et al.) and has been adopted by other works in the field.
While the authors acknowledge NoPoSplat (Ye et al., 2024), a concurrent work achieving state-of-the-art results in pose-free generalizable novel-view synthesis from two views, they do not include it as a baseline for comparison. Given that NoPoSplat was accepted to ICLR 2025 only 50 days before the submission deadline, its omission is understandable. However, incorporating it as a baseline in future revisions would further strengthen the claims of state-of-the-art performance.
Theoretical Claims: The authors do not provide any formal proofs. However, their hypothesis that inaccuracies in the localization of 3D Gaussian centers lead to noisy and sparse gradients, which cannot be easily compensated for, is supported by empirical evidence. Specifically, their results show that training without proper initialization leads to almost intractable convergence, reinforcing this claim.
While their findings provide some support, a more thorough analysis of the underlying causes could further strengthen their argument. Including citations that discuss this phenomenon would also improve credibility. Additionally, if training without initialization leads to intractable results, it would be valuable to explore whether alternative initialization strategies—beyond depth-based alignment—could be viable.
Prior work, such as PixelSplat, demonstrated that parameterizing Gaussian positions implicitly via dense probability distributions can mitigate local minima issues when optimizing primitive parameters through gradient descent. This suggests a potential complementary approach that the authors could explore. Integrating implicit parameterization into their method might further improve robustness.
In addition, the loss functions are mathematically well-defined.
Experimental Designs Or Analyses: The experiments are well-structured and include strong ablations on different model components.
Supplementary Material: I reviewed all sections of the supplementary material, which provide implementation details as well as additional information on the training and evaluation processes. It also includes further qualitative analysis. The authors examine different off-the-shelf methods for coarse alignment, offering more context for their approach.
Relation To Broader Scientific Literature: The paper presents a novel feed-forward method for 3D reconstruction and view synthesis from sparse, unposed images, situating itself within the existing research. These methods open the door to applicability in real-world settings, where casually captured images contain sparse and distant viewpoints, and lack precise camera poses.
Essential References Not Discussed: I did not notice any missing citations. However, I recommend including works that further analyze the sensitivity of 3D Gaussian Splatting (3DGS) synthesis quality to the initial locations of the 3D Gaussian centers, which would provide valuable context and support on the matter.
Other Strengths And Weaknesses: The claim and demonstration that increasing metric depth and camera pose inference to a sufficiently high level is enough to infer the locations of Gaussian centers for high-quality view synthesis is original and presents an intriguing direction for future work.
In addition, multiview refinement of monocular predictions is an interesting direction on its own, with potential of surpassing sparse view depth estimators.
Most sections of the paper are written clearly. The authors include and describe all relevant literature and related works effectively. The evaluation protocol follows established standards in the field and is well-described.
However, the description of the method could be clearer. It takes some time to understand that the Gaussian centers are derived directly from the depth of the pixels, rather than from a learned module. A more refined overview of the method, along with a revision of Figure 1 to better explain how and when each Gaussian parameter is estimated, could improve clarity.
Other Comments Or Suggestions: In Introduction (line 27, right column) unnecessary repetition of citation.
PF3plat is sometimes written as “PFsplat”.
Questions For Authors: What is the performance of NoPoSplat in comparison to your approach?
Did you consider or experiment with other parameterization strategies or approaches to mitigate the impact of inaccuracies in 3D Gaussian center localization on noisy and sparse gradients?
Could you provide further analysis on why inaccuracies in 3D Gaussian center localization result in noisy and sparse gradients? Specifically, what factors contribute to this phenomenon?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: > Given that NoPoSplat was accepted to ICLR 2025 only few days before the ICML submission deadline, its omission is understandable. However, incorporating it as a baseline in future revisions would further strengthen the claims of state-of-the-art performance.
We appreciate the reviewer’s valuable suggestion. As noted, NoPoSplat is a concurrent work released on arXiv less than three months ago and accepted only three days before our submission. Moreover, as indicated in Table 2, fair comparison is challenging since other works (CoPoNeRF and NoPoSplat) use additional supervisory data (GT camera pose) during training. In contrast, our approach can be learned without such data, clearly differentiating it from these methods. Nonetheless, we agree that including NoPoSplat as a baseline, along with a detailed comparison, would further enhance our work. The comparison is included below:
| Methods | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| CoPoNeRF | $\checkmark$ | 19.536 | 0.398 | 0.638 |
| NoPoSplat | $\checkmark$ | 26.820 | 0.126 | 0.879 |
| DBARF | $\times$ | 14.789 | 0.490 | 0.570 |
| FlowCam | $\times$ | 18.242 | 0.597 | 0.455 |
| Ours | $\times$ | 23.589 | 0.181 | 0.782 |
While NoPoSplat performs well, as indicated in the table, we stress that our method clearly differentiates itself in that 3D geometry data, such as the GT camera pose, is not utilized during training. Moreover, we show in the following that our method can be naturally extended to the N-view setting, where NoPoSplat struggles.
| Methods (6 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 18.007 | 0.384 | 0.584 |
| Ours | $\times$ | 27.028 | 0.116 | 0.879 |
| Methods (12 views) | Pose Supervision | PSNR $\uparrow$ | LPIPS $\downarrow$ | SSIM $\uparrow$ |
| --- | :---: | --- | --- | --- |
| NoPoSplat | $\checkmark$ | 17.625 | 0.399 | 0.583 |
| Ours | $\times$ | 28.133 | 0.099 | 0.993 |
To summarize, our approach offers key advantages compared to NoPoSplat: it is a conceptual contribution proposing a general pipeline for pose-free view synthesis that addresses 3D Gaussian misalignment using off-the-shelf depth and correspondence networks and fine alignment modules; it naturally extends to N-view inference without retraining; and it is designed for scenarios without 3D geometry data (GT camera pose) during training. We will additionally include this discussion in related works.
> Did you consider or experiment with other parameterization strategies or approaches to mitigate the impact of inaccuracies in 3D Gaussian center localization on noisy and sparse gradients?
We appreciate the reviewer’s suggestion. While we have not yet experimented with alternative strategies like PixelSplat, we agree that mitigating inaccuracies in 3D Gaussian center localization is promising. Future work could explore leveraging NeRF representations, as in RADSplat, or formulating a probabilistic approach similar to PixelSplat. We will include a discussion of these alternatives in the supplementary material.
> Could you provide further analysis on why inaccuracies in 3D Gaussian center localization result in noisy and sparse gradients? Specifically, what factors contribute to this phenomenon? Including citations that discuss this phenomenon would also improve credibility.
We thank the reviewer for the valuable suggestion. In our framework, we claim that inaccuracies in 3D Gaussian center localization lead to noisy and sparse gradients because only the Gaussians within the rasterization window receive gradients. When centers are mislocalized, few Gaussians receive effective updates, resulting in weak guidance, especially when they are initialized far from the optimal positions [1]. For this reason, we stressed the importance of coarse alignment. Moreover, photometric supervision alone showed limited effectiveness, which motivated our proposed 2D-3D consistency and regularization losses. Additionally, [2] and NoPoSplat highlight that convergence is difficult when the network is initialized without 3D priors and reliable 3D geometry data (e.g., GT pointmaps) is absent during early training to compensate for such priors (the early training stage for FreeSplatter and CroCo initialization for NoPoSplat). We will include these discussions and citations in the final version of the paper.
[1] Jung, J., Han, J., An, H., Kang, J., Park, S. and Kim, S., 2024. Relaxing accurate initialization constraint for 3d gaussian splatting. arXiv preprint arXiv:2403.09413.
[2] Xu, Jiale, Shenghua Gao, and Ying Shan. "FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction." arXiv preprint arXiv:2412.09573 (2024).
---
Summary: This work presents a framework for novel view synthesis (NVS) from unposed images in a single feed-forward pass. It estimates depth and pose from unposed images using a combination of pre-trained monocular depth estimation and visual correspondence models. It outperforms prior pose-free methods like DBARF and CoPoNeRF in NVS quality and pose estimation.
Claims And Evidence: The claims in the paper are generally supported by experimental results, ablation studies, and comparisons with prior work.
1) The method shows clear improvements over prior pose-free methods like DBARF and CoPoNeRF in PSNR, SSIM, and LPIPS metrics on RealEstate-10K and ACID.
2) The ablation studies in Table 4 demonstrate a clear drop in performance when removing depth refinement, pose refinement, or geometry-aware confidence scores. They effectively validate the effect of these components.
3) The inference time of 0.39s per view (Table 5a) is significantly faster than InstantSplat (53s), making PF3plat more practical.
However, some claims could be better substantiated with additional experiments.
a) "Our method improves robustness in regions with low texture or significant viewpoint changes." The paper states that correspondence models like LightGlue struggle in low-texture regions. It is unclear how PF3plat can improve in these areas.
b) Lack of qualitative results for pose estimation. While the paper presents quantitative results (e.g., rotation and translation errors), it does not provide visual examples of estimated vs. ground truth poses, which would offer clearer insight into the actual performance of the pose refinement module.
Methods And Evaluation Criteria: The benchmarks make sense for the problem. The paper tackles a well-motivated problem: most existing 3DGS methods rely on accurate camera poses, which are difficult to obtain in casual image capture scenarios. PF3plat removes this dependency, making it more practical for real-world applications.
However, the method still requires ground-truth (GT) camera intrinsics, which may not always be available in real-world scenarios. This contradicts the claim of being fully pose-free since intrinsic parameters are a part of the camera model.
Theoretical Claims: The paper does not appear to contain formal theoretical proofs—it is primarily an experimental and algorithmic contribution focused on pose-free novel view synthesis using 3DGS. Since there are no complex theoretical derivations or proofs, there are no major mathematical errors to verify.
Experimental Designs Or Analyses: Yes, I reviewed the soundness and validity of the experimental design and analysis. Overall the experimental setup is well-structured.
However, the paper does not compare PF3plat to a simple pipeline using Mast3R for pose estimation + MVSplat for rendering. Since Mast3R outperforms PF3plat in pose estimation, it is unclear whether a Mast3R + MVSplat baseline would yield better results than PF3plat.
Without this comparison, it’s hard to judge whether PF3plat’s pose-free approach is necessary or if it just introduces more error.
Supplementary Material: Yes. However, I find the demos too limited to get a detailed understanding of the generalization capability and the failure cases of the model.
Relation To Broader Scientific Literature: PF3plat extends prior works (e.g. PixelSplat, MVSplat) by:
Removing the need for ground-truth camera poses, which prior 3DGS methods required.
Introducing coarse-to-fine pose estimation using monocular depth and correspondence networks, making pose-free 3DGS feasible.
Using confidence-aware refinement to stabilize Gaussian placement, which was a known issue in pixel-aligned 3DGS.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1) The coarse alignment module creatively uses monocular depth estimation + feature matching to initialize 3D Gaussian positions. The method is fully feed-forward, making it more scalable and applicable to real-world settings.
2) The paper introduces confidence-aware refinement, ensuring that unreliable depth/pose estimates are given lower weight when estimating 3D Gaussians. This helps stabilize training and improves robustness.
Weakness:
a) ACID dataset results indicate weaknesses in large-scale outdoor scenes.
b) PF3plat claims robustness in low-texture regions, but this is unclear.
Other Comments Or Suggestions: 1. Overall the paper is well-written and easy to follow. There are some typos, e.g. "fast speed".
2. Equation (2) does not clearly define E_pos. Equation (3) should use consistent notation for confidence scores.
3. The caption in Fig.1 should briefly explain the input/output of each module.
Questions For Authors: 1. The explanation of how the refined poses are computed is unclear (Sec. 3.2.3).
2. In Sec 3.2.4, about Cost Volume Construction and Aggregation, how does this differ from prior multi-view stereo (MVS) cost volumes?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > The paper states that correspondence models like LightGlue struggle in low-texture regions. It is unclear how PF3plat can improve in these areas.
We enhance performance in low-texture regions through two complementary approaches:
As explained in line 233, our proposed 2D-3D consistency loss encourages corresponding points to lie on the same surface, drawing from principles of multi-view geometry. The effectiveness is shown in ***Table 4 (I-III)***.
Second, the flexibility of the design choice at the coarse alignment stage allows us to use other methods beyond LightGlue. For example, replacing LightGlue with networks like RoMa has led to noticeable improvements, as demonstrated in ***Table 6***.
> Lack of qualitative results for pose estimation.
Please refer to ***Figure 7***, which provides a visual comparison for pose estimation. Additionally, we have included the baseline pose estimates in this figure to clearly demonstrate the effectiveness of our fine alignment module. Please refer to the following anonymous link: https://anonymous.4open.science/r/7048/7048_camvis.pdf
> However, the method still requires GT camera intrinsics, which may not always be available in real-world scenarios.
Camera intrinsics are commonly required in previous pose-free methods, such as DBARF, CoPoNeRF and NoPoSplat, as discussed in Sec. A.6. While this is a common limitation, for this rebuttal, we have conducted an additional experiment using camera intrinsics from UniDepth prediction:
| Method | PSNR | SSIM | LPIPS |
|------------------------|---------|--------|-------|
| CoPoNeRF | 19.536 | 0.638 | 0.398 |
| Ours w/o GT intrinsics | 22.688 | 0.733 | 0.201 |
| Ours | 23.589 | 0.782 | 0.181 |
From the results, we highlight that even without GT intrinsics, ours outperforms CoPoNeRF and performs on par with the model with intrinsics. This suggests a potentially feasible approach to alleviate the limitation.
> Mast3R for pose estimation + MVSplat for rendering.
We opted not to use MVSplat with MASt3R because ***MASt3R's runtime of around 10 seconds conflicts with our goal of achieving a fast, feed-forward process***. Despite our superior performance on RealEstate10K, thanks to the relatively larger-scale outdoor training of MASt3R, it is true that MASt3R performs slightly better than ours at pose estimation on ACID. Nevertheless, this is compensated for when compared to MASt3R* in Tab. 2. We also highlight that with a more advanced correspondence network, RoMa, ours further broadens the gap, as shown in Tab. 6.
Nonetheless, for this rebuttal, we provide an additional comparison below:
| Method | PSNR | SSIM | LPIPS | Time |
|---------------------------|---------|-------|-------|-------|
| (0) Coarse Pose + MVSplat | 20.140 | 0.694 | 0.281 | 0.264 |
| Mast3R + MVSplat | 21.712 | 0.721 | 0.254 | 11 |
| Mast3R* + MVSplat | 21.167 | 0.702 | 0.268 | 0.642 |
| Ours | 23.589 | 0.782 | 0.181 | 0.390 |
From the results, we find that ours outperforms the baselines, thanks to our fine alignment modules and confidence estimation, which contribute to further improvement.
> ACID dataset results indicate weaknesses in large-scale outdoor scenes
We acknowledge that our performance in large-scale outdoor scenes (dynamic coastline environments), is not optimal, ***as discussed in Sec. A.6***. We attribute these challenges primarily to UniDepth's training dataset, which, unlike DUSt3R or MASt3R, was built on a smaller and less diverse collection of outdoor scenes. A straightforward approach would be to leverage depth models that are trained on large-scale outdoor scenes or advanced correspondence models, to yield more accurate initial 3D Gaussian locations.
> The explanation of how the refined poses are computed is unclear
In Section 3.2.3, the refined pose is computed using three distinct inputs: Plücker Coordinates, Feature Maps and Pose Token. Each of these inputs is processed through a series of attention layers, which help propagate information about previous camera estimates, multi-view geometry, and the current camera state. After fusing this information, the pose token is passed through a simple MLP that predicts residual rotation and translation parameters. These residuals are then added to the coarse camera parameters to yield the final refined pose.
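As a rough sketch of the residual update described above (the `mlp` and the 6-D delta parameterization are illustrative assumptions, not the authors' exact design):

```python
def refine_pose(coarse_rot, coarse_trans, pose_token_feat, mlp):
    """Residual pose refinement: an MLP applied to the fused pose token
    predicts delta rotation/translation, which are added to the coarse
    estimates. Assumes a 6-D output: 3 rotation + 3 translation params."""
    delta = mlp(pose_token_feat)
    refined_rot = [r + d for r, d in zip(coarse_rot, delta[:3])]
    refined_trans = [t + d for t, d in zip(coarse_trans, delta[3:])]
    return refined_rot, refined_trans
```

In practice the rotation residual would be composed on a rotation manifold rather than added component-wise; the simple addition here only mirrors the "residuals are then added" wording of the rebuttal.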
> Cost Volume Construction and Aggregation, how does this differ from prior multi-view stereo cost volumes?
Our approach differs from traditional MVS cost volumes by introducing a guidance cost volume. Since the cost volume built from our estimated camera poses is inherently noisy, we supplement it with a guidance cost volume derived from a monocular depth estimate. By aggregating these two volumes, we construct a final cost volume better tailored for a pose-free setting.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed explanation. The comments partially resolved my concerns. However, my main concern about the overall reconstruction quality still exists. From the visualizations in the main paper and supplementary materials, the reconstructed scenes contain limited view change and blurry/floating regions. I also suggest using a higher resolution to improve visual quality. So, I maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback and apologize for not sufficiently addressing the concerns regarding the limited viewpoint variations and visual artifacts such as blurry or floating regions in our original submission. To clearly demonstrate the capability of our method under challenging scenarios involving large viewpoint changes, we provide additional visualizations through the following link (please download the file, since the anonymous GitHub viewer seems to have a bug and does not show the captions properly):
https://anonymous.4open.science/r/7048/7048_qual.pdf
These new visualizations explicitly include samples exhibiting significant viewpoint shifts, where our method ***consistently outperforms existing state-of-the-art methods*** such as DBARF, FlowCAM, and CoPoNeRF. Notably, our method produces higher-quality rendered images even under conditions of minimal overlap between context images. Given this clear demonstration of superiority, we respectfully suggest it would be unreasonable to maintain a rejection rating solely based on the originally perceived quality issue.
We emphasize that although the absolute quality of our renderings may not match that of pose-supervised or pose-required approaches, and our model does not render higher-resolution images (which would require additional fine-tuning), ***this comparison should be contextualized within the scope of our task***. Our contribution specifically addresses the highly challenging scenario of pose-free, feed-forward novel view synthesis, where ground-truth poses are leveraged neither during training nor inference. Therefore, expecting similar image quality to supervised or pose-based methods would not be entirely fair or appropriate.
We hope our response adequately addresses the reviewer's concern.
AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence | Accept (poster) | Summary: This work proposes a method (Adaptive Step Process Reward Model - ASPRM) to split reasoning chains into reasoning steps based on the model confidence rather than pre-defined rules. For each token in the generated output, if the model’s probability for that token is below some threshold, then this token is treated as the start of a new reasoning step.
The authors train their ASPRM with rollouts and hard estimation of intermediate step values, and show that this adaptive step strategy yields higher-performing PRMs on math and coding benchmarks.
This work also experiments with token-level value guided decoding: during decoding, when the model generates a low-confidence token, the PRM can be used to rank the top M low-confidence tokens at that step. Experiments show that value-guided decoding performs best with ASPRM.
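The decoding scheme described here can be sketched roughly as follows (a minimal illustration with hypothetical `model`/`prm` interfaces, not the paper's implementation):

```python
def value_guided_step(model, prm, prefix, tau=0.2, top_m=4):
    """One decoding step: keep the greedy token when the model is
    confident; otherwise let the PRM rescore the top-M candidates."""
    # [(token, prob), ...] assumed sorted by probability, highest first
    candidates = model.top_tokens(prefix, k=top_m)
    best_token, best_prob = candidates[0]
    if best_prob >= tau:
        return best_token
    # Low-confidence position: rank candidates by PRM value of the partial trajectory.
    return max(candidates, key=lambda tp: prm.score(prefix + [tp[0]]))[0]
```

The threshold `tau` plays the same role as the step-segmentation threshold: the PRM is only consulted at low-confidence (decision) positions, keeping the overhead small.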
Finally, multiple ablation studies are presented in the experimental section to show the generalization capacity of the proposed approach.
## update after rebuttal
Claims And Evidence: Yes, claims are supported by convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and benchmarks (Math500 & GSM8k & LeetCode problems) make sense for introducing a new type of PRM
Theoretical Claims: No theoretical claims found.
Experimental Designs Or Analyses: Experiments and their analysis seems reasonable and valid. A lot of experiments have been made which is nice. It can get a little complicated to follow sometimes because in PRMs there are 3 models interacting with one another:
1. the model that generates trajectories and that is used with rollouts to generate data of the form `(partial trajectory, target score)`.
2. the actual PRM that is trained on the data generated in (1).
3. a policy model that is generating trajectories with the help of the PRM at inference time.
For each of these models the authors experimented with Mistral & Llama for math benchmarks, and DeepSeek for LeetCode problems.
Supplementary Material: I briefly looked at all supplementary material.
Relation To Broader Scientific Literature: Process Reward Models (PRM) became a popular research topic recently because of their advantages over Outcome Reward Models (ORM) in giving intermediate feedback while training LLMs. This particularly became important as modern LLMs output reasoning chains before their final answer. Checking the validity of each reasoning step can boost their performance. The idea of splitting reasoning chains based on model confidence (rather than on every new line character, which is the default strategy) is novel to the best of my knowledge and seems to perform well. This shows once again that letting the model decide is better than imposing human judgement.
Essential References Not Discussed: No critical related work missing to the best of my knowledge.
Other Strengths And Weaknesses: # Strengths
This paper presents a novel strategy to split reasoning chains into reasoning steps in the goal of training Process Reward Models. Throught numerous experiments the proposed method is shown to perform better than more traditional PRMs.
In addition, the method is tested as a value-guided decoding tool, which is also shown to perform well.
Finally, numerous generalisation experiments and analyses are presented at the end of the paper, showing rigorous evaluation.
# Weaknesses
One weakness of the proposed approach is that we need to define a confidence threshold to identify “low” confidence tokens and thus break the reasoning chain into steps. The choice of this threshold could influence the performance of the proposed method. The authors set the threshold such that 2% of reasoning tokens fall below it. This feels arbitrary and could benefit from further motivation.
In addition, an ablation study on the choice of the threshold could also benefit this paper. How does 2% compare to 1%, 5%, 10%, 20%, etc.?
Finally, the model used to estimate the confidence could also influence this decision: maybe some models are more confident than others and their threshold should be higher. An informed discussion with experiments could boost the quality of this work.
Other Comments Or Suggestions: Clarification suggestions:
- In tables 2, 4, 5, 6 it would be clearer to put the performance of the method you compare against directly in the caption (or as an additional table row) so that the reader doesn’t have to search where this number is in another result table.
- The up & down arrows are nice (though the colors could be inverted: red for decline & green for improvement).
Typo:
- in the last paragraph of section 4.4, “_ results in 4.3 and 4.3 indicate …_”
Questions For Authors: No additional questions for now.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer Comments
**Dear Reviewer NpJW:**
We would like to express our sincere gratitude for your time, your thorough review and valuable feedback on our manuscript. Your comments have provided us with important insights that will help improve the quality of our work.
>## W1: Ablation Study on Threshold Decision
**R1:** We appreciate your suggestion regarding the threshold decision process. Though our approach is grounded in cognitive theory, an ablation study is needed in the revised manuscript to demonstrate the impact of different thresholds. We add the BoN results of ASPRM models trained with thresholds of 0.5%, 1%, and 1.5%. However, larger thresholds (3%, 5%, 10%) mean performing more rollouts. Due to computational resource constraints during the rebuttal, we are unable to conduct extensive experiments with various models and higher threshold combinations; we will conduct further ablation studies when more computational resources become available. Tables 1 and 2 show the BoN results of ASPRM trained with thresholds of 0.5%, 1.0%, and 1.5%, compared with the baselines; a 0.5% change means that each solution has one less step on average. We find that although there is some fluctuation, a greater number of segments within the range of 0.5% to 2% generally means better judging capability. We used more models (MetaMATH-Llama-7b / -13b / -70b) for generation to supplement the results. We hope for your understanding and approval given the lack of additional results.
**Table 1: Bo64 results on Math500.**
| model | MetaMATH-Llama-7b | -13b | -70b |
|:---:|:---:|:---:|:---:|
| ASPRM-M-T0.5 | 25.00 | 28.80 | 32.60 |
| ASPRM-M-T1.0 | 25.00 | 29.60 | 31.60 |
| ASPRM-M-T1.5 | 27.80 | 28.80 | 31.40 |
| ASPRM-M-T2.0 | 25.40 | 31.00 | 34.60 |
| Math-shepherd | 28.40 | 31.00 | 33.00 |
| ASPRM-L-T0.5 | 31.80 | 35.20 | 37.00 |
| ASPRM-L-T1.0 | 31.60 | 34.80 | 36.20 |
| ASPRM-L-T1.5 | 32.00 | 35.60 | 38.60 |
| ASPRM-L-T2.0 | 33.40 | 37.80 | 40.00 |
| ER-PRM | 33.20 | 37.40 | 38.80 |
**Table 2: Bo64 results on GSM8k.**
| model | MetaMATH-Llama-7b | -13b | -70b |
|:---:|:---:|:---:|:---:|
| ASPRM-M-T0.5 | 79.45 | 83.32 | 85.82 |
| ASPRM-M-T1.0 | 81.04 | 83.55 | 88.48 |
| ASPRM-M-T1.5 | 81.65 | 82.64 | 89.61 |
| ASPRM-M-T2.0 | 81.27 | 85.80 | 89.23 |
| Math-shepherd | 84.23 | 85.22 | 88.40 |
| ASPRM-L-T0.5 | 86.84 | 86.85 | 91.38 |
| ASPRM-L-T1.0 | 89.60 | 86.58 | 91.74 |
| ASPRM-L-T1.5 | 86.20 | 90.00 | 90.02 |
| ASPRM-L-T2.0 | 85.52 | 88.25 | 91.66 |
| ER-PRM | 86.58 | 87.49 | 88.86 |
Besides, we also investigated the influence of various thresholds on the performance of TVD. The results have been collated and presented in the following two figures: [Figure 1](https://anonymous.4open.science/r/PIC-0969/fig2.png) and [Figure 2](https://anonymous.4open.science/r/PIC-0969/fig3.png).
>## W2 : Model Influence of Confidence Threshold
**R2:** We use a percentage of the whole distribution of model confidence, instead of a fixed threshold. This means that different models will indeed have different threshold values, and different tasks will also have different threshold values. We illustrate this with a table: Table 3 below shows the threshold values at the 2% confidence percentile for different models and tasks. We find that for the same split ratio, models with greater capabilities and simpler tasks tend to have higher threshold values.
**Table 3: 2% confidence threshold for different models and tasks.**
| dataset | MATH500 | GSM8k |
|:---:|:---:|:---:|
| MetaMATH-7b | 25.51 | 34.26 |
| 13b | 25.78 | 35.91 |
| 70b | 33.41 | 39.51 |
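The percentile-style threshold selection described above can be sketched as follows (a minimal illustration; the bottom-2% fraction follows the paper's description, everything else is an assumption):

```python
import math

def confidence_threshold(confidences, fraction=0.02):
    """Choose tau so that roughly `fraction` of all observed token
    confidences fall below it (bottom-2% cut by default)."""
    ranked = sorted(confidences)
    idx = max(0, math.ceil(fraction * len(ranked)) - 1)
    return ranked[idx]
```

Because tau is a quantile of each model's own confidence distribution, stronger models or easier tasks (whose confidences are higher overall) naturally yield higher absolute thresholds, consistent with Table 3.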
>## W3 and T1: Figure, Table and Typographical Errors
**R3:** Thank you for highlighting the issues with the table, the up and down arrows' color and the typographical errors in our manuscript. We will correct these inaccuracies in the revised version to improve clarity and readability.
---
**Thank you again for your time, thorough review and constructive feedback. Your insights have significantly helped us identify areas for improvement in our work. We welcome any additional suggestions that could further enhance the quality and rigor of our research.** | Summary: The paper addresses the challenge of training Process Reward Models (PRMs) for large language model reasoning by introducing a novel step segmentation method called AdaptiveStep. Instead of using fixed rules or token counts to break a model’s chain-of-thought into steps, AdaptiveStep dynamically segments the reasoning process based on the model’s own confidence in predicting the next token.
The authors sample solution paths from an LLM and compute the probability (confidence) for each generated token; tokens with unusually low confidence (below a learned threshold τ) are treated as decision boundaries, initiating a new reasoning step. Using these segmented steps, the authors then train a PRM by simulating rollouts from each partial solution (each step) to see if a correct final answer can still be reached.
Each step is labeled positive if any continuation yields a correct answer, or negative if all continuations fail, following a heuristic “hard” reward assignment, similar to how PRM is usually trained.
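The segmentation and hard-labeling procedure summarized above could be sketched like this (a minimal illustration, not the authors' implementation; `rollout_is_correct` is a hypothetical stand-in for sampling a continuation and checking the final answer):

```python
def split_into_steps(tokens, confidences, tau):
    """Start a new reasoning step at every token whose predicted
    probability falls below the threshold tau."""
    steps, current = [], []
    for tok, conf in zip(tokens, confidences):
        if conf < tau and current:
            steps.append(current)
            current = []
        current.append(tok)
    if current:
        steps.append(current)
    return steps

def hard_label_steps(steps, rollout_is_correct, num_rollouts=8):
    """Label each step-prefix positive if ANY rollout from it reaches a
    correct final answer, else negative (hard estimation)."""
    labels, prefix = [], []
    for step in steps:
        prefix.extend(step)
        ok = any(rollout_is_correct(list(prefix)) for _ in range(num_rollouts))
        labels.append(1 if ok else 0)
    return labels
```

The per-step labels produced this way form the `(partial trajectory, target score)` pairs on which the PRM is trained.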
AdaptiveStep-augmented PRMs (termed ASPRM) are tested on complex reasoning tasks in mathematical problem solving (GSM8K, MATH dataset) and code generation (LeetCode-style programming problems, LiveCodeBench) and the approach achieves good results overall.
## update after rebuttal
I have given a weak accept to the paper and I won't mind seeing it get accepted.
Claims And Evidence: A lot of claims are made in the paper and are backed by empirical results.
1. Dynamically segmenting reasoning steps by model confidence yields more “decision-making insights” per step than naive rule-based segmentation: This is supported qualitatively by examples and quantitatively by analysis of the segmentation output: they observe that AdaptiveStep tends to insert breaks at meaningful junctures (e.g. mid-formula or before a crucial choice in logic) rather than at arbitrary punctuation or fixed intervals.
2. The claim that no manual annotation is needed is right as the proposed method relies purely on the model’s probabilities and an automated rollout procedure.
Other claims like lower cost and reduced number of samples are also verified in the paper.
Methods And Evaluation Criteria: The proposed method is very simple, and using logprobs to estimate confidence is not new; it has been explored extensively in the past. Applying this to PRMs is unique, and this approach makes intuitive sense – low confidence often indicates the model is choosing among multiple possibilities (e.g. figuring out the next step in a math proof or deciding on a coding approach), which is exactly where a new step and potentially a feedback signal would be most valuable.
The PRM training procedure – labeling each segmented step via rollouts – is a logical way to obtain supervision. It mirrors prior work (like Wang et al., 2024a’s heuristic rollout for Math-Shepherd) but importantly removes the need for manual identification of step boundaries, which is one of the main contributions. The training is the same as past works and the authors’ choice to use binary labels (hard estimation) for each step is reasonable and aligned with previous PRM approaches.
The evaluation criteria and benchmarks are appropriate and standard for the domain - math and code. In the code domain, since no established stepwise PRM existed publicly, they construct a baseline Outcome Reward Model (ORM) by training a reward model that only gives feedback at the final answer.
Overall, I would say that the benchmarks and metrics clearly align with the paper’s goals and is pretty standard.
Theoretical Claims: The paper is mostly empirical and has no theoretical claims as such.
One interesting claim is the rationale behind the 2% threshold for low-confidence tokens. The authors justify setting the threshold so that roughly 2% of generated tokens are below it, citing Kahneman’s (2011) finding that about 2% of human thinking is “deep thinking”. This is very interesting but hard to judge. The paper does not prove that 2% is the best choice; rather, it assumes this fraction yields a reasonable number of decision points. While the results with 2% are good, I would be interested in seeing a grid search over more possible values.
Finally, the main claim of the paper is that “low model confidence indicates a potential decision point”. Intuitively, this claim is sound – if an LLM is uncertain about the next token, it likely means multiple continuations are plausible, implying a branch in reasoning. The paper’s empirical analysis supports this (confidence dips align with meaningful junctures), but theoretically one could question if low confidence always corresponds to an important decision. There might be cases where an LLM’s confidence is low simply because it’s generating an uncommon word or name, not because it’s at a logical decision point. Therefore, this claim is plausible and supported by examples, but not theoretically guaranteed for all scenarios.
One theoretical aspect that might have been explored more is the calibration of model confidence. AdaptiveStep assumes the model’s token probability is a meaningful indicator of uncertainty. In theory, if a model’s confidence estimates are poorly calibrated, the threshold might not accurately reflect decision difficulty. The paper doesn’t delve into calibration theory or prove that the chosen LLMs have well-calibrated confidences in these domains. They proceed empirically, and indeed the approach works, implying the confidences were informative enough.
Experimental Designs Or Analyses: The experiments are well designed and the authors clearly describe their experiment setup, including datasets, model choices, baselines, and metrics used to test the models. The baselines themselves are appropriate and the authors make an effort to ensure comparability. For math, they use published open-source PRMs (Math-Shepherd and ER-PRM). It’s a little unclear whether they recomputed those baselines’ performance on their setup or took numbers from the literature. Maybe I might have missed this in the paper.
The metrics and analysis of results are appropriate. They measure both Accuracy/Pass@1 (to see if the PRM hurts single-shot performance – it doesn’t; guided decoding often improves it) and Best-of-N accuracy (to demonstrate how well the PRM can identify a correct solution among many).
One thing missing is that the paper doesn't report any significance tests. Improvements like +3% on GSM8K could be within the error margin if there are not enough samples. I am not sure what % of improvement is meaningful.
Supplementary Material: I read the statistic information of the constructed dataset in the Appendix.
Relation To Broader Scientific Literature: The authors discuss the usefulness of PRMs and discuss prior works. There is a good coverage of recent literature on PRMs and stepwise reasoning alignment.
One interesting thing is that the authors used OpenAI cookbook ideas to assess logprobs, which is interesting. I wonder if the authors are the first to turn this idea into a full pipeline for training a reward model.
Essential References Not Discussed: I think the authors mentioned all PRM related past works. One area that wasn’t explicitly mentioned is the line of research on verifier models or consistency checks aside from PRMs. For example, works like Cobbe et al. (2021) or Li et al. (2022) on verifying chain-of-thought or Iterative Polishing might be related.
Other Strengths And Weaknesses: ### Strengths
- The core idea of using the model’s own confidence to segment reasoning steps is very clever. It addresses a clear limitation of prior methods, which relied on ad-hoc rules or costly annotations.
- AdaptiveStep yields a more efficient data generation process for training the PRM. By segmenting only when needed, it produces far fewer total steps to label.
- The experiments follow past works and results are provided on math and code. The authors provide insightful analysis of where the model places step breaks (e.g., highlighting that conjunctions and math operators often trigger low confidence).
### Weakness:
- AdaptiveStep’s segmentation is only as good as the model’s confidence estimations. If an LLM has idiosyncrasies in its probability outputs (e.g., it might be overconfident in certain wrong steps or underconfident in trivial but rare phrasing), the segmentation could be suboptimal.
- A notable weakness is the somewhat arbitrary choice of using the bottom 2% confidence as the segmentation threshold. While the authors cite a cognitive science justification, this parameter was not deeply examined.
- [MINOR]: Interesting to see how the method would perform in domains without binary success criteria (e.g., open-ended logical reasoning or commonsense questions where “correctness” is fuzzy). The paper’s approach relies on having a ground truth check to label steps.
- The baselines chosen were appropriate (other PRMs and an outcome model), but one could argue that the paper doesn’t compare against the strongest possible alternatives. For example, Reinforcement Learning from Human Feedback (RLHF) at the step level or even outcome level could improve reasoning – how would a model fine-tuned with RLHF on these tasks compare to using a PRM? Similarly, OmegaPRM (which uses MCTS and presumably a powerful orchestrator) is mentioned but not empirically compared.
- [MINOR] A minor weakness is that some details of the method are not fully spelled out in the paper, potentially hindering replication. For example, the exact number of samples generated per question, the value of J (number of rollouts per step) used for labeling, or the training hyperparameters for the PRM model are not explicitly listed (at least not in the excerpt we see).
Other Comments Or Suggestions: -
Questions For Authors: 1. Your method assumes the model’s confidence (top-1 probability) is a reliable indicator of decision difficulty. Did you observe any cases where the model was overconfident in a wrong step (thus no break inserted) or underconfident in an easy step (inserting an unnecessary break)? In other words, how robust is AdaptiveStep to calibration errors in the base LLM’s probabilities?
2. How sensitive are your results to the choice of the 2% confidence threshold for segmenting steps? Did you experiment with different percentages or adaptive thresholds per dataset/model? It would be useful to know if 2% is truly optimal or just one reasonable setting.
3. Could you clarify the data generation process for PRM training in terms of scale? Specifically, how many solutions N did you sample per problem to determine the confidence distribution and threshold, and how many rollouts J per step were used to label each step? These numbers are important to understand the compute cost.
4. For the math baselines (Math-Shepherd and ER-PRM), did you reimplement/retrain those methods on the same datasets and base models that you used for ASPRM, or are you citing their reported results?
5. You cite OmegaPRM (which uses MCTS) and stepwise RLHF approaches. How do you expect AdaptiveStep to compare to those in practice?
6. How would AdaptiveStep and PRM training apply to domains where checking correctness is non-trivial (for example, commonsense reasoning or legal question answering where there isn’t a single numeric answer or test cases)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer Comments
**Dear Reviewer LC8A:**
We greatly appreciate your insightful comments and suggestions and thank you for your time. Due to the character limit, we have summarized your questions. If we have missed or misunderstood any points, please let us know.
>## Related Work
Thank you for your valuable suggestions regarding related work. We will incorporate literature on reward models for LLMs in our revised manuscript, including BT model, GenRM from Zhang et al. (2024), Pairwise RM from Jiang et al. (2023), Cobbe et al. (2021), Li et al. (2022) and so on.
>## Q1 and W1: Suboptimal Concerns
**R1:** Thank you for expressing your concerns. We provide [statistical results](https://anonymous.4open.science/r/PIC-0969/fig1.png) showing the relation between segmentation numbers and task difficulty. As shown in the figure, there are indeed some segmentation issues in the lower-left corner and on the right side, where a few difficult problems are segmented into fewer steps, or certain easy problems are segmented into more steps. Although these cases are rare (fewer than 2%), the issue does exist. We believe that combining rule-based methods with AdaptiveStep would result in better divisions.
>## Q2 and W2: Threshold Settings
**R2:** Thank you for your insightful suggestion. We will include more percentage settings in our experimental evaluation to provide a more comprehensive analysis of our method's performance across various conditions. We used more models for generation to supplement our results. [Table 1](https://anonymous.4open.science/r/PIC-0969/fig4.png) and [Table 2](https://anonymous.4open.science/r/PIC-0969/fig5.png) show the BoN results of ASPRM with thresholds of 0.5%, 1.0%, and 1.5% compared with the baselines, and a 0.5% change means that each solution has one less step on average. Due to computational limitations, we are unable to scale up to more rollouts in a short time, but we will conduct further analysis in subsequent versions; we hope for your understanding. We find that although there is some fluctuation, a greater number of segments within the range of 0.5% to 2% generally means better evaluation capability.
>## Q3, Q4 and W6: Implementation Details
**R3:** Thank you for highlighting the missing methodological details. Regarding data preprocessing, we have provided training data specifications in Sec.4 (Parameter Setting), which documents 30 solutions per problem and 8 rollouts per step for labeling. In Sec.4.4 (Construction Efficiency), we present a comparison of computational costs with others.
For ASPRM training, we employed a batch size of 256 and learning rates of 1e-6 (Mistral), 2e-6 (Llama), and 5e-6 (DeepSeek), consistent with the Math-Shepherd parameters in the OpenRLHF script.
For baselines, we utilized the released models of other methods and rigorously adhered to their prescribed usage for evaluation on our dataset. At the 7B model scale, our reported Math-Shepherd BoN results exceed those in the original paper, demonstrating the fairness of our comparisons.
>## Q5 and W4: OmegaPRM and RLHF
**R4:** Thank you for raising these questions. OmegaPRM is a method that integrates binary search into MCTS to construct data, while ours is a fine-grained step segmentation method. The two methods are orthogonal, and since OmegaPRM's code and data have not been released, we did not use it as a baseline for comparison. RLHF is indeed a valuable comparison approach, but the computational resources required for RLHF are currently beyond our capacity. Therefore, we opted for the more lightweight BoN and TVD methods for evaluation, which are commonly used in previous reward model works.
Regarding the combination with these methods, we believe that fine-grained segmentation helps OmegaPRM more accurately identify errors, thereby improving error detection efficiency. As for RLHF, positions with low confidence indicate that the model is likely to generate other branches. Compared to rule-based step-level RLHF, we believe that using AdaptiveStep allows the model to better learn from these branches, leading to good results. We will also validate these claims when sufficient computational resources are available.
>## Q6 and W3: Commonsense Domain
**R5:** Extending process reward models to commonsense domains remains challenging. Some researchers employ powerful models like GPT-4o or human judgment for evaluation [1]. We believe AdaptiveStep offers advantages in this context: our segmentation method identifies low-confidence model outputs, which suggest more possible continuations and a higher error probability, thereby improving labeling efficiency.
**[1]** O1 Replication Journey: A Strategic Progress Report: Part I
---
**We sincerely thank you for your constructive feedback and insightful recommendations. These comments have significantly helped us identify areas for improvement in our manuscript. We look forward to incorporating more suggestions from you to enhance the quality and clarity of our research.** | Summary: 1. In this paper, a novel step-dividing method, AdaptiveStep, is proposed. The method enables automatic step division while being more informative than rule-based step-dividing methods.
2. By adopting AdaptiveStep, ASPRM demonstrates stronger discriminative power at the token level than existing methods to some extent.
3. The author open-sourced a LeetCode dataset, which may benefit following works on PRM.
Claims And Evidence: The article claims that ASPRM is a SOTA PRM model in the introduction, but in the experiment, the ASPRM-M model is not as effective as Shepherd-M. In addition, the article does not show the effect of ASPRM in reinforcement learning training. In view of these circumstances, I think the conclusion that ASPRM is SOTA is problematic.
Methods And Evaluation Criteria: I think the proposed method makes sense to me. Using the confidence score as an indicator to identify difficult tasks and dividing steps based on it seems like a feasible solution to balance the cost and informativeness of the annotated dataset.
Theoretical Claims: This is primarily an experimental paper that focuses on empirical results rather than theoretical claims requiring formal proofs. Therefore, no formal proofs needed to be verified during my review.
Experimental Designs Or Analyses: The paper lacks experiments and results on how the PRM can improve the final performance of the model when used for reinforcement learning. In terms of experimental design, it is inappropriate to compare a Llama-based PRM with a Mistral-based PRM.
Supplementary Material: I reviewed all the supplementary material.
Relation To Broader Scientific Literature: ### Innovation in Reasoning Step Segmentation
Existing Process Reward Models (PRMs) typically use rule-based methods, such as predefined symbols or fixed-length reasoning steps, to segment reasoning steps. The AdaptiveStep method proposed in this paper automatically segments reasoning steps based on the model's confidence in predicting the next token. This aligns with the cognitive cost theory, which suggests that the cognitive cost of reasoning depends on task difficulty. Additionally, many reasoning errors stem from incorrect numerical calculations or misuse of words, further supporting the necessity of segmenting reasoning steps at critical points.
### Improvement in Process Reward Models (PRMs)
The AdaptiveStep PRM (ASPRM) introduced in this paper demonstrates excellent performance in mathematical reasoning and code generation tasks, outperforming existing open-source PRMs. This is consistent with research emphasizing the importance of intermediate reasoning steps in complex tasks and demonstrating how step-by-step feedback enhances reasoning reliability and reduces logical errors.
Essential References Not Discussed: I believe essential references are discussed. However, it would be better to discuss in the related works part how this work differ from the previous works to help the reader gain a big picture of this work.
Other Strengths And Weaknesses: The weaknesses mainly on the experiment part. Results on more scenarios (performance lifting when leverage the proposed PRM on RL training), more based models (32B, 70B models) are needed to fully exhibit the effectiveness of the proposed method.
Other Comments Or Suggestions: Regarding the presentation of TVD results across different methods, I suggest replacing the current large table containing many blank cells (where code models are not applicable to certain tasks) with a series of focused bar charts.
Questions For Authors: 1. I noted that the PRM training and TVD methods described in this paper have similarities to previous work in the field. Could the authors clarify the specific novel contributions or advancements in these two areas? In particular, I would appreciate a more detailed explanation of how these approaches differ from or improve upon existing techniques, as this would help better position the work within the current literature and highlight its unique contributions.
2. I appreciate the contribution of open-sourcing the LeetCode dataset, but I have some concerns about the ethical implications. Given that LeetCode problems are proprietary content, could the authors clarify the licensing arrangements or permissions obtained for redistributing this data? Additionally, it would be helpful to understand what steps were taken to ensure compliance with relevant terms of service and intellectual property rights when creating and sharing this dataset.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer Comments
**Dear reviewer ZPdL:**
Thank you for your insightful comments and valuable feedback. Below, we address the concerns raised and provide clarifications and improvements.
>## W1: Concerns about experiments results
**R1:** Thanks for your comment; we will revise our paper to clarify the setting/scope of the SOTA statement accordingly. We argue that the overall performance of ASPRM-M exceeds that of the baselines. Specifically, ASPRM-M significantly outperforms the baselines across all TVD experiments. On the BoN Mistral-generated data, ASPRM-M performs better than Math-Shepherd when N is small. At positions with few instances, such as Figure 4(b) for Bo64 and Figure 4(d) for Bo16 and above, ASPRM-M is slightly worse than Math-Shepherd. In subsequent experiments, such as those involving position generalization, ASPRM also performs well.
>## W2: Llama compared with Mistral
**R2:** Thank you for bringing this up. The experiments involving Llama and Mistral were simply presented in the same figure for space efficiency. There was no intention to directly compare the performance of these base models; had a direct comparison been our goal, we would not have conducted the experiments with the Mistral-based ASPRM-M model in this setup.
>## W3: Reward Model Related Work
**R3:** We appreciate the suggestion to add related work on reward models. We will revise the manuscript to include relevant references and discussions on prior work in this area (e.g., the Bradley-Terry model, GenRM, Pairwise reward model, etc.).
>## W4: RLHF and More Models
**R4:**
**RLHF Methods:**
While RLHF methods are often used to evaluate reward models, they are not strictly necessary for this area. Several influential works on (process) reward models have not included RLHF experiments but use BoN or test-time scaling to evaluate the model’s capability, yet they remain impactful in the field [1, 2]. Our focus was on proposing and testing a novel reasoning step segmentation method. Given the substantial computational resources required for RLHF experiments, we are unable to include them in this study. We kindly hope that the reviewer takes this into consideration.
**More Models:**
Regarding the inclusion of more model scales, we have conducted experiments on a wider range of model sizes and thresholds in [Table 1](https://anonymous.4open.science/r/PIC-0969/fig4.png) and [Table 2](https://anonymous.4open.science/r/PIC-0969/fig5.png), which we will incorporate into the updated manuscript. We observe that as the model size increases, the PRM's ability to make judgments remains effective.
[1] Let's Verify Step by Step
[2] Training Verifiers to Solve Math Word Problems
---
>## Q1: Our Contributions
We sincerely request that the reviewer reconsider the contributions of our work. Our primary contribution is the introduction of a novel reasoning step segmentation method, AdaptiveStep, which we validate in the PRM scenario. The PRM trained with AdaptiveStep can guide the task model in generation and reasoning at the token level, which previous PRMs have not been able to do in our experiments. Moreover, we have successfully extended PRMs to domains where step segmentation is difficult with rules (such as code generation). In addition, we use a single model for data construction, reducing construction cost by 30% in the math domain and 60% in the code generation domain compared to rule-based methods.
In the experimental setup, we have made every effort to maintain consistency with previous work, including model selection, PRM training methods, and evaluation techniques. This was done to establish a baseline for a fair comparison, which consequently limited the potential for introducing significant innovations in the training of the PRM or the testing of BoN/TVD.
---
>## Q2: Credit and License
Thank you for pointing this out. We use this dataset only for research purposes and will strictly regulate its usage context and license. We will do our best to align our open-source work with [LiveCodeBench](https://github.com/LiveCodeBench/LiveCodeBench), which collects data from LeetCode weekly contests.
---
>## General Rebuttal
We propose a novel reasoning step segmentation method and have evaluated it within the context of process reward models, supported by statistical analysis and extensive experiments. While RLHF methods are a reasonable evaluation approach, the heavy computational resources required for such experiments made it infeasible for us to incorporate them, so we use the lightweight BoN and TVD methods. As highlighted earlier, many impactful reward model studies have not employed RLHF experiments, yet they have been influential in the field.
---
**We greatly appreciate your thoughtful feedback and the time you have devoted to evaluating our work once again, and we hope that these clarifications will be satisfactory. We welcome any further constructive comments that can help improve the quality of our work.** | Summary: Simple yet effective method for segmenting reasoning traces into individual steps, resulting in modest but significant improvements on relevant math and coding benchmarks.
The paper proposes to segment reasoning traces according to next-token confidence levels, instead of e.g. heuristics like new lines.
The experiments show that this segmentation method outperforms prior work in PRM training and ultimately results in higher performance on Math and Coding datasets.
The analysis is limited to coding and math problems, which however constitute highly relevant domains.
Claims And Evidence: Claim: AdaptiveStep yields a SOTA PRM for math tasks.
Evidence: Evaluation on relevant benchmarks and for multiple models.
Methods And Evaluation Criteria: The method consists of cutting generations at tokens that have low probability, training a PRM on the data, and using the PRM to train a math/coding model. This is a straightforward procedure that follows relevant prior work.
Theoretical Claims: NA.
Experimental Designs Or Analyses: yes.
Supplementary Material: no.
Relation To Broader Scientific Literature: The paper fits well with recent literature on step-by-step reasoning for math and coding problems.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: Strengths:
Simple method and significant improvements on relevant benchmarks.
Paper is well written and problem setting is relevant.
Cross-domain generalisation results suggest robustness.
Weaknesses:
- Failure cases of proposed segmentation method are neither discussed nor empirically analysed. It would be nice if the authors could both provide examples of common failure cases, as well as a discussion of them
- The Adaptive step method is only described in words, it would be great to have an algorithm definition in the methods section.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer Comments
**Dear reviewer T2AZ:**
We sincerely appreciate your constructive feedback and the time you have devoted to reviewing our manuscript. We try to address your concerns point by point.
>## W1: Discussion or empirical analysis for failure cases of AdaptiveStep
**Response:** Thank you for highlighting this important aspect. Although explicitly classifying "failure cases" in the model's segmentation is challenging (what humans regard as a "failure" often aligns with the model's genuine points of confusion), we still identified several instances of suboptimal segmentation results.
We understand that failure cases can be categorized from two perspectives. The first perspective involves words that should remain intact but are erroneously split into separate parts. The second perspective concerns instances where the model fails to segment at positions prone to errors. We illustrate the first type of failure cases through the following examples, while the second type is demonstrated through a figure.
**Example 1:** "We can rewrite the **quad** / **ratic** as $y = 3(x^2 + 2x) + 9$"
**Example 2:** "**Sub** / **stituting** these values" — where "quadratic" and "substituting" are incorrectly segmented despite being complete words
**Example 3:** "**Natal** / **ie** would need to trade" — where "Natalie," a proper noun referenced in the question, is inappropriately segmented
We conduct a statistical analysis of the aforementioned type where words are split, and find that about 2% of the segmentations belong to this category. However, since this split is determined by the model itself, we retained these splits during the training process.
Additionally, at the very beginning of our work, we observed that approximately 3% of split points occur at the beginning of solutions, which we classify as erroneous segmentations; we removed these split positions because they indicate that the solutions have not yet started to be generated.
For the second perspective, due to limitations of the base model, certain complex questions exhibit little segmentation in their solutions, as illustrated in the [figure](https://anonymous.4open.science/r/PIC-0969/fig1.png). We believe that the problems in the lower-left corner (the 1.62% with low accuracy after 64 generations and segmentation numbers lower than 5) represent problems that exceed the model's capabilities and require additional supervisory information to assist.
In summary, aside from the examples that are beyond the model's capabilities to judge (which account for about 5% of the total questions), there are 5% of the splits that, from a human perspective, seem unreasonable. We have removed some of these (those at the beginning of the solutions) and retained others. We will explain this in more detail in the appendix. Thank you again for highlighting this issue.
>## W2: Algorithm definition
**Response:** Thank you for your suggestion. We will include the following algorithm in our next version:
## Algorithm: PRM Training from Confidence-Guided Rollouts
**Inputs**:
- `Q`: A sequence of questions
- `N`: Number of responses to generate per question
- `J`: Number of rollouts per split point
**Output**:
- A trained process reward model (PRM)
```text
for each question q in Q do
// Step 1: Generate N responses for the question
responses ← GENERATE_RESPONSES(q, N)
// Step 2: Compute confidence distribution for the responses by equation (1)
confidence_distribution ← COMPUTE_CONFIDENCE(responses)
// Step 3: Determine threshold and find split points
threshold ← COMPUTE_THRESHOLD(confidence_distribution)
split_points ← FIND_SPLIT_POINTS(confidence_distribution, threshold)
for each split in split_points do
labels ← empty list
// Step 4: Perform J rollouts for each split point
for i from 1 to J do
label ← PERFORM_ROLLOUT(split)
APPEND(labels, label)
end for
// Step 5: Aggregate rollout labels into a hard estimate by equation (2)
prm_label ← HARD_ESTIMATE(labels)
// Step 6: Train the PRM with the obtained labels by equation (3)
TRAIN_PRM(prm_label)
end for
end for
```
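The FIND_SPLIT_POINTS and COMPUTE_THRESHOLD steps above can also be sketched in runnable form. This is only an illustrative interpretation: the exact confidence statistic and the threshold-selection rule are assumptions, not the implementation from the paper.

```python
def compute_threshold(all_confidences, fraction=0.01):
    # Pick a threshold so that roughly `fraction` of all tokens (e.g. the
    # 1.0% setting discussed in R2) fall below it and become split points.
    ranked = sorted(all_confidences)
    k = max(1, int(len(ranked) * fraction))
    return ranked[k - 1]

def find_split_points(confidences, threshold):
    # Return indices of tokens whose next-token confidence is at or below
    # the threshold; these are candidate reasoning-step boundaries.
    return [i for i, c in enumerate(confidences) if c <= threshold]

# Toy example: two low-confidence tokens in a seven-token response.
confs = [0.99, 0.97, 0.31, 0.95, 0.88, 0.12, 0.93]
print(find_split_points(confs, threshold=0.5))  # -> [2, 5]
```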
---
**We welcome any additional feedback you may have regarding our manuscript or this response. We are committed to incorporating your suggestions to enhance the quality of our work. Thank you again for your valuable insights.**
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns. I maintain my score as I believe that the contribution is significant, but do not believe that it merits a "strong accept" rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time and for recognizing our contribution as “significant.” We truly appreciate your thoughtful feedback. We just want to clarify a small point in case there was any misunderstanding regarding the ICML scoring system: under the new criteria, a score of 3 corresponds to a “weak accept,” 4 to an “accept,” and 5 to a “strong accept.” Since you kindly mentioned that the work may not merit a “strong accept,” we were wondering if you might consider updating the score to a 4 (“accept”). Thank you again for your valuable comments. | null | null | null | null | null | null |
A Forget-and-Grow Strategy for Deep Reinforcement Learning Scaling in Continuous Control | Accept (poster) | Summary: Deep reinforcement learning suffers from primacy bias, a tendency to overfit early experiences stored in the replay buffer. This paper proposes Forget and Grow (FoG), a novel method with two new components: Experience Replay Decay (ER Decay) and Network Expansion. ER Decay gradually reduces the influence of early experiences, while Network Expansion dynamically adds new parameters during training. Comprehensive experiments on a variety of continuous control tasks show that FoG outperforms state-of-the-art methods.
Claims And Evidence: Yes. The claims are supported by comprehensive experiments and ablation studies. The results are convincing.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proofs are reasonable. I did not check them in detail.
Experimental Designs Or Analyses: Yes. The experiments are comprehensive and well-designed. The ablation study is also convincing.
Supplementary Material: I scanned the learning curves in the supplementary material.
Relation To Broader Scientific Literature: The paper is related to the rl in continuous control tasks. The key contribution is to alleviate the primacy bias in deep reinforcement learning. The method is related to the experience replay and network expansion. The paper is well-positioned in the literature.
Essential References Not Discussed: No. The paper is well-positioned in the literature.
Other Strengths And Weaknesses: Strengths
- The paper is well-written and well-organized. The method is simple and effective. The experiments are comprehensive and convincing.
- Figures are beautiful and mostly informative.
Weaknesses
- The ER Decay in Figure 1 is not very clear. It looks like ER Decay will sample a transition more times than Normal ER, making me confused why it is called 'Decay'.
Other Comments Or Suggestions: There are large space in page 1 and 2, which can be reorganized to make the paper more compact.
Questions For Authors: - ER decay gradually decreases the sampling weight of older transitions, how to understand Figure 4 that ER decay is lower than Normal ER at the beginning and then higher than Normal ER? What does the x-axis 'steps' mean, environment step? Does the curve represent the sample times of a fix transition?
- Any intuition for the design of the Network Expansion? Are there any alternative ways to expand the network?
- line 37, Network Expansion introduces new neurons to the model early in training. Any implementation details about how to decide 'early in training'?
- What is the final size of the network after training compared to other methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort you have dedicated to reviewing our work. We deeply appreciate your careful and thorough review. In the following, we seek to address each of your concerns.
___
**Q1:** *"The ER Decay in Figure 1 is not very clear. It looks like ER Decay will sample a transition more times than Normal ER, making me confused why it is called 'Decay'."*
**A:** The "sample times" in Figure 1 represent relative values, not absolute ones, so they should only be used for horizontal comparison. The term "Decay" actually refers to the fact that the probability of sampling transitions gradually decreases as training progresses. When a transition first enters the replay buffer, it is considered new and has a higher sampling probability. However, as training continues and the data ages, the probability of sampling that transition decreases over time.
___
**Q2:** *"There are large space in page 1 and 2, which can be reorganized to make the paper more compact."*
**A:** Thank you for the suggestion. We will make the necessary adjustments in the revised version to make the paper more compact.
___
**Q3:** *"ER decay gradually decreases the sampling weight of older transitions, how to understand Figure 4 that ER decay is lower than Normal ER at the beginning and then higher than Normal ER? What does the x-axis 'steps' mean, environment step? Does the curve represent the sample times of a fix transition?"*
**A:** The "steps" on the x-axis actually represent environment steps. The curve shows the total number of times that transitions collected at each environment step are sampled from the replay buffer during the entire training process. Since ER decay reduces the sampling weight of older data, the sample times for earlier transitions will be lower at first. However, as training continues, the sample times for later transitions will increase. This aligns with the goal of ER Decay: to ensure that transitions collected at different environment steps are sampled similarly, rather than favoring older transitions as in the case of normal ER. We will provide a clearer explanation of the axes in the paper.
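Since the response does not spell out the exact weighting formula, the following is only a minimal sketch of the idea, assuming an exponential decay of sampling weight with transition age; the decay rate and the geometric form are hypothetical choices for illustration, not the paper's exact scheme.

```python
import random

def decayed_weights(num_transitions, decay=0.999):
    # Transition 0 is the oldest. Its weight has decayed the most, so the
    # newest transitions are the most likely to be sampled.
    return [decay ** (num_transitions - 1 - i) for i in range(num_transitions)]

def sample_batch(buffer, batch_size, decay=0.999):
    weights = decayed_weights(len(buffer), decay)
    return random.choices(buffer, weights=weights, k=batch_size)

random.seed(0)
buffer = list(range(10_000))    # transition ids, 0 = oldest
batch = sample_batch(buffer, batch_size=256)
print(sum(batch) / len(batch))  # mean id lands near the newest end of the buffer
```

Under normal uniform ER, early transitions accumulate far more total sample counts because they sit in the buffer longest; the decayed weights counteract exactly that accumulation.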
___
**Q4:** *"Any intuition for the design of the Network Expansion? Are there any alternative ways to expand the network?"*
**A:** The intuition behind the design of the Network Expansion is based on the concept of infantile amnesia, as mentioned in the introduction. We believe that as data increases, the network's capacity must be expanded to enhance its representation power. In our early experiments, we also found that naive network expansion could improve the agent's sample efficiency in certain environments. There are several alternative ways to expand the network, such as increasing the network's width (e.g., PNN[1], DEN[2]) or using sparse networks and adjusting the network topology (e.g., Neuroplastic Expansion in Deep Reinforcement Learning[3]). However, increasing the depth of the network is the most straightforward and simple approach. When designing our method, we aimed for an easy implementation that could be applied to other algorithms besides OBAC to boost performance, rather than being limited to our specific backbone.
[1]: Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., ... & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.
[2]: Yoon, J., Yang, E., Lee, J., & Hwang, S. J. (2017). Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547.
[3]: Liu, J., Obando-Ceron, J., Courville, A., & Pan, L. (2024). Neuroplastic Expansion in Deep Reinforcement Learning. arXiv preprint arXiv:2410.07994.
___
**Q5:** *"line 37, Network Expansion introduces new neurons to the model early in training. Any implementation details about how to decide 'early in training'?"*
**A:** We expand our critic networks at the 50k-th and 250k-th iterations, as detailed in Appendix B.1. These points are considered early in the training process, especially given that the networks are updated up to 2 million times during training.
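As a minimal illustration of this schedule, the sketch below appends a new block at the 50k-th and 250k-th iterations. The block structure and the near-identity (zero) initialization are illustrative assumptions, not necessarily the paper's exact expansion scheme.

```python
EXPANSION_STEPS = {50_000, 250_000}  # expand early in training (Appendix B.1)

def make_block(width=512):
    # Placeholder for one residual block of two width x width linear layers.
    # Zero weights keep a residual block close to the identity at insertion,
    # so expansion does not abruptly change the critic's outputs (assumption).
    return {"w1": [[0.0] * width for _ in range(width)],
            "w2": [[0.0] * width for _ in range(width)]}

def maybe_expand(blocks, iteration, width=512):
    # Append a new block only at the scheduled iterations.
    if iteration in EXPANSION_STEPS:
        blocks.append(make_block(width))
    return blocks

blocks = [make_block(), make_block()]  # start with depth 2
for it in (50_000, 100_000, 250_000):
    blocks = maybe_expand(blocks, it)
print(len(blocks))  # -> 4: the critic grows from depth 2 to depth 4
```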
___
**Q6:** *"What is the final size of the network after training compared to other methods?"*
**A:** The largest network in FoG has a depth of 4, resulting in 9 linear layers, each of size 512 x 512 (calculated as 4 * 2 + 1 = 9). With 9 such networks in OBAC, the total parameter count is approximately 21 million. This is about four times the size of both BRO and SimBa. | Summary: The paper draws inspiration from the phenomenon of infantile amnesia in neuroscience, proposing a "forget and grow" mechanism to mitigate primacy bias in deep reinforcement learning. The authors identify limitations in existing reset mechanisms and introduce two novel strategies: Experience Replay Decay (ER Decay), which gradually reduces the influence of older experiences, and Network Expansion, which introduces new neurons early in training to facilitate adaptation. These methods are integrated into a new algorithm, Forget-and-Grow (FoG). Empirical results across multiple benchmarks demonstrate FoG's superiority over existing methods, establishing it as a competitive approach in sample-efficient deep reinforcement learning.
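The quoted figure can be sanity-checked with quick arithmetic, counting only the 512 x 512 weight matrices and ignoring biases:

```python
blocks = 4                             # final network depth after expansion
layers = blocks * 2 + 1                # 4 * 2 + 1 = 9 linear layers per network
width = 512
per_network = layers * width * width   # 2,359,296 weights per network
total = per_network * 9                # 9 such networks in OBAC
print(f"{total / 1e6:.1f}M")           # -> 21.2M, matching the ~21M figure
```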
Claims And Evidence: The empirical results demonstrate that FoG performs well, but the underlying reasons behind the effectiveness of the forget-and-grow mechanism remain insufficiently explored. While I am not requesting a full theoretical analysis, the evidence provided—particularly Figure 10—seems inadequate. A deeper investigation into the effects of network expansion would be valuable for the community. For example, measuring various weight and gradient norms or incorporating plasticity metrics (e.g., dormant neurons) could provide better insights into what happens when new neurons are introduced.
Regarding the forgetting mechanism, my interpretation is that ER Decay shifts learning dynamics toward on-policyness rather than purely off-policy. However, I am curious whether ER Decay is truly the key factor in achieving this effect. A natural baseline to consider would be a smaller replay buffer—does it lead to similar on-policyness effects and performance improvements? While I find the motivation for on-policyness convincing, as a practitioner, I would like to understand whether ER Decay is the best approach or if there are simpler alternatives with comparable benefits.
For the growth mechanism, the comparison with plasticity injection appears somewhat misleading. Plasticity injection was originally designed to diagnose plasticity loss, not to improve performance, so stating that it is "complex" seems unfair. Instead, this paper can be viewed as an extension of plasticity injection, applying it to intermediate layers rather than just a specific subset of the network. I recommend revising the framing of this comparison to reflect the relationship between these methods better. Additionally, after reviewing the appendix, it is unclear whether the authors expanded the network in depth or width—clarifying whether new layers were appended or if existing layers were widened would improve transparency.
Methods And Evaluation Criteria: The evaluation and comparisons in the paper are limited and require further depth.
A crucial missing comparison is with the Plasticity Injection paper. Given its relevance, a direct empirical and conceptual comparison would strengthen the evaluation. Additionally, there are multiple established methods for expanding neural networks, such as Progressive Neural Networks (PNN) [1], Dynamically Expandable Networks (DEN) [2], and Neuroplastic Expansion (NE) [3] in reinforcement learning. The current discussion does not sufficiently situate the proposed network expansion mechanism within this broader landscape. A more comprehensive analysis is necessary to highlight the novelty and advantages of the approach compared to these existing methods.
In particular, the paper should address how FoG's expansion strategy differs from or improves upon these prior works. Does it offer advantages in terms of computational efficiency, stability, or sample efficiency?
Additionally, the computational cost of increasing network depth needs further discussion. Since deeper networks require more sequential computation, they may introduce significant overhead. It is important to compare the computational cost of FoG against BRO, SimBa, and TD-MPC, as these methods are widely used baselines.
Theoretical Claims: I've checked the proof in the Appendix.
Experimental Designs Or Analyses: - The experimental design could be improved by exploring different network growth strategies, including existing methods (e.g., PNN, DEN, NE), and analyzing learning dynamics using plasticity metrics (e.g., dormant neurons, gradient norms).
- Additionally, the current training setup is complex, making it harder for the community to adopt. I suggest two possible improvements:
- Simplify the experimental framework: Instead of integrating forgetting and growing mechanisms with multiple training protocols, evaluating them within a simpler baseline (e.g., SimBa or TD-MPC2) would better isolate their contributions and improve clarity.
- Unify and streamline the training setup: The current reset protocol is overly detailed, using different reset lists for specific benchmarks and tasks (e.g., locomotion tasks in HumanoidBench vs. other environments). A more integrated and standardized reset mechanism, along with a consistent OBAC wait period (e.g., 250k iterations) across all experiments, would improve reproducibility and ease of adoption while maintaining strong performance.
Supplementary Material: I've checked all the contents.
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: It would be beneficial to incorporate relevant prior work on progressive neural network architectures in continual learning and reinforcement learning, ensuring a more comprehensive discussion.
- Progressive Neural Networks, Rusu et al, arXiv'16.
- Lifelong learning with dynamically expandable networks, Yoon et al, ICLR'18.
- PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning, Lee et al, NeurIPS'23.
- Mixtures of Experts Unlock Parameter Scaling for Deep RL, Johan et al, ICML’24.
- Neuroplastic Expansion in Deep Reinforcement Learning, Liu et al, ICLR'25.
- Towards General-Purpose Model-Free Reinforcement Learning, Fujimoto et al, ICLR’25.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: - In Section 3.1, the concept of expansion appears after the preliminary experiments, making it hard to follow. I suggest introducing expansion earlier for better readability.
- The OBAC backbone is not widely adopted in the community, so a more self-contained explanation would help readers unfamiliar with it.
- Figure resolution is too high, causing scrolling lag—reducing DPI would improve readability.
- In Related Work, SimBa should be described accurately—it promotes simplicity bias rather than alleviating it to enable network scaling.
I appreciate the idea and direction, but the paper needs more completeness. Strengthening connections to related work, providing a deeper analysis of growing effects, and offering a more thorough comparison with existing methods would improve its clarity and impact. I am open to increasing my score if these concerns are well addressed.
Questions For Authors: n/a
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort you have dedicated to reviewing our work. We deeply appreciate your careful and thorough review. In the following, we seek to address each of your concerns.
___
**Q1** *"the evidence provided—particularly Figure 10—seems inadequate"*
**A:** We tracked the ratio of activated neurons during training, and FoG does indeed increase the ratio of activated neurons. This indicates that the model effectively utilizes more of its capacity during training.
Training curves available here: https://anonymous.4open.science/r/ICML-dormant-E41C/README.md.
Moreover, we analyze representations by feeding replay buffer samples through the critic's last layer for t-SNE visualization, comparing fixed 2/4-block networks against expanded networks. Network expansion shows clearer clustering and better feature separation/structure.
Visualization results available here: https://anonymous.4open.science/r/ICML-t_SNE-D076/README.md.
___
**Q2:** *"A natural baseline to consider would be a smaller replay buffer—does it lead to similar on-policyness effects and performance improvements? "*
**A:** We appreciate the suggestion to compare ER Decay with a smaller replay buffer. We conducted experiments on SAC with only modifications of replay buffer on 3 tasks. The results show that SAC with ER Decay significantly outperforms those with a smaller buffer size ranging from 5e4 to 5e5.
While ER Decay introduces more "on-policy" characteristics, it also retains older transitions, preventing forgetting and improving final performance.
Training curves available here: https://anonymous.4open.science/r/ICML-buffer_size-5FB0/README.md.
___
**Q3:** *"the comparison with plasticity injection appears somewhat misleading"*
**A:** Your concern is valid—we acknowledge potential imprecision. By "complex," we referred to implementation complexity (parameter freezing, residual construction) versus FoG's direct parameter addition without tricks. We will clarify this distinction and refine descriptions in the revised paper.
___
**Q4:** *"The current discussion does not sufficiently situate the proposed network expansion mechanism within this broader landscape. "*
**A:** We have compared our network expansion mechanism with three related works:
1) **Neuroplastic Expansion in Deep Reinforcement Learning**
This paper presents a clever and effective approach that utilizes sparse networks and expands through network topology adjustments. Our method, better suited for high replay ratios and larger networks, is easier to implement. While we don't have their code, we observe that naively expanding network capacity boosts performance in Gym environments, especially HalfCheetah. However, this success doesn't easily extend to more complex environments.
2) **Progressive Neural Networks & Dynamically Expandable Networks**
Both methods are designed for multitask learning, expanding network width when new tasks are introduced, whereas our approach increases depth. Since the settings differ, direct performance comparison is not feasible, but we will discuss these methods in the related works section.
___
**Q5:** *"the computational cost of increasing network depth needs further discussion"*
**A:** FoG's computation is about 4× that of BRO. However, our goal is to explore the limits of effective network scaling. On humanoid tasks, FoG outperforms larger models, including SimBa (10-layer, ~42M params), TD-MPC2 (19M params), and BRO (10-layer, ~42M params), despite having at most 21M params and lower computational cost. We will add a discussion on computational cost in the paper.
Training curves available here: https://anonymous.4open.science/r/ICML-full_size-25B6/README.md.
___
**Q6:** *" the current training setup is complex"*
**A:** Both FoG's network structure and the OBAC backbone play significant roles in improving performance. We conducted an ablation study to test FoG without OBAC (training curves: https://anonymous.4open.science/r/ICML-structure-B1DF/README.md). While FoG with SAC still gives strong performance, using the OBAC backbone is helpful for both performance and parameter scaling.
Regarding the setup, we already use a consistent OBAC wait period across experiments. The plasticity loss differs for each task, and we haven't yet explored adaptive reset or other methods like partial resetting or noise injection. However, we’ve minimized the number of reset lists to simplify the setup.
___
**Q7:** *"It would be beneficial to incorporate relevant prior work" " I suggest introducing expansion earlier for better readability." "The OBAC backbone is not widely adopted in the community" "Figure resolution is too high" "In Related Work, SimBa should be described accurately"*
**A:** Thank you for the suggestions. We will add a dedicated section on network expansion in related works and make corresponding adjustments throughout the paper.
---
Rebuttal Comment 1.1:
Comment: I have updated my score from 2 to 3. The paper's core message is interesting and valid, which motivated the higher score. However, the writing still leaves significant room for improvement. If accepted, I hope the paper is restructured to clarify its contributions better and differentiate itself from related work.
On a personal note, I found the "grow" component particularly compelling. I would accept the paper even without the "forget" part. The paper would be stronger if it rigorously evaluated this component across a variety of architectures and algorithms, demonstrating its potential for integration into the RL community. | Summary: The paper addresses the challenge of Deep Reinforcement Learning (DRL) in continuous control tasks, where models suffer from primacy bias, overfitting to older memories in the replay buffer. Drawing inspiration from infantile amnesia in humans, the authors propose two modifications: (1) a decaying replay buffer to mitigate overfitting to older memories, and (2) an increase in network size to enhance continuous learning. These modifications are empirically validated across 41 control tasks, demonstrating their effectiveness in improving learning performance.
Claims And Evidence: - The authors claim that standard experience buffers cause models to overfit to older memories, hindering the learning of new information. This is supported by a critic loss heat map, which visualizes the loss in plasticity over time. However, a more comprehensive comparison across all 41 tasks would strengthen this claim. The analysis can be placed in the appendix.
- The claim that increasing the critic network size improves learning is supported by empirical ablation studies. However, the authors do not provide a representation analysis to explain how the expanded network adapts to new memories without interfering with older ones. For instance, does adding new neural blocks increase interference within the network and noise, hence supporting new learning? Such an analysis would offer deeper insights into the mechanism behind this improvement.
Methods And Evaluation Criteria: The critic loss heat map effectively illustrates the loss in plasticity as training progresses. However, additional metrics, such as tracking parameter changes across learning steps (e.g., Kumar et al. 2024, bioRxiv 2024.12.12.627755 Supp. Fig. 4), could provide a more robust evaluation of plasticity loss and the efficacy of the proposed strategies.
Theoretical Claims: The theoretical claims regarding oversampling older memories appear correct and are well-supported.
Experimental Designs Or Analyses: - An inconsistency arises in Figure 3, where the critic loss for Normal SAC is significantly higher than for other models, yet it still achieves decent task performance. This discrepancy warrants further explanation.
- The authors should analyze the learned representations and their evolution under the proposed modifications (decaying replay buffer and network expansion). Empirical results alone are insufficient to fully validate the mechanisms in improving performance.
Supplementary Material: Only the appendix was reviewed.
Relation To Broader Scientific Literature: The paper draws an interesting analogy between infantile amnesia in humans and primacy bias in DRL. However, it does not discuss recent advancements in noise-based regularization techniques, such as those proposed by Dohare et al. (2024, Nature) and Kumar et al. (2024, bioRxiv), which could offer additional insights into continual learning and flexibility in RL models.
Essential References Not Discussed: - Noise-based regularization techniques, such as reinitializing unused parameters with noise (Dohare et al. 2024, Nature), could enhance continual learning in RL models.
- The hypothesis that noise regularization allows networks to explore degenerate solution spaces and escape local minima (Kumar et al. 2024, bioRxiv) is relevant but not discussed.
Other Strengths And Weaknesses: - Strengths: The paper presents a novel approach to addressing primacy bias in DRL, with empirical validation across a wide range of tasks.
- Weaknesses: The number of random seeds used for experiments should be explicitly stated in all figure captions, and error bars should be included in bar graphs e.g. Fig. 1. Increasing the number of random seeds to 10, particularly for Figure 7, would improve the robustness of the results.
Other Comments Or Suggestions: - The title's use of "continuous control" may be misleading, as the framework could potentially apply to discrete control tasks as well.
- The term "scaling" in the title might be misinterpreted as referring to scaling laws in machine learning.
- The consistent dips in reward curves in Figure 8 require further explanation.
- Clarify the difference between Expanded SAC (Figure 3) and FoG.
Questions For Authors: - Can the authors perform a representation analysis to elucidate how block expansion facilitates learning without interfering with older memories?
- Could the authors discuss the potential benefits of injecting noise into network parameters, as suggested by recent literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort you have dedicated to reviewing our work. We deeply appreciate your careful and thorough review. In the following, we seek to address each of your concerns.
___
**Q1:** *"The authors should analyze the learned representations and their evolution under the proposed modifications (decaying replay buffer and network expansion).."*
**A:** We tracked the ratio of activated neurons during training, and FoG does indeed increase the ratio of activated neurons. This indicates that the model effectively utilizes more of its capacity during training.
For the training curve, please refer to the following link: https://anonymous.4open.science/r/ICML-dormant-E41C/README.md.
Due to time constraints, we cannot complete the comparison across all 41 tasks immediately, but we will conduct it in the future and add the results to the appendix.
___
**Q2:** *" The authors do not provide a representation analysis to explain how the expanded network adapts to new memories without interfering with older ones."*
**A:** We analyze representations by feeding replay buffer samples through the critic's last layer for t-SNE visualization, comparing fixed 2/4-block networks against expanded networks. Network expansion shows clearer clustering and better feature separation/structure.
For the visualization results, please refer to: https://anonymous.4open.science/r/ICML-t_SNE-D076/README.md.
Due to time constraints, we currently provide results for HalfCheetah-v4 at 200k steps, but will release more within a week.
___
**Q3:** *"An inconsistency arises in Figure 3, where the critic loss for Normal SAC is significantly higher than for other models, yet it still achieves decent task performance."*
**A:** We sincerely apologize for the color mislabeling of SAC and SAC with reset in the bottom-left plot of Figure 3. However, this does not affect the validity of our experimental results.
Even though SAC performs reasonably well, it still performs the worst among all variants, achieving less than half the performance of Expanded-SAC. Its decent early-stage performance stems from the effective use of initial data, but its reward growth stagnates later due to poor utilization of new data.
Additionally, SAC shows low loss for old data but significantly higher loss for new data, supporting our claim that standard experience buffers overfit to early memories, hindering new learning.
___
**Q4:** *"However, it does not discuss recent advancements in noise-based regularization techniques."*
**A:** These techniques are indeed effective in mitigating plasticity loss. However, they are orthogonal to the ER Decay and network expansion proposed in our work, and their combined effects remain an open question for future exploration. While FoG does not incorporate these techniques, we will still discuss them in the related works section to provide a more comprehensive perspective.
___
**Q5:** *"The number of random seeds used for experiments should be explicitly stated in all figure captions, and error bars should be included in bar graphs e.g. Fig. 1."*
**A:** We are currently running 16 tasks for the main result in Figure 7, with each task using 10 seeds. However, due to time constraints, we have not yet obtained the complete results. We will release the results within one week.
Currently we have finished with 4 tasks, please check https://drive.google.com/drive/folders/14u4Ag3MrD9GVl4pib5RneSo6n5WGeQO_ for training curves.
___
**Q6:** *"The title's use of "continuous control" may be misleading"*
**A:** Thank you for the suggestion. However, we have only evaluated FoG on continuous control benchmarks and have not yet explored its performance on discrete control tasks. We primarily followed BRO and SimBa, which also focus on continuous control.
___
**Q7:** *"The term "scaling" in the title might be misinterpreted"*
**A:** Thank you for raising this important point. We acknowledge that the term "scaling" in the title could indeed be conflated with scaling laws in machine learning (ML), and we will revise the title in subsequent versions to use clearer phrasing, such as "Scaling Up Parameters" or "Bigger Models," to explicitly distinguish our focus.
___
**Q8:** *"The consistent dips in reward curves in Figure 8 require further explanation"*
**A:** The dips in Figure 8 occur when the network is reset, requiring it to relearn from the replay buffer, causing a temporary drop before recovery.
___
**Q9:** *"Clarify the difference between Expanded SAC (Figure 3) and FoG."*
**A:** The difference between Expanded SAC and FoG lies primarily in the use of the OBAC backbone and some slight modifications to the network structure in FoG. We will provide a clearer and more detailed explanation of these differences in the paper and will also modify the names of these SAC variants to make them easier to understand. | Summary: This paper focuses on the problem of sample efficiency in deep reinforcement learning. The authors introduce the Forget and Grow (FoG) method, which relies on three ideas: 1) reducing the sampling probability of older samples, 2) expanding the network size, and 3) resetting the network with a certain frequency. The results suggest that FoG outperforms multiple performant methods, namely Simba, BroNet, and TDMPC2 in a number of environments.
Claims And Evidence: While some of the claims are supported with evidence, many are problematic. Here I focus on a few:
- The comparison against other methods may be unfair since FoG uses increased computation while the other does not. One approach to improve the empirical evaluation is to use other methods that use the maximum network size used by FoG.
- It’s not clear what defines old and new experience, so it is hard to just make claims about deemphasizing the probability of old data in favor of new data.
- The authors focus on the two mechanisms they propose: experiencing replay decay and growing the network. However, there is a missing mechanism which is reset. Resetting is part of FoG, but it is not emphasized as the other new mechanisms, which might be confusing. An experiment is needed to show how FoG performs without resets.
Methods And Evaluation Criteria: - The paper uses standard benchmarking tasks that are well-accepted to study the problem of sample efficiency in deep RL methods.
- However, the paper misses comparison against similar methods. For example, the paper “Neuroplastic Expansion in Deep Reinforcement Learning” by Jiashun Liu, Johan Obando-Ceron, Aaron Courville, and Ling Pan is based on growing the network along with some experience replay techniques.
- Additionally, it’s not clear that the evaluation is fair if FoG uses much more computation than other methods.
- Additionally, using the critic loss for comparison in Figure 3 and Figure 5 is problematic since the loss doesn’t necessarily represent any meaningful results. For example, a method that gives zero-value function prediction everywhere would achieve the maximum score based on the metric you’re proposing, which shows that it’s problematic.
- Finally, the experiments are conducted with only three independent runs (seeds), which is a very small number of runs. Many of the figures have overlapping confidence intervals, so their statistical significance may be compromised.
Theoretical Claims: No theoretical claims are presented in the paper apart from the ones used to motivate intuition.
Experimental Designs Or Analyses: Check the Methods And Evaluation Criteria section.
Supplementary Material: I didn’t check the supplementary material section.
Relation To Broader Scientific Literature: The paper considers an important research question about sample-efficient deep RL methods, which is of interest to many people in the research community.
Essential References Not Discussed: The paper “Neuroplastic Expansion in Deep Reinforcement Learning” by Jiashun Liu, Johan Obando-Ceron, Aaron Courville, and Ling Pan is based on growing the network along with some experience replay techniques.
Other Strengths And Weaknesses: The paper considers a fundamental problem in deep reinforcement learning. The idea seems exciting but evaluation is lacking rigor. Overall, I would like the paper to be published, but I think it’s not ready in its current format to be published in ICLR 2025, and thus my recommendation is to reject it.
Additionally, the parallel with infantile amnesia needs to be de-emphasized in the paper since it doesn’t match the natural phenomenon. The brain doesn’t only contain neurogenesis but also neural pruning, which breaks the parallel since FoG does not prune its network.
Other Comments Or Suggestions: N/A
Questions For Authors: - How does FoG perform without resetting?
- How does FoG perform to the Neuroplastic Expansion (NE) method?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort you have dedicated to reviewing our work. We deeply appreciate your careful and thorough review. In the following, we seek to address each of your concerns.
___
**Q1:** *"The comparison against other methods may be unfair since FoG uses increased computation while the other does not."*
**A:** Our experiments on the humanoid benchmark (h1-walk & h1-run) demonstrate that FoG outperforms competitive baselines even when compared to larger models. We tested SimBa (depth=10, ~42M params), TD-MPC2 (19M version), and BRO (depth=10, ~42M params)—all of which are larger than FoG (at most 21M params). Despite this, FoG achieves superior performance while using less computation in this setup.
Moreover, TD-MPC2 and BRO fail to improve over their default sizes, and only SimBa shows an improvement in h1-walk but not in h1-run, indicating that comparing FoG to these baselines at their default sizes is fair. This further highlights FoG’s ability to efficiently manage a larger number of parameters.
For additional insights, please refer to the training curve figure: https://anonymous.4open.science/r/ICML-full_size-25B6/README.md.
___
**Q2:** *"It’s not clear what defines old and new experience"*
**A:** The distinction between old and new experience is based on the time at which the data was collected and added to the replay buffer: the earlier the data was collected and stored, the older it is. ER Decay lets agents focus on the ~100k most recently collected transitions in the buffer. We will provide a clearer explanation in the paper.
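For intuition, the kind of recency-weighted replay sampling described above could be sketched as follows. This is a minimal illustrative sketch, not FoG's actual ER Decay schedule: the function name and the `window` and `decay` parameters are placeholders we introduce here.

```python
import random

def recency_weighted_sample(buffer_len, window=100_000, decay=0.5, k=4):
    """Sample `k` transition indices, down-weighting older ones.

    Hedged sketch: the newest `window` transitions keep full sampling
    weight, while everything older is down-weighted by `decay`.
    """
    cutoff = max(0, buffer_len - window)  # boundary between "old" and "new"
    weights = [decay if i < cutoff else 1.0 for i in range(buffer_len)]
    return random.choices(range(buffer_len), weights=weights, k=k)

# With decay=0.0, sampling collapses entirely onto the newest window,
# mimicking a small buffer; intermediate decay values retain old data.
idx = recency_weighted_sample(buffer_len=20, window=10, decay=0.0, k=8)
assert all(i >= 10 for i in idx)
```

A two-level weight is the simplest possible schedule; the key design point the rebuttal makes is that, unlike shrinking the buffer, decayed weights still retain old transitions at nonzero probability, which helps prevent forgetting.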
___
**Q3:** *" An experiment is needed to show how FoG performs without resets."*
**A:** In the scenario without resets, FoG still outperforms SimBa and BRO with the maximum network size (~42M) on humanoid-run, humanoid-walk, and HalfCheetah-v4, while using less computation. However, removing resets hinders FoG’s performance, showing a clear drop compared to our original version. The reset mechanism was introduced to increase the replay ratio, allowing better utilization of data collected, and hence maximizing data efficiency.
For details, refer to the training curve figure: https://anonymous.4open.science/r/ICML-no_resets-76A1/README.md.
___
**Q4:** *"the paper misses comparison against similar methods."*
**A:** We currently do not have access to the code from the referenced paper. However, as demonstrated in their work, we also observe that naively expanding network capacity significantly improves performance in many Gym environments, especially HalfCheetah. Yet, this success does not easily extend to more challenging environments like DMC, MetaWorld, or the humanoid benchmark. While increasing the replay ratio enhances sample efficiency, this approach alone is not sufficient to prevent early convergence.
___
**Q5:** *"using the critic loss for comparison in Figure 3 and Figure 5 is problematic"*
**A:** Your point is valid: relying solely on critic loss is insufficient. However, SAC's critic loss in this experiment clearly exceeds reasonable thresholds, and such abnormally large errors can destabilize actor gradients and impede training.
To further substantiate our claim, we tracked the ratio of activated neurons [1]. FoG does indeed increase the ratio of activated neurons, indicating that the model effectively utilizes more of its capacity during training and achieves better learning efficiency [2].
For the dormant ratio curves, please refer to the following link: https://anonymous.4open.science/r/ICML-dormant-E41C/README.md.
[1]:Liu, J., Obando-Ceron, J., Courville, A., & Pan, L. (2024). Neuroplastic Expansion in Deep Reinforcement Learning. arXiv preprint arXiv:2410.07994.
[2]:Xu, G., Zheng, R., Liang, Y., Wang, X., Yuan, Z., Ji, T., ... & Xu, H. (2023). Drm: Mastering visual reinforcement learning through dormant ratio minimization. arXiv preprint arXiv:2310.19668.
___
**Q6:** *"the experiments are conducted with only three independent runs (seeds), which is a very small number of runs. "*
**A:** We are currently running 16 tasks for the main result in Figure 7, with each task using 10 seeds. However, due to time constraints, we have not yet obtained the results. We will release the results within one week.
Currently we have finished with 4 tasks; please check https://drive.google.com/drive/folders/14u4Ag3MrD9GVl4pib5RneSo6n5WGeQO_ for training curves.
___
**Q7:** *"the parallel with infantile amnesia needs to be de-emphasized in the paper"*
**A:** Your point is valid. FoG lacks neural pruning, unlike the brain's dual process. However, the analogy with infantile amnesia stemmed from its link to neurogenesis-disrupted memory connections, mirroring network expansion (supported by our neuron activation experiments). We will de-emphasize this parallel in revisions and clarify to prevent misinterpretation.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. I think some concerns have been addressed, but the majority of them have not been addressed. I will keep my score since 1) There is still no comparison with similar methods. I think it's feasible to implement the baseline method following the pseudocode in the paper even if their code is not available, and 2) Relying on critic loss for evaluation. The authors didn't convince me why it is a viable metric. In fact, the authors agreed. However, they didn't offer an alternative way to fix it such as modifying the experiment or removing it altogether.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We'd like to address the two main concerns raised:
1. **Comparison with Similar Methods**
We have now included a comparison between FoG and the Neuroplastic Expansion (NE) algorithm using its official implementation. The evaluation spans four tasks—HalfCheetah-v4, dog-run, dog-walk, and humanoid-walk—with both the original NE model and an expanded-capacity variant. Results show that FoG consistently outperforms NE even at a relatively low update-to-data ratio, while increasing NE's model size brings only marginal improvement. This highlights FoG's superior scalability and adaptability.
For details, please refer to the training curve figure: https://anonymous.4open.science/r/ICML-NE_compare-D403/
2. **Use of Critic Loss for Comparison**
While we do acknowledge that **critic loss alone** is insufficient to measure plasticity loss, **critic loss is still a very important metric for measuring the stability of training and the critics' plasticity**. If a low and uniform critic loss curve is not enough to demonstrate the plasticity of critics, at least we can be sure that an extremely large critic loss is a definite sign of loss of plasticity in training, as a large loss shows that critic networks can no longer fit transitions in the buffer. That is exactly the case in our original experiments: SAC's critic loss **obviously exceeds reasonable thresholds** and hence impedes training, while FoG's critic loss is uniform and within a reasonable range.
Additionally, as mentioned earlier, we report the ratio of activated neurons of critics in the same tasks to support our viewpoint, following prior work [1, 2]. FoG significantly increases neuron utilization during training, indicating enhanced learning efficiency beyond just loss metrics. As the ratio of activated neurons is a commonly used metric in similar methods, it's widely acknowledged that a lower ratio of dormant neurons indicates better plasticity and learning capacity.
For more details, please refer to the dormant ratio curves: https://anonymous.4open.science/r/ICML-dormant-E41C/README.md.
**We strongly believe that critic loss accompanied by the ratio of activated neurons can effectively support our claim** and we are happy to answer any concerns or doubts of yours.
**Thank you for raising your score! If you have any more concerns or doubts, we are very willing to discuss!**
[1] Liu et al. (2024). Neuroplastic Expansion in Deep Reinforcement Learning.
[2] Xu et al. (2023). DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization.
LLM-Assisted Semantically Diverse Teammate Generation for Efficient Multi-agent Coordination | Accept (poster) | Summary: This work proposes a novel algorithm called “SemDiv” which uses LLMs to generate semantically diverse behavior in MARL. Specifically, SemDiv uses an LLM to (1) generate a language description of a plausible, novel coordination behavior, (2) generate a reward function that incentivizes that coordination behavior, and (3) validate that trained agents actually follow that behavior. SemDiv also generates a common “coordinating agent” that can coordinate with unseen partners given a language description of their convention. Experiments in multiple standard MARL environments validate the performance of SemDiv agents when paired with unseen partners.
## Update After Rebuttal
The author provided satisfactory answers to my questions, so I am comfortable keeping my review as a strong accept.
Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation make sense for the problem at hand (though I have specific questions about alternative approaches in the “questions for authors” section of my review)
Theoretical Claims: N/A
Experimental Designs Or Analyses: For completeness, it would be good to show the performance of FCP, MEP, and LIPO with the multi-head network architecture (i.e. keep stage 1 the same but replace stage 2 with the technique described in section 3.3)
Supplementary Material: I’ve broadly skimmed the appendix, closely reading sections B, C, and D.2
Relation To Broader Scientific Literature: The key contribution of this work is the pipeline of (1) generating a language description of a coordination behavior, (2) converting this description into a policy, and (3) iterating to ensure novelty and diversity.
- Other works have addressed the middle stage of the pipeline (“InstructRL” referenced in the next section of the review). However, the pipeline as a whole is still novel.
Essential References Not Discussed: - The problem formulation seems to follow the N-AHT formulation from "N-Agent Ad Hoc Teamwork" in NeurIPS 2024
- “Language Instructed Reinforcement Learning for Human-AI Coordination” in ICML 2023 provides an alternative way of regularizing a policy given a language instruction (InstructRL)
- “Adversarial Diversity in Hanabi” in ICLR 2023 and “Diverse Conventions for Human-AI Collaboration” in NeurIPS 2023 are additional techniques for cross-play based diversity (ADVERSITY and CoMeDi)
Other Strengths And Weaknesses: I consider this to be a truly revolutionary paper for the MARL field. The presented pipeline for generating a diverse set of teammates is broadly applicable to many multi-agent settings and enables many real-world applications for more personalized AI co-agents.
Weaknesses:
- There aren't any human user studies. It is currently a bit unclear how directly the presented results transfer when working with humans.
- This paper also lacks ablations over other potential options for generating single teammates given prompts, namely InstructRL.
- Due to the dependence on LLMs, it is unclear whether the presented technique can learn strategies in complex games that require deeper understanding of game mechanics before planning coordination strategies.
Other Comments Or Suggestions: - It was initially unclear to me whether section 3.3 was presented as a novel contribution of this work or identical to prior work. I think this needs to be clarified in the text (i.e. rephrase the first sentence of the second paragraph in 3.3), since I was initially wondering if this is different from what Macop does when reading the results section.
- The “best results” in tables should also include results where the confidence intervals overlap. For instance, LLM-Agent seems to be within the margin of error for many of the results.
- “Incorporate” -> “incorporating” on second column of line 320
Questions For Authors: - Why is a regularization term used to constrain updates in section 3.3 instead of using a more standard multi-task RL setup? I.e. first generate the diverse training population and then generate the coordinating agent by randomly sampling partners
- Furthermore, why isn’t behavior cloning used, considering that the complementary policy already exists from the training regime? The complementary policy should already be a strong coordinator, so conducting RL again from scratch seems unnecessary; a QDagger-like approach such as the one from CoMeDi may be helpful.
- How are the testing teammates generated given the descriptions? Specifically, does it follow the same pipeline as your technique (skipping section 3.1 only), is it a scripted agent, or is it using a hand-designed reward function?
- How are R2 values calculated exactly? The definition of “satisfying teammates’ preferred coordination behaviors” is vague.
Code Of Conduct: Affirmed.
Overall Recommendation: 5

Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition and the valuable comments! Extra experimental results can be found in this [link](https://telling-floor-898.notion.site/1c7c2fed721a80b9ba7ef7fa2b3bffed).
**Q1: The performance of population-based baselines with the multi-head network architecture.**
A: We train the agents using a multi-head architecture (denoted as {}-mh) with the generated teammates. For testing with unseen teammates, we evaluate the baselines by selecting the best-performing head among all learned heads (the same as the -R1 variants in the paper), which requires extensive interactions. As shown in Table 11 in the link, the multi-head architecture significantly improves performance in LBF but yields no gains in the more complex SMACv2 environment. In both cases, a substantial performance gap remains between the baselines and SemDiv, highlighting the necessity of semantic-level diverse teammate generation.
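The best-head selection used for the -R1 variants (evaluating every learned head against the unseen teammate and keeping the best) can be sketched minimally as follows; `evaluate` is a hypothetical rollout helper, not part of any released code:

```python
def select_best_head(policy_heads, evaluate, n_episodes=32):
    # Roll out each learned head against the unseen teammate and
    # keep the one with the highest mean episode return. This is
    # the interaction-heavy step the -R1 variants rely on.
    returns = {head: evaluate(head, n_episodes) for head in policy_heads}
    best = max(returns, key=returns.get)
    return best, returns[best]
```

This makes explicit why the procedure "requires extensive interactions": the rollout cost scales linearly with the number of heads.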
**Q2: Discuss essential references.**
A: Thanks for pointing out this essential related work. We will add the discussion to the paper.
- The N-AHT framework [1] does align with our problem formulation, and we extend it by introducing natural language descriptions about testing teammates, making it more suitable for multi-agent systems with communication and intention sharing.
- InstructRL [2] is a notable work that leverages LLMs to guide RL by using their decisions to regularize policy learning. This approach offers an alternative to reward-generation methods like those discussed in our paper. However, InstructRL requires querying the LLM at every step, which is computationally expensive and limits its applicability to relatively simple tasks. Due to these constraints, we did not include it as a baseline in our study.
- ADVERSITY [3] and CoMeDi [4] propose novel techniques for cross-play based diversity, which can be categorized into the “policy-level” methods in our paper and introduced as strong baselines.
**Q3: Human user studies and more complex games.**
A: Both directions present valuable opportunities, particularly in bridging the gap between simulated coordination and human-AI teamwork, as well as extending the framework to richer strategic environments. We plan to investigate these challenges in future work.
**Q4: Design choices for continual learning.**
A: We adopt a continual learning paradigm instead of a two-stage framework (like FCP or MEP) because it allows us to leverage previously generated grounded behaviors as positive examples, guiding the LLMs toward progressively better behavior generation. In contrast, a two-stage approach, where behaviors are generated first and then used to train the teammate population, isolates the learning process of different teammates and may reduce diversity.
**Q5: Design choices for agents’ RL instead of BC.**
A: While behavior cloning (BC) from the complementary policy is a viable approach, we choose RL for two key reasons: 1. Surpassing complementary policy performance: BC merely imitates the complementary policy, capping performance at the quality of the training data. In contrast, RL enables exploration and optimization beyond demonstrations, potentially discovering superior coordination strategies. 2. Mitigating distribution shift: BC may struggle with OOD teammates due to its inability to recover from unfamiliar situations. Since our focus is on generalizing to unseen teammates, BC’s reliance on static datasets may lead to failure.
We apologize for unclear expressions and typos, and will correct them in the paper.
- Relationship between Section 3.3 and Macop: For training, we remove the head merging technique of Macop, since the novelty of new teammates is confirmed in the policy verification process. For testing, we utilize language-based reasoning to select optimal heads, avoiding Macop's need for trial-and-error interactions and improving efficiency. We will rephrase the first sentence of the second paragraph in Section 3.3: To address these challenges, SemDiv adopts a multi-head network architecture \cite{owl,macpro} similar to Macop \cite{macop} and empowers the agents with continual learning ability.
- We report the 95% confidence intervals (CI) of SemDiv and the next best performing method (Macop-R1) in the link.
- Testing teammate generation: We train these teammates using hand-designed reward functions and manually verify that they exhibit the desired behaviors. These reward functions are simple and sparse (returning either 1 or 0), allowing for direct derivation of the R2 values.
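A minimal sketch of how such R2 values could be derived from the sparse rewards, assuming R2 is the fraction of test episodes in which the teammate's preferred behavior occurs; `behavior_reward` stands in for a hypothetical hand-designed 0/1 reward function:

```python
def r2_value(episodes, behavior_reward):
    # The hand-designed reward returns 1 if an episode satisfies the
    # teammate's preferred coordination behavior, else 0, so R2 is
    # simply the hit rate over the test episodes.
    hits = sum(behavior_reward(ep) for ep in episodes)
    return hits / len(episodes)
```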
References
[1] Wang et al. N-Agent Ad Hoc Teamwork. NeurIPS 2024.
[2] Hu et al. Language Instructed Reinforcement Learning for Human-AI Coordination. ICML 2023.
[3] Cui et al. Adversarial Diversity in Hanabi. ICLR 2023.
[4] Sarkar et al. Diverse Conventions for Human-AI Collaboration. NeurIPS 2023. | Summary: The paper introduces SEMDIV, a novel framework that uses LLMs to generate semantically diverse teammates for efficient multi-agent coordination. Unlike traditional methods that focus on policy-level diversity, SEMDIV iteratively generates natural language descriptions of coordination behaviors, translates them into reward functions, and trains teammates to embody these behaviors. This approach allows agents to learn and adapt through continual learning with a multi-head architecture, selecting the most suitable policy through language-based reasoning. Experiments across four environments show SEMDIV’s superior performance in coordinating with unseen teammates.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, this work focuses on the multi-agent domain with diverse teammates, and the proposed method and evaluation criteria are well-suited to this context.
Theoretical Claims: There is no theoretical claim in this submission.
Experimental Designs Or Analyses: This work conducts experiments in four multi-agent scenarios. However, several issues remain:
- The number of agents in these scenarios is very limited — only two — which raises concerns about the scalability of the proposed method. It would be more comprehensive to evaluate additional tasks in SMACv2 and GRF with a greater number of agents.
- All selected scenarios involve a discrete action space; it would be beneficial to assess performance in tasks with a continuous action space, such as MAMuJoCo.
- The diversity of testing teammates appears to be limited. Including a wider variety of teammates, such as lazy agents, would provide a more thorough evaluation.
Supplementary Material: I have thoroughly reviewed the supplementary material, including the additional related works, implementation details, and other relevant information.
Relation To Broader Scientific Literature: The work integrates LLMs into the multi-agent RL, extending research on ad-hoc teamwork and zero-shot coordination. It thoughtfully builds on existing diversity-inducing methods while addressing their limitations on semantic information.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths
--
1. The paper is well-structured and the idea is well-motivated.
2. The implementation details and prompt designs are clearly and thoroughly described.
3. The experimental results are compelling and thoroughly explained.
Weaknesses
--
1. The proposed method heavily relies on several assumptions about the environments and testing setups, including (i) accessible attributes and APIs of the environments, (ii) language descriptions of episodes, and (iii) language descriptions of teammates' behavior during execution, which may not hold in other scenarios.
2. I find Section 3.2 a little bit confusing: (i) What does $\tilde{\pi}_m^{tm}$ refer to during teammate policy training? (ii) Why are both $\lambda$ configurations necessary in Equation (2)?
3. There are concerns regarding the experimental design, as noted above.
4. The discussion of limitations is insufficient.
Other Comments Or Suggestions: 1. I am curious about the scaling effect of SEMDIV with a larger number of teammates.
2. I am unclear about the meaning of SEMDIV-R1/R2. Does it refer to evaluating each head and selecting the best one?
3. minor: It would be helpful to add detailed explanations to the figures, such as Figures 1 and 2.
4. Typos: Second paragraph of Section 4.1 — "In each environment, we train **five** teammates exhibiting distinct and representative coordination behaviors." Should this be six teammates?
Questions For Authors: How does SEMDIV handle contradictory behaviors across teammates? Could conflicting policies cause catastrophic forgetting despite the regularization term?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable comments! Extra experimental results can be found in this [link](https://telling-floor-898.notion.site/1c7c2fed721a80b9ba7ef7fa2b3bffed).
**Q1: Experiments with more agents and with continuous action space.**
A: SemDiv is agnostic to team size and action space, and it still achieves the best performance compared with the baselines, indicating its scalability and potential to solve more complex tasks. The results are in Table 8 in the link.
First, we extend the original LBF task to a 3-agent setting, where it takes 3 agents to collect a food item at the same step. During testing, one unseen agent joins the team. Training details (MARL methods, prompts for LLMs, etc.) and the behaviors of testing teammates are similar to those in the 2-agent experiments.
Next, we include the Spread task based on the HARL codebase [1], in which 3 agents with a continuous action space need to approach three different landmarks. For training, we select MAPPO, as in GRF. For the LLM prompts, we slightly modify the ones used in PP, since both are MPE tasks. During testing, 6 unseen teammates, trained in teams with different approaching strategies (different agent-landmark pairs), join the team.
**Q2: Testing with lazy agents.**
A: We build lazy agents that take the noop action or move back and forth in the initial position, in all four environments used in the paper. We set their descriptions as “I am a lazy agent and I will do nothing” when testing SemDiv. The results are shown in Table 9.
- LBF & SMACv2: All methods fail as ≥2 agents are needed for food/enemy tasks.
- PP: Baselines match SemDiv by ignoring lazy teammates (reward for catching a rabbit = 0.5).
- GRF: SemDiv excels by learning attack strategies without passing and selecting these policy heads.
**Q3: Discuss several assumptions and limitations.**
A: (1) Environment APIs, and language descriptions of episodes. Both assumptions are important considerations, which are well-established in prior LLM-assisted RL research [2,3,4]. However, we acknowledge that relaxing these assumptions is valuable future work. (2) Language descriptions of teammates' behavior during execution. Please refer to Q1 for **reviewer EFqS**. (3) Dependence on LLMs. Like prior work, our method relies on LLMs, which may hallucinate. Potential solutions include using more advanced models or human verification. (4) Generalization scope. Our evaluation focuses on closed games with clear rewards. Extending to real-world tasks (e.g., embodied AI) introduces challenges like real-time perception and physical constraints. We will discuss these limitations explicitly in the paper.
**Q4: How does SemDiv handle contradictory behaviors across teammates?**
A: SemDiv uses a multi-head architecture (Sec. 3.3) to mitigate catastrophic forgetting in multi-agent settings [5,6]. Testing confirms the final agents successfully coordinate with all 6 learned teammates, including those with contradictory behaviors (Table 10).
We apologize for these unclear expressions, and will improve them in the paper.
- Complementary policy: we aim to train a teammate policy $\pi^{tm}$ (e.g., player A in a two-player game) to form a team and train with agent $\pi^{ag}$ (e.g., the other player B). To achieve this, $\pi^{tm}$ must first learn coordination by training alongside a complementary policy $\tilde{\pi}^{tm}$ (which also controls Player B) using MARL algorithms. This training phase with $\tilde{\pi}^{tm}$ ensures that $\pi^{tm}$ develops specific coordination behaviors before interacting with $\pi^{ag}$.
- $\lambda$ configurations in Eq. 2: The equation ensures the new teammate $\pi^{tm}_m$ differs from previous ones $\pi^{tm}_j$. When $\lambda_1 = 1, \lambda_2 = 0$, it verifies that agents trained with $\pi^{tm}_j$ cannot achieve comparable original task rewards. When $\lambda_1 = 0, \lambda_2 = 1$, it checks the same for shaped rewards. Together, these configurations strictly guarantee the new teammate's distinctiveness.
- Explanations to the figures and SemDiv-R1/R2: Please refer to the reply for **reviewer XbFR**.
- Second paragraph of Section 4.1: The *five* teammates mentioned here are the ones for testing different methods, rather than the *six* teammates generated during training. We again apologize for the unclear expressions.
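The two λ configurations of Eq. 2 discussed above amount to a distinctiveness check, which can be sketched as follows (a minimal sketch; `eval_return` is a hypothetical rollout helper returning the λ-weighted mixture of task and shaped returns):

```python
def is_distinct(new_teammate, prev_agents, eval_return, threshold,
                lambdas=((1, 0), (0, 1))):
    # The new teammate is accepted only if no previously trained agent
    # coordinates well with it under EITHER reward mixture:
    # (l1, l2) = (1, 0) checks the original task reward,
    # (l1, l2) = (0, 1) checks the shaped reward.
    for l1, l2 in lambdas:
        for agent in prev_agents:
            if eval_return(agent, new_teammate, l1, l2) >= threshold:
                return False  # an old agent already handles this behavior
    return True
```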
References
[1] Liu et al. Maximum entropy heterogeneous-agent reinforcement learning. ICLR 2024.
[2] Xie et al. Text2Reward: Reward Shaping with Language Models for Reinforcement Learning. ICLR 2024.
[3] Ma et al. Eureka: Human-Level Reward Design via Coding Large Language Models. ICLR 2024.
[4] Ma et al. DrEureka: Language Model Guided Sim-To-Real Transfer. RSS 2024.
[5] Yuan et al. Multi-agent Continual Coordination via Progressive Task Contextualization. TNNLS 2024.
[6] Yuan et al. Learning to Coordinate with Anyone. DAI 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their response and the additional experiments. It is good to see the improvements made to the manuscript.
However, my concerns regarding the applicability and scalability of the proposed method persist. As the authors themselves acknowledge, the method is subject to significant constraints—such as reliance on specific APIs and language descriptions—and I do not see a clear path for overcoming these limitations. Its scalability with respect to the number of agents also appears limited, demonstrated only up to three agents. Therefore, I will maintain my score until substantial improvements are made.
**Update**: Regarding the first concern, I understand that it may not be possible to address it through experiments currently. Nevertheless, I believe it is necessary to provide a detailed and convincing discussion of the rationale behind the currently constrained setup.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's feedback and the opportunity to address the concerns regarding the applicability and scalability of our method. Below, we provide a detailed response to the raised issues, along with additional experimental results to support our claims.
### Applicability: Reliance on Specific APIs and Language Descriptions
The reviewer raises valid concerns about dependencies on environment APIs and language descriptions. We argue these are not fundamental limitations for the following reasons:
- **Environment APIs**: Our method uses APIs only to access basic information (e.g., collected food in LBF or defeated enemies in SMACv2). This aligns with standard practices in LLM-assisted RL [1], where APIs enable reward generation [2,3,4], embodied decision making [5,6,7], etc. Even without explicit APIs, lightweight functions can extract equivalent data from state representations (e.g., tracking food collections through certain dimensions) to build prompts for SemDiv.
- **Language Descriptions**: The head selection module requires minimal natural language (under 10 words in our experiments). This module serves as an interpretable alternative to interaction-based teammate modeling [8,9,10] and is not central to the framework’s core mechanics. Its simplicity also facilitates future integration with human partners.
### Scalability: Number of Agents
SemDiv is agnostic to team sizes. To further empirically validate scalability, we extend the SMACv2 task into a **10vs10** scenario, where SemDiv still outperforms baselines (see table below). In this scenario, our team with 10 marines needs to fight 10 enemies split into two groups. During testing, five unseen allies from another team will form a new team with five of our agents. The prompts are slightly modified based on the original two-agent version. For efficiency during rebuttal, we reduced the number of training teammate groups from 6 to 3 for all methods. These additional experiments validate the scalability of SemDiv. For even larger systems, techniques like mean-field RL [11] or hierarchical grouping [12] could be integrated to further enhance efficiency in future work.
Table 1. Number of killed enemies that the testing teammates prefer to kill (mean ± std).
| Testing teammates | FCP | MEP | SemDiv |
| --- | --- | --- | --- |
| Attack group 1 | 3.72 ± 0.04 | 3.8 ± 0.05 | 4.45 ± 0.10 |
| Attack group 2 | 3.88 ± 0.07 | 3.98 ± 0.08 | 4.41 ± 0.08 |
| Avg | 3.80 | 3.89 | 4.43 |
We hope these responses alleviate the reviewer’s concerns. We are committed to addressing limitations transparently and believe the additional experiments will strengthen the paper’s contributions. If you have any further questions or suggestions, we would be more than happy to address them. We again truly appreciate your thoughtful feedback and consideration.
References:
[1] Cao et al. Survey on Large Language Model-Enhanced Reinforcement Learning: Concept, Taxonomy, and Methods. TNNLS 2024.
[2] Xie et al. Text2Reward: Reward Shaping with Language Models for Reinforcement Learning. ICLR 2024.
[3] Ma et al. Eureka: Human-Level Reward Design via Coding Large Language Models. ICLR 2024.
[4] Ma et al. DrEureka: Language Model Guided Sim-To-Real Transfer. RSS 2024.
[5] Zhang et al. Building Cooperative Embodied Agents Modularly with Large Language Models. ICLR 2024.
[6] Du et al. Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge. NeurIPS 2024.
[7] Chang et al. PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks. ICLR 2025.
[8] Zhang et al. Fast Teammate Adaptation in the Presence of Sudden Policy Change. UAI 2023.
[9] Yuan et al. Learning to Coordinate with Anyone. DAI 2023.
[10] Ma et al. Fast Peer Adaptation with Context-aware Exploration. ICML 2024.
[11] Yang et al. Mean Field Multi-Agent Reinforcement Learning. ICML 2018.
[12] Christianos et al. Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing. ICML 2021. | Summary: This paper proposes a teammate generation method called SemDiv, which uses LLMs to learn diverse coordination behaviors a the “semantic” level. SemDiv generates novel teammates by iterating the following steps: (1) generating natural language description of a novel coordination behavior, (2) translating it into a shaping reward function for training a teammate policy, (3) verifying whether training using such a reward function generates a meaningfully different policy, (4) if verified, train a new policy head to coordinate with the newly generated teammate, using a continual learning objective to avoid forgetting how to coordinate with previously seen teammates. At test time with an unknown teammate and a behavior description of that teammate, SemDiv uses an LLM to select the best response policy head.
The method is evaluated on 4 MARL environments: LBF, Predator-Prey, SMAC-v2, and Google Research Football (GRF), with 5 unseen teammates per task. Empirical results show that SemDiv outperforms various teammate generation baselines and LLM baselines in terms of generalization.
Claims And Evidence: Most claims are somewhat supported by evidence. There are some claims that are misleading or insufficiently supported, which I have pointed out below.
- “First, the exploration of the teammate policy space is inefficient, as teammates are driven to optimize for differences at the policy-level rather than actively discovering novel coordination behaviors at the semantic level.” ([pdf](zotero://open-pdf/library/items/B38DLBJN?page=1))
- The authors do not provide any theoretical justification or empirical demonstration of why optimizing policy level differences is bad. It’s not clear what the difference between “policy level” vs “semantic level” is.
- A key claim of this paper is that LLMs are better able to generate diverse teammates than non-LLM teammate generation baselines
- I don’t think the paper presents enough evidence for this claim.
- The case study figure comparing the strategies generated by SemDiv and FCP (Fig. 3c) is good, but not totally convincing, because (1) FCP is one of the weakest teammate generation baselines, and (2) t-SNE can produce different results when run with different seeds. The evidence would be much stronger if the authors did two things: first, re-generate Fig. 3c for the next-strongest non-SemDiv baseline for all tasks, and second, provide cross-play matrices for the populations generated by all teammate generation methods.
- Another key claim of this paper is that the proposed LLM-based teammate generation process generalizes better to unseen teammates, compared to non-LLM based methods.
- Evidence is provided in the form of performance comparisons against baseline non-LLM methods (FCP, MEP, LIPO, Macop) and an LLM-agent-only baseline, where manually designed unseen teammates are used to test SemDiv and the baselines.
- However, given the external calls to LLMs and potentially computationally expensive verification sub-routines within SemDiv (e.g. testing if the reward function is valid), the authors should also report the wall-clock time required to train each method to obtain the results in Table 1.
- Misleading claim - the abstract states that SemDiv is evaluated on 20 unseen teammates and 4 tasks, making it unclear whether it was evaluated using 20 teammates per task, or divided evenly among the tasks (as turns out to be the case). The authors should reword that claim in the abstract (and anywhere else it appears in the paper) to avoid implying that their method was evaluated with more teammates than it actually was.
- Misleading characterization - the paper is couched in terms of N-agent teams, and in various places refers to the number of controlled agents as {1, …, n_ag} and the number of uncontrolled as {1, …, n_tm}. This implies that N>2. While I'm fine with describing the method in greater generality, the paper should clearly state somewhere that N=2 in all evaluation settings.
- Misleading characterization - SemDiv's continual ego agent learning procedure and multi-head architecture come directly from Macop (Yuan et al. 2023). This is fine, as this isn't really the main contribution of the paper, but Section 3.3 (which describes the ego agent learning procedure) should clearly acknowledge this and make it clear what is different in this work.
Methods And Evaluation Criteria: Overall, the proposed method makes sense, and the experiments evaluate the key claims of the method (improved generalization + improved teammate diversity). I have some concerns with the evidence provided for the claims, which I have described previously. Some additional concerns about the method are provided below.
- The LLM is used to generate/verify several key points in the teammate generation procedure. i would like to see evaluations of each component in terms of metrics such as the number of attempts, success rate, and wall-clock time
- Generating reward function programs
- Verification of alignment between behaviors/policies
- Policy selection process for testing with unknown teammate.
- Statistical significance measures: results are presented with mean +/- std deviations throughout the paper, which reflects the variance in the mean performance for each method. To provide information about the statistical significance of the results, the authors should provide statistical significance tests comparing the performance of SemDiv to the next best performing method, or compute 95% CIs instead.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See above.
Supplementary Material: I reviewed the appendix.
Relation To Broader Scientific Literature: This method is a teammate generation method for ad hoc teamwork/zero-shot coordination. The authors propose using LLMs to generate novel teammate behaviors, which to my knowledge, has not been considered/addressed before for ad hoc teamwork.
Essential References Not Discussed: No
Other Strengths And Weaknesses: - Strengths:
- Overall, the paper is well-written and well-situated in the field of ad hoc teamwork
- The idea is novel and timely, and the overall framework is quite compelling.
- The results convincingly show that SemDiv outperforms other common teammate generation baselines.
- Weaknesses
- SemDiv requires natural language behavior descriptions from unknown teammates to allow an LLM to select the optimal head.
- In my view, this is the largest weakness of the paper. SemDiv is presented as an end-to-end teammate generation + AHT method, but this is a pretty large limitation of the AHT component of the method. The policy selection problem makes up a large portion of the challenge for AHT methods (e.g. PLASTIC by Barrett et al. 2017). Assuming access to a natural language description of the unknown teammate essentially assumes an oracle for the teammate modeling problem (i.e. generating a characterization of an unseen teammate, and relating it to the representation of known teammates).
- However, I believe that the main contribution of the method is showing how to generate teammates using LLMs/natural language descriptions, which is sufficient on its own, even without any novelties in the ego agent learning process. The authors should explicitly acknowledge this limitation.
- Multiple misleading claims are made.
Barrett, Samuel, Avi Rosenfeld, Sarit Kraus, and Peter Stone. 2017. “Making Friends on the Fly: Cooperating with New Teammates.” *Artificial Intelligence* 242 (January):132–71. [https://doi.org/10.1016/j.artint.2016.10.005](https://doi.org/10.1016/j.artint.2016.10.005).
Other Comments Or Suggestions: - Details that need to be made clearer:
- What percentage of the time does SemDiv generate valid vs invalid teammates?
- Reading the methods section was confusing, because I thought that the paper addressed the scenario of N>=2. The authors should specify that the paper addresses N=2.
- Each figure caption should explain the main message that the figure is meant to show, to support readers skimming the paper based on figures alone
- There are a couple issues with Table 1:
- Since {SemDiv, Macop}-R1, R2 are upper bounds, all four methods should be included at the top, perhaps in the oracle section, or perhaps in their own section. It's confusing to have them presented together with SemDiv.
- Shouldn’t Macop-R1, R2 be grayed out too, since it is also an upper bound and presumably excluded from consideration for computing the best result?
Questions For Authors: 1. In Table 1, the difference between SemDiv and SemDiv-PBT is much larger than the gap between SemDiv-PBT and non-LLM teammate generation methods (e.g. Macop-PBT, FCP, MEP, LIPO), which makes me wonder if SemDiv is performing better than the non-LLM teammate generation methods because of the continual learning paradigm introduced by MACOP, rather than because of the generated teammates. Can you present the performance of MACOP (the original, non-PBT version) as well, in Table 1?
2. [Clarification Question] After reading the paper, my impression is that SemDiv requires behavior descriptions from unknown teammates. However, the experiments section (L242-245) states that the teammates’ behavior descriptions remain unknown to the tested methods during training. Does SemDiv select a policy head without the behavior description? If so, then what is the purpose of the Semdiv-Dist baseline?
3. Given that the SemDiv does require behavior descriptions:
1. Can the authors provide a discussion of how reasonable this assumption is?
2. Semdiv-Dist performs much worse than Semdiv, suggesting that the behavior descriptions do a lot of the ‘heavy lifting’ for enabling generalization to unseen teammates. Do the authors have any thoughts on how to improve this?
4. Reward shaping: the criteria outlined for whether the reward function is valid seem to require training a policy under the reward function as an inner-loop step (Line 173 in the submitted PDF). Can the authors comment on how many timesteps are allocated to train the teammate under the reward function? Some reward functions can only be optimized in a large number of steps -- how does this affect the "deepness" of the generated behaviors? Is it the case that only "easy/simple" behaviors can be generated?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable comments!
**Q1: Difference between “policy level” and “semantic level”, and why optimizing policy level differences is bad.**
A: Policy-level methods mainly optimize policy-space diversity without explicitly modeling the corresponding semantics. In this approach, the search space grows exponentially with the number of agents, leading to inefficient exploration. In contrast, our semantic-level method explicitly models human-interpretable coordination behaviors through language-guided generation. By clustering similar policies into semantically meaningful behaviors, our approach significantly reduces the complexity of the search space.
Prior works [1,2] have provided analysis and conducted experiments to demonstrate that traditional policy-level methods (e.g., FCP and LIPO) struggle to efficiently explore teammate policy spaces. We further validate this claim through comprehensive performance evaluations and trajectory analysis, highlighting the advantages of semantic-level optimization over policy-level approaches.
**Q2: More results on t-SNE visualization, cross-play matrices, etc.**
A: We report these important results in this [link](https://telling-floor-898.notion.site/1c7c2fed721a80b9ba7ef7fa2b3bffed), and will add them in the paper.
**Q3: Requirement of natural language behavior descriptions from unknown teammates.**
A: Please refer to Q1 for **reviewer EFqS**.
**Q4: The performance of Macop.**
A: We would like to clarify that we have actually included the performance of Macop in Table 1, though presented as its *upper bound* versions that directly report the results of the best policy heads. However, there is still a gap between Macop-{R1, R2} and SemDiv, proving the impact of SemDiv’s semantically diverse teammates.
**Q5: [Clarification Question] How SemDiv selects policy heads and the SemDiv-Dist baseline.**
A: During training, all methods are unaware of testing teammates' behavior descriptions. SemDiv agents only select heads for the corresponding training teammates for policy verification. During testing, SemDiv agents select heads according to the testing teammates’ behavior descriptions. The SemDiv-Dist baseline also requires these descriptions during testing to select heads. Instead of using LLMs for reasoning, it selects the head that minimizes Distance(embed($b$), embed($b_{test}$)), where $b$ is the behavior learned by the head, $b_{test}$ is the behavior of the testing teammate. To improve SemDiv-Dist, we can finetune the language model that computes the embeddings with task-specific data, enhancing its understanding of the task.
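A minimal sketch of the SemDiv-Dist selection rule described above, assuming cosine distance between sentence embeddings; `embed` and the behavior strings are hypothetical stand-ins, not the actual embedding model used in the experiments:

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def select_head_by_distance(head_behaviors, b_test, embed):
    # Pick the head whose learned behavior description b embeds
    # closest to the testing teammate's description b_test, i.e.
    # argmin_b Distance(embed(b), embed(b_test)).
    e_test = embed(b_test)
    return min(head_behaviors, key=lambda b: cosine_distance(embed(b), e_test))
```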
**Q6: The reward shaping problem.**
A: In our work, we empirically set a fixed step limit for inner-loop training, a practical trade-off that may limit the "deepness" of learnable behaviors. While this suffices for simpler tasks, we fully acknowledge the challenge for more complex behaviors. Future work will adopt Neural Architecture Search (NAS) [3] to automate hyperparameter setting, enabling efficient training of deeper behaviors under shaped rewards. We will clarify this limitation and solution in the paper.
We apologize for these unclear expressions, and will improve them in the paper.
- Number of testing teammates and controlled agents: We have modified the Abstract and Introduction to explicitly state: “Evaluation with five unseen representative teammates per environment”, and we will add an explanation in the Problem Formulation that we focus on scenarios where N=2.
- Related work of Section 3.3: Please refer to the reply for **reviewer Q2F9**.
- More detailed figure captions:
- Figure 1: An overview of the training and testing process of SemDiv. Left: During training, SemDiv proposes novel coordination behaviors in natural language and transforms them into teammate policies for agent learning. Right: During testing, SemDiv takes as input the description of the unseen teammates and selects the optimal learned policy for coordination.
- Figure 2: The overall workflow of SemDiv. (a) Generating coordination behavior. SemDiv iteratively generates semantically diverse coordination behaviors, enabling efficient exploration of the teammate policy space. (b) Training aligned teammate policy. For each coordination behavior described in natural language, a teammate policy is trained to align with that behavior. (c) Training agents. Agents are continually trained with these teammates, developing strong coordination ability.
- Upper bounds: {SemDiv, Macop}-R1, R2 are only upper bounds of {SemDiv, Macop}, which directly report the results of the best policy heads to investigate the impact of the head selection module. We again apologize for these unclear expressions.
References
[1] Yuan et al. Learning to Coordinate with Anyone. DAI 2023.
[2] Rahman et al. Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents. AAAI 2024.
[3] Elsken et al. Neural Architecture Search: A Survey. JMLR 2019. | Summary: The paper proposes a novel partner generation method, which iteratively generates new partner through generated reward functions via LLM queries. The input to the LLM contains semantic information of the behavior, allowing the method to include semantic information for generating the partners in addition to policy-level validity and diversity check. The proposed method also leverages the LLM for selecting the right best response policy given a behavior description of the test partner. The method outperforms various baselines across diverse environments.
Claims And Evidence: > This demonstrates that generating semantically diverse teammates not only enables more efficient exploration of the teammate policy space but also facilitates the discovery of coordination behaviors that policy-level exploration alone cannot cover.
The paper claims that "policy-level exploration" cannot discover certain behaviors. But in the experiment, it only uses FCP---which does not even explicitly generate diverse partners---as the only reference for this claim. It is not fair to claim this without testing more sophisticated "policy-level exploration" methods.
Methods And Evaluation Criteria: The method and evaluation are reasonable. Though the paper would be more convincing with an ablation study on the Policy Verification is included (see Comments or Suggestions)
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments and analyses are reasonable with statistical confidence.
Supplementary Material: The supplementary material is comprehensive and provides additional details.
Relation To Broader Scientific Literature: The paper could be impactful as it introduces the use of LLMs for generating diverse partners. Prior work focuses on generating diverse partners at the policy-level. This approach incorporates semantic information to generate more diverse partners.
Essential References Not Discussed: The paper should mention the work that Eq. 2 is based on ([1]) and why using Eq. 2 could possibly check for policy similarity in the first place. In its current form, the paper simply mentions that it uses Eq. 2 to verify diversity without giving any intuition. The paper already has the citation in various places but not at Eq. 2.
[1] Charakorn et al. "Generating diverse cooperative agents by learning incompatible policies." The Eleventh International Conference on Learning Representations. 2023.
Other Strengths And Weaknesses: Strengths
- The idea of using LLM for partner generation is novel.
- The evaluation includes many SOTA baselines.
Weaknesses
- The claim in Section 4.4 is not fair (see Claims and Evidence).
- It is not clear whether the use of the LLM is the main contributor to the improvement over the baseline (see Suggestions).
- Some information about the pipeline is not provided fully, e.g., how `info_sim` or agent behaviors are represented for the LLM.
Other Comments Or Suggestions: ### Comments
> First, the exploration of the teammate policy space is inefficient, as teammates are driven to optimize for differences at the policy-level rather than actively discovering novel coordination behaviors at the semantic-level.
To strengthen this statement, the paper should substantiate why "the exploration of the teammate policy space is inefficient". Is there any existing paper that confirms this statement?
> As we aim to study teammates generation and agents coordination at the semantic-level, we consider scenarios in which the group of teammates $\pi^\text{tm}$ provides a natural language description $b$ prior to the execution phase.
The reason given for the use of prior communication is not sound. One could also communicate without semantic-level coordination. Coordinating at the semantic-level could also omit communication.
### Suggestions
- The baselines do not use the "Policy Verification". Since it is very likely that, with high regularization, XP-min methods tend to generate incapable policies (just like the non-functional reward functions generated by the LLM), the paper would benefit from a baseline that uses the Policy Verification as well. This would confirm that the semantic information is indeed useful for partner generation, not the Policy Verification part. In its current form, there could be an alternate explanation that the Policy Verification does the heavy lifting of the improvement.
- Related to Policy Verification, I wonder how much percentage of the generated reward functions are invalid? How much percentage of them leads to executable reward functions but does not pass the Policy Verification? What is the average number of repetitions before generating a valid policy?
I'm willing to increase the score if all these concerns are addressed properly.
Questions For Authors: - How is the `info_sim` represented?
- How are agent behaviors represented in text before feeding into the LLM (for the alignment step)? I suppose different environments would have different behavior representations/descriptions.
- Does the better performance stem from the fact that XP-min-based methods are unstable/inefficient? My understanding is that the proposed method still generates incompatible policies, but through generated reward functions (with the "diversity check", which checks the compatibility of a newly generated policy) instead of directly optimizing the "diversity check", which is based on the XP-min methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable comments! Extra experimental results can be found in this [link](https://telling-floor-898.notion.site/1c7c2fed721a80b9ba7ef7fa2b3bffed).
**Q1: The motivation for unseen teammate communicating behaviors before testing.**
A: Communication and intention sharing are the basic and core techniques in multi-agent systems [1,2], mitigating partial observability. Following these techniques, our approach leverages one-time natural language communication to boost coordination efficiency. For situations where communication is entirely infeasible, SemDiv can still infer teammate behaviors from interactions using techniques in [3,4]. We note this as a future direction to extend our framework to zero-comm environments.
**Q2: Compare more baselines to support the claims in Section 4.4.**
A: As shown in Table 1 in the link, even with extra policy-level diversity techniques and larger population sizes, policy-level baselines still achieve limited performance, further indicating the necessity and efficiency of semantic-level diversity.
**Q3: Compare baselines with Policy Verification.**
A: Even with Policy Verification, population-based baselines show limited improvement (Table 2). The Policy Verification process: (1) Train 12 teammates, verifying all can complete the task. (2) Select the 6 most diverse policies:
- MEP: Farthest from population mean.
- LIPO: Iteratively pick worst-performing partners.
No behavior verification is needed as baselines lack semantic info. Results confirm LLM-assisted diversity drives SemDiv's success, not just Policy Verification.
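The "farthest from population mean" rule used for MEP above can be sketched as follows (a toy illustration, not the authors' selection code; the feature vectors stand in for whatever per-policy behavioral statistics are actually used):

```python
import numpy as np

def farthest_from_mean(policy_feats, k):
    """MEP-style diversity pick: keep the k feature vectors lying
    farthest (in Euclidean distance) from the population mean."""
    mean = policy_feats.mean(axis=0)
    dists = np.linalg.norm(policy_feats - mean, axis=1)
    return np.argsort(-dists)[:k]

# Four toy "policies"; the outlier at (5, 5) should be picked first.
feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(farthest_from_mean(feats, 2))  # → [3 0]
```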
**Q4: Pass rate of Policy Verification in SemDiv.**
A: We report the results of Policy Verification (Table 3), including (1) the number of occurrences of different issues averaged over 3 random seeds and (2) total pass rate. SemDiv efficiently generates diverse teammates while verifying their quality. We will add these results in the paper.
**Q5: Similarity information and agent behavior text representation.**
A: We simply represent info_sim as: “Not-novel example: {behavior}. This behavior is the same as {similar_behavior}“. For behavior texts, we write light-weight functions to extract information from trajectory data and fill in text templates, aligning with previous works [5,6]. For example, in SMACv2, info = “Agents killed enemy {enemy_id} …”. We will add more details of text representation in the paper.
**Q6: Whether SemDiv’s improvement stems from the instability of baselines.**
A: SemDiv’s improvement stems from better semantic-level diversity rather than exploiting the instability of baselines. We report the performance of all generated teammates in Table 4. The teammate stability and quality of the baselines are similar to those of SemDiv.
**Q7: Substantiate previous methods' inefficient exploration of the teammate policy space.**
A: Efficiently exploring diverse high-quality teammate policies is a fundamental challenge in MARL for open environments [7]. Early methods primarily imposed regularization at the action level, requiring exhaustive traversal of the joint policy space, which becomes computationally prohibitive in complex scenarios. Recent works like [4,8] have demonstrated the limitations of classic policy-level methods like FCP and LIPO. In contrast, our approach abstracts the joint policy space into a higher-level semantic space, where a single semantic behavior can correspond to a wide range of policies. This abstraction explicitly reduces exploration difficulty by decoupling diversity from low-level action redundancy. Empirical results validate that SemDiv achieves more efficient and scalable policy exploration compared to prior works, representing a significant technical advancement in this direction.
**Q8: Mention related work in Eq. 2.**
A: We apologize for not properly citing the work [9] when introducing Eq. 2 and explaining its relevance for policy similarity verification. We will correct this by adding the citation at Eq. 2 and providing a clearer discussion.
References
[1] Albrecht et al. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2024.
[2] Zhu et al. A survey of multi-agent deep reinforcement learning with communication. Auton. Agents Multi Agent Syst. 38(1): 4 (2024).
[3] Yuan et al. Multi-agent Continual Coordination via Progressive Task Contextualization. TNNLS 2024.
[4] Yuan et al. Learning to Coordinate with Anyone. DAI 2023.
[5] Xie et al. Text2Reward: Reward Shaping with Language Models for Reinforcement Learning. ICLR 2024.
[6] Ma et al. Eureka: Human-Level Reward Design via Coding Large Language Models. ICLR 2024.
[7] Yuan et al. A Survey of Progress on Cooperative Multi-agent Reinforcement Learning in Open Environment. 2023.
[8] Rahman et al. Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents. AAAI 2024.
[9] Charakorn et al. Generating diverse cooperative agents by learning incompatible policies. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you the authors for providing a thorough response with additional experiments.
> Q1: The motivation for unseen teammate communicating behaviors before testing.
I do agree that using communication makes sense and can help adaptation of cooperative agents. My comment was specifically how the use of communication is motivated in the text. I suggest the authors clarify the motivation in the main text in the revision.
> Q2: Compare more baselines to support the claims in Section 4.4.
Thank you for providing this result. I believe Section 4.4 is now much more convincing.
> Q3: Compare baselines with Policy Verification.
Thank you for conducting this experiment. This result is another crucial ablation. I suggest the authors include this in the paper.
> Q4: Pass rate of Policy Verification in SemDiv.
Thank you for providing more details on how SemDiv works.
> Q5: Similarity information and agent behavior text representation.
Thank you. Please add these details to the paper.
> Q6: Whether SemDiv’s improvement is stemmed from the instability of baselines.
Thank you for the clarification.
> Q7: Substantiate previous methods' inefficient exploration of the teammate policy space.
Please add this motivation to the paper.
> Q8: Mention related work in Eq. 2.
Please add the explanation to the paper.
My concerns are largely addressed. I raise my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our paper and for raising your score, we sincerely appreciate it.
Regarding your comment on the motivation for using communication, we agree that this point could be more clearly articulated. In the revised version, we will clarify that our motivation stems from a key limitation in current teammate adaptation approaches [1,2,3]: they typically require interaction data with the test-time teammates, which can be costly or infeasible in complex environments. To address this challenge, we propose leveraging communication as a means of intention sharing, thereby reducing the need for extensive adaptation. Moreover, with the rapid advancement of LLMs, communication can now be easily realized through natural language, which not only improves interpretability but also opens up new possibilities for coordination with real human partners.
If you have any further questions or suggestions, we would be more than happy to address them. We truly appreciate your thoughtful feedback and consideration.
References
[1] Zhang et al. Fast Teammate Adaptation in the Presence of Sudden Policy Change. UAI 2023.
[2] Yuan et al. Learning to Coordinate with Anyone. DAI 2023.
[3] Ma et al. Fast Peer Adaptation with Context-aware Exploration. ICML 2024. | null | null | null | null | null | null |
Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts | Accept (spotlight poster) | Summary: This paper proposed that the Feynman-Kac Correctors enhance the sampling of several types of compositional distributions. The main idea is based on the Feynman-Kac formulation and the transformation of transport and diffusion to reweight operation.
## update after rebuttal
My view is not changed, so I maintain my original score.
Claims And Evidence: The method of enhanced sampling from compositional distributions via reweighting is novel and reasonable. The derivation of the method is supported by valid MCMC/PDE theory.
Methods And Evaluation Criteria: The evaluation on SD-XL is not that convincing; the metrics should be tested on large-scale prompts (at least 1k prompts) from standard datasets like MS-COCO.
Theoretical Claims: I checked the theoretical claims. They are solid.
Experimental Designs Or Analyses: sec 5.1: sound.
sec 5.2: I am not familiar with the molecule generation task.
sec 5.3:
1. The result is not convincing enough. More experimental results on different CFG coefficients are needed. The Pareto curve between CLIP and FID across different beta can indicate the superiority of the proposed method.
2. Tests on large-scale prompts (at least 1k prompts) from standard datasets like MS-COCO are necessary. Tests on other models like PixArt, and SD2b are beneficial.
3. It is not clear why better adherence to the geometric average CFG distribution would contribute to enhancing the image quality.
Supplementary Material: no Supplementary Material
Relation To Broader Scientific Literature: The proposed method contributes to the understanding of DMs and the potential enhanced performance of various downstream DM tasks.
Essential References Not Discussed: no
Other Strengths And Weaknesses: no
Other Comments Or Suggestions: no
Questions For Authors: 1. Considering the relation between MCMC and ParVI, is it possible to add a particle interaction force to the proposed FKC method to mitigate the sampling covariance?
2. I noticed that the reweight update has a similar form to the Fisher-Rao gradient flow. There may be some underlying theoretical connection.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback and constructive suggestions. We are glad that they find the proposed idea 'novel' and its derivation 'supported by valid theory'. We are also happy to hear that our method 'contributes to the understanding of Diffusion Models.' We provide the suggested comparisons and answer the reviewer's questions below.
> Testing the metrics on large-scale datasets (at least 1k prompts)
We thank the reviewer for the constructive feedback which has led to some interesting findings and a deeper understanding of this work. We evaluate FKC on two large-scale tasks: on ImageNet-1K using EDM2 [4] and on the Geneval benchmark [1] using SDXL, which evaluates the algorithms on a predefined set of prompts and measures adherence to the prompts. These additional large-scale experiments have led to two new insights:
1. **FKC improves image quality and prompt adherence in ambient space but not latent space.**
Point 1 is supported by the following two tables. We can see substantial improvement in the performance of FKC on an ambient diffusion model but not on a latent diffusion model.
- **EDM2** We first conduct a new set of image experiments using the EDM2 model [4], which directly generates outputs in pixel-space. We compare generations from the baseline method, which uses CFG, with generations guided by FKC on ImageNet-1K and evaluate 10k samples using two metrics: CLIP Score and ImageReward [5], which reflects how closely they align with human preferences.
||steps|churn|Clip Score(↑)|ImageReward(↑)|
|-|-|-|-|-|
|CFG|32|40|28.75|-0.24|
|FKC|32|40|29.00|0.04|
|CFG|48|10|28.83|-0.18|
|FKC|48|10|**29.02**|0.06|
|CFG|64|80|28.67|-0.21|
|FKC|64|80|28.88|**0.09**|
|CFG|128|40|28.71|-0.19|
|FKC|128|40|28.94|0.06|
We find that incorporating FKC improves both scores and qualitatively generates better images; we include examples in [this PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf). We use default settings of $\beta=1.4$ for both models.
- **SDXL Geneval** The table below consists of three parts: Geneval benchmarks, performance of FKC on the prompts from Geneval (553 prompts), and on 1k prompts (553 + newly generated). We also rerun SDXL with the same hyperparameters as our algorithm for a fair comparison.
|Model|$\beta$|Overall|Single object|Two object|Counting|Colors|Position|Color attribution|
|-|-|-|-|-|-|-|-|-|
|CLIP retrieval||**0.35**|0.89|0.22|0.37|0.62|0.03|0.00|
|SD 1.5||**0.43**|0.97|0.38|0.35|0.76|0.04|0.06|
|SDXL||**0.55**|0.98|0.74|0.39|0.85|0.15|0.23|
|**Geneval Prompts**|
|SDXL (Rerun)|7.5|**0.57**|0.99|0.80|0.46|0.86|0.11|0.22|
|FKC|5.5|**0.58**|0.99|0.77|0.49|0.87|0.10|0.22|
|FKC|7.5|**0.57**|0.99|0.78|0.46|0.83|0.13|0.23|
|**1k Prompts**|
|SDXL|7.5|**0.58**|0.99|0.79|0.45|0.88|0.11|0.21|
|FKC|5.5|**0.57**|0.99|0.79|0.42|0.86|0.13|0.21|
|FKC|7.5|**0.57**|0.99|0.80|0.45|0.83|0.13|0.22|
2. **FKC improves performance in the ambient space across tasks whereas improvements in latent space diffusion models are limited.** This is supported by Table 2 and Figure 2 of [the PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf). Here, we present new and stronger results for the molecule generation task in the coordinate space instead of the latent space.
> Is it possible to add particle interactive force to the proposed FKC method to mitigate the sampling covariance?
Indeed, one could combine the proposed method with the Stein Variational Gradient Descent, targeting the intermediate marginals (annealed density or the product of densities) similar to [2]. This would require adding a term to the weights, which, theoretically, should reduce the variance of the weights.
> Сonnection to the Fisher-Rao gradient flow
The reweighting equation $\partial_t p_t^w(x) = p_t^w(x) \bar{g}_t(x)$ does correspond to the Fisher-Rao gradient flow of a linear functional $G[p_t] = \int {g}_t(x) p_t(x) dx$ (after constraining $p_t^w$ to remain normalized). The Wasserstein Fisher-Rao gradient flow has a form similar to the Feynman-Kac PDE with $\sigma_t=0$ (deterministic evolution). We have included a note about this in the Appendix of the updated revision (see e.g. Thm. 3.1 in [3]).
### Closing remarks
We thank the reviewer for suggesting this comparison, which significantly improves our empirical study. We hope that our answers address all the important questions raised by the reviewer, and we are more than happy to consider any additional questions or further suggestions.
[1] Ghosh, Dhruba et al. "Geneval: An object-focused framework for evaluating text-to-image alignment."
[2] Corso, Gabriele et. al. "Particle guidance: non-iid diverse sampling with diffusion models."
[3] Lu, Yulong, et al. "Accelerating Langevin sampling with birth-death."
[4] Karras, Tero et al. "Analyzing and Improving the Training Dynamics of Diffusion Models."
[5] Xu, Jiazheng et al. "ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation"
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply.
My score will remain positive, and this paper is worth acceptance. The rebuttal enhances my understanding of the FKC.
However, the current theory cannot explain why FKC does better in the ambient space yet provides no help in the latent space.
I believe the latent/ambient distinction should not matter for the reweighting mechanism; the gap may be due to other model-related reasons.
And its downstream application in CV may not be that promising.
Therefore, I cannot give a higher score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your swift response! We are glad to hear that our rebuttal 'enhances your understanding' of the proposed methodology and that you believe the paper is worthy of acceptance.
We would like to highlight that the main objective of the proposed method is to sample from the modified densities rather than to directly improve the quality of image generation. This discrepancy in motivation is likely the reason why the method does not improve the performance of Stable Diffusion XL. Indeed, factors such as the quality of the VAE model and the quality of the latent diffusion model determine (a) the accuracy of the learned scores, which is crucial for our method, and (b) whether better sampling from the geometric average of densities in the latent space $q_t^{(1-\beta)}(z|\emptyset)q_t^{\beta}(z|c)$ results in better image generations after decoding.
However, the results on the sampling task and other generative modeling tasks suggest that the practice agrees with our theoretical findings, i.e. introducing FKC allows for more accurate sampling from the modified densities.
If you have any further suggestions for how we can improve our work and potentially your evaluation, please do let us know! | Summary: This paper points out modifying score function of the pretrained generative modelings, such as classifier-free guidance might cause to generate samples that are not from the same distribution as the training data, and the corrector schemes used to address this problem either requires infinite many steps requiring more computation resources or is detrimental to the sampling quality. This paper proposes Feynman-Kac PDEs, aiming to generate samples from the same training data distribution and improve sampling efficiency.
Claims And Evidence: Section 3 proposes composing a few diffusion models at inference time, especially in the product and geometric-average examples, but I am not sure this is a proper approach. [1] proves and empirically shows that the reverse SDE induced by a composed score function does not correspond to sampling from the composed model, and that reverse diffusion sampling will generate incorrect samples from composed distributions; instead, they propose to sample with Langevin dynamics. Based on my understanding, the Feynman-Kac PDEs have an SDE component embedded, so I suspect that simply running the sampling process this paper introduces will give wrong samples too. I may be making the wrong connection here; please correct me if I am wrong.
[1] Du, Y., Durkan, C., Strudel, R., Tenenbaum, J.B., Dieleman, S., Fergus, R., Sohl-Dickstein, J., Doucet, A. and Grathwohl, W.S., 2023, July. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. In International conference on machine learning (pp. 8489-8510). PMLR.
Methods And Evaluation Criteria: I think the experimental designs for performance evaluation are reasonable.
Theoretical Claims: I did not spot issues of the proofs or claims except the one I mentioned in the ``Claims and Evidence'' section.
Experimental Designs Or Analyses: This paper proposes SMC and a jump process for resampling to improve performance, but I do not find the details for each specific experiment. For example, the paper mentions that SMC is only done when $t \in [t_{\mathrm{min}}, t_{\mathrm{max}}]$, but those $t$'s are not given for the experiments. I also cannot tell whether both resampling methods are actually used during inference, as the tables and figures do not show them. An ablation study is required if so.
With both methods plugged in for resampling, I am also wondering about the latency and computational cost.
In Table 5, the experiment with $\beta=7.5$ and FKC is missing. I cannot see why this was not done.
Supplementary Material: I checked a few proofs for table 1 and I found they are pretty repetitive so I did not check all of them.
Relation To Broader Scientific Literature: 1. This paper derives PDEs to describe the time-evolution of sample density under the standard SDEs. This makes the sampling process more flexible.
2. This papers propose a few composition approaches, such as annealed, product and geometric average distributions. Again, as I have mentioned, the correctness is doubted as the reverse SDEs wouldn't give the right distribution [1].
[1] Du, Y., Durkan, C., Strudel, R., Tenenbaum, J.B., Dieleman, S., Fergus, R., Sohl-Dickstein, J., Doucet, A. and Grathwohl, W.S., 2023, July. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. In International conference on machine learning (pp. 8489-8510). PMLR.
Essential References Not Discussed: I think it is complete.
Other Strengths And Weaknesses: The paper is written well-structured and complete.
Again, the main weakness is that I did not notice any reported computational cost supporting the claimed efficiency of the proposed approach; without it, the resampling process could be very costly.
Other Comments Or Suggestions: No.
Questions For Authors: 1. For equation 6, I cannot really see how $$\frac{\partial p_t^w(x)}{\partial t} = \bar{g}_t(x) p_t^w(x)$$ is derived. I quickly skimmed through the Appendix but did not find it explained anywhere.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We are glad that the reviewer finds our paper to be 'well-structured and complete' and the experimental design to be 'reasonable'. We also would like to thank the reviewer for bringing up reference [1] (Du et al.), as this is an excellent reference for clarifying our contributions.
Before addressing the concerns raised, we would like to clarify a potential misunderstanding about the proposed method.
1. The main objective of the proposed method is to modify the target density at the inference time (e.g. by sampling from the annealed distribution or product of densities) rather than 'sampling from the training data and improving sampling efficiency'.
2. In complete agreement with [1], our work shows that simulating the reverse SDEs with modified scores **does not** sample from the target densities. This is exactly the goal of introducing the Feynman-Kac corrector scheme, which, as we prove, allows for consistent sampling from the target densities by re-weighting and re-sampling. We have added a self-contained proof of Eq. 9 in the updated Appendix to further emphasize this claim.
Next, we would like to address the salient concerns raised by the reviewer individually:
> the choice of $t_\max, t_\min$
The choice of the interval $[t_\min, t_\max]$ depends on the hyperparameters and, especially, the noise schedule, but does not require significant tuning. In practice, we choose the interval based on the fraction of unique samples resampled at every iteration. We report the corresponding plot in Fig. 1 of [the PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf) for the annealing experiments. Note that the scale of the weights is proportional to the noise $\sigma_t$ (see Prop 3.2), which results in a low number of unique samples close to $t=1.0$ due to the Variance Exploding schedule.
For molecule generation experiments in Table 3, we selected $t_\max$ based on a validation task by going over the grid $[0.6,0.7,0.8,0.9,1.0]$; $t_\min$ was always set to $0$.
We have also added a new set of molecule experiments for a harder set of tasks using a model that directly predicts the 3D coordinates of the atoms in a molecule from [2] (Zhou et al.) and docks molecules to a set of target protein pairs. Again, we use a validation task to sweep over $t_\max$ values, which we report in Table 2 of [the PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf).
> Ablation study for the resampling methods
We perform the ablation study of the two considered resampling methods in Table 6 of the Appendix. We find that systematic resampling always performs better or comparably and opt to use it throughout the rest of the empirical study.
> Computational cost of the resampling step
The additional computational cost of the resampling step is negligible compared to the reverse SDE simulation. Indeed, all the weights depend only on scores, which are already evaluated at the forward pass. We purposefully avoid the computation of the divergence operators when deriving the integration scheme (see Lines 176-189). The cost of the systematic resampling step is equivalent to generating one uniform random variable, which is negligible.
Our proposed method is notably more convenient than [1], which requires changing the diffusion model parameterization and performing additional MCMC steps or Metropolis-Hastings tests.
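A minimal sketch of systematic resampling (our own illustration, not the authors' code) makes the cost claim concrete: beyond the already-computed log-weights, only a single uniform draw is needed:

```python
import numpy as np

def systematic_resample(log_w, rng):
    """Systematic resampling: one uniform draw u defines N stratified
    positions (u + i) / N; particle i is selected once for each
    position falling inside its normalized-weight interval."""
    n = len(log_w)
    w = np.exp(log_w - np.max(log_w))  # stabilize, then normalize
    w = w / w.sum()
    positions = (rng.random() + np.arange(n)) / n  # single uniform draw
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(0)
idx = systematic_resample(np.log(np.array([0.7, 0.1, 0.1, 0.1])), rng)
# the heavy particle (index 0) is duplicated at least twice
print(idx)
```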
> $\beta = 7.5$ for FKC on SDXL
As requested by the reviewer, we provide results for both values of $\beta$ on the Geneval benchmark [3] (Dhruba Ghosh et al.). See the detailed image-generation study in response to Reviewer 65RJ.
|Model|$\beta$|Overall|Single object|Two object|Counting|Colors|Position|Color attribution|
|-|-|-|-|-|-|-|-|-|
|SDXL (Rerun) |7.5|**0.57**|0.99|0.80|0.46|0.86|0.11|0.22|
|FKC|5.5|**0.58**|0.99| 0.77|0.49|0.87|0.10|0.22|
|FKC|7.5|**0.57**|0.99|0.78|0.46|0.83|0.13|0.23|
> Derivation of equation (6)
Equation (6) can be understood by the separation of variables. Indeed,
$$\frac{\partial_t p_t^w(x)}{p_t^w(x)}=\partial_t\log p_t^w(x)=\bar{g}_t(x)\implies p_t^w(x)=p_0^w(x)\exp(\int_0^t ds \bar{g}_s (x)),$$
where the exponent part corresponds to the update of the weights $dw_t=\bar{g}_t(x)dt$. Note that in eq. (9), $w$ appears as log-weights for Self-Normalized Importance Sampling.
Note that Eq. (6) is meant to *introduce* the reweighting evolution. In later sections, we *derive* the weights which preserve the particular target distribution evolution under simulation or transport by an SDE with given drift/diffusion.
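As a sanity check on the separation-of-variables identity above, a small numerical sketch (using an arbitrary illustrative weight function $\bar{g}_t(x) = -t x^2$, not taken from the paper) confirms that Euler integration of the log-weight recovers the exponential closed form:

```python
import math

# Toy weight function g(t, x); an arbitrary choice for illustration only.
def g(t, x):
    return -t * x**2

x, p0 = 1.3, 0.7   # a fixed point x and initial density value p_0(x)
dt, T = 1e-4, 1.0
w = 0.0            # log-weight, integrated with Euler steps dw = g(t, x) dt
for k in range(int(T / dt)):
    w += g(k * dt, x) * dt
numeric = p0 * math.exp(w)
# Exact integral: int_0^T (-t x^2) dt = -T^2 x^2 / 2.
closed_form = p0 * math.exp(-0.5 * T**2 * x**2)
print(abs(numeric - closed_form))  # small discretization error
```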
### Closing remarks
Again, we thank the reviewer for their questions, which gave us the opportunity to clarify and improve our work. We hope that our answers fully address all the important questions raised by the reviewer, and we are happy to consider any additional questions or further suggestions.
We kindly ask the reviewer to consider increasing their score if our responses address their concerns satisfactorily. | Summary: The paper derives a suite of new tools for modifying pretrained diffusion models at inference time, using particle resampling techniques. In particular, they use the Feynman-Kac formula to derive the evolution of weights for particles when simulating the diffusion reverse SDE (or different variants of it) such that resampling using the weights results in samples from 1) tempered versions of the original diffusion marginals at different diffusion times 2) products of the marginals of two diffusion models 3) geometric averages of the marginals of two diffusion models. These weighting terms are derived for a plethora of different SDEs, and the paper focuses experimentally on ones where the 1) score function is scaled with a scalar 2) both the score and SDE noise are scaled with specific scalars. The paper explores sequential Monte Carlo resampling methods, and jump process reweighting methods. The proposed methods are evaluated on
- different sampling problems, where the ability to change the temperature at runtime is validated
- multi-property molecule generation, where the authors use the method to generate molecules that simultaneously inhibit multiple proteins
- image generation, where the use of the method as a classifier-free guidance replacement is investigated
Claims And Evidence: I think that the claims are supported by evidence.
Methods And Evaluation Criteria: I think that all of the datasets chosen make sense, each tests one clear ability of the collection of methods proposed. The evaluation criteria are sensible as well.
Theoretical Claims: I did go through Proposition B.1 and B.2 and didn’t find issues. But there are many more propositions in the paper.
Experimental Designs Or Analyses: I think that the experimental design, and their analysis made sense, but I didn’t check the their validity in a lot of detail.
Supplementary Material: I went through the beginning of Appendix B, and some of the experimental details in Appendix E.
Relation To Broader Scientific Literature: The paper presents methodological progress in the context of applying inference-time controls to pretrained diffusion models using particle resampling methods. The paper presents many new tools, and, e.g., the ability to temper the target distribution has not been considered in previous work, to the best of my knowledge. This could serve as a very useful building block for diffusion methods targeting sampling from unnormalised densities. Perhaps the most important new contribution is the systematic mathematical exploration of different reverse SDEs and their impact on the resampling schemes, and the paper could serve as a helpful reference for future work to build on.
Essential References Not Discussed: I am not aware of essential references not discussed, although I am not also particularly familiar with the literature on using SMC to guide diffusion models.
Other Strengths And Weaknesses: I think that this is overall a very nice paper, and a useful contribution to the literature. The paper opens up new tools for diffusion guidance to the community, is well written and organised, and experiments are done on multiple examples, showing the versatility of the proposed tools.
A weakness of the paper is that some practical details regarding the experiments are a bit more complicated than the theory would imply, and some hyperparameters exist that are not discussed at length in the main paper. I highlight two parts to kick off discussion:
From section 4:
“We find resampling only over an ‘active interval’ t ∈ [tmin, tmax] useful for improving sample quality and preserving diversity, and set weights to zero outside of this interval”
- What is this interval in the experiments used? If this requires significant tuning for different data sets, that seems to be a downside of the method. Regardless, this should be detailed in the paper.
From the Appendix, regarding the compositional molecule generation experiments:
“In practice, we find that the FKC weights have a large variance during molecule generation. This is problematic, as a large number of samples are thrown away. Furthermore, we noted that the score was not always well-conditioned. To ameliorate this, we divided the weights by a set temperature term (T = 100) to reduce their variance before resampling, clipped the top 20% to account for any score instabilities, and did early-stopping (only resampled for 70% of the timesteps).”
- Do the authors have intuition on why it is necessary to divide the weights by a set term 100 in this example? Is this done for the other experiments as well? 100 seems like a large number to use here, and a significant difference from the method implied by the mathematics seems, on the face of it, to call into question the relevance of the mathematical method definition.
- That said, I suppose it makes sense that the particle resampling may be especially high variance in high dimensions?
- Is early stopping the right phrase to use here, as usually in machine learning it refers to a regularisation method for neural network training?
The image generation results seem quite okay, but there are not a lot of practical details. Weight stabilization techniques used? Active interval choices for resampling? How many particles? I understand that this is not necessarily the highlight of the paper, but since the experiments are there, it would be useful to include the details as well.
Other Comments Or Suggestions: -
Questions For Authors: See “Other strengths and weaknesses”
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback! We are thrilled to hear that they find our paper to be a 'useful contribution to the literature', 'well written and organized' and a new tool for the diffusion sampling community. Below, we address the reviewer's questions.
> What is the interval $[t_\min, t_\max]$ in the experiments used? Does it require significant tuning?
The choice of the interval $[t_\min, t_\max]$ depends on the hyperparameters and, especially, the noise schedule, but does not require significant tuning. In practice, we chose the interval based on the fraction of unique samples resampled at every iteration during the resampling step. We report the corresponding plot in Fig. 2 of [the PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf) for the annealing experiments. Note that the scale of the weights is proportional to the noise $\sigma_t$ (see Prop 3.2), which results in a low number of unique samples close to $t=1.0$ where the variance of the noise is the largest due to the Variance Exploding schedule. We have added corresponding plots and discussion to the manuscript.
> Why is dividing the weights by some constant factor $T$ is necessary? I suppose it makes sense that the particle resampling may be especially high variance in high dimensions?
Indeed, as the reviewer points out, the variance of the weights grows with the number of dimensions due to being proportional to the norm of the score. For the latent molecule generation (8000 dimensions), we had to divide the weights by $T=100$ and clip the lowest and largest $20\\%$ of the weights in order to achieve a reasonable variance of the weights. To verify that the need for such heuristics is caused by the dimensionality, we performed a new empirical study for molecule generation in the coordinate space, where the dimension is 3 × the number of atoms in the molecule (<100). Here, we neither divided nor clipped the weights and only chose $t_\max$ based on a validation set. We present the new results below and have added them to the manuscript.
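The stabilization heuristics described above can be sketched as follows (the temperature division and quantile clipping are our reading of the procedure; exact implementation details may differ):

```python
import numpy as np

def stabilize_log_weights(log_w, temperature=100.0, clip_frac=0.2):
    """Temper log-weights by T, clip extreme quantiles, then self-normalize."""
    lw = np.asarray(log_w, dtype=float) / temperature
    lo, hi = np.quantile(lw, [clip_frac, 1.0 - clip_frac])
    lw = np.clip(lw, lo, hi)
    lw -= lw.max()       # numerical stability before exponentiating
    w = np.exp(lw)
    return w / w.sum()   # self-normalized importance weights

def ess(w):
    """Effective sample size of normalized weights: 1 / sum(w_i^2)."""
    return 1.0 / np.sum(w**2)

raw = np.random.default_rng(1).normal(0.0, 50.0, size=64)  # high-variance log-weights
plain = np.exp(raw - raw.max())
plain /= plain.sum()
print(ess(plain), ess(stabilize_log_weights(raw)))  # ESS improves markedly
```

Without stabilization, a few particles dominate and the effective sample size collapses toward 1; tempering and clipping trade bias for a much flatter weight distribution.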
> Is early stopping the right phrase to use here?
Thank you for suggesting a better explanation! The closest analog to selecting $[t_\min, t_\max]$ is reducing the time interval for the integration of diffusion models, i.e., instead of integrating over $t\in[0,1]$, one usually integrates over $t\in[ε,1]$, where ε is a small constant. We have changed this explanation in the manuscript.
> Practical details on image generation
For image generation, we used Stable Diffusion XL with a variance-preserving SDE and an Euler–Maruyama solver, running for 100 steps. All the computations are done with float16 precision, which is crucial for stable generation via SDXL. All images are generated in 1024x1024 resolution. In the FKC portion of our experiments, we did not apply weight rescaling; moreover, we resampled all 64 particles after each time step throughout the entire time interval.
## New molecule experiments
We did a new set of molecule experiments using a model that directly predicts the 3D coordinates of the atoms in a molecule from [1] on a harder set of tasks of generating molecules that dock to a pair of proteins simultaneously, expanding on the promising results from Table 4 of the original submission. For 14 protein pairs, we generated 32 molecules at 5 different molecule sizes (160 molecules per pair) using the FKC product and found that it improves nicely over two SOTA methods:
---
**Table: Docking scores of generated ligands for 14 protein target pairs (P₁, P₂).** Lower docking scores are better.
|||(P₁ * P₂) (↑)|max(P₁, P₂) (↓)|P₁ (↓)|P₂ (↓)|Div. (↑)|Val. & Uniq. (↑)|
|-|-|-|-|-|-|-|-|
|P₁ only [2] ||62.77±23.74|-7.30±1.90|-8.38±1.51|-7.44±1.93|**0.89±0.01**|**0.95±0.07**|
|β|FKC|||||||
|0.5|no [1]|64.35±21.54|-7.14±2.12|-7.90±1.99|-7.96±1.67|**0.89±0.01**|0.89±0.21|
||yes|64.05±31.21|-6.86±3.26|-7.89±2.90|-7.92±2.42|0.88±0.02|**0.95±0.11**|
|1.0|no|69.03±21.61|-7.54±1.74|-8.24±1.71|-8.30±1.53|**0.89±0.01**|0.90±0.19|
||yes|69.83±32.70|-7.40±2.93|-8.51±1.82|-8.27±2.88|0.85±0.02|0.92±0.10|
|2.0|no|68.12±18.56|-7.40±2.03|-8.21±1.66|-8.11±1.62|0.88±0.01|0.94±0.16|
||yes|**75.54±23.26**|**-7.91±1.62**|**-8.60±1.62**|**-8.66±1.55**|0.81±0.05|0.88±0.09|
---
[1] Zhou et al. "Reprogramming Pretrained Target-Specific Diffusion Models for Dual-Target Drug Design." NeurIPS (2024).
[2] Guan et al. ICLR (2023).
> Implementation details
For molecules, we chose the time interval based on a validation set of 1 protein pair at 5 molecule lengths (32x5 generated molecules). We kept $t_{\text{min}}$ at 0 and sweep over values of $t_{\text{max}}$ for $\beta=2$, SDE type="Target score", and FKC="on". We show this and additional ablations over $\beta$ and SDE type in [this PDF](https://anonymous.4open.science/r/FKC-D7F0/rebuttal.pdf). Setting $t_{\text{max}}$ to 0.6 gives a good tradeoff in terms of generating molecules that perform well vs. maintaining diversity, and so we proceed with $t_{\text{max}} = 0.6$. | Summary: Prior work has shown that composing multiple pre-trained diffusion models is not straightforward in the context of energy-based models. This paper investigates Feynman-Kac correctors based on the Feynman-Kac formula to sample from annealed, geometric-average, and product distributions. Compared to the Fokker-Planck equation, a reweighting function is introduced, allowing for the incorporation of Sequential Monte Carlo resampling schemes. Experimental results demonstrate the effectiveness of the proposed algorithm.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I checked some of proofs.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, I reviewed some of proofs in the appendix.
Relation To Broader Scientific Literature: This paper investigates Feynman-Kac correctors based on the Feynman-Kac formula to sample from annealed, geometric-average, and product distributions.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The paper is generally well-structured and theoretically sound.
- The paper proposes Feynman-Kac correctors based on the FK formula and applies them to different scenarios, including annealed, geometric-average, and product distributions derived from pre-trained diffusion models.
Weaknesses:
- Please see the questions below for the authors.
Other Comments Or Suggestions: No.
Questions For Authors: - When comparing Eq.(5) and Eq.(7), an additional term is added. I am curious whether this arises from the evolving normalizing constant $Z_{t}$ in the context of sampling scenario? In contrast, does the vanilla FP equation in Eq. (5) describe the evolution of distributions while keeping $Z_{t}$ unchanged?
- To obtain FK PDE in Eq.(7), is it simply the addition of the FP equation and the reweighting equation? I guess there may be something missing before the definition of the FK PDE.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time, feedback, and positive appraisal of our work. We are heartened to hear that the reviewer feels the paper is "theoretically sound" and "generally well-structured". We now address the questions and suggestions raised by the reviewer.
> When comparing Eq.(5) and Eq.(7), an additional term is added. I am curious whether this arises from the evolving normalizing constant $Z_t$ in the context of sampling scenario? In contrast, does the vanilla FP equation in Eq. (5) describe the evolution of distributions while keeping $Z_t$ unchanged?
Both PDEs in eqs. (5) and (7) preserve the normalization of the density in the sense that $\int dx ~ p_t(x) = 1$ for all $t$, although the evolution of density changes with the addition of weighting terms.
Note that the normalization constants $Z_t$ defined in eq. (13) change in time to guarantee that the density on the left hand side is normalized, e.g. $\int dx ~ p_t^{\text{prod}}(x) = 1$. This is due to defining the density up to a constant, i.e. $p_t^{\text{prod}}(x) \propto q_t^1(x)q_t^2(x)$.
Note that, while the integral term $\int g_t(x) p_t(x) dx$ in eq. (6) ensures normalization for the density evolution PDE, our SMC resampling can proceed with access to only the unnormalized weights $g_t(x)$ (see eq. (9)).
> To obtain FK PDE in Eq.(7), is it simply the addition of the FP equation and the reweighting equation? I guess there may be something missing before the definition of the FK PDE.
The FK PDE in eq. (7) is a more general type of PDE than the FP equation in eq. (5). Indeed, the main difference between these PDEs is the reweighting term (the last term of the FK PDE, i.e., eq. (6)), which allows for the simulation of processes not possible with the FP equation. For instance, the FK PDE can change the weights of disconnected modes by reweighting the samples without transporting them, while the FP equation would have to transport samples from one mode to another (via the vector field or noise) to adjust the relative weights. We will emphasize this point in the final version of the paper.
We would like to thank the reviewer for their time and feedback. We hope our answers here allow the reviewer to continue to positively endorse our paper, and we would happily clarify any addition questions which may arise.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their responses, which are clear to me.
During the rebuttal period, I have another question:
- In the appendix, Eqs. (74-75) and Eqs. (87-88) give the same marginal distributions. It is possible that, in some cases, multiple solutions exist. How should one choose between them? In this particular case, we should prefer Eqs. (74-75) over Eqs. (87-88), since Eq. (88) contains a Laplacian term?
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that our initial response clearly answers the reviewer's questions!
> In the appendix, Eqs. (74-75) and Eqs. (87-88) give the same marginal distributions. It is possible that, in some cases, multiple solutions exist. How should one choose between them? In this particular case, we should prefer Eqs. (74-75) over Eqs. (87-88), since Eq. (88) contains a Laplacian term?
This is absolutely correct, as we discuss in Section 2.3., a given PDE can be simulated in multiple ways. This is achieved by moving the terms between the continuity equation, the diffusion equation and the reweighting equation and changing their interpretation.
In the current paper, we consider two main motivations for the choice of the simulation scheme (see Lines 176-188 Left):
1. **Computational cost**: As the reviewer correctly points out, we should prefer (74-75) as they avoid expensive evaluation of the Laplacian and the divergence operators (87-88) during inference.
2. **Sampling efficiency**: In general, we should choose the scheme that maximizes sampling efficiency, i.e. minimizes the variance of the weights at minimal computational cost. This choice may depend on the specific setting and application. For example, in the annealing task (section 5.1 and Table 2), the choice of the scheme (tempered noise vs. target score) has differing performance at different annealing temperatures, with target score working better at lower target temperatures (more annealing).
We thank the reviewer for their insightful question, we will be sure to include further discussion of these choices in the revised manuscript. We are happy to provide any further clarification and answer any questions that may arise. | null | null | null | null | null | null |
Scalable Sobolev IPM for Probability Measures on a Graph | Accept (poster) | Summary: This paper introduces a Scalable Sobolev Integral Probability Metric (IPM) for probability measures defined on graph metric spaces. The key focus is on improving computational efficiency while maintaining mathematical rigor for applications in machine learning, topological data analysis (TDA), and document classification. The contributions include
1. Introducing a weighted L_p-norm formulation that enables fast closed-form computation, overcoming long-standing computational barriers in Sobolev IPM, and extending Sobolev IPM to graph-based metric spaces.
2. Proving that the regularized Sobolev IPM is a proper metric and showing equivalence to the original Sobolev IPM, ensuring the regularized form preserves its theoretical properties.
3. Making connections between Sobolev IPM, Sobolev transport, classical optimal transport (OT) on graphs, and Wasserstein distances, particularly for tree-structured graphs. This broadens the potential applications of the paper.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The paper proposes a Scalable Sobolev Integral Probability Metric (IPM) for probability measures defined on graph metric spaces. This makes sense.
Theoretical Claims: Yes. It looks correct to me overall.
Experimental Designs Or Analyses: Yes, all experiments look valid to me, though I wonder about one issue: the choice of graph structure is not justified. The paper does not explain why these specific graph structures were chosen. Different graph structures (e.g., random geometric graphs, small-world graphs) may impact computational performance.
Supplementary Material: Yes, the proofs of results in Section 3 and 4.
Relation To Broader Scientific Literature: The paper contributes to the fields of probability metrics, optimal transport, and machine learning on graphs by improving the computational efficiency and applicability of Sobolev Integral Probability Metrics (IPM). These topics are commonly studied in the machine learning community.
Essential References Not Discussed: The paper has cited a sufficient amount of related papers to essentially understand their contributions.
Other Strengths And Weaknesses: The strengths are listed in "Summary".
As for their weakness,
1. The paper over-relies on fixed graph structures. As mentioned in "Experimental Designs Or Analyses", the choice of graph structure is not justified. The paper does not explain why these specific graph structures were chosen and does not discuss how the graph structure impacts Sobolev IPM performance. I think this question is crucial for the Sobolev Integral Probability Metric (IPM) proposed in this paper.
2. The equivalence result (Theorem 4.2) only provides upper and lower bounds, but does not analyze the gap between standard Sobolev IPM and the regularized version.
3. Moreover, the paper proves equivalence with the 1-Wasserstein distance when the graph is a tree but does not fully explore relations for p>1. No theoretical guarantees on whether the regularized Sobolev IPM behaves similarly to higher-order Wasserstein distances.
4. As one of the claimed advantages of this paper is fast closed-form computation, the paper presents empirical runtime comparisons but does not provide a theoretical complexity analysis of its method, for example, in terms of the graph size |V| and |E|.
Other Comments Or Suggestions: Please see the four questions in "Other Strengths And Weaknesses".
Questions For Authors: Please see the four questions in "Other Strengths And Weaknesses".
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Qz8Z,
Thank you for your valuable feedback. Below are the answers for your questions and comments.
**(1) [...]choice of graph structure (not justified)[...]not explain why these specific graph structures were chosen. Different graph structures (e.g., random geometric graphs, small-world graphs) may impact computational performance[...]**
**[...]over-reliance on fixed graph structures [...]choice of graph structure seems not justified[...]not explain why these specific graph structures were chosen and has no discussion on how graph structure impacts Sobolev IPM performance[...]crucial to (Sobolev IPM)[...]**
→ In this work, we study Sobolev IPM for probability measures supported on **a given graph** (line 11–13). As in line 1481–1284 (Appendix C.3), much as Sobolev transport (Le et al., 2022), we assume that **the graph metric space (i.e., the graph structure) is given**. We will clarify it.
Our proposed regularized Sobolev IPM can be applied for probability measures supported on **any given graph $\mathbb{G}$ such that $\mathbb{G}$ is connected, undirected, physical graph with positive edge length, and $\mathbb{G}$ satisfies the uniqueness property of the shortest paths (as in line 90-103)**. Furthermore, as in Remark 3.6 (181-189), the regularized Sobolev IPM can be also applied for non-physical graph $\mathbb{G}$ for the discrete case as in Theorem 3.5 (line 165-179).
In our experiments, we consider graphs $\mathbb{G}$_Log, $\mathbb{G}$_Sqrt as in Le et al. (2022) (line 326–327). We agree that there are many different approaches for document classification and TDA tasks. However, it is **not** the goal of our empirical studies. As in line 299-306, our experiments aim to illustrate the fast computation of the regularized Sobolev IPM, and show preliminary evidence of its advantages compared to other popular transport approaches (e.g., optimal transport and Sobolev transport) **for probability measures on a given graph under the same settings**.
**(2) [...] (advantages) on fast closed-form computation, the paper presents empirical runtime comparisons, but does not provide a theoretical complexity analysis of its method, for example, on graph size $|V|$ and $|E|$[...]**
→ Besides the preprocessing with computational complexity $\mathcal{O}(|E| + |V| \log |V|)$ (line 262-267), from Equ. (10) in Theorem 3.5, the computational complexity of the regularized Sobolev IPM is **trivially linear in the number of edges $E$ of graph $\mathbb{G}$, i.e., $\mathcal{O}(|E|)$**.
Moreover, by exploiting the sparsity property, we **further reduce its computational complexity into $\mathcal{O}(|E_{\mu, \nu}|)$** (line 205-216). We will clarify it.
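To illustrate this linear-in-$|E|$ structure, here is a sketch of an edge-wise aggregation of this kind, using the Sobolev transport closed form of Le et al. (2022) on a tree for simplicity; the regularized Sobolev IPM's Equ. (10) has an analogous per-edge form, which we do not reproduce here:

```python
from collections import defaultdict

def edgewise_distance(edges, length, mu, nu, root, p=1):
    """O(|E|) aggregation: one pass accumulating subtree mass differences.

    Sketch of the Sobolev-transport closed form on a tree (Le et al., 2022),
    illustrating how a per-edge closed form yields linear-time computation.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order, stack = {root: None}, [], [root]
    while stack:                      # iterative DFS gives a root-first order
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    diff = {u: mu.get(u, 0.0) - nu.get(u, 0.0) for u in order}
    total = 0.0
    for u in reversed(order):         # visit children before parents
        if parent[u] is None:
            continue
        e = (parent[u], u) if (parent[u], u) in length else (u, parent[u])
        total += length[e] * abs(diff[u]) ** p   # contribution of edge e
        diff[parent[u]] += diff[u]               # push subtree mass upward
    return total ** (1.0 / p)

# Path graph 0-1-2: moving unit mass from node 0 to node 2 costs the path length.
d = edgewise_distance([(0, 1), (1, 2)], {(0, 1): 2.0, (1, 2): 3.0},
                      {2: 1.0}, {0: 1.0}, root=0)
print(d)  # 5.0, matching the 1-Wasserstein distance on this tree
```

Skipping edges whose subtree mass difference is zero is exactly the sparsity exploitation mentioned above, reducing the cost to the number of "active" edges.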
**(3) [...]proves equivalence with the $1$-Wasserstein distance when the graph is a tree[...]not fully explore relations for $p>1$. No theoretical guarantees on whether the regularized Sobolev IPM behaves similarly to higher-order Wasserstein distances.[...]**
→ We do acknowledge it (line 233-243). For $p=1$, regularized Sobolev IPM is equal to $1$-Wasserstein (**Prop. 4.5**). For $p > 1$, to our knowledge, their relation is an open problem (line 233-235), but $p$-order regularized Sobolev IPM is lower bounded by $1$-Wasserstein (**Prop. 4.6**).
We respectfully clarify that **our goal is for the behavior of the regularized Sobolev IPM to be similar to the original Sobolev IPM (Theorem 2), but NOT to the $p$-Wasserstein**. We agree that for $p>1$, the regularized Sobolev IPM and $p$-Wasserstein can behave differently. We believe that this **should not be considered a weakness** of the regularized Sobolev IPM.
**(4) [...]equivalence result (Theorem 4.2) only provides upper and lower bounds[...]not analyze the gap between standard Sobolev IPM and the regularized version[...]**
→ We agree that in Theorem 4.2, we show that the regularized Sobolev IPM is equivalent to the original Sobolev IPM, but not their gap. However, from Theorem 4.2, one can obtain the upper/lower bounds on the **ratio** between regularized Sobolev IPM and original Sobolev IPM.
We further emphasize that our approach aims neither to derive a **tight bound** for the gap nor to provide a **sharp approximation** for the challenging optimization problem (Equ. (4)) of the original Sobolev IPM. Our purpose is in a different direction. More precisely, we instead propose a **novel regularization for Sobolev IPM** which yields an **equivalent metric** to the original Sobolev IPM, and a **closed-form expression** for a fast computation. Note that, to our knowledge, **there are no efficient algorithms to compute Sobolev IPM effectively yet**. We believe that each approach has its own merits.
---
**Concluding remarks.** We would be grateful if you could let us know whether our clarifications and answers address the raised concerns. If so, we kindly ask that you consider increasing your rating. We are also pleased to discuss any other questions you may have.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for this detailed response. Now I understand this work better and raised my point a bit.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Qz8Z,
Thank you very much for raising your rating into positive, and we deeply appreciate your thoughtful endorsement.
We are glad that our clarifications address your raised concerns. We will revise the paper following the feedback and suggestion.
With best regards, | Summary: This paper studies the Sobolev Integral Probability Metric (IPM) for probability measures supported on graph-structured spaces. The authors introduce a regularized version of Sobolev IPM, which allows for a closed-form solution and efficient computation, making it more scalable for large-scale applications. The key contributions include:
- A proof that the regularized Sobolev IPM is equivalent to the original Sobolev IPM and related to Sobolev transport (ST) and optimal transport (OT).
- Demonstration that the regularized Sobolev IPM is negative definite, allowing for the construction of positive definite kernels.
- Experimental validation showing that the proposed method is computationally much faster than traditional OT and comparable to ST, with promising performance in document classification and topological data analysis (TDA).
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No. I didn't check the proof line by line, unfortunately.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Besides the contributions mentioned in the summary, I am concerned about the following potential weaknesses.
- Limited Justification for General Graphs
While the proposed metric is claimed to work on graphs, its theoretical analysis and experiments are only carried out on trees (special cases of graphs). The paper does not provide strong theoretical or empirical justifications for general graphs, where the shortest-path structure may not be unique.
- Evaluation on Embeddings Instead of Direct Graph Metrics
The method is designed for probability measures on graphs, but its empirical evaluation relies on embedding-based representations (e.g., word embeddings for documents, persistence diagrams for TDA). This raises questions about whether the metric truly captures graph-based probability measure discrepancies or if the improvement is due to embeddings.
- Scalability Claim Relative to Existing Closed-Form Solutions
The paper highlights scalability as a key advantage, but tree-Wasserstein (TW) and Sobolev transport (ST) already have closed-form solutions. It is unclear whether the new method provides significant computational benefits beyond these existing approaches.
Other Comments Or Suggestions: NA
Questions For Authors: Could the authors discuss the potential applications of TW/ST or regularized ST to general graphs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer FFqa,
Thank you for your valuable feedback. Below are the answers for your questions and comments.
**(1) [...]Limited Justification for General Graphs[...]claimed to work on graphs, its theoretical analysis and experiments are only carried out on trees[...] (not for) general graphs, where the shortest-path structure may not be unique[...]**
**[...]discuss the potential applications of TW/ST or regularized ST to general graphs?[...]**
→ We respectfully disagree. We clarify that the regularized Sobolev IPM can be applied for probability measures on **any given graph $\mathbb{G}$ such that $\mathbb{G}$ is connected, undirected, physical graph with positive edge length, and graph $\mathbb{G}$ satisfies the uniqueness property of the shortest paths (as in line 90-103)**. We emphasize that **there is no requirement that the given graph $\mathbb{G}$ is a tree**. Furthermore, as in Remark 3.6 (181-189), the regularized Sobolev IPM can be also applied for non-physical graph $\mathbb{G}$ for the discrete case as in Theorem 3.5 (line 165-179).
For the uniqueness property of the shortest paths, we clarify that **it does NOT require that there is only one path connecting two nodes $x, y \in \mathbb{G}$ (or that the given graph should be a tree)**. In fact, there may exist **multiple different paths with various path lengths** connecting these two nodes. The uniqueness property of the shortest paths only requires the shortest path between the root node $z_0$ and an arbitrary node $x$ to be unique (i.e., **no multiple shortest paths** between them; **multiple different non-shortest paths** connecting them pose no problem). Please see lines 1467-1480 (Appendix C.3) for further discussion.
Note that a graph is called geodetic if for every pair of nodes the shortest path between them is unique. Thus, geodetic graphs are special examples satisfying the uniqueness property of the shortest paths (i.e., geodetic graphs are not necessarily trees). Please see **Fig. 1 in Le et al. (2022)** for an example of such geodetic graphs. Moreover, in our experiments, graph $\mathbb{G}$_Log has $M$ nodes and at least $(M \log M)$ edges; and graph $\mathbb{G}$_Sqrt has $M$ nodes and at least $M^{3/2}$ edges. Thus, these graphs are obviously **not trees**. We will clarify it.
**(2) [...]Scalability Claim Relative to Existing Closed-Form Solutions[...]highlights scalability (key advantage), but (TW/ST) already have closed-form solutions[...] (unclear) significant computational benefits beyond[...]**
→ We clarify that **our goal is NOT to scale up the computation of either TW or ST, but the Sobolev IPM** for probability measures on a given graph.
In this work, we study the Sobolev IPM for probability measures supported on a given graph (lines 10-13). To our knowledge, **there are no efficient algorithmic approaches to compute the Sobolev IPM effectively**, which hinders its practical applications (lines 19-22). Please see Equ. (4) for the challenging optimization problem of the Sobolev IPM.
We propose a novel regularization for the Sobolev IPM in Equ. (8) in Def. 3.3 (lines 131-139), which yields a closed-form expression for fast computation (Theorem 3.5, lines 165-179). This paves the way for applying the Sobolev IPM in applications, especially large-scale settings.
In our experiments, **the performances of regularized Sobolev IPM kernels compare favorably with those of ST, OT, TW kernels** (lines 371-375). Furthermore, note that TW can only use partial graph information (lines 292-294), i.e., tree information sampled from the given graph.
**(3) [...]Evaluation on Embeddings Instead of Direct Graph Metrics[...]designed for probability measures on graphs[...] (experiments) relies on embedding-based representations (e.g., word embeddings for documents, persistence diagrams for TDA)[...]whether the metric truly captures graph-based probability measure discrepancies or if the improvement is due to embeddings[...]**
→ We agree that there are many different approaches for document classification and TDA tasks. However, it is **not** the goal of our empirical studies.
We clarify that, as in lines 299-306, our experiments aim to illustrate the fast computation of the regularized Sobolev IPM and to show preliminary evidence of its advantages compared to other popular transport approaches (e.g., OT, ST) **for probability measures on a given graph under the same settings**.
More concretely, in our experiments, we consider graphs $\mathbb{G}$_Log, $\mathbb{G}$_Sqrt as in Le et al. (2022) (line 326–327), and evaluate for regularized Sobolev IPM, OT, ST to compare probability measures supported on these given graphs under the same settings.
---
**Concluding remarks.** We would be grateful if you could let us know whether our clarifications and answers address the raised concerns. If so, we kindly ask that you consider increasing your rating. We are also pleased to discuss any other questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Many thanks for the clarification. I adjusted my scores accordingly.
Best regards
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer FFqa,
Thank you very much for raising your rating to positive; we deeply appreciate your thoughtful endorsement.
We are glad that our clarifications address your concerns about our work. We will revise the paper following your feedback and suggestions.
With best regards, | Summary: The paper proposes an equivalent form of the Sobolev IPM on graphs, which the authors call the "Regularized Sobolev IPM"; this form has the advantage of being computable in closed form.
The authors show that this proposed IPM is equivalent to the original Sobolev IPM, and they provide bounds relating it to previous work on Sobolev transport and optimal transport on graphs.
The authors show that the proposed IPM is negative definite and can hence be used to define kernels, which are later used in experiments on document classification and TDA.
Claims And Evidence: The claims in the paper are sensible and all proofs are provided in the appendix, and the proofs seem correct.
Methods And Evaluation Criteria: The evaluation is reasonable and compares accuracy to compute time.
Theoretical Claims: I checked overall the proofs but did not read the details, the method is sensible.
Experimental Designs Or Analyses: Experiments are on basic classification tasks and show good accuracy/compute-time tradeoffs.
Supplementary Material: I went through the proofs to check their sensibility, but I did not verify everything.
Relation To Broader Scientific Literature: The paper provides a nice contribution in terms of an IPM that is computable in closed form on graphs.
Essential References Not Discussed: Authors cite appropriate relevant literature.
Other Strengths And Weaknesses: The work is theoretically solid and sound; nevertheless, it is not written in a way that is accessible to machine learning researchers who have no familiarity with this line of work.
This is a big weakness: the paper is not didactic in introducing these concepts clearly, and after the theorems little discussion is given to explain the results. The notation is heavy and not conveyed in an easy-to-grasp way.
some suggestions to improve the presentation and clarity:
* Please introduce a simple graph example with two distributions $\mu$ and $\nu$ on it in Section 2 as a running example throughout the paper, and use it to illustrate the quantity defined in Equation 1.
* Please clearly define quantities such as $\lambda(\Gamma(x))$ and $\lambda(\gamma_e)$, including how they would be computed; for an unfamiliar reader, that is a lot to unpack.
* After Theorem 3.5, give a **pseudo algorithm** that helps the reader understand how this is computed in practice; the preprocessing section in Section 4 is dense and not very informative. I checked the provided code to follow what you do in practice; the code is in MATLAB, and it would be nice to provide Python code or pseudo-code of how the computations are done. Please provide pseudo-code after Theorem 3.5 and show how Equation 10 is computed on the running example above.
* After Theorem 4.2, it may be interesting to validate the result on a small synthetic example.
Other Comments Or Suggestions: I think calling the proposed method a regularized IPM does not make sense; I would call it a "weighted Sobolev IPM", since it is not just a regularization of the original Sobolev IPM. I would encourage the authors to call it the **Weighted Sobolev IPM on Graphs**.
Theorem 4.2 and other derivations seem related to the weighted Sobolev norm and the analysis in Peyré (https://arxiv.org/pdf/1104.4631).
Questions For Authors: Please address the clarity issues in your revision to help the reader follow and understand.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer XuWp,
Thank you for your valuable feedback. Below are the answers for your questions and comments.
**(1) [...]pseudo algorithm (Theorem 3.5)[...] (compute in practice)[...] (preprocessing) is dense and not too informative[...]**
→ We respectfully clarify that the preprocessing is necessary to compute the regularized Sobolev IPM efficiently (i.e., precompute $\gamma_e$ and $\beta_e$ for each edge $e$ in graph $\mathbb{G}$).
As in line 206-208, following Le et al., 2022, each support $x$ in measure $\mu$ contributes its mass to $\mu(\gamma_e)$ if and only if edge $e$ is in the shortest path between root $z_0$ and $x$. Therefore, for each edge $e$, it is simple to obtain $\mu(\gamma_e)$, $\nu(\gamma_e)$ by checking the **precomputed** shortest paths between root $z_0$ and each support in $\mu$, $\nu$ respectively. Additionally, $\beta_e$ is **precomputed**. Hence, it is **straightforward** to implement Equ. (10) in Theorem 3.5. We will clarify it.
We will follow your suggestion to add the pseudo-code for the computation in Equ. (10).
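As an illustration of the accumulation step described above, here is a minimal sketch (hypothetical function and variable names; an illustrative toy, not the paper's actual pseudo-code for Equ. (10)):

```python
from collections import defaultdict

def edge_masses(supports, shortest_path_edges):
    """Accumulate mu(gamma_e) for every edge e of the graph.

    supports: list of (node, mass) pairs giving the supports of mu.
    shortest_path_edges: dict mapping each support node x to the set
        of edges on the (unique, precomputed) shortest path from the
        root z0 to x.
    A support x contributes its mass to mu(gamma_e) if and only if
    edge e lies on the shortest path between z0 and x.
    """
    mass = defaultdict(float)
    for x, m in supports:
        for e in shortest_path_edges[x]:
            mass[e] += m
    return dict(mass)
```

With the shortest paths precomputed once per graph, each $\mu(\gamma_e)$, $\nu(\gamma_e)$ is obtained by a single pass over the supports, after which the closed-form expression is evaluated edge by edge.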
**(2) [...]example where you can validate (Theorem 4.2)[...]**
→ We respectfully clarify that the findings in Theorem 4.2 are **theoretically and rigorously proved** in Appendix A.5. As in Theorem 4.2, for any nonnegative Borel measure $\lambda$ on graph $\mathbb{G}$ and for $1 \le p < \infty$, the $p$-order regularized Sobolev IPM is equivalent to the original $p$-order Sobolev IPM.
To our knowledge, **there are no efficient algorithmic approaches to compute Sobolev IPM (Equ. (4), line 140-142) effectively yet**.
**(3) [...]Theorem 4.2 and other derivations seem related to the weighted sobolev norm and the analysis (Peyre, 2018)[...]**
→ To our knowledge, our theoretical findings are essentially different from the results in Peyre (2018).
We study the Sobolev IPM for probability measures supported on a given graph. The Sobolev IPM (Equ. (4)) constrains a critic function within the unit ball defined by **the Sobolev norm (Equ. (3), line 124), involving both the critic function and its gradient**, while Peyre (2018) considers the weighted homogeneous Sobolev norm, which constrains a critic function within **a semi-norm involving only the gradient of the critic function (see Equ. (3) and (4) in Peyre (2018))**. Peyre's approach shares the same spirit as the $2$-order Sobolev transport (Le et al., 2022). We will clarify it.
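Schematically (in generic notation; this does not reproduce the paper's exact Equ. (3)), the distinction between the two constraint norms is:

```latex
% Sobolev norm: involves both the critic function and its gradient
\|f\|_{W^{1,p}(\mathbb{G},\lambda)}^p
  = \int_{\mathbb{G}} |f|^p \, d\lambda \;+\; \int_{\mathbb{G}} |f'|^p \, d\lambda ,

% homogeneous (semi-)norm, in the spirit of Peyre (2018): gradient term only
\|f\|_{\dot{W}^{1,p}(\mathbb{G},\lambda)}^p
  = \int_{\mathbb{G}} |f'|^p \, d\lambda .
```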
The proposed regularized Sobolev IPM may share some spirit with the weighted homogeneous Sobolev norm and the Sobolev transport (i.e., constraint on gradient of a critic function). However, we clarify that **the weight function $\hat{w}$ (Equ. (5)) plays the key role to establish the equivalence between Sobolev norm and weighted $L_p$ norm (Theorem 3.2)** for our novel regularization for Sobolev IPM.
**(4) [...] (example) quantity (Equ. 1)[...]**
→ We clarify that Equ. (1) is reviewed from Le et al., 2022. Please see **Fig. 1 in Le et al. (2022)** for such illustration.
We will follow your suggestion to quote more texts and figures from Le et al. (2022) for the review within the limited space in the main manuscript (or placed in the supplementary).
**(5) [...] (define) quantities such as $\lambda(\Gamma(x)), \lambda(\gamma_e)$ (how to compute them)[...]**
→ $\lambda$ is a nonnegative Borel measure on graph $\mathbb{G}$ (**line 70**); $\Gamma(x)$, $\gamma_e$ are subgraphs defined in Equ. (1) (**line 57-59**); $\lambda(\Gamma(x))$, $\lambda(\gamma_e)$ are standard notions of measures on sets.
For a specific instance, as in Theorem 3.5, we consider $\lambda$ to be the **length measure on graph $\mathbb{G}$** (see a review in Def. B.2 and Lemma B.3 in line 1117-1133 (Appendix B.2)), and hence **$\lambda(\gamma_e)$ is the total length of $\gamma_e$**.
**(6) [...]regularized IPM (not make sense)[...]Weighted Sobolev IPM[...]**
→ Following Theorem 3.2, we propose to relax the set of feasible critic functions $\mathcal{B}(p’)$ (line 128) by $\mathcal{B}(p’, \hat{w})$ (Equ. (7), line 128). Therefore, we call it regularized Sobolev IPM.
For “weighted Sobolev IPM”, it may be confused with an IPM where a critic function is constrained within the unit ball w.r.t. a weighted Sobolev norm (e.g., some weighted versions of the Sobolev norm in Equ. (3)), which is essentially different from our proposed regularized Sobolev IPM.
We will consider your suggestion. Thank you.
**(7) [...] (not) introducing these concepts in a clear way[... ]address the clarity issues[...]**
→ We respectfully clarify that all definitions, theoretical findings, and corresponding proofs are **rigorously elaborated mathematically**. We will follow your suggestions to add more explanation within the limited space.
---
**Concluding remarks.** We would be grateful if you could let us know whether our clarifications and answers address the raised concerns. If so, we kindly ask that you consider increasing your rating. We are also pleased to discuss any other questions you may have. | null | null | null | null | null | null | null | null |
Expressive Score-Based Priors for Distribution Matching with Geometry-Preserving Regularization | Accept (poster) | Summary: The paper introduces a distribution matching method using VAE by leveraging expressive score-based priors instead of the fixed priors such as Gaussians. The main contribution is the Score Function Substitution (SFS) trick, which reformulates the gradient of the prior’s cross-entropy term to avoid the computationally expensive Jacobian calculation required in latent score-based generative models. By focusing on learning the score function via denoising score matching, the method circumvents the need for explicit density estimation and enables more stable training. Moreover, the paper incorporates structural regularization through a Gromov-Wasserstein-inspired loss, which preserves geometric relationships in the latent space to ensure that latent representations retain task-relevant structure. Experimental results across synthetic datasets, fairness representation learning (using the Adult dataset), domain adaptation (MNIST–USPS), and domain translation (CelebA and FairFace) demonstrate that the proposed approach improves latent space quality and downstream task performance.
########################## Post Rebuttal ##########################
I would like to thank the authors for the detailed response. Most of my concerns are properly addressed. I would like to raise my score.
########################## Post Rebuttal ##########################
Claims And Evidence: Some claims are supported by experiments, while others lack sufficiently detailed evidence. For example:
1. The authors include comparisons (e.g., NLL curves) that indicate improved stability over LSGM. However, a more in-depth analysis (perhaps with additional ablations or broader noise level tests) would strengthen the claim.
2. The authors claim that their method avoids expensive Jacobian computations and is computationally efficient compared to other methods. However, there is no quantitative runtime analysis (e.g., training time per epoch or total convergence time) to support this claim. This makes it difficult to evaluate the true computational benefits relative to simpler methods like using Gaussian priors.
3. The idea of incorporating a Gromov-Wasserstein-inspired loss to preserve latent space geometry is supported by qualitative visualizations and some quantitative metrics. However, the evidence could be more convincing with clearer ablation studies that isolate the impact of this regularization on various downstream tasks.
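To make the geometry-preserving idea concrete, here is a toy sketch of the general principle (our own illustration, not the paper's exact loss): penalize the mismatch between the pairwise-distance matrices of inputs and latents, i.e., a Gromov-Wasserstein-style objective with the coupling fixed to the identity (sample i matched to latent i):

```python
import numpy as np

def pairwise_dists(X):
    """Euclidean distance matrix between the rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def gw_identity_coupling_loss(X, Z):
    """Toy geometry-preservation penalty: mean squared mismatch
    between the pairwise-distance matrices of inputs X and latents Z,
    i.e. a GW-style objective with the coupling fixed to the identity."""
    dX, dZ = pairwise_dists(X), pairwise_dists(Z)
    n = X.shape[0]
    return float(((dX - dZ) ** 2).sum()) / (n * n)
```

The loss vanishes when the latent embedding is an isometry of the inputs and grows as relative distances are distorted, which is the behavior an ablation of this regularizer would need to isolate.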
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for a proof-of-concept for distribution matching using expressive score-based priors. The use of benchmarks like MNIST–USPS for domain adaptation, the Adult dataset for fairness, and CelebA/FairFace for image translation demonstrates the method’s practical benefits. However, while these datasets are well-known and widely used in the literature, they are relatively small and may not capture the complexity or scale of real-world applications. Evaluating on larger, more challenging datasets (e.g., VisDA-2017 for domain adaptation) could give more insights into the scalability, robustness, and generalization of the proposed methods across more diverse conditions.
Theoretical Claims: I have checked the proof provided for Proposition 3.1 in Appendix A. This proof derives the gradient of the cross-entropy term using techniques like the reparameterization trick and the chain rule. The steps are logically sound and follow common practices in variational inference.
One minor concern is that while the notation regarding the detachment of z from the gradient (i.e., treating it as a constant with respect to the encoder parameters) is correct, it could be made more clear for readers who are less familiar with such subtleties.
Experimental Designs Or Analyses: I have checked the experimental designs and analyses in the paper, including:
1. A synthetic nested D-shaped dataset to evaluate how different priors affect latent space separation. They analyze performance using AUROC scores derived from classifier performance. While this design is useful for visualizing and quantifying latent space quality, the synthetic nature of the data limits its complexity relative to real-world scenarios.
2. Experiments on the Adult dataset are designed to test the trade-off between demographic parity and accuracy. The experimental setup is standard in fairness evaluation; however, the scale of the dataset and the tasks might not capture challenges encountered in more complex settings.
3. The MNIST-USPS adaptation scenario is tested. Although these benchmarks are common in the literature, they represent relatively simple and low-dimensional domain shifts. This raises concerns about whether the method would generalize to more challenging, large-scale datasets.
4. The qualitative evaluation on domain translation (e.g., CelebA and FairFace) provides visual evidence of the method’s ability to preserve semantic content. However, one issue is the absence of comprehensive quantitative metrics.
#### Concerns:
1. While the datasets used are standard, they are relatively small-scale and may not reflect the challenges of large-scale, high-dimensional data (e.g., VisDA-2017 for domain adaptation).
2. There is limited discussion on isolating the impact of individual components (e.g., the structure-preserving GW loss, the SFS trick).
3. Although the paper claims improved computational efficiency (particularly avoiding expensive Jacobian computations), there is no quantitative runtime analysis (e.g., training time per epoch, total convergence time). Such metrics are important to validate the claimed efficiency benefits.
Supplementary Material: Yes, I have reviewed all the parts in the supplementary material.
Relation To Broader Scientific Literature: The paper’s main contributions build upon several topics, which include generative modeling, variational inference, and optimal transport. To be specific, they include: score-based priors and diffusion models; VAE with expressive priors; structural regularization via optimal transport; and bridging non-adversarial distribution matching and diffusion models.
Essential References Not Discussed: Yes, some highly related references are not discussed. For example:
1. The work is built on score-based generative modeling, it would benefit from a more detailed discussion of prior advances in variational inference that improve gradient estimation. For example, [1] proposed lower-variance gradient estimators for variational inference, which are conceptually related to the proposed SFS trick.
2. The proposed method is also closely related to latent score-based generative models like [2], which forms the basis for the comparison and motivation. However, the submission did not thoroughly cite or contrast these methods, which are important for understanding the novelty and benefits of the SFS trick.
3. The importance of using learnable priors in VAE is well established in prior literature, particularly in works like VampPrior [3].
4. Recent works such as [4] on Gromov-Wasserstein Autoencoders, which demonstrate how optimal transport can enforce structural constraints in latent spaces, which is directly relevant to the use of the GW-based Semantic Preserving loss.
[1] Roeder, G., Wu, Y. and Duvenaud, D.K., Sticking the landing: Simple, lower-variance gradient estimators for variational inference. NeurIPS 2017.
[2] Rombach, R., Blattmann, A., Lorenz, D., Esser, P. and Ommer, B. High-resolution image synthesis with latent diffusion models. CVPR 2022.
[3] Tomczak, J. and Welling, M. VAE with a VampPrior. In International conference on artificial intelligence and statistics, 2018.
[4] Nakagawa, N., Togo, R., Ogawa, T., and Haseyama, M. Gromov-Wasserstein Autoencoders. ICLR 2023.
Other Strengths And Weaknesses: #### Strengths:
1. The authors propose a novel combination of the score-based generative modeling with structural regularization via Gromov-Wasserstein distances. The introduction of the SFS trick provides a way to sidestep costly Jacobian computations, and the idea of enforcing latent space geometry via GW-based loss is interesting.
2. The ability to learn flexible priors without explicit density estimation is a promising direction that might alleviate common issues in VAEs with fixed priors.
3. Some sections of the paper, particularly the detailed derivations and the pseudo-code in the supplementary material, are clearly presented. These sections provide sufficient details for understanding the theoretical contributions.
4. The authors validate the proposed method on a wide range of tasks.
#### Weaknesses:
1. The overall narrative can be somewhat inconsistent, as the paper shifts between topics like distribution matching, fairness, and domain adaptation. This makes it challenging to pinpoint the central contribution. Additionally, key notations and the overall problem formulation are not introduced early enough, which affects readability.
2. While the experiments are conducted on a wide range of tasks, the evaluation is limited to relatively small and simple datasets. This raises questions about the method’s scalability and applicability to real-world scenarios. Furthermore, the impact of the structural regularization (GW-SP loss) is not thoroughly isolated in ablation studies.
3. My main concern is the absence of quantitative runtime analysis. While the paper claims that the SFS trick leads to improved computational efficiency by avoiding costly Jacobian computations, it does not provide measurements such as training time per epoch, total convergence time, or inference speed compared to standard VAEs with Gaussian priors. Such an analysis would be important to evaluate the practical values of the proposed method in real-world applications.
4. The paper does not sufficiently cite or discuss related works that use optimal transport for domain adaptation and structural regularization, or other methods for learnable priors.
Other Comments Or Suggestions: 1. Make sure that abbreviations (e.g., GW, SFS, VAUB, etc.) are defined once and used consistently throughout the paper.
Questions For Authors: 1. Could the authors clarify whether the Gromov-Wasserstein Semantic Preserving (GW-SP) loss is applied uniformly across all experiments, especially in Sections 5.1 and 5.2? If not, can the authors provide an ablation that shows the impact of including versus omitting this term?
2. Could the authors provide detailed runtime comparisons (e.g., training time per epoch, total convergence time, and inference time) between the proposed methods and standard VAEs with Gaussian priors, as well as methods like LSGM?
3. Could the authors elaborate on how the SFS trick relates to and differs from existing methods such as score distillation sampling and latent score-based generative models like LSGM?
4. Could the authors evaluate the proposed method on large-scale datasets, such as VisDA-2017 for domain adaptation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and are pleased that our work was found both interesting and novel. However, we believe some aspects were probably **misunderstood or overlooked**. For example, the reviewer noted missing references, yet our manuscript thoroughly discusses them.
Gromov-Wasserstein Autoencoder (GWAE) [1] is a cornerstone of our approach. We state in our contribution (line 59, right column) that we "adopt the Gromov-Wasserstein-based constraint from Gromov-Wasserstein Autoencoders," and this is reiterated in Section 3.3 (line 220, right column) and the related works section (line 310, left column), with additional references on lines 65 and 250.
Similarly, VampPrior [4] is used as a baseline model in Section 5.1 (line 291, right column) and is cited in the related works (line 287, left column).
Furthermore, Latent Score-Based Generative Models (LSGM) are integral to our narrative. We compare LSGM with our SFS trick in Section 3.2 and provide a detailed analysis of the gradients in our objective functions.
## Experimental Designs Or Analyses:
Our synthetic nested D-shaped dataset was chosen as a preliminary experiment to isolate and clearly demonstrate how different priors affect latent space separation. This controlled setting provides an intuitive visualization, unlike high-dimensional latent spaces that require non-linear reduction methods. (Please refer to our response to reviewer 5UvJ, section **Additional Experimental Results**, for extra experiments comparing LSGM and our method on the fairness task.)
Finally, we remind the reviewer that our domain translation evaluation includes a CLIP-based metric for semantic preservation, along with LPIPS and SSIM scores, which were selected because they directly assess semantic preservation compared to metrics such as FID or PSNR. We welcome any suggestions for additional or alternative metrics.
## Other Strengths and Weaknesses
Our evaluation across fairness, domain adaptation, and domain translation is intended as a unified process to assess the efficacy of our distribution matching framework rather than a shifting narrative. All downstream tasks were evaluated both with and without the GW constraint. We present additional experiments in **Appendix D** and **Appendix F** that isolate the effects of the GW-EP and GW-SP distances.
Furthermore, as detailed in our response to reviewer 5UvJ in the **Runtime and Memory Efficiency** section, our SFS trick offers benefits in terms of reduced VRAM usage compared to LSGM.
Finally, we respectfully note that several references identified as missing were, in fact, included in our manuscript. We welcome any specific suggestions for additional references.
## Questions For Authors:
1. We use GW-EP uniformly across all experiments and GW-SP only in the domain adaptation and domain translation experiments. The reason we did not use GW-SP uniformly across all experiments is that, quoting from line 335, we do not have a semantic model to compute the semantic distance within tabular datasets such as the Adult dataset used in the fairness tasks. We do, however, have such semantic models for image datasets (e.g., the CLIP model), and thus GW-SP is possible on these image datasets.
3. Latent Score-Based Generative Models learn an expressive score-based prior by approximating the cross-entropy term with a weighted denoising score matching objective. Our SAUB approach also employs a variational probabilistic framework with a learned prior score model to capture expressive latent distributions. For further details, please see Section 3.2 and Appendix E.
The Score Distillation Sampling (SDS) loss bypasses the diffusion UNet's Jacobian computation. Although SDS and our Score Function Substitution (SFS) trick both involve a KL divergence term (e.g., the KL term in the ELBO for SAUB), SDS replaces this term with a denoising score matching objective and removes the score gradient using the "Sticking-the-Landing" technique [2]. In contrast, our SFS trick computes the exact KL gradient without omitting any components or resorting to denoising score matching [5].
4. Please refer to **Experimental Designs Or Analysis** response for reviewer 8aGm.
We sincerely appreciate the reviewer's thoughtful feedback and hope that our responses have fully addressed all concerns. If so, we kindly ask that you consider raising the score accordingly. We remain available to provide further clarifications if needed.
[1] Zhang, Z., et al. Gromov-Wasserstein Distances: Entropic Regularization, Duality, and Sample Complexity.
[2] Roeder, G., et al. Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference.
[3] Vahdat, A., et al. Score-based Generative Modeling in Latent Space.
[4] Jakub Tomczak, et al. Vae with a vampprior.
[5] Poole, B., et al. DreamFusion: Text-to-3D using 2D Diffusion.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed response. Most of my concerns are properly addressed. I would like to raise my score. | Summary: This paper proposes a new prior distribution for distribution matching. Specifically, the authors model the prior using denoising score matching, and they enhance this approach by incorporating the minimization of the Gromov-Wasserstein (GW) distance between different distributions as additional regularization. Experiments across various tasks, such as Fairness Representation Learning and Domain Adaptation, confirm the effectiveness of their methods.
Claims And Evidence: All the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are suitable for the problem.
Theoretical Claims: Due to the limited time, I couldn't check every detail of the derivation. However, most of the theorems seem intuitive and appear correct.
Experimental Designs Or Analyses: In the domain adaptation experiments, the authors rely solely on simplistic datasets (MNIST and USPS) and use limited baselines such as DANN. To enhance the robustness and relevance of their findings, it is advisable to incorporate more contemporary baselines as mentioned in [1], and to extend testing to more complex datasets like PACS [2] and Office-Home [3].
[1] Farahani, Abolfazl, et al. "A brief review of domain adaptation." Advances in data science and information engineering: proceedings from ICDATA 2020 and IKE 2020 (2021): 877-894.
[2] Li, Da, et al. "Deeper, broader and artier domain generalization." Proceedings of the IEEE international conference on computer vision. 2017.
[3] Venkateswara, Hemanth, et al. "Deep hashing network for unsupervised domain adaptation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
Supplementary Material: I didn’t check the supplementary material.
Relation To Broader Scientific Literature: The paper primarily draws inspiration from paper [1], which introduced a non-adversarial distribution matching technique using a Variational Autoencoder (VAE). Building on this foundational work, the current paper proposes new score-based priors.
[1] Gong, Ziyu, et al. "Towards Practical Non-Adversarial Distribution Matching." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
Essential References Not Discussed: As far as I known, all the essential references are discussed in the paper.
Other Strengths And Weaknesses: **Strengths:**
1. The introduction of a score-based prior combined with GW distance regularization represents a novel approach.
2. The method's effectiveness is demonstrated through experiments across different tasks, including Fairness Representation Learning and Domain Adaptation.
3. The paper is well-organized and clear, making it easy to understand.
**Other Weaknesses:**
There are no significant weaknesses outside of those mentioned in the "Experimental Designs" section.
Other Comments Or Suggestions: 1. On line 132 of the left column, it seems the authors missed including a citation for the Gromov-Wasserstein (GW) distance.
2. On line 272 of the left column, it seems a pair of brackets is missing for “Section 2”.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### Experimental Designs Or Analysis
We appreciate the reviewer’s suggestion and understand the importance of evaluating our method on more complex benchmarks. Our current domain adaptation experiments using MNIST and USPS were chosen deliberately as controlled, standard benchmarks that allow us to clearly illustrate the core capabilities and innovations of our framework. **Our primary goal in this work was to introduce a new, flexible framework for distribution matching that leverages a learned, expressive score-based prior within a variational probabilistic setting, and that incorporates innovations such as the Score Function Substitution (SFS) trick. And therefore, we not only applied to domain adaptation tasks but also other downstream tasks, such as fairness representation learning and domain translation.**
Notably, our current model deploys basic CNN layers and a simple linear-layer UNet. We believe that many additional engineering techniques, including additional training methods for the score model (e.g., EMA) and the integration of state-of-the-art architectures such as VQ-VAE [5] for the encoder/decoder and Vision Transformers [4] for diffusion models, are needed to achieve competitive real-world performance. Our intent here was to demonstrate the bare-bones performance of our approach.
We acknowledge that incorporating more contemporary baselines (e.g., as mentioned in [1]) and extending evaluations to complex datasets like PACS [2] and Office-Home [3] would provide further insights into the robustness and scalability of our method. Moreover, it is promising that our method shows potential for saving memory and enhancing stability compared to LSGM (already regarded as a powerful model), as seen in [table in reviewer 5UvJ]. With further enhancements, we are confident that our framework can be extended to yield competitive results in more demanding domain adaptation scenarios in future works.
We sincerely appreciate the reviewer’s thoughtful and constructive questions, which have been invaluable in helping us refine and improve the clarity of our manuscript. If our responses have addressed all concerns and cleared any confusions, we kindly request consideration for an improved score. If further questions remain, we remain open to addressing them.
[1] Farahani, Abolfazl, et al. A brief review of domain adaptation.
[2] Li, Da, et al. Deeper, broader and artier domain generalization.
[3] Venkateswara, Hemanth, et al. Deep hashing network for unsupervised domain adaptation.
[4] Gu, S., Chen, et al. Vector Quantized Diffusion Model for Text-to-Image Synthesis.
[5] Razavi, A., et al. Generating Diverse High-Fidelity Images with VQ-VAE-2. | Summary: This paper deals with the limitations of existing distribution matching (DM) methods, which often struggle with scalability, instability, mode collapse, or impose unnecessary biases through fixed priors. To overcome these limitations, the authors builds upon the existing work VAUB, and propose a novel approach that models the prior density through its score function, using denoising score matching techniques. The goal of the proposed approach is to avoid biases from fixed priors.
Claims And Evidence: The claim about the expressive prior might be problematic. This seems to be related to the mismatch between $L_{\rm VAUB}$ and $L_{\rm SAUB}$. The prior parameters $\psi$ are not optimized with respect to the actual $L_{\rm VAUB}$ loss. Instead, the prior $Q_{\psi}$ seems to be fixed to some average distribution of $q_{\theta}(z|x,d)$ (from Eq. (11)). If this is the case, it defeats the purpose of allowing a flexible prior in $L_{\rm VAUB}$.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, I did not check the proofs.
Experimental Designs Or Analyses: Yes, the experiments on synthetic data, fairness representation learning, domain adaptation, and domain translation are all important applications of the proposed method.
Supplementary Material: I did not read the supplementary materials thoroughly.
Relation To Broader Scientific Literature: Distribution matching is an important problem in many unsupervised learning problems, e.g., generative modeling, domain adaptation, representation learning, domain translation, etc. The proposed method tries to advance techniques for non-adversarial distribution matching from a latent variable model perspective.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. Proposed method appears principled and technically sound.
2. The proposed Score Function Substitution (SFS) trick seems novel and could be useful in other contexts as well.
3. A variety of applications are considered to demonstrate the efficacy of the proposed method.
Weaknesses and Questions:
1. The claim about flexible prior seems problematic. Please refer to above explanation for details.
2. Usefulness in real-world applications is only weakly demonstrated. For example, in the considered experiments on domain adaptation and domain translation, recent baselines are not considered besides VAUB, and the qualitative results are not convincing.
3. LSGM was demonstrated to be a strong image generative model. Can the proposed method be used for building such high-resolution image generative models, or are there inherent challenges?
4. Does GW regularization based enhancement on existing methods, such as VAUB or ADDA in domain adaptation, also improve their performance? It is of interest to know whether the GW regularization is only useful for the proposed scheme.
Other Comments Or Suggestions: 1. On page 4, Sec. 3.2, should Appendix D be Appendix E?
2. Some typos (spelling mistakes) in page 2 Sec. 2.
Questions For Authors: Please refer to Weaknesses and Questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer’s thoughtful feedback and recognition of our work. We apologize for any confusion caused by the inadequate explanation in **Section 3.1.1**, where the lack of explicit details led to misunderstandings regarding our training procedure. Below is our refined explanation:
Reviewer Concerns and Clarifications
1. **The claim about flexible prior seems problematic...**
1. **The claim about flexible prior seems problematic...**
   - The reviewer is correct that during the encoder and decoder updates, the VAUB loss is computed with a fixed prior model, meaning the parameter $\psi$ is not updated at that stage. However, in the subsequent training step, we update the score model using the encoder’s posterior. After carefully re-examining the manuscript, we acknowledge that it did not clearly state that **our training algorithm alternates between updating the encoder/decoder and the diffusion (score) model**. This alternating approach, analogous to the strategy used in training Latent Score-Based Generative Models (LSGMs) [1], ensures that while the prior remains fixed during certain updates, the subsequent score model update, driven by the evolving encoder posterior, provides the necessary flexibility and expressiveness. We plan to update our manuscript in future versions to explicitly clarify this point. We appreciate the reviewer’s attention to the details of our training algorithm. For further clarity, we would like to kindly refer the reviewer to the pseudocode provided in **Appendix B**, which offers a comprehensive explanation of our approach.
2. **Usefulness in real world application is only weakly demonstrated..**
- We appreciate the reviewer's concern regarding our method's real-world applicability and experimental scope. **Our primary goal was to introduce a flexible distribution matching framework, demonstrating its promise in tasks like domain adaptation, domain translation, and fairness**. Our baseline experiments emphasize the method's versatility and effectiveness.
While additional experiments with more recent baselines could enhance the evaluation, our objective was to establish a foundational framework with strong potential for future work. We believe that with further engineering, optimization, and advanced architectures (e.g., VQ-VAE [4] for the encoder/decoder and vision transformers [3] for diffusion models), our framework could achieve competitive performance. For this initial study, we used simple CNNs for the encoder/decoder and a shallow linear layer for the score model.
3. **LSGM was demonstrated to be a strong image generative model...**
   - This is an insightful question. Although our model was not initially designed to be a generative model, theoretically, **our model functions as a generative model if prior samples are generated via a reverse diffusion process and then decoded into synthetic images**. In principle, our approach should be capable of generating high-resolution images similar to those produced by LSGM. We believe that with enough engineering and state-of-the-art architectures, we could also achieve competitive image generation results.
One notable advantage of training SAUB as a generative model is the potential for reduced VRAM usage, as our method does not require backpropagation through the score model during the encoder/decoder update, as LSGM does (see the **Runtime and Memory Efficiency table** in our response to **Reviewer 5UvJ**). Additionally, our model may offer improved stability during encoder/decoder training, as demonstrated in **Section 3.2.1**. This may lead to promising future work.
4. **Does GW regularization based enhancement on existing method...**
   - This raises an interesting ablation study. We further **added VAUB with GW, and LSGM with and without GW,** to the existing fairness experiment settings. As you can see in the figure [anonymous link](https://anonymous.4open.science/r/SAUB-DB57/), GW regularization consistently improves performance across all methods in terms of downstream task performance.
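Returning to the alternating training discussed in point 1, the procedure can be outlined in pseudocode. This is an assumed high-level sketch based on the description above; the exact algorithm is the one given in Appendix B of the paper:

```
for each training step:
    # Phase 1: update encoder (theta) and decoder (phi), score model frozen
    sample z ~ q_theta(z | x, d)                  # reparameterized posterior
    L = reconstruction term + entropy term
        + cross-entropy term evaluated via the frozen prior score (SFS trick)
    gradient step on (theta, phi)

    # Phase 2: update the prior score model (psi) on the current posterior
    sample z ~ q_theta(z | x, d), detached from the encoder
    L_DSM = denoising score matching loss for s_psi on noise-perturbed z
    gradient step on psi
```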
We sincerely appreciate the reviewer’s thoughtful and constructive questions, which have been invaluable in helping us refine and improve the clarity of our manuscript. If our responses have addressed all concerns and cleared any confusions, we kindly request consideration for an improved score. If further questions remain, we remain open to addressing them.
[1] Vahdat, A., et al. (2021). Score-based Generative Modeling in Latent Space.
[2] Tomczak, J. M., et al. VAE with a VampPrior. arXiv preprint, arXiv:1705.07120.
[3] Gu, S., et al. (2022). Vector Quantized Diffusion Model for Text-to-Image Synthesis.
[4] Razavi, A., et al. (2019). Generating Diverse High-Fidelity Images with VQ-VAE-2.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
I understand that the algorithm alternates between $\theta, \phi$ optimization following Eq. (10) and $\psi$ optimization following Eq. (11). My comment was that Eq. (11) is not aligned with the original objective $\min_{\theta, \phi, \psi} L\_{\rm VAUB}$. Mainly, a correct block coordinate descent algorithm would be one that alternates between
1. $\theta, \phi \leftarrow\arg \min_{\theta, \phi} L\_{\rm VAUB}(\theta, \phi, \psi)$
2. $\psi \leftarrow \arg \min_{\psi} L\_{\rm VAUB}(\theta, \phi, \psi)$
Step 1 seems to correspond to Eq. (10). However, step 2 is not Eq. (11). Instead $\psi$ is optimized such that the prior explicitly equals $E\_{p(x,d)} q (z|x,d)$ which may not correspond to Step 2. In that sense, the prior is not correctly optimized according to the presented formulation.
However, this discrepancy does not seem to be discussed. This also affects the claim about flexible prior. Since it appears equivalent to replacing the prior by $E\_{p(x,d)} q (z|x,d)$ instead of jointly optimizing it to minimize the VAUB loss.
---
Reply to Comment 1.1.1:
Comment: We apologize for any misunderstanding. In response to the reviewer’s concerns, we provide both an intuitive overview and a detailed derivation.
We employ the Score Function Substitution (SFS) trick to update the encoder and decoder within our variational framework without needing to compute the intractable density of the prior by leveraging the prior's score function. As the reviewer noted, the gradient of the SAUB loss is equivalent to the gradient of the VAUB loss with respect to the encoder ($\theta$) and decoder ($\varphi$) parameters. Past works [1][2] have shown that allowing the prior model to learn the posterior distribution can both tighten the variational bound and lead to high-fidelity images. In a similar spirit, we aim to learn a prior score model based on the posterior distribution. This model naturally approximates the true score function of the posterior via denoising score matching (DSM) on posterior samples, and it converges almost surely to the true score function at small noise levels [3][4]. This has led us to adopt denoising score matching (DSM) in our approach.
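To illustrate the SFS idea in isolation, consider a toy one-dimensional setup (assumed here purely for illustration, not the paper's model): a Gaussian posterior $z = \theta + \epsilon$ and a standard-normal prior, whose score is simply $-z$. Substituting the detached score into the surrogate gradient reproduces the exact gradient of the cross-entropy term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): posterior q_theta(z|x) = N(theta, 1) via z = theta + eps,
# and a standard-normal prior Q(z), whose score is s(z) = -z.
theta = 1.5
eps = rng.standard_normal(200_000)
z = theta + eps                        # reparameterized posterior samples

# Exact gradient of the cross-entropy E_q[-log Q(z)] = (theta^2 + 1)/2 + const
exact_grad = theta

# SFS-style estimate: -E[ s(z) * dz/dtheta ], with the score treated as a
# constant (detached) and dz/dtheta = 1 under the reparameterization above.
score = -z
sfs_grad = -np.mean(score * 1.0)

print(exact_grad, sfs_grad)  # the two agree up to Monte Carlo error
```

The point of the sketch is only that the detached score yields an unbiased estimate of the cross-entropy gradient without ever evaluating the prior density itself.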
However, we would like to clarify that the DSM-based update is not a departure from the original VAUB loss framework for updating the prior parameters. In fact, the gradient of the DSM loss is proportional to the gradient of the VAUB loss with respect to the prior model parameters $\psi$. With appropriate weighting, we have
$$
\nabla\_\psi \mathcal{L}\_\mathrm{DSM} \propto \nabla\_\psi \mathcal{L}\_\mathrm{VAUB}
$$
Below we restate the VAUB loss (with $\beta=1$):
\begin{aligned}
\mathcal{L}\_\mathrm{VAUB} = \sum\_{d} \mathbb{E}\_{q_{\theta}} \left[ -\log \frac{p\_\varphi(x\mid z, d)}{q\_\theta(z\mid x, d)} Q\_\psi(z) \right],
\end{aligned}
\begin{aligned}
&= \sum\_d \Biggl[ \underbrace{\mathbb{E}\_{q\_{\theta}}\left[-\log p\_\varphi(x\mid z,d)\right]}_{\text{reconstruction term}} - \underbrace{\mathbb{E}\_{q\_\theta}\left[-\log q\_\theta(z\mid x, d)\right]}\_{\text{entropy term}} + \underbrace{\mathbb{E}\_{q\_\theta}\left[-\log Q\_\psi(z)\right]}\_{\text{cross-entropy term}} \Biggr]
\end{aligned}
Since the reconstruction and entropy terms do not depend on $\psi$, we have:
\begin{aligned}
\arg\min\_\psi \mathcal{L}\_\mathrm{VAUB} = \arg \min\_\psi \, \mathbb{E}_{q\_\theta}\left[-\log Q\_\psi(z)\right].
\end{aligned}
Under mild smoothness conditions for the noisy posterior and prior distributions, the cross-entropy term can be rederived as a weighted denoising score matching objective (up to an additive constant) [2]:
\begin{aligned}
\text{CE}\left(q\_\theta(z\mid x,d) \\| p\_\psi(z)\right) = \mathbb{E}\_{t\sim U[0,1]}\left[\frac{g(t)^2}{2} \mathbb{E}_{q\_\theta(z\_t,z\_0\mid x,d)}\left[\\|\nabla\_{z\_t}\log q\_\theta(z\_t\mid z\_0,d)-\nabla\_{z\_t}\log p\_\psi(z\_t)\\|^2\_2\right]\right] + \frac{D}{2}\log\left(2\pi e\,\sigma\_0^2\right).
\end{aligned}
Here, $z\_0$ represents clean posterior samples, $z\_t$ denotes Gaussian-perturbed samples of $z$, and $g(t)$ is a weighting function. For further details and proof, please refer to Appendix A of [2].
Latent Score-Based Generative Models (LSGMs) train the score model corresponding to this cross-entropy term separately from the encoder and decoder updates. In practice, they drop the weighting function $g(t)$ (which improves fidelity) when updating just the diffusion model, but they require Maximum Likelihood (MLE) weighting [5] during the encoder/decoder update to ensure that the posterior properly matches the prior.
Our DSM term aligns with this approach when updating the prior model by using an unweighted DSM loss, which is proportional to the cross-entropy term and, in turn, proportional to the VAUB update of the prior parameters.
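The denoising score matching identity invoked above can be checked numerically in a toy setting (assumed here: 1-D standard-normal "posterior" samples and a single noise level, rather than the time-weighted objective in the equation). Regressing against the perturbation-kernel score is minimized, in expectation, by the score of the noisy marginal:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5
z0 = rng.standard_normal(200_000)                 # clean "posterior" samples
zt = z0 + sigma * rng.standard_normal(200_000)    # Gaussian-perturbed samples

# DSM regression target: score of the perturbation kernel q(z_t | z_0)
target = -(zt - z0) / sigma**2

def dsm_loss(score_fn):
    return np.mean((score_fn(zt) - target) ** 2)

# Marginal of z_t is N(0, 1 + sigma^2), so its true score is -z / (1 + sigma^2);
# a mismatched prior score (here, that of N(0, 1)) incurs a larger DSM loss.
true_score = lambda z: -z / (1.0 + sigma**2)
wrong_score = lambda z: -z

print(dsm_loss(true_score), dsm_loss(wrong_score))
```

This is only a sanity check of the DSM principle, not a reproduction of the weighted objective in the equation above.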
We appreciate the reviewer’s insightful feedback, which has highlighted the importance of further elaborating on the nuanced relationship between denoising score matching and the VAUB loss. While we have considered this connection carefully, we agree that providing additional explanation will benefit the reader. Accordingly, we will include an expanded discussion of this relationship if accepted in the camera-ready version.
[1] Gong, Ziyu, et al. Towards Practical Non-Adversarial Distribution Matching.
[2] Vahdat, A., et al. Score-based Generative Modeling in Latent Space.
[3] Vincent, P., et al. A connection between score matching and denoising autoencoders.
[4] Song, Y., et al. Generative modeling by estimating gradients of the data distribution.
[5] Song, Y., et al. Maximum likelihood training of score-based diffusion models. | Summary: Existing DM methods face many challenges, and likelihood-based methods often impose unnecessary biases through fixed priors or require learning complex prior distributions.
This paper introduces a novel approach to distribution matching (DM) that leverages score-based priors and Gromov-Wasserstein (GW) distance-based structural regularization. The approach eliminates biases from fixed priors, avoids the computational overhead of learning full prior densities by working instead with the gradient of the log-probability density (the score), and preserves the geometric structure of the data in the latent space through GW SP/EP structural regularization. Experiments are also conducted, in which the method outperforms baseline methods on the MNIST-USPS domain adaptation task.
Claims And Evidence: Most of the evidence is convincing, but experiments against Latent Score-Based Generative Models (LSGM) are not conducted.
This new approach is inspired by LSGM, so experiments against LSGM are important to reveal the improvement.
Methods And Evaluation Criteria: Mostly yes, but GW SP/EP structural regularization seems to be a common method that is not derived from the score-based priors, so more evaluations are needed:
- new approach without GW SP/EP vs LSGM
- LSGM + GW SP/EP vs LSGM
Theoretical Claims: yes, Proof of Proposition 3.1
Experimental Designs Or Analyses: Yes, but more experiments are needed, as mentioned above.
Supplementary Material: Proof of Proposition 3.1
Relation To Broader Scientific Literature: This paper proposes a new approach with score-based priors and Gromov-Wasserstein (GW) distance-based structural regularization; it could become a simple and common approach for DM and be applied across a broad range of research.
Essential References Not Discussed: no
Other Strengths And Weaknesses: No; see comments above.
Other Comments Or Suggestions: No; see comments above.
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review and valuable suggestions. We appreciate your careful assessment of our work, particularly regarding the Gromov-Wasserstein structural preservation regularization (GW) component.
Following your recommendations, we conducted additional experiments to isolate the contributions of our GW components:
**Additional Experimental Results**
We performed ablation studies where we **additionally added VAUB with GW, and LSGM [1] with and without GW,** to the existing experiment framework. Due to limited time for the rebuttal, we focused these experiments on the fairness dataset, which provides clear metrics for both distribution matching (DP gap) and downstream task performance (accuracy): [*[Figure Anonymous Link]*](https://anonymous.4open.science/r/SAUB-DB57/) (note that we omit other baselines in the figure for clearer comparison)
From the figure we observe that:
1. GW regularization consistently improves performance across all methods in terms of downstream tasks performances.
2. Our SAUB method achieves comparable performance compared to LSGM with and without GW regularization.
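For readers unfamiliar with the regularizer being ablated here, a minimal structure-preservation surrogate in the spirit of GW regularization can be sketched as follows (an illustrative simplification with assumed names, not the paper's exact SP/EP formulation): it penalizes mismatch between pairwise distances in the input space and in the latent space.

```python
import numpy as np

rng = np.random.default_rng(2)

def pairwise_dists(a):
    # Euclidean distance matrix for a batch of row vectors
    diff = a[:, None, :] - a[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def structure_penalty(x, z):
    # Mismatch between input-space and latent-space geometry; a simplified
    # distance-preservation surrogate, not the paper's exact SP/EP loss.
    return np.mean((pairwise_dists(x) - pairwise_dists(z)) ** 2)

x = rng.standard_normal((64, 10))       # batch of inputs
z_bad = rng.standard_normal((64, 10))   # latent codes unrelated to x

# A geometry-preserving latent (here, x itself) gives zero penalty,
# while an unrelated latent is penalized.
print(structure_penalty(x, x), structure_penalty(x, z_bad))
```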
**Runtime and Memory Efficiency**
We also compared computational efficiency metrics between our methods and LSGM:
| LSGM/Ours | dim=8 | dim=16 | dim=64 | dim=128 | dim=256 |
|-------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Allocated VRAM (MB) | 28.6/**27.9** | 30.8/**28.3** | 43.1/**33.5** | 60.2/**41.8** | 99.3/**60.3** |
| Training per epoch (ms) | 138.0/**121.8** | 142.5/**138.8** | 140.9/**137.6** | 146.5/**141.4** | 146.9/**140.1** |
Due to limited time, we could not perform an extensive runtime analysis on all tasks. Instead, we observed that as the dataset dimensionality grows (e.g., from the fairness dataset (114) to the CelebA dataset (64 x 64 x 3)), the proportion of network parameters allocated to the score-based prior grows as well, owing to the need for larger latent dimensions. Therefore, we varied the latent dimension in the fairness task to simulate the parameter structure encountered in the other tasks.
At dimension 128, the model represents a realistic scenario for the applications in our domain adaptation experiments, and dimension 256 similarly mimics the domain translation experiments.
Our approach demonstrates lower VRAM requirements than LSGM, and the gap becomes more pronounced as the latent dimension grows. Training speed, in turn, improves by about 1.1-1.2x across dimensions. These observations are consistent with our theoretical analysis in Section 3.2.1 and Appendix E. The efficiency gains stem primarily from avoiding the costly Jacobian computations required by LSGM, as detailed in our paper.
We believe these additional results strengthen our paper by clearly delineating the contributions of each component while confirming the complementary benefits of combining score-based priors with GW regularization.
Would these clarifications and additional experimental results address your concerns sufficiently to warrant reconsidering your evaluation score?
[1] Vahdat, A., et al. (2021). Score-based Generative Modeling in Latent Space. | null | null | null | null | null | null |